Situated Meaning in Multimodal Dialogue: Human-Robot and Human-Computer Interactions
2020
James Pustejovsky
Department of Computer Science, Brandeis University
Nikhil Krishnaswamy
Department of Computer Science, Colorado State University
The demand for more sophisticated natural human-computer and human-robot interactions is rapidly increasing, as users become more accustomed to conversation-like interactions with their devices. This requires not only the robust recognition and generation of expressions through multiple modalities (language, gesture, vision, action), but also the encoding of situated meaning: (a) the situated grounding of expressions in context; (b) an interpretation of the expression contextualized to the dynamics of the discourse; and (c) an appreciation of the actions and consequences associated with objects in the environment. In this paper, we introduce VoxWorld, a multimodal simulation platform for modeling human-computer interactions. It is built on the language VoxML, and offers a rich platform for studying the generation and interpretation of expressions, as conveyed through multiple modalities, including: language, gesture, and the visualization of objects moving and agents acting in their environment.
RÉSUMÉ. La demande d'interactions naturelles homme-ordinateur et homme-robot plus sophistiquées augmente rapidement, car les utilisateurs s'habituent davantage aux interactions de type conversation avec leurs appareils. Cela nécessite non seulement la reconnaissance et la génération robustes d'expressions à travers de multiples modalités (langage, geste, vision, action), mais aussi l'encodage du sens situé : (a) l'ancrage situé des expressions dans le contexte; (b) une interprétation de l'expression contextualisée à la dynamique du discours; et (c) une appréciation des actions et des conséquences associées aux objets dans l'environnement. Nous présentons VoxWorld, une plateforme de simulation multimodale pour la modélisation des interactions homme-machine. Il est construit sur le langage VoxML et offre une plate-forme riche pour étudier la génération et l'interprétation d'expressions, telles qu'elles sont véhiculées à travers de multiples modalités, notamment : le langage, le geste et la visualisation des objets en mouvement et des agents agissant dans leur environnement.
KEYWORDS: Multimodal dialogue, affordances, qualia structure, continuations, gesture, simulations, common ground, situated meaning, semantic grounding, referring expressions.
MOTS-CLÉS : Dialogue multimodal, affordances, structure qualia, continuations, geste, simulations, terrain d'entente, sens situé, ancrage sémantique, expressions de référence.
Introduction
When humans communicate with each other through language, there is a shared understanding of both an utterance meaning (content) and the speaker's meaning in the specific context (intent). The ability to link these two is the act of situationally grounding meaning to the local context, typically referred to as "establishing the common ground" between interlocutors (Stalnaker, 2002; Asher, 1998). Language use may reflect only a subset of all properties of the current situation, where a full description may be impossible or at least unwieldy. Some kinds of information may in fact be more efficiently communicated using other modalities, such as gesture (e.g., deixis for pointing), demonstration or action, images, or some other visual modality. A central component to the contextualized interpretation of meaning in a discourse is the situational determination of the meanings of expressions given the common ground. It is this notion of situated meaning that is missing in most current human-computer and human-robot interaction models, and the focus of the present paper.
In this paper, we argue that the problem of situational awareness and the creation of situated meaning in discourse involves at least three components: (a) the situated grounding of expressions in context; (b) an interpretation of the expression contextualized to the dynamics of the discourse; and (c) an appreciation of the actions and consequences associated with objects in the environment. In Section 2, we expand on these aspects of meaning in some detail, and then in Section 3, we adopt the modeling language, VoxML, designed to encode non-linguistic, multimodal aspects of meaning associated with concepts. In Section 4, we present a computational framework, VoxWorld, within which these components are operationalized to facilitate multimodal communication between humans and robots or computers. Section 5 outlines a framework within which to interpret multimodal expressions, while Section 6 presents experimental evidence from single and mixed modality dialogues, illustrating the different ways in which meaning is situated in goal-directed dialogues.
Interactions in the Common Ground
There has been a growing interest in the Human-Robot Interaction community in how to contextually resolve ambiguities that may arise from communication in situated dialogues, from earlier discussions on how HRI dialogues should be designed (Fischer, 2011; Scheutz et al., 2011) and how perception and grounding can be integrated into language understanding (Landragin, 2006), to recent work on task-oriented dialogues (Williams et al., 2019). This is the problem of identifying and modifying the common ground between speakers (Clark and Brennan, 1991; Stalnaker, 2002; Asher, 1998). It has long been recognized that an utterance's meaning is subject to contextualized interpretation; this is also the case with gestures in task-oriented dialogues. E.g., depending on the situation, an oriented hand gesture could refer either to an action request ("move it") or a dismissive response ("forget it") (Williams et al., 2019). Even a request for action can be underspecified, denoting either a continuous movement or a movement to a specific location. Similarly, depending on the situation, the definite description in the command "Open the box" may or may not refer uniquely, depending on how many boxes are in the context. These and similar miscommunications or needs for clarification in dialogue have been called situated grounding problems (Marge and Rudnicky, 2013), and can be viewed as problematic in a model that appeals to and encodes both a visual modality and situational information into the dialogue state. What the occurrence of these issues makes apparent is the complexity underlying the interpretation of referential expressions in actual situated dialogues. The richness provided by situationally grounding computer or robot behaviors brings to the surface interpretive questions similar to those of a human in the same scenario.
Some recent efforts have been made to provide contextual grounding to linguistic expressions. For example, work on "multimodal semantic grounding" within the natural language processing and image processing communities has resulted in a number of large corpora linking words or captions with images (cf. Chai et al. (2016)). In this paper, we argue that language understanding and linking to abstract instances of concepts in other modalities is insufficient; situated grounding entails knowledge of situation and contextual entities beyond that provided by a multimodal linking approach (cf. Kennington et al. (2013)). Actual situated meaning is much more involved than aligning captions and bounding boxes in an image: e.g., Hunter et al. (2018) discuss the contribution of nonlinguistic events in situated discourse, and also whether they can be the arguments to discourse relations. Similarly, it is acknowledged that gesture is part of either the direct content of the utterance (Stojnić et al., 2019) or co-suppositional content (Schlenker, 2020). Hence, we must assume that natural interactions with computers and robots have to account for interpreting and generating language and gesture.
Consider the joint activity shown in Fig. 1 above between a mother and her son, where they are engaged in icing cupcakes in a kitchen setting. The dialogue in Fig. 2 illustrates some possible multimodal expressions used in such a context of joint activity between two agents. Viewed as a multi-agent collaborative task interaction, there are some obvious elements constituting the common ground between the two agents in Fig. 1. These include reference to: the participants (agents); shared beliefs and assumptions; shared goals and intentions; the accompanying objects in the situation; the shared perception of these objects; and the surrounding space within which the situation unfolds. Some of these elements are given below in Fig. 3. From this example, it is apparent that we can identify three core aspects of meaning that contribute to the common ground in a multimodal dialogue:
1) co-situatedness and co-perception of the agents, such that they can interpret the same situation from their respective frames of reference. This might be a human and an avatar perceiving the same virtual scene from different perspectives; or a human sharing the perspective of a robot as it navigates through a disaster zone;
2) co-attention of a shared situated reference, which allows more expressiveness in referring to the environment (i.e., through language, gesture, visual presentation, etc.). The human and avatar might refer to objects in multiple modalities with a common model of differences in perspective-relative references (e.g., "your left, my right"); or the human sharing the robot's perspective might be able to direct its motion using reference in natural language ("go through the second door on the left") or gesture ("go this way," with pointing);
3) co-intent of a common goal, such that misaligned relationships between agents reflect a breakdown in the common ground. A human and avatar interacting around a table might seek to collaborate to build a structural pattern known to one or both of them; or the human and robot sharing perspective both have a goal to free someone trapped behind a door in a fire. The robot informs the human about the situation and the human helps the robot problem-solve in real time until the goal is achieved.
What this suggests is that any robust communication between humans and computers or robots will require at least three capabilities: (a) a robust recognition and generation within multiple modalities; (b) an understanding of contextual grounding and co-situatedness in the conversation; and (c) an appreciation of the consequences of behavior and actions taking place throughout the dialogue. To this end, in our work, we have developed a platform making use of semantically interpreted multimodal simulations, which provides an approach to modeling human-computer communication by both situating and contextualizing the interaction, thereby visually demonstrating what the co-agent computer or robot is hearing, seeing, thinking, and doing. This platform is based on VoxML, a modeling language for encoding traditionally non-linguistic, multimodal aspects of meaning associated with the objects that we encounter, manipulate, and explore in our environment. We turn to this discussion in the next section.
VoxML: Encoding Knowledge of Action and Behavior
Here we argue that a significant part of any model for situated communication is an encoding of the semantic type, functions, purposes, and uses introduced by the objects under discussion. I.e., a semantic model of perceived object teleology, as introduced by Qualia Structure, for example (Pustejovsky, 1995), as well as object affordances (Gibson, 1977) is needed to help ground expression meaning to speaker intent.
Objects under discussion in discourse (cf. Ginzburg (1996)) can be partially contextualized through their semantic type and their qualia structure: e.g., a food item has a TELIC value of eat, a pencil a TELIC of write, a chair a TELIC of sit_in, and so forth. However, while an artifact may be designed for a specific purpose, this can only be achieved under specific circumstances. To account for this context-dependence, Pustejovsky (2013) enriches the lexical semantics of words denoting artifacts (the TELIC role specifically) by introducing the notion of an object's habitat, which encodes these circumstances. For example, an object, x, within the appropriate context C, performing the action π will result in the intended or desired resulting state, R, i.e., C → [π]R. That is, if the habitat C (a set of contextual factors) is satisfied, then every time the activity π is performed, the resulting state R will occur. It is necessary to specify the precondition context C, since this is what enables the local modality to be satisfied.
The habitat for an object is situated within an embedding space and then contextualized within it. For example, in order to use a glass to drink from, the concavity has to be oriented upward, the interior must be accessible, and so on. Similarly, a chair must also be oriented up, the seat must be free and accessible, it must be large enough to support the user, etc. An example of what the resulting knowledge structure for the habitat of a chair is shown below, where these constraints are superscripted with " * ".
These distinctions in habitats facilitate both Gibsonian and Telic affordances, as well as transfer learning: learning a Gibsonian affordance relies on information taken from Telic affordances (e.g., a chair's use for sitting), and vice versa (see Section 6.4). In the structure below, the F and C values specify size and part structure, respectively.
(1) λx [chair(x),
        F = [phys(x), on(x, y1)*, in(x, y2)*, clear(x1)*, orient(x, up)*, support(x1, y3)*],
        C = [seat(x1), back(x2), legs(x3)],
        T = λz λe [C → [sit(e, z, x)] R_sit(x)],
        A = [made(e', w, x)]]
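The conditional C → [π]R reads as a gate: the Telic action only yields its result state when the habitat constraints hold. The following Python sketch is purely illustrative (the predicate and field names echo (1) but are our own, not VoxML syntax) and shows one way such a check might be wired up.

```python
# Illustrative sketch of habitat-gated affordances: an action pi only yields
# its result state R if every habitat constraint in C holds (C -> [pi]R).
# The constraint names loosely follow example (1); they are not VoxML syntax.

def chair_habitat_satisfied(obj, env):
    """Check the starred habitat constraints for a chair-like object."""
    return (
        obj["oriented_up"]               # orient(x, up)*
        and obj["seat_clear"]            # clear(x1)*
        and env["supported_on_surface"]  # on(x, y1)*
        and env["located_in_room"]       # in(x, y2)*
        and obj["max_load_kg"] >= env["agent_weight_kg"]  # support(x1, y3)*
    )

def sit(agent, obj, env):
    """Perform the Telic action only when the habitat C is satisfied."""
    if not chair_habitat_satisfied(obj, env):
        return None                      # habitat unsatisfied: no result state R
    return {"event": "sit", "agent": agent, "theme": obj["name"],
            "result": f"R_sit({obj['name']})"}

if __name__ == "__main__":
    chair = {"name": "chair1", "oriented_up": True, "seat_clear": True,
             "max_load_kg": 120}
    env = {"supported_on_surface": True, "located_in_room": True,
           "agent_weight_kg": 70}
    print(sit("agent1", chair, env))
```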
The notion of habitat and the attached behaviors that are associated with an object are further developed in Pustejovsky and Krishnaswamy (2016), where an explicit connection to Gibson's ecological psychology is made, along with a direct encoding of the affordance structure for the object (Gibson, 1977). The affordance structure available to an agent, when presented with an object, is the set of actions that can be performed with it. We refer to these as GIBSONIAN affordances, and they include "grasp", "move", "hold", "turn", etc. This is to distinguish them from more goaldirected, intentionally situated activities, what we call TELIC affordances.
VoxML (Visual Object Concept Modeling Language) is a modeling language for constructing 3D visualizations of concepts denoted by natural language expressions, and is being used as the platform for creating multimodal semantic simulations in the context of human-computer and human-robot communication (Krishnaswamy and Pustejovsky, 2016). It adopts the basic semantic typing for objects and properties from the Generative Lexicon and the dynamic interpretation of event structure developed in Pustejovsky and Moszkowicz (2011), along with a continuation-based dynamic interpretation for both sentence and discourse composition (De Groote, 2001; Barker and Shan, 2014; Asher and Pogodalla, 2010).
VoxML forms the scaffolding we use to encode knowledge about objects, events, attributes, and functions by linking lexemes to their visual instantiations, termed the "visual object concept" or voxeme. Voxemes representing humans or IVAs are lexically typed as agents, but agents, due to their embodiments, ultimately inherit from physical objects and so fall under objects in the taxonomy. In parallel to a lexicon, a collection of voxemes is termed a voxicon. There is no requirement on a voxicon to have a one-to-one correspondence between its voxemes and the lexemes in the associated lexicon, which often results in a many-to-many correspondence. That is, the lexeme plate may be visualized as a [[SQUARE PLATE]], a [[ROUND PLATE]], or other voxemes, and those voxemes in turn may be linked to other lexemes such as dish or saucer. Each voxeme is linked to either an object geometry, a program in a dynamic semantics, an attribute set, or a transformation algorithm, which are all structures easily exploitable in a rendered simulation platform. An OBJECT voxeme's semantic structure provides habitats, which are situational contexts or environments conditioning the object's affordances, which may be either "Gibsonian" affordances (Gibson, 1977) or "Telic" affordances (Pustejovsky, 1995; Pustejovsky, 2013). A habitat specifies how an object typically occupies a space. When we are challenged with computing the embedding space for an event, the individual habitats associated with each participant in the event will both define and delineate the space required for the event to transpire. Affordances are used as attached behaviors, which the object either facilitates through its geometry (Gibsonian) or through the purposes for which it is intended to be used (Telic). For example, a Gibsonian affordance for [[CUP]] is "grasp," while a Telic affordance is "drink from." This allows procedural reasoning to be associated with habitats and affordances, executed in real time in the simulation, inferring the complete set of spatial relations between objects at each frame and tracking changes in the shared context between human and computer.
Indeed, object properties and the events they facilitate are a primary component of situational context. In Fig. 4, we understand that the cup in the orientation shown can be rolled by a human. Were it not in this orientation, it might only be able to be slid across its supporting surface (cf. (2)). This voxeme for [[CUP]] gives the object the appropriate lexical predicate and typing (a cup is a PHYSICAL OBJECT and an ARTIFACT). It denotes that the cup is roughly cylindrical and concave, has a surface and an interior, is symmetrical around the Y-axis and across the associated planes (VoxML adopts the 3D graphics convention where the Y-axis is vertical), and is smaller than and movable by the agent. The remainder of the VoxML typing structure is devoted to habitat and affordance structures, which we discuss below.
(2) Objects encoding semantic type, habitat, and affordances:

cup
  LEXICAL = [PREDICATE = cup, TYPE = physobj, artifact]
  TYPE = [HEAD = cylindroid[1],
          COMPONENTS = surface, interior,
          CONCAVITY = concave,
          ROTATIONAL_SYMMETRY = {Y},
          REFLECTION_SYMMETRY = {XY, YZ}]
  HABITAT = [INTRINSIC = [2]: [CONSTR = {Y > X, Y > Z}, UP = align(Y, E_Y), TOP = top(+Y)],
             EXTRINSIC = [3]: [UP = align(Y, E_⊥Y)]]
  AFFORDANCE_STRUCTURE = [A1 = H[2] → [put(x, on([1]))]support([1], x),
                          A2 = H[2] → [put(x, in([1]))]contain([1], x),
                          A3 = H[2] → [grasp(x, [1])]hold(x, [1]),
                          A4 = H[3] → [roll(x, [1])]R]
  EMBODIMENT = [SCALE = <agent, MOVABLE = true]
In VoxML encodings like (2), bracketed numbers, e.g., [1], are reentrancy indices, such that terms annotated with the same number refer to the same entity. For instance, in habitat [2] (H[2]), the intrinsic habitat where the cup has an upward orientation, if an agent puts some x inside the cup's cylindroid geometry ([1]), the cup contains x.
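For readers who prefer a programmatic view, here is one hypothetical way the [[CUP]] encoding in (2) could be held in memory, with the reentrancy indices as shared string keys. The dictionary layout and helper function are our own illustration, not the VoxSim implementation.

```python
# Hypothetical in-memory form of the [[CUP]] voxeme from (2).
# Reentrancy indices [1], [2], [3] become string keys so that habitats and
# affordances can refer back to the same geometry or habitat.

cup_voxeme = {
    "lex":  {"pred": "cup", "type": ["physobj", "artifact"]},
    "type": {"head": ("cylindroid", "[1]"),
             "components": ["surface", "interior"],
             "concavity": "concave",
             "rotational_symmetry": ["Y"],
             "reflection_symmetry": ["XY", "YZ"]},
    "habitat": {
        "[2]": {"kind": "intrinsic", "constr": "Y > X and Y > Z",
                "up": "align(Y, E_Y)", "top": "top(+Y)"},
        "[3]": {"kind": "extrinsic", "up": "align(Y, E_perpY)"},
    },
    "affordances": [
        {"habitat": "[2]", "act": "put(x, on([1]))", "result": "support([1], x)"},
        {"habitat": "[2]", "act": "put(x, in([1]))", "result": "contain([1], x)"},
        {"habitat": "[2]", "act": "grasp(x, [1])",   "result": "hold(x, [1])"},
        {"habitat": "[3]", "act": "roll(x, [1])",    "result": "R"},
    ],
    "embodiment": {"scale": "<agent", "movable": True},
}

def affordances_in_habitat(voxeme, habitat_index):
    """Return the affordances licensed when the object is in a given habitat."""
    return [a for a in voxeme["affordances"] if a["habitat"] == habitat_index]

# In the upright (intrinsic) habitat [2], rolling is not licensed:
print([a["act"] for a in affordances_in_habitat(cup_voxeme, "[2]")])
```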
One of the major improvements to the notion of habitat developed in VoxML over that given originally in Pustejovsky (2013) is how the preconditions to actions are encoded and scoped. Notice how in the example in (1), the constraint on relative size of the chair to its user (along with all constraints) is specified outside the modal context in the TELIC, while the VoxML representation using Habitats in (3) provides a reentrant binding for the situational variables.
(3) Habitat and affordance structure for chair:

chair
  HABITAT = [INTR = [2]: [CONSTR = {Y > X, Y > Z}, UP = align(Y, E_Y), TOP = top(+Y)]]
  AFFORD_STR = [A1 = H[2] → [sit(y, on([1]))]support([1], y)]
VoxML treats actions and events within a dynamic event semantics as programs (Pustejovsky and Moszkowicz, 2011; Mani and Pustejovsky, 2012). The advantage of adopting a dynamic interpretation of events is that one can map linguistic expressions directly into simulations through an operational semantics (Miller and Johnson-Laird, 1976). Models of processes using updating typically make reference to the notion of a state transition (Harel, 1984). Each event, such as put in (4), can be seen as a traced structure over a Labeled Transition System. The approach is similar in many respects to that developed in both Fernando (2009) and Naumann (2001).
This also allows the system to reason about objects and actions independently. When simulating the objects alone, the simulation presents how the objects change in the world. By removing the objects and presenting only the actions that the viewer would interpret as causing the intended object motion (i.e., an embodied agent pantomiming the object motion), the system presents a "decoupled" interpretation of the action, for example, as an animated gesture that traces the intended path of motion. By composing the two, it demonstrates a particular instantiation of the complete event. This allows an embodied situated simulation approach to easily compose objects with actions by directly interpreting at runtime how the two interact.
For the simulation to run, all parameters (e.g., object location, agent motion, etc.) must have values assigned. The simulation environment itself facilitates the calculation of these values, including a common path that the object and agent's manipulator must follow while completing an action; adhering to these common paths and positional values keeps the two synchronized.
(4) Events as Programs:

put
  LEX = [PRED = put, TYPE = transition_event]
  TYPE = [HEAD = transition,
          ARGS = [A1 = x:agent, A2 = y:physobj, A3 = z:location],
          BODY = [E1 = grasp(x, y),
                  E2 = [while(hold(x, y), move(x, y))],
                  E3 = [at(y, z) → ungrasp(x, y)]]]
The logic of event structure encodes only minimal temporal constraints on how the subevents interact or play out. The rendering engine itself maintains an internal clock and regulates frame rate, and therefore the time it takes to conduct movements, obviating the need to regularly model this temporal aspect in operationally defined events in VoxML, although scalar attributives like faster or slower can provide temporal modifiers.
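To illustrate the "events as programs" reading of (4), the sketch below runs the three subevents of put as discrete state updates over a toy world state. It is a simplification for exposition (assumed names, one-dimensional positions); VoxSim itself interpolates the motion frame by frame under the engine clock as just described.

```python
# A toy operational reading of the put program in (4):
#   E1: grasp(x, y); E2: while holding, move y; E3: at(y, z) -> ungrasp(x, y).
# State is just a dict of object positions plus what the agent is holding.

def grasp(state, agent, obj):
    state["holding"][agent] = obj
    return state

def move_toward(state, obj, goal, step=1.0):
    x = state["pos"][obj]
    state["pos"][obj] = min(x + step, goal) if x < goal else max(x - step, goal)
    return state

def ungrasp(state, agent, obj):
    if state["holding"].get(agent) == obj:
        del state["holding"][agent]
    return state

def put(state, agent, obj, location):
    state = grasp(state, agent, obj)              # E1
    while state["pos"][obj] != location:          # E2: while(hold, move)
        state = move_toward(state, obj, location)
    return ungrasp(state, agent, obj)             # E3: at(y, z) -> ungrasp

if __name__ == "__main__":
    s = {"pos": {"block1": 0.0}, "holding": {}}
    print(put(s, "agent1", "block1", 3.0))        # block1 ends at 3.0, not held
```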
VoxWorld: A Platform for Multimodal Simulations
In this section, we introduce a simulation framework, VoxWorld, that situates an embodied agent in a multimodal simulation, with the capability of understanding and generating language and gesture, and the ability to synthetically perceive an interlocutor human as well as objects in its virtual surroundings, and act on them through a limited inventory of actions.
Modes of Simulation
The concept of simulation has played an important role in both AI and cognitive science for over forty years. The two most common uses for the term simulation as used in computer science and AI include: (a) computational simulation modeling, where variables in a model are set, the model is run, and the consequences of all possible computable configurations become known; and (b) situated embodied simulations, where an environment allows a user to interact with objects in a "virtual or simulated world", where the agent is embodied as a dynamic point-of-view or avatar in a proxy situation. Such simulations are used for training humans in scripted scenarios, such as flight simulators, battle training, and of course, in video gaming, where the goal is to simulate an agent within a situation.
Simulation has yet another meaning, where starting with Craik (1943), we encounter the notion that agents carry a mental model of external reality in their heads. Johnson-Laird (1987) develops his own theory of a mental model, which represents a situational possibility, capturing what is common to all the different ways in which the situation may occur. This is used to drive inference and reasoning, both factual and counterfactual. Simulation Theory, as developed in the philosophy of mind, has focused on the role "mind reading" plays in modeling the mental representations of other agents and the content of their communicative acts (Goldman, 2006). Simulation semantics (Feldman, 2010; Narayanan, 2010) argues that language comprehension is accomplished by means of such mind reading operations. Similarly, within psychology, there is an established body of work arguing for "mental simulations" of future or possible outcomes, as well as interpretations of perceptual input (Barsalou, 1999). These approaches we refer to as embodied theories of mind.
VoxWorld
VoxWorld integrates the functionality and the goals of all three approaches above. The platform situates an embodied agent in a multimodal simulation, with mind-reading interpretive capabilities, facilitated through assignment and evaluation of object and context parameters within the environment being modeled.
Architecture
VoxWorld is based on the semantic scaffold provided by the VoxML modeling language, which provides a dynamic, interpretable model of objects, events, and their properties. This allows us to create visualized simulations of events and scenarios that are rendered analogues to the "mental simulations" discussed above. We restrict mind-reading to events that are tangible and perceptually reflective or transparent. Mental events (desires, beliefs by themselves, etc.) will therefore not be modeled here as simulations themselves, but rather as modal signatures or propositional content of a common ground; that is, an agent's desire for food may manifest as holding their stomach or opening the refrigerator, themselves modeled as distinct events stemming from that cause. VoxSim serves as the event simulator within which these simulations are created and rendered in real time, serving as the computer's method of visually presenting its interpretation of a situation or event.

Because modalities are modes of presentation, a multimodal simulation entails as many presentational modes as there are modalities being modeled. The visual modality of presentation (as in embodied gaming) necessitates "situatedness" of the agent, as do the other perceptual modalities. Therefore, when we speak of multimodal simulations, they are inherently situated. In a human-computer interaction using such a simulation, the simulation is a demonstration of the computational agent's "mind-reading" capabilities (an agent simulation). If the agent is a proxy for the player or user, then the "mind-reading" is just a demonstration of the scenario: the agent observes what the user can, and the user observes the agent act as if they share the same perspective. If, on the other hand, the two are separate (the agent is not a proxy for the user), then the simulation/demonstration communicates the agent's understanding of the user and the interaction. In this case, the demonstration entails the illustration of both the epistemic and the perceptual content of the agent. The agent's actions within the scene facilitate the human's "mind-reading" based on the agent's demonstrated interpretation of propositional content within the scene. We assume an agent has the relevant epistemic knowledge and the inferences reasonably associated with or derivable from these propositions: the agent may know that an object is graspable and can be held in a certain way, which also means that the agent "knows" that it is touchable and movable, and similarly for propositional knowledge associated with logical entailments.
The current architecture of the VoxWorld system is shown in Fig. 5. At the center is VoxSim, the software that handles visual event simulation in three dimensions, written with the Unity game engine. VoxSim connects to a number of other default VoxWorld components, including some native natural language processing capabilities, VoxML encodings/GL knowledge as interpreted through the multimodal semantics discussed in Section 5, and 3rd-party libraries, e.g., QSRLib (Gatsoulis et al., 2016). Individual agents, such as the interactive avatar Diana (discussed below), are arbitrary output interfaces that can also connect to 3rd-party endpoints; in the case of Diana, this is custom gesture and affect recognition (Narayana et al., 2018).
Usage
VoxSim contains scenes in a Blocks World domain, plus a set of more complicated or interesting everyday objects (e.g., cups, plates, books, etc.). In scenes without an avatar, the user can direct the computer to manipulate objects in space or create an avatar that can act upon objects and respond to the user's input. VoxWorld includes other software, models, and interfaces, e.g., to consume input from CNN-based gesture recognizers (Narayana et al., 2018), or to track the agent's epistemic state or knowledge of what its interlocutor knows.
It is straightforward to create new scenes with 3D geometries with packaged code that creates and instantiates voxemes, handles their interactions and performs basic spatial reasoning over them. VoxWorld contains a library of basic motion predicates and methods to compose them into more complex actions using VoxML.
Situated Reasoning in VoxWorld
Situational embodiment takes place in real time, so in a situation where there may be too many variables to predict the state of the world at time t from initial conditions at time 0, situational embodiment within the simulation allows the agent to reason forward about a specific subset of consequences of actions taken at time t, given the agent's current conditions and surroundings. Situatedness and embodiment are required to arrive at a complete, tractable interpretation given any element of non-determinism. E.g., an agent trying to navigate a maze from start to finish could easily do so with a map that provides complete or sufficient information about the scenario. However, if the scene is disrupted (e.g., the floor crumbles, or doors open and shut randomly), the agent would be unable to plot a course to the goal. It would have to start moving, assess circumstances at every timestep, and choose the next move(s) based on them. Situated embodiment allows the agent to assess the next move based on the current set of relations between itself and the environment (e.g., the ability to move forward but not leftward at the current state). This allows reasoning that saves computational resources and performs more analogously to human reasoning.
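The maze example can be made concrete with a small sketch: the agent below never plans a full path, but at every timestep inspects only its currently available moves (its local relations to the environment) and commits to one. The grid, move set, and disruption model are hypothetical and purely illustrative.

```python
# Step-by-step situated choice in a changing grid "maze": at each timestep the
# agent only inspects the cells adjacent to it (its current relations to the
# environment) and picks the admissible move that reduces distance to the goal.
import random

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def next_move(pos, goal, blocked):
    """Choose one move using only currently perceivable local relations."""
    candidates = []
    for name, (dx, dy) in MOVES.items():
        cell = (pos[0] + dx, pos[1] + dy)
        if cell not in blocked:
            dist = abs(goal[0] - cell[0]) + abs(goal[1] - cell[1])
            candidates.append((dist, name, cell))
    if not candidates:
        return None, pos                  # nowhere to go this timestep
    _, name, cell = min(candidates)
    return name, cell

pos, goal = (0, 0), (4, 3)
for t in range(30):
    # The scene is disrupted between timesteps: blocked cells change randomly,
    # so a plan computed at t = 0 would not survive; local assessment does.
    blocked = {(random.randint(0, 4), random.randint(0, 3)) for _ in range(3)}
    blocked.discard(goal)
    move, pos = next_move(pos, goal, blocked)
    if pos == goal:
        print(f"reached goal at t={t}")
        break
else:
    print("goal not reached within 30 timesteps")
```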
Given the continuous tracking of object parameters such as position and orientation, facilitated by a game engine or simulation, and the knowledge of object, event, and functional semantics facilitated by a formal model, an entity's interpretation at runtime can be computed in conjunction with the other entities it is currently interacting with and their properties. One such canonical example would be placing an object [[SPOON]] in an [[IN]] relation with another object [[MUG]] (Fig. 6). The mug has an intrinsic top, which is aligned with the upward Y-axis of the world or embedding space (denoted in VoxML as {align(Y, E_Y), top(+Y)}). The mug is a concave object, and the mug's geometry (the [[CUP]], excluding the handle) has reflectional symmetry across its inherent (object-relative) XY- and YZ-planes, and rotational symmetry around its inherent Y-axis, such that when the object is situated in its inherent top habitat, its Y-axis is parallel to the world's. From this we can infer that the opening (e.g., access to the concavity) must be along the Y-axis. Encoding the object's concavity allows fast computation for physics and collisions using bounding boxes, while still facilitating reasoning over concave objects.

An embodied simulation model such as VoxWorld is an approach that integrates all three aspects of simulation: a situated embodied environment built on a game engine platform. The computer, either as an embodied agent distinct from the viewer, or as the totality of the rendered environment itself, presents an interpretation (mind-reading) of its internal model, down to specific parameter values, which are often assigned for the purposes of testing that model. As such, it provides a rich environment within which to experiment with task-oriented dialogues, such as those explored in Section 6, because of the requirement that the agent have a situated embodiment in which it interprets its environment and its interlocutor. This in turn requires the creation of common ground (CG) between the human and the AI that allows them to communicate.

The parameters within this CG structure can be varied and set according to various experimental configurations, allowing us to both qualitatively and quantitatively measure the effect of different CG structures on the communication. For example, we can experiment with variable settings for the composition of multimodal referring descriptions as well as action or event predicates; that is, what aspects of the content of the expression are conveyed through each modality, speech or gesture? Another variation involves the degree of alignment of information in each modal channel; that is, whether a linguistic expression and gesture are synchronous or asynchronous when generated. The interaction in Fig. 7 illustrates a person directing an avatar to pick up a block, using an asynchronous multimodal expression.

We assume that a simulation is a contextualized 3D virtual realization of both the situational environment and the co-situated agents, as well as the most salient content denoted by communicative acts in discourse between them. The encoding that VoxML provides for objects, with its rich semantic typing and action affordances, enables VoxWorld to describe agent actions as multimodal programs, as well as to identify and track the elements of the common ground that are revealed in the interaction between parties, be they humans or artificially intelligent agents.
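Returning to the [[SPOON]] in [[MUG]] example above, the following sketch shows the kind of bounding-box test that the concavity encoding makes cheap: containment is approximated by checking that the item lies within the container's X/Z extent and starts below the rim along the +Y opening. The geometry is schematic and our own, not VoxSim code.

```python
# Schematic containment test: a concave object whose opening is along +Y
# "contains" another object if that object's box lies within the container's
# X/Z extent and starts below the rim (the top of the container's box).

def contains(container, item):
    cx, cy, cz = container["min"]
    CX, CY, CZ = container["max"]
    ix, iy, iz = item["min"]
    IX, IY, IZ = item["max"]
    within_xz = cx <= ix and IX <= CX and cz <= iz and IZ <= CZ
    below_rim = cy <= iy < CY        # item starts inside; it may poke out the top
    return within_xz and below_rim

mug   = {"min": (0.0, 0.0, 0.0), "max": (1.0, 1.0, 1.0)}
spoon = {"min": (0.4, 0.1, 0.4), "max": (0.6, 1.4, 0.6)}  # handle sticks out of the opening
print(contains(mug, spoon))         # True: [[SPOON]] in [[MUG]]
```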
Multimodal Semantics for Common Ground
The theory of common ground has a rich and diverse literature concerning what is shared or presupposed in human communication (Clark and Brennan, 1991; Stalnaker, 2002; Asher, 1998; Ginzburg and Fernández, 2010). With the presence of a common ground during shared experiences, embodied communication assumes agents can understand one another in a shared context, through the use of co-situational and co-perceptual anchors, and a means for identifying such anchors, such as gesture, gaze, intonation, and language. In this section, we develop a computational model of common ground for multimodal communication.
We assume generally a model of discourse semantics as proposed in Asher and Lascarides (2003), as it facilitates the adoption of a continuation-based semantics for our phrase-level compositional semantics (Barker and Shan, 2014), as well as for discourse, as outlined in De Groote (2001) and Asher and Pogodalla (2010). For the present discussion, however, we will not refer to SDRT representations, but focus instead on the semantics of integrated multimodal expressions in the context of task-oriented dialogue, as presented first in Pustejovsky (2018) and extended here.
Here, we introduce the notion of a common ground structure, the information associated with a state in a dialogue or discourse. We model this as a state monad (Unger, 2011), as illustrated in (5).
(5) State Monad: Mα = State → (α × State)
A state monad corresponds to computations that read and modify a particular state, in this case a state in the discourse. M is a type constructor that constructs a function type taking a state as input and returning a pair of a value and a new or modified state as output. This monad carries the following state information:
(6) a. the communicative act, C_a, performed by an agent, a: a tuple of expressions from the modalities involved. For our present discussion, we restrict this to a linguistic utterance, S (speech), and a gesture, G. There are hence three possible configurations in performing a C: C_a = {(G), (S), (S, G)};
b. A: the agents engaged in communication;
c. B: the shared belief space;
d. P: the objects and relations that are jointly perceived in the environment;
e. E: the embedding space that both agents occupy in the communication.

The common ground structure (CGS) can be represented graphically as in (7), where an agent, a_i, makes a communicative act either through gesture, G, as in (7a), or linguistically, as in (7b).

(7) a. [A: a_1, a_2 | B: ∆ | P: b | E: E] — G_{a_1}
    b. [A: a_1, a_2 | B: ∆ | P: b | E: E] — S_{a_1} = "You_{a_2} see it_b"

(7a) specifies that two agents, a_1 and a_2, co-inhabiting an embedding space, E, within which the experience is embodied, share a set of beliefs, ∆, where they can both see the object, b. Given this representation, the gesture is now situated to refer to objects and knowledge within the CG structure. In (7b), the linguistic expression, S_{a_1}, is grounded relative to the parameters of the common ground, where the indexical you will denote the agent, a_2, and the pronoun it will denote the object, b.
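One operational reading of (5)–(7) is sketched below, under our own naming conventions: a common ground structure is threaded through the dialogue as state, and each communicative act is a function from a CGS to a value paired with an updated CGS. The perceive and announce updates loosely anticipate the PPL and PAL operations discussed at the end of this section; this is an illustration, not the system's implementation.

```python
# A minimal state-monad reading of the common ground structure (CGS):
# a "computation" is any function cgs -> (value, cgs').  The CGS fields follow
# (6): agents A, beliefs B, jointly perceived objects P, embedding space E.
from copy import deepcopy

def unit(value):
    """Inject a value without touching the common ground."""
    return lambda cgs: (value, cgs)

def bind(m, f):
    """Sequence two computations, threading the CGS through both."""
    def run(cgs):
        value, cgs1 = m(cgs)
        return f(value)(cgs1)
    return run

def perceive(obj):
    """PPL-style update: add a jointly perceived object to P."""
    def run(cgs):
        new = deepcopy(cgs)
        new["P"].add(obj)
        return (obj, new)
    return run

def announce(prop):
    """PAL-style update: add a proposition to the shared belief space B."""
    def run(cgs):
        new = deepcopy(cgs)
        new["B"].add(prop)
        return (prop, new)
    return run

cgs0 = {"A": {"a1", "a2"}, "B": set(), "P": set(), "E": "tabletop"}
# (7a)/(7b): both agents come to perceive b, then ground "You see it" against it.
act = bind(perceive("b"), lambda obj: announce(f"see(a2, {obj})"))
value, cgs1 = act(cgs0)
print(value, cgs1["B"], cgs1["P"])
```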
We have augmented and extended the approach taken in Kendon (2004) and Lascarides and Stone (2009), where gestures are simple schemas consisting of distinct sub-gestural phases, where Stroke is the content-bearing phase of the gesture.
(8) G → (Prep) (Pre_stroke Hold) Stroke Retract
In the context of multimodal dialogues and interactions with computational agents and robots, gesture's Stroke will denote a range of primitive action types, ACT , e.g., grasp, pick up, move, throw, pull, push, separate, and put together. There are many ways to convey intent to carry out these actions, but they all involve two characteristics: (a) the action's object is an embodied reference in the common ground; and (b) the gesture sequence must be interpreted dynamically, to correctly compute the end state of the event. To this end, we model two kinds of gestures in our dialogues: (a) establishing a reference; and (b) depicting an action-object pair.
(9) a. Deixis: D_obj → Dir Obj
    b. Action: G_Af → Act Obj

We introduce the notion of an interpreted gesture tree in (10a), which indicates that the gesture D_obj functionally consists of a deictic orientation, Dir, with the demonstratum, d, and the referenced or denoting entity, Obj, denoting b_1.
(10) Interpreted Gesture Tree:
As gesture is intended for visual interpretation, it is directly interpretable by the interlocutor in the context if and only if the value is clearly evident in the common ground, most likely through visual inspection. Directional or orientational information conveyed in a gesture identifies a distinct object or area of the embedding space, E, by directing attention to the End of the designated pointing ray (or cone) trace (Lascarides and Stone, 2009;Lücking et al., 2015;Pustejovsky, 2018).
(11) [[D_obj]] = [[End(ray(d))]]
We model the interpretation function, [[.]], as fully determining the value of the deixis in the context, supplied by the common ground, which we discuss below. In (10b), the action gesture type, G_Af, consists of an action-object pairing, where the action, a, is applied to the object, b_1, in some prototypical manner. The strategies available are outlined in (12)–(14) (e.g., GvP_3 → G_Af D_obj D_dir).

As mentioned above, the deictic gesture in (9a) and (10a) actually serves to indicate both a location and objects within that location, suggesting that deixis denotes a dot object, viz., PHYSOBJ•LOCATION (Pustejovsky, 1995). Either of these type components may be exploited by the deictic reference, which is then interpreted in context, either as a selection (exploiting the PHYSOBJ) or as a destination (exploiting either). For example, should an object b_1 already be selected through a deixis d_a, as in (10a), a subsequent deixis d_b may be interpreted as selecting a destination location in isolation (in which case the interpretation exploits the LOCATION of d_b), or as selecting a location relative to another object (exploiting the PHYSOBJ type of d_b). We discuss this further below.
With conventional treatments of continuation-style passing within the utterance, all linguistic expressions are continuized within the sentence. This has a distinct advantage in multimodal processing, because it allows for an informational distribution among the expressions being used in composition to form larger meanings.
By treating the common ground as a state monad, as described above, we can continuize the composition above the level of the sentence as well. Following De Groote (2001), Asher and Pogodalla (2010) and further developments in Van Eijck and Unger (2010), we represent a context as a stack of items and the type of left contexts to be lists of entities, [e]. Right contexts will be interpreted as continuations: a discourse that requires a left context to yield a truth value. The type of a right context is therefore The discourse updating operation is accomplished through continuation-passing as well, as in (Asher and Pogodalla, 2010). We apply a CPS transformation to arrive at the continuized type for each expression, notated as an overlined expression (Van Eijck and Unger, 2010). Given the current discourse, T , and the new utterance, C, we take the integration of C into T as follows: Through its own continuation, the referent identified in the first deixis, D Obj , is passed to the action (λk.k([[Move]])), while the continuized interpretation of the action delays the computation of its argument until the appropriate binding has been identified. Finally, the goal location for the movement selected for by the move gesture is identified through the action of the continuized location deixis, D Loc . This is illustrated in (18), along with the common ground structure that is computed, shown in (17). Given a description of the gesture grammar as used in our multimodal dialogues, let us explore a communicative act that exploits a combination of both speech and gesture, (S, G). We identify three configurations for how a language-gesture ensemble can be interpreted, depending on which modality carries the majority of semantic content: (a) language with co-speech gesture, where language conveys the bulk of the propositional content and gesture adds situated grounding, affect, effect, and presuppositional force (Cassell et al., 2000;Lascarides and Stone, 2009;Schlenker, 2020); (b) co-gestural speech, where gesture plays this role (Pustejovsky, 2018); and (c) a truly mixed modal expression, where both language and gesture contribute equally to the meaning. In practice, while many of the interaction in our dialogues have this property, the discourse narrative is broadly guided by gesture. For this reason, we model the multimodal interactions as content-bearing gesture with co-gestural speech.
A multimodal communicative act, C, consists of a sequence of gesture-language ensembles, (g i , s i ), where an ensemble is temporally aligned in the common ground. Let us assume that a linguistic subexpression, s, is either a word or full phrase in the utterance, while a gesture, g, comports with the gesture grammar described above.
(19) Co-gestural Speech Ensemble:
G: g_1 . . . g_i . . . g_n
S: s_1 . . . s_i . . . s_n

We assume an aligned language-gesture syntactic structure, for which we provide a continuized semantic interpretation. Both of these are contained in the common ground state monad introduced above in (6). For each temporally indexed and aligned gesture-speech pair, (g, s), we have a continuized interpretation, as shown below. Each modal expression carries a continuation, k_g or k_s, and we denote the alignment of these two continuations as k_s ⊗ k_g, as seen in (20).
(20) λk_s.k_s([[s]]), λk_g.k_g([[g]]), λk_s ⊗ k_g.(k_s ⊗ k_g)([[(s, g)]])
We bind co-gestural speech to specific gestures in the communicative act, within a common ground, CGS. A dashed line in (21) indicates that a co-gestural speech element, S, is aligned with a particular gesture, G. For example, consider the co-gestural speech expression below.
The CG structure for this expression (G: D_Obj, Grab_g; S: "THAT", ___) is shown in (21).
(21) [[⟨THAT, D_Obj⟩, Grab]] = λk_s ⊗ k_g.([[D_Obj]]; λj_g.(([[Grab]] j_g) k_s ⊗ k_g))
Common ground updates will also include executing modal operations over the belief space B, where each new element from the discourse is introduced via a public announcement logic (PAL) formula, and each new perceived object or relation is introduced into P via an analogous public perception logic (PPL) formula (Plaza, 2007; Van Ditmarsch et al., 2007; Van Benthem, 2011). We will use [α]ϕ to denote that an agent "α knows ϕ". Public announcements are implemented as [!ϕ_1]ϕ_2. Any proposition, ϕ, in the common knowledge held by two agents, α and β, is computed as [(α ∪ β)*]ϕ.
Similarly, an agent α's perception is encoded as a set of accessibility relations, α, between situations. What is seen in a situation is encoded as either a proposition, ϕ, or an existential statement of an object, x:

[α]_σ ϕ denotes that agent "α perceives that ϕ".
[α]_σ x denotes that agent "α perceives that there is an x".

Given the theory of two-level affordances proposed here (Gibsonian/Telic), we can naturally think of objects as antecedents to the actions performable on them. For each object in (22), we identify attached behaviors. This naturally suggests that affordances are a subclass of continuations. For example, both cup and block have similar Gibsonian affordance values, but quite distinct Telic affordance values. This can be distinguished by the nature of their respective continuation sets as follows, where sel is a function that selects a suitable discourse antecedent inside the continuation set (Asher and Pogodalla, 2010):

λk_Gib ⊗ k_Telic.(k_Gib ⊗ k_Telic)(cup), grab ⊆ sel(k_Gib), drink ⊆ sel(k_Telic);
λk_Gib ⊗ k_Telic.(k_Gib ⊗ k_Telic)(block), grab ⊆ sel(k_Gib), pick_up ⊆ sel(k_Gib), move ⊆ sel(k_Gib).

This is the subject of ongoing research.
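As a rough illustration of affordances behaving like continuation sets, the sketch below stores Gibsonian and Telic sets for cup and block and lets a sel-like filter pick out the actions a dialogue move can exploit; the entries and the selection rule are our own, for exposition only.

```python
# Objects paired with Gibsonian and Telic continuation sets; sel picks the
# affordances compatible with what the current dialogue move asks for.

AFFORDANCES = {
    "cup":   {"gibsonian": {"grasp", "move", "hold", "turn"},
              "telic":     {"drink_from"}},
    "block": {"gibsonian": {"grasp", "pick_up", "move"},
              "telic":     set()},          # no conventional purpose encoded
}

def sel(requested, continuation_set):
    """Select the requested actions that the continuation set makes available."""
    return requested & continuation_set

# "Grab the cup" exploits the Gibsonian set; "use the cup" would exploit the Telic set.
print(sel({"grasp"}, AFFORDANCES["cup"]["gibsonian"]))     # {'grasp'}
print(sel({"drink_from"}, AFFORDANCES["block"]["telic"]))  # set(): no Telic match
```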
Experiments with Multimodal Dialogues
Aspects of Multimodal Compositionality
In this section, we provide additional formal analysis of experimental data gathered from multimodal dialogues between a human and a computational agent, represented as an avatar in VoxWorld. We examine extracts from dialogues between humans and computational agents in various tasks, in order to examine the nature of the communicative act in the context of the common ground structure. We illustrate how the situated meaning of the multimodal expression is constructed in each case. In particular, we look at three aspects of multimodal compositionality in these examples:
(23) a. generating referring expressions using different modalities;
     b. generating and interpreting action and event expressions;
     c. generating full action descriptions using both gesture and language.

Recall that a multimodal communicative act, C, consists of a sequence of gesture-language ensembles, (g_i, s_i), where an ensemble is temporally aligned in the common ground. For the examples below, we annotate the dialogue with the contribution of both speech and gesture for each agent. Each dialogue turn encodes a multimodal ensemble, S/G, which may or may not be realized in both modalities. In the annotation below, alignment between the modalities is indicated through a temporal index on the appropriate modal expression, e.g., t_i. Since we can use speech and gesture to indicate objects, locations, and actions, we bias our speech recognition toward syntactic categories that represent partial information (e.g., NPs for objects, PPs for locations, VPs for actions), using incremental predictivity (cf. Hough et al. (2015)). We parse input in both directions, so we can take inputs like "put a block on the purple block" without resolving "a block" to the purple block, to prevent the agent from putting the purple block on itself.
Multimodal Referring Expressions
The Embodied Multimodal Referring Expressions (EMRE) dataset (Krishnaswamy and Pustejovsky, 2019) consists of 1,500 visual simulated situations showing an agent (Diana) indicating various objects in a scene, each accompanied by a definite referring expression. Referring expressions may take the form of a deictic gesture only, a spoken description only with no demonstratives (e.g., "the red block in front of the knife and left of the green block"), or a mixed-modality referring expression as in Fig. 9 (right). Fig. 9 (left) shows a sample still that accompanies the utterance, with an equivalent common ground structure on the right.

Figure 9. Left: Sample still from the EMRE dataset (L), with CGS (R) and semantics of the RE (below), showing a continuation for each modality, k_s and k_g, which apply over the object subsequently in the dialogue.
Amazon Mechanical Turk workers evaluated the EMRE dataset on a Likert-type scale for the naturalness of the depicted referring expression for the indicated object. We found a clear preference for the multimodal referring expressions, suggesting that the redundancy provided by co-occurring language and gesture made for the clearest, most natural references to objects.
In Krishnaswamy and Pustejovsky (2020), we extracted formal features from the data as one-hot vectors representing elements of common ground structures. If, in one of the visualized REs, the avatar points to b, one of the jointly perceived objects in P, then ∀b(b ∈ P → K_{α_h} P_{α_a} b ∧ K_{α_a} P_{α_h} b). This demonstrates that the avatar can point, and knows that b is the target: [C_{α_a} = Point_g → Dir_b!] K_{α_h} K_{α_a}(Point_g ∧ target(b)), which is encoded as a single feature. An agent may introduce a new object into the discussion, making common the knowledge of its existence. Or an agent a uses a term t to make public the knowledge of a's interpretation of t.
We used these CGS-extracted features to train a neural net to predict the naturalness of a given referring expression, using the naturalness judgments from the EMRE dataset as ground truth. The EMRE dataset contains situational information about the specific configuration in which the referring expression was generated, and the linguistic referring expression itself, so we tested the effects of including formal, CGS-derived features by training classifiers on combinations of the symbolic situational features, embedding vectors of the linguistic RE, and the CGS-derived features.
We trained a multilayer perceptron, a simple, fast architecture that can distinguish dependencies in linearly-inseparable regions of data. This architecture consists of three fully-connected hidden layers of 32, 128, and 64 units, respectively, prior to a softmax output layer. The layers use tanh, ELU, and tanh activation, respectively; the network is trained with cross-entropy loss and Adam optimization for 1,000 epochs with a batch size of 50. We perform 7-fold cross-validation in order to achieve a more balanced sample across all classes of annotator judgments; k = 7 is chosen here to approximate a leave-one-out cross-validation approach over the 8 annotator judgments on each visualized referring expression. The "most likely" annotator judgment in the EMRE dataset is a probability distribution, so we regard a "correct" prediction by the classifier as one that falls within the correct quintile of the distribution over all annotator judgments of that visualized referring expression. Fig. 10 shows that inclusion of formal features derived from the elements of common ground structures improved classifier prediction accuracy by between 7% and 11% relative to baseline predictions that used the raw features of the EMRE dataset plus sentence embedding representations of the referring expression itself. This suggests that common ground structures provide a dense, interpretable representation of the dialogue state, facilitating the generation of natural, situation-appropriate referring expressions, and that they predict the natural quality of a referring expression beyond other strong predictors of naturalness, e.g., modality.
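The described classifier is small enough to write down; the following is a plausible PyTorch rendering of the stated architecture (hidden layers of 32, 128, and 64 units with tanh, ELU, and tanh activations, softmax handled via cross-entropy loss, Adam, batch size 50), not the code used in the experiments. The input width, class count, and data are placeholders.

```python
# Sketch of the described MLP: three hidden layers (32, 128, 64) with
# tanh / ELU / tanh activations, cross-entropy loss (which applies log-softmax
# internally), and Adam optimization.  Input width and class count are placeholders.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 64, 5   # hypothetical: CGS + situational + RE embedding dims

model = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.Tanh(),
    nn.Linear(32, 128), nn.ELU(),
    nn.Linear(128, 64), nn.Tanh(),
    nn.Linear(64, N_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

# Dummy data standing in for the CGS-derived feature vectors and judgments.
X = torch.randn(400, N_FEATURES)
y = torch.randint(0, N_CLASSES, (400,))

EPOCHS, BATCH = 1000, 50        # as described above
for epoch in range(EPOCHS):
    for i in range(0, len(X), BATCH):
        xb, yb = X[i:i + BATCH], y[i:i + BATCH]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```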
Interruptions and Corrections in Dialogue
Establishing entities in a common ground structure so they can be recombined appropriately and interpreted in context allows us to build asynchronous agent behaviors capable of interruption and correction. Correction (Fig. 12) is currently implemented by performing three functions: (a) Undo, which recontinuizes an expression that has saturated its parameters, i.e., undo k = λk.k(grab); (b) Rewind, which reintroduces the previous monad; and (c) Reassign, which takes the corrected value and assigns it, resulting in M, cg_2 |= grab(white).

In this manner, parameters can be unbound from either the object or the location argument, depending on the typing of the content communicated. Fig. 11 shows one such situation, where the replacement content "on the white one" is evaluated to a location. The state monad containing the location on the blue block is rewound, and the argument reassigned to the location on top of the white block. Had the utterance been "the white one," the action would be reassigned with the white block as the theme, with the previously-existing target location, and Diana would put down the yellow block and put the white block on the blue block.

Figure 12. Correcting deictic reference: the user ambiguously points to the yellow and white blocks. Diana chooses the yellow block (λk.k(grab) ⇒ M, cg_1 |= grab(yellow)). The user corrects her; focus is unbound from the yellow block and assigned to the white block.

Diana may also come across objects with different affordances from those in the typical Blocks World scenario. In these cases, the semantics of each object provided by VoxML allows Diana to learn new gestures associated with specific affordances of specific objects. Fig. 13 specifies such an interaction.
Using a random forest classifier, the gesture the human makes to associate with the specific affordance is situated in the search space defined by the existing known gestures. Those learned grasp semantics can then be propagated down to any other event containing [[GRASP]] as a subevent, as shown in (25).
while(C, A) states that an activity, A, is performed only if a constraint, C, is satisfied at the same moment.
Thus, if the agent encounters a [[SLIDE]] action with an outstanding variable (λy.slide(y, loc)), and the human makes a gesture denoting grasp(plate), the agent can directly lift grasp(plate) to the slide action and apply the argument plate to y: λy.slide(y, loc)@plate ⇒ slide(plate, loc).
(25) grasp(e_1, AG, y); while(hold(AG, y) ∧ on(y, SURF) ∧ ¬at(y, LOC), move_to(e_2, AG, y, LOC)); if(at(y, LOC), ungrasp(e_3, AG, y))

Affordance properties can also be transferred between objects. Given that similar habitats serve as necessary (but not sufficient) preconditions to behaviors (e.g., to be rolled, an apple, a cup, and a bottle must all be turned on their sides), the ability to assess an unknown object relative to known ones allows an agent to transfer properties between them, and so gain a handle on interacting with and discussing a novel object. For example, Diana observes similarities between the cup's habitats and the bottle's (e.g., similar orientation, symmetry, and size constraints), infers that they may share behaviors, and so grasps one like the other. The links between habitats and affordances allow inference of similar objects and behaviors in the current situation.
Model
Over 17 VoxML objects (e.g., (2)), we trained 200-dimensional habitat and affordance embeddings using a Skip-Gram model for 50,000 epochs with window size 3. Objects were represented as averaged habitat or affordance vectors. These embeddings were run through a 7-layer MLP and a 4-layer (1D) CNN, which chose the known object most similar to the unlabeled vector. E.g., a vector representing a plate's affordances was predicted to be similar to a cup or bottle due to its containment affordance.
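A hedged sketch of how such embeddings might be produced with gensim: skip-gram vectors of size 200 with window 3 are trained over tokenized habitat/affordance descriptions, objects are represented by averaged vectors, and the nearest known object is chosen by cosine similarity. The token lists are invented stand-ins for the VoxML encodings, and the epoch count is reduced for the sketch.

```python
# Sketch: train 200-d skip-gram embeddings (window 3) over tokenized
# habitat/affordance strings, average them per object, and rank known
# objects by cosine similarity to an unlabeled object's vector.
# The token sequences below are invented stand-ins for VoxML encodings.
import numpy as np
from gensim.models import Word2Vec

object_tokens = {
    "cup":    ["concave", "align_y_up", "put_in_contain", "grasp_hold", "roll_on_side"],
    "bottle": ["concave", "align_y_up", "put_in_contain", "grasp_hold", "roll_on_side"],
    "block":  ["convex", "put_on_support", "grasp_hold", "stack"],
    "plate":  ["concave", "put_on_support", "put_in_contain", "grasp_hold"],
}

model = Word2Vec(sentences=list(object_tokens.values()),
                 vector_size=200, window=3, sg=1, min_count=1, epochs=200)

def object_vector(tokens):
    """Average the token embeddings to get one vector per object."""
    return np.mean([model.wv[t] for t in tokens], axis=0)

def most_similar(query, known):
    """Return the known object whose averaged vector is closest by cosine."""
    qv = object_vector(object_tokens[query])
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(known, key=lambda o: cos(qv, object_vector(object_tokens[o])))

# Which known object does "plate" behave most like, by affordance embedding?
print(most_similar("plate", ["cup", "bottle", "block"]))
```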
For each object, 8 annotators chose the 2 most similar objects in the vocabulary, in terms of their afforded behaviors, and we performed k-means clustering over these annotations. Our models trained on habitat or affordance embedding vectors successfully predicted an object in the correct cluster 80% of the time (Fig. 14). Diana then enacted known behaviors over novel objects (Fig. 15, top right). Further analysis of these models and their properties are ongoing but these early results show how affordances can be used to train useful models over small sample sizes.
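A minimal sketch of the nearest-known-object step, assuming the habitat/affordance embeddings have already been trained; the object inventory and the use of cosine similarity here are illustrative assumptions (our models use an MLP and a CNN for this step).

import numpy as np

def object_vector(embeddings, tokens):
    """Represent an object as the average of its habitat/affordance token vectors."""
    return np.mean([embeddings[t] for t in tokens], axis=0)

def most_similar_known(unknown_vec, known_objects):
    """Return the known object whose averaged vector is closest (cosine) to the unknown one."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(known_objects, key=lambda name: cos(unknown_vec, known_objects[name]))

# embeddings: dict from habitat/affordance tokens to 200-d vectors (trained with Skip-Gram)
# known = {"cup": object_vector(embeddings, cup_affordances), ...}
# best = most_similar_known(object_vector(embeddings, bottle_affordances), known)
# -> e.g. "cup", licensing transfer of the cup's grasp behavior to the bottle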
Conclusion
Multimodal peer-to-peer interfaces require robust integration of conversational modalities in a naturalistic fashion. We have outlined the first steps toward such integration, based on the logic of our multimodal simulation semantics and a 3D environment as the platform for shared common ground. We give our computational agent a framework for major faculties natively available to humans, by using computer vision techniques to recognize gesture and by laying the groundwork for a modal logic of synthetic vision. The result is a framework and platform that interweave linguistic and non-linguistic modalities in the completion of a shared task by exploiting the relative strengths of linguistic and non-linguistic context to exchange information in situated communication. We have also developed this framework into an interaction with a mobile robot, mediated by a virtual rendition of the environment the robot sees as it explores. The human then gestures to objects and locations on the screen and gives the robot grounded instructions with spoken English and gesture.
We hope to have demonstrated that the notion of situatedness involves embedding linguistic expressions and their grounding within a multimodal semantics. This approach allows environmentally-aware models that can be validated; if one model of expression (e.g., gesture) is insufficiently communicative, another (e.g., language) can be used to examine where it went wrong. Each additional modality provides an avenue through which to validate models of other modalities.
Figure 1. Mother and son interacting in a shared task of icing cupcakes.
A JOINT ACTIVITY. SON: Put it there (gesturing with co-attention)? MOTHER: Yes, go down for about two inches. MOTHER: OK, stop there. (co-attentional gaze) SON: Okay. (stops action) MOTHER: Now, start this one (pointing to another cupcake).
Figure 2. Dialogue.
Figure 3. Elements from the common ground for Figure 1.
Figure 4. Cup in habitat allowing rolling.
Figure 5. VoxWorld architecture schematic.
[[SPOON]] in an [[IN]] relation with another object [[MUG]] (Fig. 6).
Figure 6. [[SPOON IN MUG]].
Figure 7. Asynchronous ensemble dialogue: the human's grasping gesture precedes his linguistic utterance, "Grab it".
[e] → t. Hence, context transitions get the type [e] → [e] → t; they are characteristic functions of binary relations on contexts. The continuized semantics for gesture phrases is in (15).
(15) a. S_G → (NP) GvP; [[S]] = ([[NP]] [[GvP]])
     b. GvP_1 → G_af D_Obj; [[GvP_1]] = λj.([[D_Obj]]; λj'.(([[G_af]] j') j))
     c. GvP_2 → G_af D_Obj D_Loc; [[GvP_2]] = λk.([[D_Loc]]; λj.([[D_Obj]]; λj'.(([[G_af]] j') j) k))
     d. GvP_3 → G_af D_Obj D_Dir; [[GvP_3]] = λk.([[D_Dir]]; λj.([[D_Obj]]; λj'.(([[G_af]] j') j) k))
(16) [[(T.C)]]^{M,cg} = λk.[[T]](λn.[[C]](λm.k(m n)))
To illustrate how continuations help in the interpretation of gesture sequences, consider a single-modality gesture imperative.
SINGLE MODALITY (GESTURE) IMPERATIVE. HUMAN_1: G = [points to the purple block]_t1. HUMAN_2: G = [makes move gesture]_t2. HUMAN_3: G = [points to the red block]_t3.
[[D_Obj.Move.D_Loc]] = λk.([[D_Loc]]; λj.([[D_Obj]]; λj'.(([[Move]] j') j) k))
(22) a. block: Pick me up!, Move me!
     b. cup: Pick me up!, Drink what's in me!
     c. knife: Pick me up!, Cut that with me!
Figure 8. Co-gestural speech imperative.
Figure 10. Classification accuracy using formal features (mean and standard deviation).
Figure 11. Correcting and undoing an action.
6.4. Affordance Structure and Transfer Learning
AFFORDANCE LEARNING IN KITCHENWORLD. HUMAN: S = The plate. DIANA: S = Okay_t1. G = [points to the plate]_t1. HUMAN: G = [makes "claw down" gesture]. DIANA: S = Should I grasp it like this_t2? G = [grasps plate from the side]_t2. HUMAN: S = Yes. DIANA: S = Is there a gesture for that? HUMAN: G = [makes "grasp plate" gesture].
Figure 13. Diana and human interacting.
Figure 14. Prediction accuracy w/ 6 means.
TRANSFER OF OBJECT AFFORDANCES. HUMAN: S = What is that?_t1 G = [points to the cup]_t1. DIANA: S = That's a cup_t2. G = [points to the cup]_t2. HUMAN: S = What is that?_t3 G = [points to the bottle]_t3. DIANA: S = I don't know, but I could grasp it like a cup_t4. G = [points to the bottle]_t4. HUMAN: S = Grab it. DIANA: G = [grasps bottle from the side]_t5.
Figure 15. Transferring affordance properties through dialogue.
Agents: mother, son
Shared goals: baking, icing
Beliefs, desires, intentions: Mother knows how to ice, icing goes on cupcakes, etc.; Mother is teaching son
Objects: cupcakes, plate, knives, pastry bag, icing, gloves
Shared perception: the objects on the table
Shared space: kitchen
This is similar in many respects to the representations introduced in Cooper and Ginzburg, Ginzburg and Fernández (2010), and Dobnik et al. (2013) for modeling action and control with robots.
Acknowledgements. This work was supported by Contract W911NF-15-C-0238 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO). Approved for Public Release, Distribution Unlimited. The views expressed herein are ours and do not reflect the official policy or position of the Department of Defense or the U.S. Government. We would like to thank Ken Lai, Bruce Draper, Ross Beveridge, Francisco Ortega, and Lucia Donatelli for their comments and suggestions.
Common ground, corrections and coordination. N Asher, Journal of Semantics. Asher N., "Common ground, corrections and coordination", Journal of Semantics, 1998.
N Asher, A Lascarides, Logics of conversation. Cambridge University PressAsher N., Lascarides A., Logics of conversation, Cambridge University Press, 2003.
SDRT and continuation semantics. N Asher, S Pogodalla, JSAI International Symposium on Artificial Intelligence. SpringerAsher N., Pogodalla S., "SDRT and continuation semantics", JSAI International Symposium on Artificial Intelligence, Springer, p. 3-15, 2010.
C Barker, C.-C Shan, Oxford studies in theoretical linguistics. 53Continuations and natural languageBarker C., Shan C.-c., Continuations and natural language, vol. 53, Oxford studies in theoreti- cal linguistics, 2014.
Perceptions of perceptual symbols. L W Barsalou, Behavioral and brain sciences. 224Barsalou L. W., "Perceptions of perceptual symbols", Behavioral and brain sciences, vol. 22, n o 4, p. 637-660, 1999.
Coordination and context-dependence in the generation of embodied conversation. J Cassell, M Stone, H Yan, Proc. of 1st Int. Conf. on NLG, ACL. of 1st Int. Conf. on NLG, ACLCassell J., Stone M., Yan H., "Coordination and context-dependence in the generation of em- bodied conversation", Proc. of 1st Int. Conf. on NLG, ACL, p. 171-178, 2000.
Collaborative language grounding toward situated humanrobot dialogue. J Y Chai, R Fang, C Liu, L She, 37AI MagazineChai J. Y., Fang R., Liu C., She L., "Collaborative language grounding toward situated human- robot dialogue", AI Magazine, vol. 37, n o 4, p. 32-45, 2016.
Grounding in communication. H H Clark, S E Brennan, Perspectives on socially shared cognition. 13Clark H. H., Brennan S. E., "Grounding in communication", Perspectives on socially shared cognition, vol. 13, n o 1991, p. 127-149, 1991.
Type Theory with Records for Natural Language Semantics. R Cooper, J Ginzburg, 375The handbook of contemporary semantic theorypCooper R., Ginzburg J., "Type Theory with Records for Natural Language Semantics", The handbook of contemporary semantic theoryp. 375, 2015.
The nature of explanation. K J W Craik, Cambridge UKCambridge UniversityCraik K. J. W., The nature of explanation, Cambridge University, Cambridge UK, 1943.
Type raising, continuations, and classical logic. P De Groote, Proceedings of the 13th Amsterdam Colloquium. the 13th Amsterdam ColloquiumDe Groote P., "Type raising, continuations, and classical logic", Proceedings of the 13th Ams- terdam Colloquium, p. 97-101, 2001.
Modelling language, action, and perception in type theory with records. S Dobnik, R Cooper, S Larsson, Constraint Solving and Language Processing. SpringerDobnik S., Cooper R., Larsson S., "Modelling language, action, and perception in type theory with records", Constraint Solving and Language Processing, Springer, p. 70-91, 2013.
Embodied language, best-fit analysis, and formal compositionality. J Feldman, Physics of life reviews. 74Feldman J., "Embodied language, best-fit analysis, and formal compositionality", Physics of life reviews, vol. 7, n o 4, p. 385-410, 2010.
Situations in LTL as strings. T Fernando, Information and Computation. 20710Fernando T., "Situations in LTL as strings", Information and Computation, vol. 207, n o 10, p. 980-999, 2009.
How people talk with robots: Designing dialog to reduce user uncertainty. K Fischer, 32AI MagazineFischer K., "How people talk with robots: Designing dialog to reduce user uncertainty", AI Magazine, vol. 32, n o 4, p. 31-38, 2011.
QSRlib: a software library for online acquisition of Qualitative Spatial Relations from Video. Y Gatsoulis, M Alomari, C Burbridge, C Dondrup, P Duckworth, P Lightbody, M Hanheide, N Hawes, D Hogg, A Cohn, Gatsoulis Y., Alomari M., Burbridge C., Dondrup C., Duckworth P., Lightbody P., Hanheide M., Hawes N., Hogg D., Cohn A. et al., "QSRlib: a software library for online acquisition of Qualitative Spatial Relations from Video", 2016.
The Theory of Affordances", Perceiving, Acting, and Knowing: Toward an ecological psychologyp. J J Gibson, Gibson J. J., "The Theory of Affordances", Perceiving, Acting, and Knowing: Toward an eco- logical psychologyp. 67-82, 1977.
Interrogatives: Questions, facts and dialogue. J Ginzburg, The handbook of contemporary semantic theory. Ginzburg J., "Interrogatives: Questions, facts and dialogue", The handbook of contemporary semantic theory, p. 359-423, 1996.
Computational Models of Dialogue. J Ginzburg, R Fernández, The handbook of computational linguistics and natural language processing. 571Ginzburg J., Fernández R., "Computational Models of Dialogue", The handbook of computa- tional linguistics and natural language processing, vol. 57, p. 1, 2010.
Simulating minds: The philosophy, psychology, and neuroscience of mindreading. A I Goldman, Oxford University PressGoldman A. I., Simulating minds: The philosophy, psychology, and neuroscience of mindread- ing, Oxford University Press, 2006.
Dynamic Logic. D Harel, Handbook of Philosophical Logic. M. Gabbay, F. GunthnerIIExtensions of Classical LogicReidelHarel D., "Dynamic Logic", in M. Gabbay, F. Gunthner (eds), Handbook of Philosophical Logic, Volume II: Extensions of Classical Logic, Reidel, p. 497-604, 1984.
Incremental semantics for dialogue processing: Requirements, and a comparison of two approaches. J Hough, C Kennington, D Schlangen, J Ginzburg, Hough J., Kennington C., Schlangen D., Ginzburg J., "Incremental semantics for dialogue pro- cessing: Requirements, and a comparison of two approaches", 2015.
A formal semantics for situated conversation. J Hunter, N Asher, A Lascarides, Semantics and PragmaticsHunter J., Asher N., Lascarides A., "A formal semantics for situated conversation", Semantics and Pragmatics, 2018.
How could consciousness arise from the computations of the brain. P Johnson-Laird, Mindwaves. Johnson-Laird P., "How could consciousness arise from the computations of the brain", Mindwaves, Oxford: Basil Blackwell, p. 247-257, 1987.
Gesture: Visible action as utterance. A Kendon, Cambridge University PressKendon A., Gesture: Visible action as utterance, Cambridge University Press, 2004.
Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information. C Kennington, S Kousidis, D Schlangen, Proceedings of SigDial 2013. SigDial 2013Kennington C., Kousidis S., Schlangen D., "Interpreting situated dialogue utterances: an update model that uses speech, gaze, and gesture information", Proceedings of SigDial 2013, 2013.
VoxSim: A Visual Platform for Modeling Motion Language. N Krishnaswamy, J Pustejovsky, Proceedings of COLING 2016, ACL. COLING 2016, ACLKrishnaswamy N., Pustejovsky J., "VoxSim: A Visual Platform for Modeling Motion Lan- guage", Proceedings of COLING 2016, ACL, 2016.
Generating a Novel Dataset of Multimodal Referring Expressions. N Krishnaswamy, J Pustejovsky, Proc. of 13th Int. Conference on Computational Semantics. of 13th Int. Conference on Computational SemanticsKrishnaswamy N., Pustejovsky J., "Generating a Novel Dataset of Multimodal Referring Ex- pressions", Proc. of 13th Int. Conference on Computational Semantics, p. 44-51, 2019.
A Formal Analysis of Multimodal Referring Strategies Under Common Ground. N Krishnaswamy, J Pustejovsky, Proceedings of The 12th LREC. The 12th LRECKrishnaswamy N., Pustejovsky J., "A Formal Analysis of Multimodal Referring Strategies Un- der Common Ground", Proceedings of The 12th LREC, p. 5919-5927, 2020.
Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems. F Landragin, Signal Processing, 86. Landragin F., "Visual perception, language and gesture: A model for their understanding in multimodal dialogue systems", Signal Processing, vol. 86, n o 12, p. 3578-3595, 2006.
A formal semantic analysis of gesture. A Lascarides, M Stone, Journal of Semantics. Lascarides A., Stone M., "A formal semantic analysis of gesture", Journal of Semantics, 2009.
Pointing and reference reconsidered. A Lücking, T Pfeiffer, H Rieser, Journal of Pragmatics. 77Lücking A., Pfeiffer T., Rieser H., "Pointing and reference reconsidered", Journal of Pragmat- ics, vol. 77, p. 56-79, 2015.
Interpreting Motion: Grounded Representations for Spatial Language. I Mani, J Pustejovsky, Oxford University PressMani I., Pustejovsky J., Interpreting Motion: Grounded Representations for Spatial Language, Oxford University Press, 2012.
Towards evaluating recovery strategies for situated grounding problems in human-robot dialogue. M Marge, A I Rudnicky, IEEE RO-MAN. IEEEMarge M., Rudnicky A. I., "Towards evaluating recovery strategies for situated grounding prob- lems in human-robot dialogue", 2013 IEEE RO-MAN, IEEE, p. 340-341, 2013.
Language and perception. G A Miller, P N Johnson-Laird, Belknap PressMiller G. A., Johnson-Laird P. N., Language and perception., Belknap Press, 1976.
Cooperating with Avatars Through Gesture, Language and Action. P Narayana, N Krishnaswamy, I Wang, R Bangar, D Patil, G Mulay, K Rim, R Beveridge, J Ruiz, J Pustejovsky, B Draper, Intelligent Systems Conference (IntelliSys). Narayana P., Krishnaswamy N., Wang I., Bangar R., Patil D., Mulay G., Rim K., Beveridge R., Ruiz J., Pustejovsky J., Draper B., "Cooperating with Avatars Through Gesture, Language and Action", Intelligent Systems Conference (IntelliSys), 2018.
Mind changes: A simulation semantics account of counterfactuals. S Narayanan, Cognitive Science. Narayanan S., "Mind changes: A simulation semantics account of counterfactuals", Cognitive Science, 2010.
Aspects of changes: a dynamic event semantics. R Naumann, Journal of semantics. 18Naumann R., "Aspects of changes: a dynamic event semantics", Journal of semantics, vol. 18, p. 27-81, 2001.
Logics of public communications. J Plaza, Synthese, 158. Plaza J., "Logics of public communications", Synthese, vol. 158, n o 2, p. 165-179, 2007.
The Generative Lexicon. J Pustejovsky, MIT Press. Pustejovsky J., The Generative Lexicon, MIT Press, 1995.
Dynamic Event Structure and Habitat Theory. J Pustejovsky, Proc. of the 6th International Conference on Generative Approaches to the Lexicon (GL2013), ACL. of the 6th International Conference on Generative Approaches to the Lexicon (GL2013), ACLPustejovsky J., "Dynamic Event Structure and Habitat Theory", Proc. of the 6th International Conference on Generative Approaches to the Lexicon (GL2013), ACL, p. 1-10, 2013.
From actions to events: Communicating through language and gesture. J Pustejovsky, Interaction Studies. 192Pustejovsky J., "From actions to events: Communicating through language and gesture", Inter- action Studies, vol. 19, n o 1-2, p. 289-317, 2018.
VoxML: A Visualization Modeling Language. J Pustejovsky, N Krishnaswamy, Proceedings of LREC. LRECPustejovsky J., Krishnaswamy N., "VoxML: A Visualization Modeling Language", Proceed- ings of LREC, 2016.
The qualitative spatial dynamics of motion. J Pustejovsky, J Moszkowicz, The Journal of Spatial Cognition and Computation. Pustejovsky J., Moszkowicz J., "The qualitative spatial dynamics of motion", The Journal of Spatial Cognition and Computation, 2011.
Toward humanlike task-based dialogue processing for human robot interaction. M Scheutz, R Cantrell, P Schermerhorn, 32Ai MagazineScheutz M., Cantrell R., Schermerhorn P., "Toward humanlike task-based dialogue processing for human robot interaction", Ai Magazine, vol. 32, n o 4, p. 77-84, 2011.
Gestural grammar. P Schlenker, Natural Language & Linguistic Theory, p. 1-50. Schlenker P., "Gestural grammar", Natural Language & Linguistic Theory, p. 1-50, 2020.
Common ground. R Stalnaker, Linguistics and philosophy. 25Stalnaker R., "Common ground", Linguistics and philosophy, vol. 25, n o 5-6, p. 701-721, 2002.
Pointing things out: in defense of attention and coherence. U Stojnić, M Stone, E Lepore, Linguistics and Philosophy, p. 1-10. Stojnić U., Stone M., Lepore E., "Pointing things out: in defense of attention and coherence", Linguistics and Philosophy, p. 1-10, 2019.
Dynamic semantics as monadic computation. C Unger, JSAI International Symposium on Artificial Intelligence. SpringerUnger C., "Dynamic semantics as monadic computation", JSAI International Symposium on Artificial Intelligence, Springer, p. 68-81, 2011.
J Van Benthem, Logical dynamics of information and interaction. CambridgeVan Benthem J., Logical dynamics of information and interaction, Cambridge, 2011.
Dynamic epistemic logic. H Van Ditmarsch, W Van Der Hoek, B Kooi, Springer Science & Business Media337Van Ditmarsch H., van Der Hoek W., Kooi B., Dynamic epistemic logic, vol. 337, Springer Science & Business Media, 2007.
Computational semantics with functional programming. J Van Eijck, C Unger, CambridgeVan Eijck J., Unger C., Computational semantics with functional programming, Cambridge, 2010.
Mixed reality deictic gesture for multimodal robot communication. T Williams, M Bussing, S Cabrol, E Boyle, N Tran, IEEE Int'l Conf. on HRI. Williams T., Bussing M., Cabrol S., Boyle E., Tran N., "Mixed reality deictic gesture for multi- modal robot communication", IEEE Int'l Conf. on HRI, IEEE, p. 191-201, 2019. |
226,283,857 | Hierarchical Region Learning for Nested Named Entity Recognition | Named Entity Recognition (NER) is deeply explored and widely used in various tasks. Usually, some entity mentions are nested in other entities, which leads to the nested NER problem. Leading region based models face both the efficiency and effectiveness challenge due to the high subsequence enumeration complexity. To tackle these challenges, we propose a hierarchical region learning framework to automatically generate a tree hierarchy of candidate regions with nearly linear complexity and incorporate structure information into the region representation for better classification. Experiments on benchmark datasets ACE-2005, GENIA and JNLPBA demonstrate competitive or better results than state-of-theart baselines. | [
202784449,
174797955,
52916675,
182953033,
6628106,
14068874,
48352299,
7985741,
53080784,
195218673,
6042994,
195766996,
10573012,
1957433
] | Hierarchical Region Learning for Nested Named Entity Recognition
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 16-20, 2020.
Xinwei Long [email protected]
Institute of Software
Chinese Academy of Sciences
Beijing, China
University of Chinese Academy of Sciences
Beijing, China
Shuzi Niu
Institute of Software
Chinese Academy of Sciences
Beijing, China
Yucheng Li [email protected]
Institute of Software
Chinese Academy of Sciences
Beijing, China
Hierarchical Region Learning for Nested Named Entity Recognition
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings
the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, Association for Computational Linguistics, November 16-20, 2020, p. 4788
Named Entity Recognition (NER) is deeply explored and widely used in various tasks. Usually, some entity mentions are nested in other entities, which leads to the nested NER problem. Leading region based models face both the efficiency and effectiveness challenge due to the high subsequence enumeration complexity. To tackle these challenges, we propose a hierarchical region learning framework to automatically generate a tree hierarchy of candidate regions with nearly linear complexity and incorporate structure information into the region representation for better classification. Experiments on benchmark datasets ACE-2005, GENIA and JNLPBA demonstrate competitive or better results than state-of-the-art baselines.
Introduction
As a fundamental information extraction task, Named Entity Recognition (NER) is widely used in various downstream tasks, such as entity linking and entity search. Most studies assign a label to each token of the sequence for the flat NER problem (Lample et al., 2016). However, it is common that entities are embedded in other entities in many domains (Kim et al., 2003; Ringland et al., 2019). The example from the ACE-2005 dataset shown in Fig. 1 illustrates that the top-level PER entity includes a nested entity with the ORG label. How to recognize all entities recursively from innermost to outermost is referred to as the nested NER problem. Existing approaches mainly solve the nested NER problem by classifying all candidate subsequences (a.k.a. regions). The key to region based methods lies in candidate region detection. One kind is the brute-force method (Sohrab and Miwa, 2018), which enumerates all possible O(n^2) subsequences for each sentence with n words. The other kind (Zheng et al., 2019) generates and classifies candidate regions in a two-stage paradigm, often leading to cascaded errors. Thus region based methods face efficiency and effectiveness challenges.
To tackle these challenges, we propose a Hierarchical Region learning framework, referred to as HiRe. First, inspired by the constituent parsing tree at the top of Fig. 1 and its neural syntactic distance (Shen et al., 2018), we introduce a coherence measure between adjacent regions. Then we generate a region tree for each sentence by merging two adjacent regions recursively based on this region coherence measure in a bottom-up manner. Finally, hierarchical regions are classified based on the boundary and merging word representations. We train the hierarchical region generation and classification tasks simultaneously.
Experimental results on three benchmark datasets ACE-2005, GENIA and JNLPBA demonstrate that HiRe shows competitive or better performance than baselines. HiRe generates only O(n) candidate regions, about 77.9% fewer than the brute-force method, and achieves 98.1% true region recall on the GENIA dataset, a good trade-off between efficiency and effectiveness.
Related Work
Given a sentence of n words (w_1, ..., w_n), the nested named entity recognition task aims at identifying all the entities, especially when one entity subsequence (w_i, ..., w_j), i < j, contains others (w_p, ..., w_q), i ≤ p < q ≤ j. According to which problem the task is reduced to, existing nested NER models mainly fall into three categories.
Sequence labeling models assign multiple labels to each word assuming that one word may belong to multiple entities, such as linearization method (Straková et al., 2019) and layered CRF (Ju et al., 2018).
Structured label classification models capture the label relationships within a sentence for better performance. (Lu and Roth, 2015; Wang and Lu, 2018) proposed hypergraph models to describe the label relationships, and either human-designed or latent features were adopted for classification.
Region based models were summarized by (Lin et al., 2019) as obtaining all possible regions and assigning labels to them. The key to region classification models is how to obtain candidate regions from a sentence. One approach is the brute-force method (Sohrab and Miwa, 2018; Xia et al., 2019), which enumerates all subsequences of a sentence for classification with high time complexity. The other is to formulate the task as a two-stage paradigm. (Zheng et al., 2019; Tan et al., 2020) detected a small set of candidate regions with high efficiency, but only about 80% of entities could be found in the first stage, creating a performance bottleneck. Some studies (Finkel and Manning, 2009) leveraged external knowledge, such as constituent parsing trees, to guide the first step, which achieved impressive performance but suffered from cubic time complexity and error propagation from external tools. Most methods above represented the region as the average or weighted sum of word representations, ignoring the region structure.
Methods
To tackle efficiency and effectiveness challenges in region based methods, we propose a Hierarchical Region learning framework for nested NER problem, namely HiRe in Fig. 2.
Overall Architecture
Specifically, we first obtain word representations through the encoder layer. Then, we introduce a word coherence measure based on word representations through the word coherence layer. Next, a region coherence measure is derived from the word coherence, two adjacent regions are recursively merged based on this measure, and a tree of regions is generated for each sentence. Finally, we use a ranking loss over region boundaries for the region generation task and a cross-entropy loss for labeling candidate regions for the entity recognition task in a multi-task framework. Encoder Layer. Consider the i-th word w_i in a sentence with n words; we represent it by concatenating its word embedding x^w_i, part-of-speech (POS) embedding x^p_i, and character-level embedding x^c_i. The character-level embeddings are generated by a convolutional neural network module with the same setting as (Yang et al., 2018) to capture the orthographic and morphological features of the word. Then, we employ a bi-directional LSTM to obtain the long-term context-aware representation as:
x^t_i = [x^w_i ; x^p_i ; x^c_i],            (1)
→h_i = LSTM(x^t_i, →h_{i-1}),               (2)
←h_i = LSTM(x^t_i, ←h_{i+1}),               (3)
h_i = [→h_i ; ←h_i],                        (4)
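A minimal PyTorch sketch of this encoder is shown below for concreteness; the embedding sizes and the character-CNN configuration are illustrative assumptions based on the hyperparameters reported in Section 4.1 rather than the released implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_words, n_pos, n_chars,
                 word_dim=200, pos_dim=50, char_dim=100, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(word_dim + pos_dim + char_dim, hidden,
                              num_layers=2, bidirectional=True, batch_first=True)

    def forward(self, words, pos, chars):
        # chars: (batch, seq_len, max_word_len) character ids
        b, n, L = chars.shape
        c = self.char_emb(chars).view(b * n, L, -1).transpose(1, 2)
        c = torch.max(self.char_cnn(c), dim=2).values.view(b, n, -1)   # per-word char feature
        x = torch.cat([self.word_emb(words), self.pos_emb(pos), c], dim=-1)   # Eq. (1)
        h, _ = self.bilstm(x)                                                 # Eqs. (2)-(4)
        return h   # (batch, seq_len, 2*hidden): [→h_i ; ←h_i]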
Word Coherence. Word context representations {h_t}_{t=0}^{n-1} are fed to a convolutional kernel with window size 2 to obtain local features between adjacent words: g_0, g_1, ..., g_{n-2} = CONV(h_0, h_1, ..., h_{n-1}). These features are then input into a 2-layer feed-forward network (FFN) to obtain the word coherence measures {d_t}_{t=0}^{n-2}, where d_t indicates the affinity between words w_t and w_{t+1}. The higher this measure, the more coherent the adjacent words.
Region Coherence. A subsequence of the sentence composed of consecutive words is called a region, denoted R_{i,j} = (w_i, ..., w_j). Based on the word coherence measure, we define the region coherence between two adjacent regions in terms of the coherence of their adjacent words in Eq. (5). It indicates how likely two adjacent regions are to form a whole.
d(R_{i,j}, R_{j+1,k}) = d_j,   i ≤ j < k,            (5)
Hierarchical Region Generation. Based on the region coherence measure, we build the region hierarchy from the bottom up recursively as follows. At the 1st level, for initialization, each word is treated as a region and a leaf node in the tree. At the t-th level, two regions R_{i,k} and R_{k+1,j} are merged into R_{i,j} at the merging point k if d(R_{i,k}, R_{k+1,j}) > d(R_{p,i-1}, R_{i,k}) and d(R_{i,k}, R_{k+1,j}) > d(R_{k+1,j}, R_{j+1,q}). R_{i,j} is then used at the following levels instead of R_{i,k} and R_{k+1,j}. Because each k has one chance to be the merging point, this merging operation is repeated at most n - 1 times. The process generates about O(n) candidate regions. Fig. 3 illustrates this generation process for the example sentence from Fig. 1, where blocks with the same color belong to the same region. Practically, it is not essential to generate the whole tree given the constraint of a maximum entity length, which further reduces the number of candidate regions.
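The following is a minimal sketch, under our own simplifying assumptions, of how such a bottom-up merge could be implemented given the word coherence scores d; it greedily merges the most coherent adjacent pair at each step, which is one way to realize the condition above, and is not taken from the authors' released code.

def generate_regions(d, n, max_len=None):
    """Bottom-up merging given adjacent-word coherence scores d[0..n-2].

    Returns the list of candidate regions (i, j) produced by successive merges.
    Each merge joins the two adjacent regions with the highest remaining coherence.
    """
    regions = [(i, i) for i in range(n)]    # level 1: every word is its own region
    candidates = []
    scores = list(d)                        # scores[k] = coherence across boundary k
    while len(regions) > 1:
        k = max(range(len(scores)), key=lambda t: scores[t])   # best merging point
        left, right = regions[k], regions[k + 1]
        merged = (left[0], right[1])
        if max_len is None or merged[1] - merged[0] + 1 <= max_len:
            candidates.append(merged)
        regions[k:k + 2] = [merged]         # replace the two regions by their union
        scores.pop(k)                       # the boundary at k no longer exists
    return candidates

# Example: 5 words, coherence highest between words 2 and 3.
# generate_regions([0.1, 0.7, 0.9, 0.2], n=5)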
Region Classification. Here a region is composed of two sub-regions. For a region R_{i,j} with merging point k generated by the above steps, we adopt g_k as the representation of its sub-regions R_{i,k} and R_{k+1,j}. To make the classifier more sensitive to entity boundaries, both boundary and merging word representations are concatenated as region R_{i,j}'s representation v_{[i,j]} = [h_i; g_k; h_j], namely the hierarchical region representation. If i = j, we set v_{[i,i]} to [h_i; h_i; h_i].
Next, a 2-layer feed-forward network predicts the probability that region R_{i,j} belongs to entity category c, as in Eq. (6):
p(c | R_{i,j}) = softmax(FFN(v_{[i,j]}))            (6)
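A corresponding sketch of the hierarchical region representation and the classification head of Eq. (6); the hidden size and label inventory here are illustrative assumptions.

import torch
import torch.nn as nn

class RegionClassifier(nn.Module):
    """Scores v_[i,j] = [h_i; g_k; h_j] over entity categories (plus a non-entity label)."""
    def __init__(self, dim, n_labels, hidden=256):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(3 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_labels))

    def forward(self, h, g, regions):
        # h: (seq_len, dim) word contexts; g: (seq_len-1, dim) adjacent-pair features
        # regions: list of (i, k, j) with merging point k (use i = k = j for single words)
        reps = []
        for i, k, j in regions:
            mid = h[i] if i == j else g[k]        # v_[i,i] = [h_i; h_i; h_i]
            reps.append(torch.cat([h[i], mid, h[j]]))
        return self.ffn(torch.stack(reps))        # logits; softmax is applied in the loss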
Learning and Inference
We train both the hierarchical region generation and classification tasks simultaneously in a multi-task framework as Eq. (7).
L = α L_region + (1 - α) L_label            (7)
For the hierarchical region generation task, we propose to optimize the pairwise ranking loss L_region in Eq. (8) to emphasize the partial order between inner and boundary word coherence rather than their values. The predicted partial order is determined by the learned boundary and inner word coherence scores. The loss function reduces, for each region, to the difference between the predicted and ground truth region hierarchy.
However, the ground truth partial order is unavailable in the datasets. To solve this problem, we generate ground truth coherence scores based on the rule that the boundary word (w_{i-1} and w_j) coherence is always smaller than the inner word ({w_t}_{t=i}^{j-1}) coherence for each ground truth entity region R_{i,j}. Considering the hierarchy of entities, we define the ground truth word coherence as a logarithmic function of length. Specifically, ground truth boundary word coherences d̄_{i-1} and d̄_j are defined as -(⌊log_2(j - i + 2)⌋ + 1). Ground truth inner word coherences {d̄_m}_{m=i}^{j-1} are randomly generated from [-1, -⌊log_2(j - i + 2)⌋]. Predicted word coherences {d_t}_{t=i-1}^{j} are derived through the layers above.
L_region = Σ_{R_{i,j}} Σ_{l ∈ {i-1, j}} Σ_{m=i}^{j-1} [1 - sign(d̄_l - d̄_m)(d_l - d_m)]_+            (8)
For the region classification task, the cross-entropy loss function L_label is used with a softmax classifier based on the probabilities in Eq. (6).
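Below is a sketch of the ground-truth coherence construction and the ranking loss of Eq. (8), written in PyTorch; this reflects our reading of the equation, and the tensor layout and function names are illustrative.

import math, random, torch

def gold_coherence(i, j, n):
    """Ground-truth word coherences for one gold entity region (w_i, ..., w_j), 0-indexed."""
    level = math.floor(math.log2(j - i + 2))
    gold = {}
    for l in (i - 1, j):                       # boundary positions
        if 0 <= l < n - 1:
            gold[l] = -(level + 1)
    for m in range(i, j):                      # inner positions
        gold[m] = random.uniform(-1, -level)
    return gold

def region_ranking_loss(d_pred, gold_regions, n):
    """Eq. (8): hinge loss over boundary/inner pairs of each gold region."""
    loss = d_pred.new_zeros(())
    for (i, j) in gold_regions:
        gold = gold_coherence(i, j, n)
        for l in (i - 1, j):
            if l not in gold:
                continue
            for m in range(i, j):
                sign = 1.0 if gold[l] > gold[m] else -1.0
                loss = loss + torch.clamp(1.0 - sign * (d_pred[l] - d_pred[m]), min=0.0)
    return loss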
Experiments
To investigate the effectiveness and efficiency of our proposed method, we conduct comprehensive experiments on three benchmark NER datasets.
Experimental Setting
NER datasets with some nested entities are referred to as nested NER datasets. We evaluate our model on two nested NER datasets, ACE-2005 and GENIA (Kim et al., 2003), which contain 36.4% and 21.8% nested entities respectively. We follow the same dataset setup as previous work (Wang and Lu, 2018; Lin et al., 2019). We also conduct ablation experiments on the flat NER dataset JNLPBA (Collier and Kim, 2004); pre-processed data is obtained from (Zheng et al., 2019). HiRe was implemented in PyTorch. The Stanford CoreNLP toolkit was used to split sentences and for POS tagging. We use Adam (Kingma and Ba, 2015) for optimization with batch size 32 and learning rate 0.001. Word embeddings are initialized with pretrained 200-dimension GloVe vectors (Pennington et al., 2014). Dimensions of the POS tag embedding, character embedding, LSTM layers, and hidden units are 50, 100, 2, and 256 respectively. The dropout ratio is 0.2 and α is 0.4. We use BERT-base for word representations and fine-tune parameters with learning rate 3e-5. The maximum number of hierarchical layers t is set to 8, 6, and 6 on ACE, GENIA, and JNLPBA respectively.
Effectiveness Analysis
Table 1 shows the performance comparison between HiRe and baselines on the ACE-2005 and GENIA datasets using Bi-LSTM as the encoder. On ACE-2005, the F1 score of HiRe reaches 76.5%, an improvement of 0.9% over the SOTA. On GENIA, its F1 score is 75.6%, which is competitive with baselines. The performance gain on ACE-2005 is due to the high recall in the region generation step and the incorporation of region structure into the representation in the region classification step. The higher performance on ACE-2005 indicates that HiRe performs better on datasets with more nested entities.
Considering baselines with pre-trained language models, we replace the LSTM encoder with BERT-base in HiRe. Experimental results are listed in Table 2. Our model significantly outperforms the baselines. As far as we know, the only reported higher F1 score on ACE-2005 is obtained with BERT-large, which has three times as many parameters as BERT-base and is far less efficient to train and run.
Efficiency Analysis
Given a sentence with n words, the brute-force method enumerates O(n^2) candidate regions. HiRe generates O(n) candidate regions. (Zheng et al., 2019) finds candidate regions through a token-wise classification with O(n) time complexity. For sentences in GENIA, the number of candidate regions generated by HiRe is 77.9% less than that of the enumeration method (discarding 1.3% of long entities) and more than that of (Zheng et al., 2019). However, the true recall of candidate regions generated by the enumeration method and HiRe is 98.7% and 98.1%, respectively. The recall of the start/end boundaries generated by (Zheng et al., 2019) is 84.3%/87.2%. In this sense, HiRe finds a relatively smaller (20% or so) but higher quality (true recall 98.1%) subset of all regions, which is a good trade-off between efficiency and effectiveness. (Footnote 5: Due to different experimental settings, we reproduced (Sohrab and Miwa, 2018) under the same setting as the other baselines and obtained performances similar to the results in (Zheng et al., 2019). The other results were taken from their papers.)
Ablation Study
To prove our model can also work on the flat NER task, we conduct ablation experiments on the JNLPBA dataset. We compare our model with a standard flat NER benchmark (Lample et al., 2016) and two nested NER methods. Our model achieves 74.0% in F1 measure, which outperforms these baselines as shown in Table 3.
To analyze the role of Hierarchical Region Representation, denoted as HRR in HiRe, we compare performances of HiRe with and without it on ACE-2005. HiRe without HRR employs Average Word Representation (denoted as AWR) instead with precision 78.3%, recall 73.7% and F1 measure 75.9%. In contrast to HiRe AW R , the absolute F1 measure improvement of HiRe HRR is 0.6%. In all, HRR plays an essential part in HiRe.
The reason is that HRR treats each region as a hierarchical structure composed of two sub-regions rather than a flat structure as AWR does. The hierarchical structure puts more emphasis on some words, while the flat structure in AWR treats each word equally. For example, "the minister of the department of education", composed of the two regions "the minister" and "of the department of education", should be labeled PER but may be misclassified as ORG with AWR.
Conclusion
Leading region based approaches to nested NER face efficiency and effectiveness challenges. We propose a hierarchical region framework that generates hierarchical regions and jointly assigns those regions, through their hierarchical representations, entity category labels. Experimental results demonstrate a significant improvement of our proposed framework in terms of efficiency and effectiveness over SOTA baselines. In future work, we will consider how to better represent hierarchical regions.
Figure 1: Illustration of nested entities and constituent parsing tree.
* Corresponding author.
Figure 2: Architecture of HiRe.
Figure 3: Hierarchical region generation for Fig. 1, where w_{i+l} represents the (i+l)-th word in the sequence. The blue histograms on the bottom represent the coherence scores, and the blocks with the same color in each layer indicate they have been merged into a region.
Footnotes: 1 https://catalog.ldc.upenn.edu/LDC2006T06  2 http://www.geniaproject.org/genia-corpus/term-corpus  3 https://pytorch.org/  4 http://nlp.stanford.edu/data/glove.6B.zip
Table 2: Experimental results on ACE-2005 with pre-trained language models. (Xia et al., 2019) use ELMo, and the others use uncased BERT-Base.
Model                        P     R     F
(Xia et al., 2019)           79.0  77.3  78.2
(Fisher and Vlachos, 2019)   82.7  82.1  82.4
(Tan et al., 2020)           83.8  83.9  83.9
Our Model                    83.0  86.3  84.6
Table 3: Experimental results on JNLPBA.
Acknowledgements. We sincerely thank all reviewers and AC for their comments and suggestions. This research work was funded by the National Natural Science Foundation of China under Grant No. 62072447.
Introduction to the bio-entity recognition task at JNLPBA. Nigel Collier, Jin-Dong Kim, Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, NLPBA/BioNLP. the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, NLPBA/BioNLPGeneva, SwitzerlandNigel Collier and Jin-Dong Kim. 2004. Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, NLPBA/BioNLP 2004, Geneva, Switzerland, August 28-29, 2004.
Nested named entity recognition. Jenny Rose Finkel, Christopher D Manning, Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. the 2009 Conference on Empirical Methods in Natural Language ProcessingSingaporeA meeting of SIGDAT, a Special Interest Group of the ACLJenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 141-150.
Merge and label: A novel neural network architecture for nested NER. Joseph Fisher, Andreas Vlachos, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyAssociation for Computational Linguistics1Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 5840-5850. Association for Computational Linguistics.
A neural layered model for nested named entity recognition. Meizhi Ju, Makoto Miwa, Sophia Ananiadou, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLouisiana, USA1NAACL-. Long PapersMeizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named en- tity recognition. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2018, New Or- leans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1446-1459.
GENIA corpus -a semantically annotated corpus for bio-textmining. Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, Jun'ichi Tsujii, Proceedings of the Eleventh International Conference on Intelligent Systems for Molecular Biology. the Eleventh International Conference on Intelligent Systems for Molecular BiologyBrisbane, AustraliaJin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus -a semanti- cally annotated corpus for bio-textmining. In Pro- ceedings of the Eleventh International Conference on Intelligent Systems for Molecular Biology, June 29 -July 3, 2003, Brisbane, Australia, pages 180- 182.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, 3rd International Conference on Learning Representations. San Diego, CA, USAConference Track ProceedingsDiederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Neural architectures for named entity recognition. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego California, USANAACL HLT 2016Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, San Diego California, USA, June 12-17, 2016, pages 260-270.
A unified mrc framework for named entity recognition. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, Jiwei Li, Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019. A unified mrc framework for named entity recognition.
Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, Italy; Long Papers1Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity men- tion detection via anchor-region networks. In Pro- ceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Pa- pers, pages 5182-5192.
Joint mention extraction and classification with mention hypergraphs. Wei Lu, Dan Roth, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalWei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 857-867.
The stanford corenlp natural language processing toolkit. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, David Mc-Closky, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014. the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014Baltimore, MD, USA, System DemonstrationsChristopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55-60.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingDoha, QatarA meeting of SIGDAT, a Special Interest Group of the ACLJeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Inter- est Group of the ACL, pages 1532-1543.
NNE: A dataset for nested named entity recognition in english newswire. Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cécile Paris, James R Curran, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyLong Papers1Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cécile Paris, and James R. Curran. 2019. NNE: A dataset for nested named entity recogni- tion in english newswire. In Proceedings of the 57th Conference of the Association for Computa- tional Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5176-5181.
Straight to the tree: Constituency parsing with neural syntactic distance. Yikang Shen, Zhouhan Lin, Paul Jacob, Alessandro Sordoni, Aaron C Courville, Yoshua Bengio, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers1Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessan- dro Sordoni, Aaron C. Courville, and Yoshua Ben- gio. 2018. Straight to the tree: Constituency pars- ing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1171-1180.
Deep exhaustive model for nested named entity recognition. Golam Mohammad, Makoto Sohrab, Miwa, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumMohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, Brussels, Belgium, October 31 -November 4, 2018, pages 2843-2849.
Neural architectures for nested NER through linearization. Jana Straková, Milan Straka, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyLong Papers1Jana Straková, Milan Straka, and Jan Hajic. 2019. Neu- ral architectures for nested NER through lineariza- tion. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 5326-5331.
Boundary enhanced neural span classification for nested named entity recognition. Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, Fei Huang, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence. New York, NY, USAAAAI Press2020The Thirty-Fourth AAAI Conference on Artificial IntelligenceChuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Inno- vative Applications of Artificial Intelligence Confer- ence, IAAI 2020, The Tenth AAAI Symposium on Ed- ucational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9016-9023. AAAI Press.
Neural segmental hypergraphs for overlapping mention recognition. Bailin Wang, Wei Lu, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumBailin Wang and Wei Lu. 2018. Neural segmental hy- pergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 204-214.
Multi-grained named entity recognition. Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip S Yu, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. the 57th Conference of the Association for Computational Linguistics, ACL 2019Florence, ItalyLong Papers1Association for Computational LinguisticsCongying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, and Philip S. Yu. 2019. Multi-grained named entity recognition. In Proceedings of the 57th Confer- ence of the Association for Computational Linguis- tics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1430-1440. As- sociation for Computational Linguistics.
Design challenges and misconceptions in neural sequence labeling. Jie Yang, Shuailong Liang, Yue Zhang, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAJie Yang, Shuailong Liang, and Yue Zhang. 2018. De- sign challenges and misconceptions in neural se- quence labeling. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3879-3889.
A boundary-aware neural model for nested named entity recognition. Changmeng Zheng, Yi Cai, Jingyun Xu, Guandong Ho-Fung Leung, Xu, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaChangmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Le- ung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 357-366. |
2,002,825 | A Call for Executable Linguistics Research * | This paper mirrors my invited talk at PACLIC-22. It describes a call for a renewed emphasis in work on the logical semantics of languages. It lists some of the computational components needed for symbolic interpretations of language, and of automated reasoning within those semantics. It details existing components that meet those needs and provides short examples of how they might work. It also touches on how open source products can support collaboration, which is needed on a project that has the scope of creating a full semantics of language. | [] | A Call for Executable Linguistics Research *
Adam Pease
Articulate Software
420 College Ave, Angwin, CA 94508, USA
A Call for Executable Linguistics Research *
ontology, natural language understanding, controlled languages, automated deduction, first order logic
This paper mirrors my invited talk at PACLIC-22. It describes a call for a renewed emphasis in work on the logical semantics of languages. It lists some of the computational components needed for symbolic interpretations of language, and of automated reasoning within those semantics. It details existing components that meet those needs and provides short examples of how they might work. It also touches on how open source products can support collaboration, which is needed on a project that has the scope of creating a full semantics of language.
Introduction
Talks that state the obvious, or review well-known research are boring and risk losing an audience. Talks that give controversial positions often have the same result. But I'd rather take the dangerous route, in hopes of spurring some new ideas and new research. A further risk is that I'm a computer scientist by training, not a linguist. I may have substantial blind spots in computational linguistics, and there are undoubtedly people I'm not aware of already working in the direction I will advocate, but that provides a big opportunity for me to learn from your feedback on this talk. I'll focus on a broad goal of language understanding or language processing in Artificial Intelligence. I'll define this as a set of techniques for processing human language that show evidence of the same competencies or behaviors as human language processing. Fundamental to this is the ability to accept statements in language that affect future responses, and the ability to respond to questions that demonstrates prior assimilation of knowledge. I don't believe that a "tabula rasa" approach is feasible in this context; I will take it as a given that a great deal of knowledge must already reside in a practical language understanding system.
There has been considerable research in computational linguistics that takes particular linguistic features and subjects them to semantic analysis, with the goal of specifying a formal semantic interpretation derived from syntactic features. This entire area of research has waned however, in part because it was so difficult to combine these sorts of analyses into a single semantic theory. The need for providing some interpretation of all text has moved the computational linguistics field to robust shallow interpretations, rather than brittle and deep ones. A contributing factor has been the need for some very large resources to make it possible to have non-toy implementations of deep linguistic semantic processing. Another issue is that because the scope of linguistic semantics is so large, it's hard to tell if different component theories result in a harmonious total interpretation. We're now at the threshold of being able to address these issues. This is why I call for a new direction of Executable Linguistics Research.
Note that I'm not proposing an exclusive alternative to statistical linguistic methods. There are portions of this general problem that benefit from the marriage of statistical and logical approaches, most notably, word sense disambiguation. I'm proposing a shift in emphasis, recommending that more people concentrate on an approach to linguistics that has been somewhat neglected -a shift to focusing on a more difficult and longer term approach, because the utility of current robust and arguably shallow approaches to understanding are yielding less substantial incremental improvements as time goes on.
I'll first sketch an outline of the products that I think are needed. Then I'll discuss existing resources that can meet some of those needs. Next I'll provide some concrete examples to show how this all might work.
What's Needed
A fundamental component of any practical and large scale language understanding system is a large vocabulary. Any system that understands language must be able to identify words. It should know basic relationships among words that form some of the building blocks of meaning -synonyms, antonyms, and which words subsume others' meanings or entail them, at a minimum. Harder to acquire, though no less necessary, is a corpus of groups of words that are "tokens", which have a single and collective meaning that is more than the sum of their component words, and which repeatedly appear in group form.
For English at least, polysemy is a significant issue in language understanding. While better models and algorithms are certainly needed to handle word sense disambiguation, the foundation of most algorithms of this sort is the availability of data on which to train the algorithms. Specifically, there is a need for both balanced and domain-specific corpora where words have been manually disambiguated with respect to a lexicon.
All this may be relatively uncontroversial so far. Now for the more controversial part. While word meanings change over time, the vast majority of meanings are constant. While meanings can't be legislated, they can be discovered and described, and will largely remain fixed. Linguists often study language at the margins, where there are changes and differences and scientifically interesting features, but much of language is certainly stable, at least over decades. If our goal is machine understanding, it's not enough to know that one word is more specific than another, we must know in what way it is more specific, and what knowledge logically follows from using the more specific word instead of the more general one.
If we agree that this sort of specific information is needed about word meanings, then the next question is in what form it should be represented. There are broadly two options: statistical representations that specify approximate relationships learned automatically, and logical ones that at least at the moment must largely be crafted by humans. Each general approach has advantages.
Statistical approaches require human effort to create a good algorithm, but then can be run automatically on large data sets without human intervention. Such approaches are robust in terms of coverage, but decidedly not robust in terms of the precision of the data. We may be able to learn entailment relationships one inference deep, but I would venture that truth-preserving inferences from automatically acquired data are a long way away. The combinatorics of inference dictate that even if only 10% of the entailments learned are wrong (and the state of the art is more like >50% even for simple entailments (TAC, 2008)), even a simple five-step deduction will usually be wrong.
Logical representations can be truth-preserving, and the consistency of any logical theory can be automatically tested (subject to time limitations of course). The main problem is that the effort to craft theories requires human effort, and that effort is specialized and often expensive. I'll return to the issue of open source development later, but for now, let me just state as a given that the scope of such an effort requires open collaboration with many entities and individuals involved.
One issue with logical representations is that words are not logical terms with mathematically precise definitions (at least in most cases). If we treat words as logical terms, or logical terms as though they were words, we'll have an inaccurate model. I'd suggest that we need both a lexicon and a logical model, and relations between them. It is not enough simply to classify words with a small number of formal terms. Knowing that "water" is a "substance" is not sufficient. Any system for understanding must know the implications, uses and properties of water. It must know that water dissolves some other substances, that people can both swim and drown in it, that it can become ice or steam, and many more facts. So, the logical model must ultimately be as large as a dictionary, and it must be subject to continuous evolution as language changes and expands.
Language does not just consist of isolated words. English has many standard phrases in which the meaning of a phrase is more than the sum of its constituent words. A simple case of this is light or "helper" verbs like "take", as in "take a walk", in which the noun functions to modify the mostly meaningless verb. Note that one cannot simply replace the verb with the verbal form of the noun, since "take a hit" is not the same as "hit". Other examples, such as "pay attention", are pairs in which the word choices are specific to a given phrase. We cannot simply make these multi-word lexicon entries, since we also have examples like "take a long walk" and "a walk was taken". So we must have a corpus of phrases that have logical templates that are filled in by the remaining context of a given sentence.¹

Beyond a corpus of words and phrases we must have a way of interpreting the overall semantics of a sentence. The building blocks of a lexicon, a phrase corpus, and logical definitions for each are not sufficient. We must be able to interpret the appearance of subject, object, negation, word morphology, conjunction and disjunction, conditionals, modals and many other features. We must handle a host of possibly more mundane linguistic elements like statements about metric time and numbers with units. There is a wealth of research in this area, but not to my knowledge much effort (with Fuchs' ACE (Fuchs, 1999) and Kamp and Reyle's DRT work (Kamp & Reyle, 1993) as notable exceptions) in systematizing interpretations of all of these different linguistic elements.
The next challenge once we have a system for converting language into logic is to do something useful with the logic. A key enabler is the availability of the same large logical theory that helped us to define individual words in the first place. Those definitions become the corpus of facts and rules that tie together individual logical assertions, and allow us to deduce new consequences. To cover a useful space of knowledge, this theory must be very large. The larger the theory however, the harder it is for a deductive system to process queries on that theory. We must have deductive theorem provers that use smart and adaptive techniques to order the relevance of knowledge, learn how to segment it, and process queries with great efficiency.
What We Have
The preceding discussion is not just an abstract exercise, but a guide to research that exists and research that is needed. For each proposed component we have a solution, but all components would benefit from significant focused, collaborative and open research and development.
The foundation of my current work is the Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001). It fulfills the need for the logical theory in my proposed model. Although large, it certainly is not large enough. It has some 20,000 terms, but that's much smaller than even a collegiate dictionary. It has 70,000 formal statements (axioms), but that's much smaller than the equivalent of all the definitions in a small dictionary. We use WordNet as our English lexicon (Fellbaum, 1998). SUMO has been mapped by hand (Niles & Pease, 2003) to all of WordNet. This has been a massive effort, but much more is needed. There are 10,946 mappings from WordNet instances to equivalent SUMO instances and 3,774 mappings from synsets to equivalent SUMO classes. The remaining 100,459 mappings are from specific WordNet synsets to more general SUMO terms. These mappings are the basis for future work, and are necessary but not sufficient. We need all mappings to be direct equivalences, but this requires a great effort in defining the remaining 100,000 synsets that lack a direct equivalent in SUMO. One might claim that this job is too big to be practical, but efforts of even greater size have happened once people realize a need (Wikipedia, for example) and join together on community projects. After all, what is the alternative? To persist with shallow understanding based on statistical similarities, and without the ability in computation to deduce logical conclusions from chains of facts as people do? Already, there are efforts such as the one to merge YAGO's 14 million facts (de Melo et al., 2008), which are derived from Wikipedia, into SUMO.
Word sense disambiguation is one of our biggest current challenges. We use WordNet SemCor (Landes et al., 1998), which is a corpus of manually disambiguated sentences from the Brown Corpus (Kucera & Francis, 1967). It is at least two orders of magnitude too small. Many synsets do not appear at all in the corpus, and those that do co-occur with even other common words so few times that there is rarely statistical significance for any given word-sense pair. We have also not begun to use any particularly sophisticated methods for employing even the data that we have. There are larger corpora of manual disambiguations, but they are all proprietary. I'll return to that issue later in the discussion of open source.
Although we've written in more detail proposing a corpus of phrases (Pease & Fellbaum, in press), the current implementation is a bit ad hoc, and is an integral part of the overall English parsing and interpretation system called the Controlled English to Logic Translation system (CELT) (Pease & Murray, 2003; Pease & Li, 2008). Our parser relies on a simple definite clause grammar (Covington, 1993) in Prolog, augmented with Discourse Representation Theory (DRT) (Kamp & Reyle, 1993) to handle anaphora and multiple-sentence processing. We use WordNet's "Morphy" algorithm to handle morphology.
CELT takes a certain reductionist approach to handling English grammar. In particular, it does not attempt to handle all of English, as a full semantics of English is simply not possible at the moment. Instead, it is a constructed subset of English. We began with the simplest possible grammar of handling subjects and verbs, then added support for objects and indirect objects, then determiners and quantifiers, conjunction, prepositions etc. At each stage we looked at how we could add a given linguistic feature without creating ambiguity and breaking the understanding of the existing range of grammatical elements. After some five years of development, the scope of what CELT can handle is quite large, although a long way from the full complexity of English. We see CELT as an excellent testbed for theories of linguistic semantics, since any new theory can be tested computationally, and must necessarily interact with a range of theories about other linguistic constructions. As such, it is completely practical.
We use a suite of tools called Sigma (Pease, 2003) to handle processing the logical forms that CELT generates. For the past six years we have used the Vampire theorem prover (Riazanov & Voronkov, 2002). However, as newer provers are now released open source, we have expanded the set of provers that Sigma includes. Recently, we sponsored a competition on theorem-proving performance in SUMO that inspired the development of a new prover called SInE (Hoder, 2008). Theorem-proving performance, however, remains a significant obstacle to practical implementations of question answering within a logical deductive framework.
Diversion: Open Source
While scientific achievement throughout history has often provided the potential for direct financial reward, that potential is great today, and is particularly significant in computational linguistics. That profit potential unfortunately leads many researchers and their institutions to control the dissemination of their research, in the hope of licensing it for profit, or creating a company that develops that research into a product. Profit can be a powerful motivator, but it can also prevent us from engaging in the sort of open collaboration that leads to large research projects and tangible products that engender significant research. Worse yet, the hope of a big financial return leads to much good research remaining unknown and unused, while the researchers also fail to turn the work into a profitable product.
I'll cite one major project I'm aware of in computational linguistics where after many years of US government funding, there is a significant body of work with great potential for reuse. It could result in even more great research, but it remains proprietary. Yet, after several years, the institution has only sold one license for a few thousand dollars. During that same time, they could have collaborated with others on grants, potentially resulting in hundreds of thousands of dollars in new funding, and research results that would have only enhanced the standing of the researchers and their work. This example can be contrasted with the example of WordNet, which having been free from its inception has resulted in near ubiquitous use in English-based computational linguistics, countless funded grants and collaborations for its developers and thousands of publications describing its use on a near unimaginable variety of topics.
Example: How it Works
Take the very simple example of "Robert has an orange." CELT interprets this as shown in Table 1, in SUO-KIF (Pease, 2008) format (left) and conventional logical notation (right). CELT has a simple database of proper names, so it interprets "Robert" correctly. The sense of "orange" as a fruit is the most common, and in the absence of a longer sentence or set of sentences, CELT fortunately chooses the a priori most common sense and then retrieves the mapping to the SUMO term OrangeFruit. Now we ask "Who has a fruit?" (note that the variable that gets the value for "who" is unbound).
(exists (?fruit) (and (instance ?fruit FruitOrVegetable) (instance ?who Human) (possesses ?who ?fruit)))
∃f FruitOrVegetable(f) ∧ Human(w) ∧ possesses(w, f)

Posing this query to the theorem prover, along with SUMO and the statement asserted above, will result in a very simple proof. It relies on the SUMO subclass hierarchy that defines OrangeFruit as a FruitOrVegetable. In this simple example, one could certainly imagine a statistical information retrieval system that uses WordNet's hypernym taxonomy and some simple query relaxation to get the right answer. Pose a more complex query like "What country is between France and Austria?" and, unless that fact is already explicitly stated, no IR system will find the answer. Of course, for complex queries on large knowledge bases, the logical approach is not guaranteed to find an answer either, but at least it can in theory, and there is a clear objective of improving the speed of automated deduction to make reality fulfill the promise.
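To make the role of the subclass hierarchy in this proof concrete, here is a toy sketch (my own illustration, not CELT, Sigma or Vampire) that answers the "Who has a fruit?" query by climbing a hand-coded fragment of the class hierarchy; a real prover would of course operate over the full axiom set.

```python
# Toy illustration of the one-step deduction described above: the query
# "Who has a fruit?" succeeds because OrangeFruit is a subclass of
# FruitOrVegetable in the (here, hand-coded) class hierarchy.
SUBCLASS = {"OrangeFruit": "FruitOrVegetable", "FruitOrVegetable": "SelfConnectedObject"}

FACTS = [("instance", "Robert-1", "Human"),
         ("instance", "orange-1", "OrangeFruit"),
         ("possesses", "Robert-1", "orange-1")]

def is_instance_of(entity, target_class):
    """True if some `instance` fact places `entity` under `target_class`."""
    for rel, e, cls in FACTS:
        if rel == "instance" and e == entity:
            while cls is not None:          # climb the subclass chain
                if cls == target_class:
                    return True
                cls = SUBCLASS.get(cls)
    return False

def who_has_a(target_class):
    """Return every x such that x possesses some instance of `target_class`."""
    return [x for rel, x, y in FACTS
            if rel == "possesses" and is_instance_of(y, target_class)]

print(who_has_a("FruitOrVegetable"))   # ['Robert-1']
```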
Why Use a Logical Framework
I've explained why this general approach of logical formalization is useful for language understanding software systems. I also believe it's useful for research in language itself. Take for example work on a semantic theory of possession relations in English and the phrases "Robert's nose", "The car's color", "Tom's father". If one is required to state such an interpretation logically, and with recourse to a large theory, some mistakes are easily found and corrected. If we map all possessions to a general relation "owns" and then we ask questions like "What does a car own?" a theorem prover will find automatically that a car owns its color, which is nonsensical. While such mistakes may seem obvious and amenable to human inspection and discovery for an isolated theory of possession, the potential for errors on a much larger theory is much greater. A linguistic theory that is fully implemented with a non-trivial logical theory and lexicon can be tested on a variety of sentences not generated by the creators of the theory.
By collaborating on linguistic research with a common logical theory, linguistics researchers open up the possibility of doing work that directly builds on each other's progress, without the need to create a new harmonization of their target representations or notation each time. Working together on a common target semantics enables large-scale executable and testable research in deep linguistic semantics.

Take for example Terence Parsons' excellent book (Parsons, 1990). How do we know that his theories fit with Kamp and Reyle's? While Parsons' book has a tighter focus just on event semantics, Kamp and Reyle also cover that area in detail. In neither book is there a formalization (in logic) of relations like subject(x,y) or object(x,y), or the deictic "now". While these are common notions understood by all linguists, there are undoubtedly different intuitions at the boundaries. Without a logical theory built on a common logical semantics, we are left to test their compatibility by inspection rather than automation. I cite these books because they are some of the best examples, in my view, of comprehensive linguistic theories with semantics in formal logic. For other work, even more is needed.
I look forward to hearing from you now about all the good research that I may have missed that meets these goals, and to working with you in the future to help drive linguistics research closer to the ideals that I have described.
Figure 1: Proposed System Architecture
(exists (?orange) (and (attribute Robert-1 Male) (instance Robert-1 Human) (instance ?orange OrangeFruit) (possesses Robert-1 ?orange)))
Table 1: "Robert has an orange" in SUO-KIF format
Table 2: "Who has a fruit?" in SUO-KIF format (left) and conventional logical notation (right)
1. Examples taken from (Pease & Fellbaum, in press).
References

Covington, M. (1993). Natural Language Processing for Prolog Programmers. Prentice Hall.

de Melo, G., Suchanek, F., and Pease, A. (2008). Integrating YAGO into the Suggested Upper Merged Ontology. To appear.

Fellbaum, C. (ed.) (1998). WordNet: An Electronic Lexical Database. MIT Press.

Fuchs, N., Schwertel, U., and Schwitter, R. (1999). Attempto Controlled English (ACE) Language Manual, Version 3.0. Technical Report 99.03, Department of Computer Science, University of Zurich, August 1999.

Hoder, K. (2008). SInE 0.3. Online description at http://www.cs.miami.edu/~tptp/CASC/J4/SystemDescriptions.html#SInE---0.3

Kamp, H., and Reyle, U. (1993). From Discourse to Logic. Kluwer Academic Publishers.

Kucera, H., and Francis, W.N. (1967). Computational Analysis of Present-Day American English. Providence: Brown University Press.

Landes, S., Leacock, C., and Tengi, R.I. (1998). Building semantic concordances. In Fellbaum, C. (ed.), WordNet: An Electronic Lexical Database. Cambridge (Mass.): The MIT Press.

Niles, I., and Pease, A. (2001). Towards a Standard Upper Ontology. In Proceedings of Formal Ontology in Information Systems (FOIS 2001), October 17-19, Ogunquit, Maine, USA, pp. 2-9.

Niles, I., and Pease, A. (2003). Linking Lexicons and Ontologies: Mapping WordNet to the Suggested Upper Merged Ontology. In Proceedings of the IEEE International Conference on Information and Knowledge Engineering, pp. 412-416.

Parsons, T. (1990). Events in the Semantics of English: A Study in Subatomic Semantics. MIT Press.

Pease, A. (2003). The Sigma Ontology Development Environment. In Working Notes of the IJCAI-2003 Workshop on Ontology and Distributed Systems. CEUR Workshop Proceedings, Vol. 71.

Pease, A. (2008). The Standard Upper Ontology Knowledge Interchange Format (SUO-KIF). Available at http://sigmakee.cvs.sourceforge.net/*checkout*/sigmakee/sigma/suo-kif.pdf

Pease, A., and Fellbaum, C. (in press). Formal Ontology as Interlingua: The SUMO and WordNet Linking Project and GlobalWordNet. In Huang, C.R., and Prevot, L. (eds.), Ontologies and Lexical Resources. Cambridge: Cambridge University Press.

Pease, A., and Li, J. (2008). Controlled English to Logic Translation. In Healy, M., Kameas, A., and Poli, R. (eds.), Theory and Applications of Ontology. To appear.

Pease, A., and Murray, W. (2003). An English to Logic Translator for Ontology-based Knowledge Representation Languages. In Proceedings of the 2003 IEEE International Conference on Natural Language Processing and Knowledge Engineering, Beijing, China, pp. 777-783.

Pease, A., Sutcliffe, G., Siegel, N., and Trac, S. (2008). The Annual SUMO Reasoning Prizes at CASC. In Proceedings of the IJCAR '08 Workshop on Practical Aspects of Automated Reasoning (PAAR-2008). CEUR Workshop Proceedings, Vol. 373.

Riazanov, A., and Voronkov, A. (2002). The Design and Implementation of Vampire. AI Communications, 15(2-3), pp. 91-110.

TAC (2008). Proceedings of the Text Analysis Conference (TAC 2008). NIST. |
35,250,137 | 21 ème Traitement Automatique des Langues Naturelles | Dans cet article, nous testons deux approches distinctes pour chunker un corpus oral transcrit, en cherchant à minimiser les étapes de correction manuelle. Nous ré-utilisons tout d'abord un chunker appris sur des données écrites, puis nous tentons de ré-apprendre un chunker spécifique de l'oral à partir de données annotées et corrigées manuellement, mais en faible quantité. L'objectif est d'atteindre les meilleurs résultats possibles pour le chunker en se passant autant que possible de la correction manuelle des étiquettes POS. Nos expériences montrent qu'il est possible d'apprendre un nouveau chunker performant pour l'oral à partir d'un corpus de référence annoté de petite taille, sans intervention sur les étiquettes POS.Abstract. In this paper, we test two distinct approaches to chunk transcribed oral data, trying to minimize the phases of manual correction. First, we use an existing chunker, learned from written texts, then we try to learn a new specific chunker from a small amount of manually corrected labeled oral data. The purpose is to reach the best possible results for the chunker with as few manual corrections of the POS labels as possible. Our experiments show that it is possible to learn a new effective chunker for oral data from a labeled reference corpus of small size, without any manual correction of POS labels Mots-clés : chunker, étiquetage POS, apprentissage automatique, corpus oral, disfluences | [
33814436,
9455854,
12660751,
13936575,
18187835,
10181055
] | 21 ème Traitement Automatique des Langues Naturelles
2014
Isabelle Tellier [email protected]
UMR 8094
(1) université Paris 3 -Sorbonne Nouvelle (2) Lattice
Iris Eshkol-Taravella
UMR 8094
(1) université Paris 3 -Sorbonne Nouvelle (2) Lattice
Yoann Dupont [email protected]
UMR 8094
(1) université Paris 3 -Sorbonne Nouvelle (2) Lattice
Ilaine Wang [email protected]
UMR 8094
(1) université Paris 3 -Sorbonne Nouvelle (2) Lattice
21 ème Traitement Automatique des Langues Naturelles
Marseille2014[O-E.2] 125chunkerPOS labelingmachine learningoral corpusdisfluencies
Abstract. In this paper, we test two distinct approaches to chunking transcribed oral data, trying to minimize the phases of manual correction. First, we use an existing chunker learned from written texts; then we try to learn a new chunker specific to oral data from a small amount of manually corrected labeled data. The purpose is to reach the best possible chunking results with as few manual corrections of the POS labels as possible. Our experiments show that it is possible to learn a new, effective chunker for oral data from a small labeled reference corpus, without any manual correction of POS labels.
Keywords: chunker, POS tagging, machine learning, oral corpus, disfluencies
Introduction
In this paper we are interested in the process of segmenting texts into chunks, that is, into continuous, non-recursive constituents (Abney, 1991). The chunking task aims to identify the shallow syntactic structure of an utterance, i.e. to recognize its minimal constituents without specifying their internal structure or their syntactic function. It relies on a prior morpho-syntactic (POS) tagging, thus giving rise to a sequence of successive annotation layers.

Several strategies are possible for building a chunker. Supervised machine learning is particularly effective on this task (Sha and Pereira, 2003), especially if the POS tagging it relies on is of good quality. But the result of a learning process is not always suited to texts that differ markedly from those used for training. We assume the following situation: we have a POS tagger and a chunker learned from a fairly large amount of annotated data (the source data), homogeneous in style. We now want to chunk new texts (the target data), initially unannotated, which differ greatly in style from the source data. In particular, the POS annotation produced on the target data by the model learned from the source data is of poor quality, but we do not want to spend time learning a new morpho-syntactic tagger specific to the target corpus. In this case, is it useful to manually correct the POS tags of the target corpus to make the job easier for the chunker that operates on them, or is it better to concentrate on the chunking level alone? This is the main question we address in this paper.

In the case explored here, the source data are newspaper texts and the target data are transcriptions of spoken French. Speech is characterized by linguistic phenomena of its own, grouped under the general label of disfluencies, which complicate its annotation and chunking. The value of chunking spoken data is nevertheless undeniable: it is a level of analysis well suited to utterances that take liberties with standard syntax. It has been shown, for example, that chunks are the privileged locus of repairs in speech (Blanche-Benveniste, 1997: 47).

Our goal is therefore to chunk our target oral data as well as possible while minimizing manual intervention. In particular, we want to see whether it is possible to acquire a good chunker for speech from little annotated data, without learning a dedicated POS tagger. Learning a chunker is indeed less costly than learning a POS tagger, because the variability of the supporting data (POS tags in one case, words in the other) is lower. A similar situation can arise in other contexts, for instance when adapting a named-entity recognizer (itself largely based on prior POS tagging) acquired on written texts to oral data. The same adaptation problem also arises if, instead of the modality (written/spoken), it is the domain, the genre, or even the language that changes between source and target data.

The paper is organized as follows. First, we discuss the chunking task, its specificities in the case of speech, and the source and target corpora at our disposal: the annotated corpus of written texts (the French Treebank) from Paris 7 and an extract of the transcribed oral corpus ESLO 1 (Section 2). We then describe (Section 3) the different chunkers used: they all come from the same supervised machine-learning technique, but start from different annotated data. Finally, in the last part (Section 4), we present the results of the various strategies used to chunk the transcribed oral data, which require different degrees of manual correction.
The task and the data
Chunking transcribed oral data
Chunkers, also called "shallow parsers", are well suited to transcribed oral data, whose utterances are often not "finalized". Two major problems face tools annotating speech: disfluencies, which break the linearity of discourse, and the lack of punctuation in the transcriptions. For Dister (2007), disfluencies are the "typical marks of utterances in the process of being elaborated" which "constitute a stalling on the syntagmatic axis of the utterance and [...] need to be taken into account by the tagging system". Typical disfluencies are the following (examples from the ESLO corpus, described below):

- hesitations: madame euh comment vous faîtes une omelette
- false starts: il va y avoir encore des encore mais
- repetitions: le le
- self-corrections: juste après le la fin du premier cycle
- reformulations: on fait ce que l'on appelle un carton c'est-à-dire le le ce dessin-là agrandi
- word fragments (amorces): vous v- vous êtes in- institutrice
- etc.

They are a real problem for the automatic analysis of speech (Adda-Decker et al., 2003; Antoine et al., 2003; Benzitoun, 2004; Valli and Véronis, 1999) and considerably reduce the performance of tools built for standard written language. Our own experiments will confirm this observation (cf. Section 4.1). The essentially graphical notion of sentence was quickly abandoned by linguists working on speech; transcriptions are therefore generally not punctuated, so as not to anticipate interpretation (Blanche-Benveniste and Jeanjean, 1987).
ESLO 1
The second corpus used is a very small extract of the transcribed oral corpus ESLO 1 (Enquête Sociolinguistique d'Orléans) (Eshkol-Taravella et al., 2012), consisting of 8,093 words corresponding to 852 speech turns (3 face-to-face interviews). The transcription conventions in ESLO follow two principles: the use of standard spelling and the absence of written punctuation. Typographic marks such as the period, the comma, the exclamation mark, or the capital letter at the beginning of an utterance are absent. Segmentation was done either on an intuitive "breath group" unit identified by the human transcriber, or on a "speech turn", defined simply by the change of speaker. The data processed in this work correspond to the raw transcribed corpus, neither annotated nor lemmatized.
Tagger and chunkers used
SEM, a tagger-chunker learned on the FTB
A technique for learning new chunkers
We will not try to learn a new POS tagger specific to speech; rather, in some experiments, we will learn a new chunker from oral data annotated both in POS and in chunks. To learn this new chunker (in fact there will be several, depending on the nature of the labels used), we will use linear-chain CRFs, as was done to learn SEM.
CRFs are undirected probabilistic graphical models, discriminative and particularly effective for label prediction. In the linear-chain case, they seek the best label sequence y to associate with the input data sequence x by maximizing a probability P(y|x). In a CRF, P(y|x) is expressed as a weighted combination (the weights being the parameters to be learned) of feature functions that characterize local configurations of data and labels. To evaluate the quality of the chunking produced by SEM on speech, a reference corpus must be built by correcting the chunk annotation proposed by SEM on the ESLO 1 extract, with the labels it uses (column IV in Table 2). Chunking transcribed speech raises specific problems, due among other things to disfluencies. We make explicit here the choices made for this manual correction.
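For reference, the conditional probability mentioned above takes the standard linear-chain CRF form (a textbook formulation, independent of Wapiti), where the λ_k are the learned weights and the f_k the feature functions instantiated from the templates:

```latex
P(y \mid x) = \frac{1}{Z(x)} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, x, t) \Big),
\qquad
Z(x) = \sum_{y'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, x, t) \Big)
```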
The example of the annotated utterance in Table 2 (euh l- dans ma classe) illustrates the kind of difficulties encountered. Hesitation euh's, which cannot be the head of a chunk, form adverbial chunks (AdP). The same decision applies to interjections (POS tag I), as in the example below:

(on/CLS)NP (peut/V)VN (commencer/VINF)VN (bon/I)AdP (alors/I)AdP

False starts and word fragments (amorces, like l- in the example of Table 2), when they cannot be interpreted, are also made part of adverbial chunks (AdP). When an interpretation is possible, the annotation depends on the context. In the example:

(vous/PRO)NP (êtes/V)VN (in-/NC)NP (institutrice/NC)NP

the fragment in- seems to correspond exactly to the beginning of the following word institutrice; it is therefore annotated as a common noun (NC) and consequently forms an autonomous nominal chunk (NP). In the following example:

(chez/P vous/PRO)PP (chez/P v-/PRO)PP

the repetition of the same preposition chez and the match between the fragment v- and the beginning of the pronoun vous suggest that this is a repetition of the same prepositional phrase.

Repetitions that are "performance phenomena" belong to the disfluencies of speech (as opposed to "language facts", where the repetition is required by the syntax (Henry, 2005)). Two options then arise for chunking:
- If the repeated element is the head of the syntactic group, two chunks must be distinguished, because a chunk cannot contain two distinct heads:

(et/CC)CONJ (et/CC)CONJ (elle/CLS)NP (me/CLO)NP (disait/V)VN

- If the repetition does not involve a head, both elements belong to the same chunk:

(la/DET la/DET belle/ADJ jeune/ADJ fille/NC)NP

The chunking produced by SEM without any adaptation is evaluated against this reference with a micro-averaged precision of 77.24 and a macro-averaged precision of 76. More than 20 points of F-measure on average (micro-average) are therefore lost when a program learned on written source texts is applied to transcribed oral data. This poor result is the starting point for several attempts at improvement. The goal of the following experiments is to correct as little data as possible manually while improving the chunker's performance as much as possible.
Using SEM after correcting the POS tags
The previous chunking was applied in cascade after a POS tagging of the corpus that was itself probably mediocre. The first idea for improving the chunking is therefore to manually correct the POS tagging of the oral data before applying the chunking phase. This process also made it possible to evaluate the quality of SEM's POS tagging on speech: its accuracy reaches 80.98%, about 17% lower than on data similar to those used for training. SEM's "chunker only" mode can then be applied to the corpus with manually corrected POS tags (columns I and III).

To correct the POS tags, certain conventions were adopted for the disfluencies of speech (see column II: the POS tags produced by SEM, and column III: the POS tags corrected according to the established conventions).

False starts and word fragments (like l- in the example of Table 2) received a tag (UNKNOWN), which corresponds to foreign words and neologisms in the FTB. Discourse markers as well as hesitation euh's were tagged as interjections (I). Among the tags available in SEM, it is the one that best fits these units characteristic of speech.

The correction of POS tagging errors mainly concerns differences between written and spoken language. For example, the form bon is used as an adjective in 99% of cases in the FTB, whereas it is much more frequent in the oral corpus as an interjection (83%).

The new micro-average of the chunker is now 87.74 and its new macro-average is 88.43. These results are in a sense halfway between the previous ones: roughly half of the chunking errors on speech can therefore be attributed to POS tagging errors.
Second approach: learning a chunker specific to speech
The second approach consists in learning a new chunker from the extract of ESLO 1 alone, taking into account as much as possible the specificities of speech. We chose not to re-learn a POS tagger specific to the target data (nor to apply one other than SEM), in order to concentrate on the chunking phase. Since we were re-learning a new chunker anyway, we also took the opportunity to define an adapted chunk set.

Modifying the chunk labels
To take into account the specificities of speech, we chose to add two new chunk types of its own (see column V of Table 2).

Training and testing with corrected POS tags
The first experiment consists in learning a chunker from the target data annotated with corrected POS tags (column III of Table 2) and with the chunks adapted to speech (column V). A 10-fold cross-validation protocol was used to evaluate the quality of the resulting chunker when it is applied to data that is again perfectly POS-annotated. The micro-average of the F-measures then reaches 96.65 and their macro-average 96.08. The results have thus improved significantly and are now comparable to those observed for SEM on the FTB.

Looking more closely at the F-measures of the different chunk types, compared with the previous experiments, we observe a sharp improvement in the annotation of adverbial chunks (AdP). These chunks are very numerous in our corpus in the first experiments, because they group together adverbs, discourse markers, hesitation euh's and interjections. The introduction of a new chunk (IntP) annotating these phenomena (except adverbs) considerably reduced the number of adverbial chunks in the reference corpus, which significantly changes their F-measure. In the first experiments, the F-measure of the (AdP) chunk varied between 58.14 (with uncorrected POS) and 71.87 (with corrected POS). It now reaches 95.76 for the (AdP) chunk and 99.4 for the (IntP) chunk. Learning has therefore successfully distinguished the two chunk types.
The errors observed often concern "exceptions" to general rules. This is the case for verbs, which usually form a verb chunk (VN) except when they follow a preposition. Thus, in the example below, the verb is wrongly annotated as the head of a verb chunk:

(à/P)PP (me/CLR)NP (marier/VINF)VN

whereas it belongs here to a prepositional chunk (PP):

(à/P me/CLR marier/VINF)PP

Cases where interjections and markers, which generally form an (IntP) chunk, are included in another chunk are also problematic. The learned chunker proposes:

(l'/DET école/NC)NP (euh/I)IntP (publique/ADJ)AP

instead of:

(l'/DET école/NC euh/I publique/ADJ)NP

Finally, when two identical POS tags are repeated, the chunker sometimes includes both words in the same chunk, thereby violating the constraint that a chunk should contain only one head. It annotates:

(et/CC parce_que/CS)CONJ (ils/CLS)NP (réfléchissaient/V pensaient/V)VN (beaucoup/ADV)AdP

instead of:

(et/CC)CONJ (parce_que/CS)CONJ (ils/CLS)NP (réfléchissaient/V)VN (pensaient/V)VN (beaucoup/ADV)AdP

But the very good results of this new chunker are only reached on data whose POS tagging is itself perfectly correct. Since no POS tagger for speech has been learned, our new chunker risks a significant drop in performance in a real-use situation, that is, with poor POS tags. To quantify this problem and try to remedy it, we carried out two further experiments that do not assume corrected POS tags at the time the chunker is used.
Training with corrected POS tags, testing on uncorrected tags
The second experiment in this series aims to evaluate the performance degradation suffered when the chunker learned on corrected POS tags (columns III and V) is used on data with uncorrected POS tags (column II).

Training and testing with uncorrected POS tags
The last experiment aims to learn the speech chunker using only the POS tags provided by SEM, without any correction of these tags (neither for training nor for testing). This time, our cross-validation therefore uses columns I, II and V of Table 2, seeking to obtain the last of these columns from the other two.
There are specific solutions for chunking transcribed French:

- (Blanc et al., 2008, 2010) tried to annotate a French oral corpus in "super-chunks" (chunks containing complex multi-word units), applying cascades of transducers that use lexical and syntactic resources. The process relies on a preprocessing step consisting in reformatting and tagging the disfluencies. A similar approach was adopted by (Valli and Véronis, 1999) for the morpho-syntactic tagging of speech.
- (Antoine et al., 2008) proposed another strategy, including a post-correction step to handle the errors linked to disfluencies.

Following (Blanche-Benveniste, 2005), we consider that disfluency phenomena must be included in the linguistic analysis, even if they require specific treatments. To cope with real data and avoid ad hoc hand-written programs, we favour machine-learning techniques.
2.2 The French TreeBank (FTB) and its labels

The first corpus we must take into account, notably because it fixed the tagsets we use (both for POS and for chunks), is the FTB (French TreeBank). It is a corpus of syntactically analyzed written sentences that can easily be converted into sentences annotated with POS tags and chunks (Abeillé et al., 2003). The reduced set of 30 POS tags is described in (Crabbé and Candito, 2008). The six chunk types extracted from these data, with the POS tags corresponding to their heads, are the following:

- noun phrases (NP), including CLO, CLR, CLS, NC, NPP, PRO, PROREL, PROWH (note that pronouns are treated here as autonomous nominal chunks and are not included in verbal nuclei);
- verb phrases (VN), including interrogative, infinitive and modal forms (V, VIMP, VINF, VPP, VPR, VS);
- prepositional phrases (PP), including noun phrases introduced by a preposition (P, P+D, P+PRO);
- adjectival phrases (AP), including adverbs modifying adjectives (ADJ, ADJWH);
- adverbial phrases (AdP), including sentence modifiers (ADV, ADVWH, I);
- conjunction chunks (CONJ) (CC, CS).

The annotation tool used in a first phase is SEM (Tellier et al., 2012), a segmenter-tagger able to chain several successive annotation layers. SEM is specialized in the analysis of written texts, since it was learned exclusively from the FTB, and its labels are therefore those presented above. It can either chunk a text already annotated with POS tags, or chain "POS annotation + chunking" on raw text. We will exploit both of these uses below.

SEM was learned with a linear-chain CRF (Conditional Random Fields) (Lafferty et al., 2001), implemented in the Wapiti software (Lavergne et al., 2010). For POS tagging, SEM uses an external resource: the LeFFF (Lexique des Formes Fléchies du Français) (Sagot, 2010), integrated into the data in the form of Boolean attributes. For the chunker, the CRF model relies both on the POS tagging and on the initial tokens.

The segmentation into chunks is encoded following the standard BIO format (B for Beginning, I for In, O for Out). With SEM, each word (or token) of the corpus therefore receives, in addition to its POS tag, a label which is the concatenation of the type of chunk it belongs to and of a label (B or I) indicating the position it occupies within it.
To define the set of features of its model, the user of a program such as Wapiti specifies patterns (templates): a kind of regular expression that can involve any property of the input data and one (unigram patterns) or two (bigram patterns) successive labels. The patterns are instantiated on the training set, made up of pairs (x, y), into as many features as there are positions where they can apply. In the case of learning a chunker, the input data x consist of the sequences of tokens (or words) of the text and of the associated POS tags; the sequence of target labels y consists of the different chunk types combined with B or I. The patterns we use to learn the new chunker(s) were copied from those used to learn SEM, and are always the same in every experiment. They are given in Table 1:

Attribute    | Window on x          | Type of feature on y
token        | [-2, 0]              | unigram
POS          | [-2, 1]              | unigram and bigram
POS pair     | {-2, 0} and {-1, 0}  | unigram

Table 1: specification of the patterns (templates) defining the features of the CRF chunking models

4 Two series of experiments
We describe in this section the two series of experiments carried out with the transcribed oral corpus and the results obtained. Table 2 shows the annotation of the same example from ESLO 1 by different processes, which include (in bold: columns III, IV and V) or do not include (column II) a phase of manual correction. All manual corrections were carried out by a single expert annotator. The different columns of this table will serve either as input data or as reference data for our various experiments. Their contents will be described in detail as we present them. For our evaluations, two chunks are considered equal when they share exactly the same boundaries and the same type. We evaluate the chunking results with the micro-average of the F-measures of the different chunk types (the mean of their F-measures weighted by their frequencies) and their macro-average (unweighted mean). Note that on the FTB, in 10-fold cross-validation, SEM was evaluated at 97.33% accuracy for POS tagging, with a micro-average of 97.53 and a macro-average of 90.4 for the chunker. Tables 3 and 5 (at the end of the paper) give, respectively, the proportions of the different chunk types and the synthesis of all our results.
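Returning to the templates of Table 1, the sketch below (my own simplified illustration, not Wapiti's actual pattern syntax) shows how they expand into attributes at one position of a sentence.

```python
# Simplified sketch of instantiating the templates of Table 1 at one position:
# token window [-2, 0], POS window [-2, 1], and POS pairs {-2,0} and {-1,0}.
# This mimics the kind of attributes fed to a linear CRF; it is not Wapiti code.
def features_at(tokens, pos_tags, i):
    def tok(j):
        return tokens[i + j] if 0 <= i + j < len(tokens) else "_"
    def pos(j):
        return pos_tags[i + j] if 0 <= i + j < len(pos_tags) else "_"
    feats = {}
    for j in range(-2, 1):                     # token window [-2, 0]
        feats[f"tok[{j}]"] = tok(j)
    for j in range(-2, 2):                     # POS window [-2, 1]
        feats[f"pos[{j}]"] = pos(j)
    feats["pos[-2]|pos[0]"] = pos(-2) + "|" + pos(0)   # POS pair {-2, 0}
    feats["pos[-1]|pos[0]"] = pos(-1) + "|" + pos(0)   # POS pair {-1, 0}
    return feats

tokens = ["euh", "l-", "dans", "ma", "classe"]
pos    = ["I", "UNKNOWN", "P", "DET", "NC"]
print(features_at(tokens, pos, 2))   # features for the token "dans"
```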
I       | II                  | III                   | IV                          | V
Tokens  | POS proposed by SEM | POS corrected by hand | Correct "FTB-style" chunks  | Correct chunks adapted to speech
euh     | DET                 | I                     | AdP-B                       | IntP-B
l-      | DET                 | UNKNOWN               | AdP-B                       | UNK-B
dans    | P                   | P                     | PP-B                        | PP-B
ma      | DET                 | DET                   | PP-I                        | PP-I
classe  | NC                  | NC                    | PP-I                        | PP-I

Table 2: the different input/reference data used
4.1 First approach: using a chunker learned on written text
4.1.1 Direct use of SEM
The first test consists in applying SEM, without any adaptation or re-training, to the target transcribed oral data. SEM is used on the raw text and produces in cascade the POS tagging and the corresponding chunking. In Table 2, this corresponds to taking column I (the tokens) as input for the POS tagger, and columns I and II (the POS tags produced by SEM on ESLO 1) as input for the chunker.
The list of chunks was thus enlarged with two newcomers:

- UNKNOWN chunk. The UNKNOWN label exists in the FTB POS tagset, where it is assigned to foreign words. We also used it to mark the chunks corresponding to transcription errors, false starts or word fragments whose interpretation is impossible. In our example of Table 2, the form l- is hard to interpret: is it a pronoun, a determiner or a fragment? The UNKNOWN label, already chosen for this form at the POS level, is therefore extended in this case to the chunk.
- Interjection chunk (IntP). We have already mentioned the problem posed by discourse markers and hesitation euh's, which were classified among adverbial chunks for lack of a better-suited label in SEM. Adding a new IntP chunk (interjection chunk), intended to host all these phenomena, solves this problem at least partially:

(des/DET idées/NC laïques/ADJ)NP (quoi/I)IntP

However, when interjections occur inside a syntactic group, they are integrated into the corresponding chunk:

- (l'/DET école/NC euh/I publique/ADJ)NP
- (des/DET hm/I inconvénients/NC)NP

In the two examples above, the hesitation euh and the interjection hm belong to a nominal chunk.

This new chunk annotation was manually validated on our ESLO data (column V of Table 2), and constitutes the new reference with which we will both train and evaluate our new chunker.
Given the small volume of data at our disposal, we repeated the previous experiment in 10-fold cross-validation, taking care at each step to follow this protocol:

- training is done using columns I, III and V;
- the learned chunker is applied at test time to columns I and II;
- the output is compared to the reference column V.

We thus obtain a micro-average of the F-measures of 73.81 and a macro-average of 59.62, which represents a severe degradation (cf. the per-chunk details in Table 3). Performance is particularly poor for the new chunk type IntP, because very few POS tags I are correctly assigned by SEM in ESLO 1. Indeed, in the FTB, the only interjections present correspond to one-word sentences followed by a punctuation mark. ESLO 1 contains no punctuation, so this cue is of no help to the chunker. Most of the interjections of ESLO 1, such as bon, bien, enfin, alors, etc., are tagged by SEM as adverbs or adjectives during POS tagging. The newly learned chunker then attaches them to an adverbial chunk rather than to an IntP chunk. A single IntP chunk was recognized in this experiment, and seemingly almost by accident. The representatives of the new UNKNOWN chunk were not identified either, which is naturally explained by the fact that SEM did not assign the UNKNOWN POS tag where our manual correction had done so (on disfluencies in particular). The problem mentioned earlier concerning the form bon also persists in the results of this test. Having the POS tag ADJ, bon is labeled as an adjectival chunk (AP) and not as an interjection chunk (IntP). In the following example (where the last column gives the new chunker's output and the second-to-last gives the correct label), both bon and alors receive a wrong chunk label:

on         | CLS  | B-IntP? no: B-NP | B-NP
peut       | V    | B-VN   | B-VN
commencer  | VINF | B-VN   | B-VN
bon        | ADJ  | B-IntP | B-AP
alors      | ADV  | B-IntP | B-AdP

The absence of POS correction therefore causes predictable chunking errors here, especially for the new chunk types, which rely on properties of speech that SEM's uncorrected POS tags do not capture. It remains to be seen whether a chunker learned directly on uncorrected POS tags would behave better.
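To make the evaluation protocol used for these figures concrete, the following sketch (my own illustration, not the authors' scripts) extracts chunks from BIO sequences and scores them with the exact-boundary, exact-type match described in Section 4.

```python
# Exact-match chunk evaluation: a predicted chunk counts as correct only if
# its boundaries and its type both match a reference chunk.
def bio_to_chunks(labels):
    chunks, start, ctype = [], None, None
    for i, lab in enumerate(labels + ["O"]):          # sentinel to flush the last chunk
        if lab.startswith("B-") or lab == "O" or (lab.startswith("I-") and lab[2:] != ctype):
            if ctype is not None:
                chunks.append((start, i, ctype))
            start, ctype = (i, lab[2:]) if lab.startswith("B-") else (None, None)
    return set(chunks)

def prf(gold_labels, pred_labels):
    gold, pred = bio_to_chunks(gold_labels), bio_to_chunks(pred_labels)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = ["B-NP", "B-VN", "B-VN", "B-IntP", "B-IntP"]
pred = ["B-NP", "B-VN", "B-VN", "B-AP",  "B-AdP"]
print(prf(gold, pred))   # (0.6, 0.6, 0.6)
```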
Table 2 is used here: the aim is to obtain the last of its columns from the other two. The objective of this last experiment is therefore to see whether it is possible to learn a good chunker on the basis of mediocre POS tags. Are there regularities in the morpho-syntactic errors that learning could exploit? Could we thus dispense with a manual correction of the POS tagging (and with retraining a POS tagger for speech) and still obtain, in the end, a correct chunker for spoken data? That is what is at stake in this final test.

In this experiment we obtain a micro-average of 88.84 and a macro-average of 81.76, i.e. results that are (as one might expect) intermediate between the two previous ones (see the details in Table 4). This time, the (IntP) chunks are very well recognized (over 93 F-measure), even though SEM substitutes rather varied tags (typically ADV, ADJ, NC and V) for the correct POS tag I. But interjections are both frequent and not very varied in our spoken corpus (euh, hm, oui, non, etc.), and those present in the training set are apparently enough for the learned chunker (which also has access to the words or tokens, not only to the POS tags) to identify them. Thus, the previous example this time receives the following labeling:

on            CLS    B-NP       B-NP
peut          V      B-VN       B-VN
commencer     VINF   B-VN       B-VN
bon           ADJ    B-IntP     B-IntP
alors         ADV    B-IntP     B-AdP

The form bon is labeled correctly here at the chunk level (B-IntP) despite a POS-tagging error in which it is recognized as an adjective. In the training corpus, this word is most often used as a discourse marker, which makes its disambiguation easier. The units oui and non, also very frequent in the corpus, now also receive a good chunk label, whatever their POS tag.

On the (UNKNOWN) chunk, the new chunker obtains good precision (92.86%) but poor recall (18.57%). This is probably because unknown chunks can sometimes correspond to known words used in an unexpected context, as in the following example:

vous          DET    B-NP        B-NP
êtes          NC     B-VN        B-VN
in-           ADJ    B-UNKNOWN   B-UNKNOWN
institutrice  NC     B-NP        B-AdP
n-            ADV    B-UNKNOWN   B-UNKNOWN
peut-être     VINF   B-AdP       B-AdP
non           ADV    B-IntP      B-IntP
euh           V      B-IntP      B-IntP
les           DET    B-UNKNOWN   B-NP
dans          P      B-PP        B-PP
ma            DET    I-PP        I-PP
classe        NC     I-PP        I-PP

Moreover, word fragments (amorces) show far greater variability than interjections; they cannot all be present in the training set, and access to the tokens is therefore not enough to compensate for the poor POS tagging. There does not seem to be any obvious rule as to which (UNKNOWN) chunks are correctly identified; the most likely hypothesis is that SEM only recognized the words it had already seen in its training set. We could no doubt greatly improve the ability of our new chunker to recognize word fragments by giving it access to certain token properties: in ESLO 1, fragments systematically end with a hyphen "-", and adding this property to the attributes taken into account in the features should allow them to be identified far more reliably than through their context alone. But we wanted to use the same feature set (copied from the one used to train SEM) for all our experiments, so as not to bias the comparisons. A minimal sketch of such a feature setup is given below.
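To make this concrete, here is a small illustration of how token-level features over (token, possibly erroneous POS) pairs, including a hypothetical ends_with_dash cue for word fragments, could be fed to a linear-chain CRF. This sketch uses sklearn-crfsuite and invented feature names; it is not the actual SEM/Wapiti configuration used in the experiments.

```python
# Minimal sketch of CRF features for chunking over (token, noisy POS) pairs.
# Feature names and the ends_with_dash property are illustrative only.
import sklearn_crfsuite

def token_features(sent, i):
    word, pos = sent[i]
    feats = {
        "word": word.lower(),
        "pos": pos,                             # possibly erroneous POS tag
        "ends_with_dash": word.endswith("-"),   # word-fragment cue (ESLO 1)
    }
    if i > 0:
        feats["prev_word"] = sent[i - 1][0].lower()
        feats["prev_pos"] = sent[i - 1][1]
    else:
        feats["BOS"] = True
    if i < len(sent) - 1:
        feats["next_word"] = sent[i + 1][0].lower()
        feats["next_pos"] = sent[i + 1][1]
    else:
        feats["EOS"] = True
    return feats

# One training sentence: tokens with uncorrected POS tags and chunk labels.
sent = [("on", "CLS"), ("peut", "V"), ("commencer", "VINF"), ("bon", "ADJ"), ("alors", "ADV")]
labels = ["B-NP", "B-VN", "B-VN", "B-IntP", "B-AdP"]

X, y = [[token_features(sent, i) for i in range(len(sent))]], [labels]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))
```

In a setup of this kind the chunker sees both the token and its (possibly wrong) POS tag, which is what allows regular tagging errors to be compensated for.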
Table 3: proportions of the different chunk types in the different corpora

The detailed results obtained for the different chunk types in the last two experiments are presented in Table 4.
Type of chunk   Experiment 4                              Experiment 5
                Precision   Recall    F-measure           Precision   Recall    F-measure
AP              50.73%      71.23%    59.26               71.76%      64.38%    67.87
AdP             55.9%       79.48%    65.64               83.78%      85.83%    84.79
CONJ            89.42%      89.42%    89.42               89.8%       91.42%    90.6
IntP            33.33%      0.12%     0.24                95.82%      91.87%    93.8
NP              81.16%      85.34%    83.2                91.93%      90.6%     91.26
PP              71.99%      81.55%    76.48               81.57%      82.41%    81.99
UNKNOWN         N/A         N/A       N/A                 92.86%      18.57%    30.95
VN              78.13%      87.23%    82.43               89.75%      90.89%    90.32
Table 4: results for the different chunk types in the last two experiments

A synthesis of the results of all our experiments is presented in Table 5.
Experiment                                                         POS accuracy (%)   Micro-average   Macro-average
First approach: chunker learned on written text (reference: column IV)
  POS not corrected                                                80.98              77.24           76
  POS corrected                                                    100                87.74           88.43
Second approach: learning a speech-specific chunker (reference: column V)
  POS corrected                                                    100                96.65           96.08
  Trained on corrected POS, tested on uncorrected POS              80.98              73.81           59.62
  POS not corrected                                                80.98              88.84           81.76
Table 5: synthesis of the micro- and macro-averaged F-measure results over all our experiments

5 Conclusion

First of all, our first series of experiments shows that a morpho-syntactic tagger combined with a chunker, both learned on a written source corpus, makes about 17% additional errors on POS tags, and 20% on chunking, when applied to transcribed spoken target data. This large gap justifies looking for adaptation or workaround strategies for processing spoken corpora.
1 http://www.llf.cnrs.fr/Gens/Abeille/French-Treebank-fr.php
2 http://eslo.tge-adonis.fr/
3 http://www.lattice.cnrs.fr/sites/itellier/SEM.html
4 http://wapiti.limsi.fr
Since the chunking error is not much larger than the POS error, correcting the POS tags appears a priori to be the most "natural" solution. This manual correction of the POS tags improves the chunking result by 10 F-measure points on average, but it remains 10 points below the chunker's average performance on written text. Even with perfect POS tagging, the gap between written and spoken data in terms of chunking thus amounts to these 10 points on average.

Correcting the chunk labels directly therefore appears to be the logical continuation of this approach. To do so, we chose to stick to the properties of speech rather than trying at all costs to force the spoken data into the framework defined for written text, hence the two new chunk types we introduced. In doing so, we did not take the easy way out, since the chunking task becomes more complex (eight chunk types must now be discriminated instead of six). For the machine learning of a new speech-specific chunker, we chose to work only at the chunk level, for which a small amount of training data may suffice.

The three experiments of the second approach make it possible to characterize quite precisely the contribution of POS tags to the chunking phase. With POS tags that are correct and consistent with the chunks (first experiment), machine learning plays its role perfectly and allows a chunker to be learned that is of as good quality as the one learned on written text with far more data. There is thus no curse specific to speech as far as chunking is concerned: even disfluencies can be handled well, provided reference examples are available, even in limited quantity. On the other hand, such a chunker depends heavily on the POS tags it relies on: without manual correction (second experiment of the series), its performance drops. It is therefore not really usable in real conditions: indeed, if the POS tags have to be corrected anyway, one might as well retrain a POS tagger for speech in that case.

The last experiment is the most promising: it shows that a speech-specific chunker of fairly good quality (including, for example, for the recognition of interjections) can be learned from only a small amount of annotated data, and moreover with mediocre POS tags (not adapted to speech). The POS errors were indeed compensated for by the learning of the chunker, which on average makes fewer chunking errors than there are POS-tagging errors. The words, even in small quantities, make this compensation possible, and probably also the fact that the POS errors are sufficiently "regular" for the chunker to be able to "rectify" them.

Machine learning of a speech-specific chunker therefore seems to be able to do fairly well without correct POS tagging. It is interesting to note that the results obtained by the chunker in the last experiment are very close to those of the second experiment of the first approach, i.e. applying SEM to manually corrected POS tags. The difference is that the new chunker obtained in the last experiment can be applied to new spoken data without any further manual correction, which is not the case for the other experiment. Thus, if data has to be corrected at all, it seems better to work on the data used for training (the new chunk labels in the last experiment) than on the data that serves as input to an already-trained program (the POS tags in experiment 2).

It remains, of course, to confirm that the same kind of approach can work in other contexts, for example for other tasks (named-entity recognition could replace chunking), or with another kind of variation than written/spoken, such as a change of domain or of writing style (tweets could, for instance, replace speech). It had, moreover, already been observed that directly learning a new tagger focused on a target task is preferable to a sequence of intermediate learning steps (Eshkol et al., 2010). The cumulative nature of errors is therefore not inevitable: it seems that a "high-level" task can be carried out successfully by relying, through machine learning, on "lower-level" information of mediocre quality, as long as the correction of errors from one level to the next follows a certain regularity.
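For readers who wish to recompute the micro- and macro-averages reported in Tables 4 and 5, the following self-contained sketch (ours, not the evaluation script used in the experiments) shows the intended computation from per-chunk-type true-positive/false-positive/false-negative counts; the counts used in the example are made up.

```python
# Illustrative micro-/macro-averaged F-measure over chunk types from
# (tp, fp, fn) counts; not the paper's actual evaluation code.
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def micro_macro(counts):
    # counts: {chunk_type: (tp, fp, fn)}
    macro = sum(f_measure(*c) for c in counts.values()) / len(counts)
    tp = sum(c[0] for c in counts.values())
    fp = sum(c[1] for c in counts.values())
    fn = sum(c[2] for c in counts.values())
    return f_measure(tp, fp, fn), macro   # (micro, macro)

toy_counts = {"NP": (900, 80, 95), "IntP": (450, 20, 40)}   # illustrative values
print(micro_macro(toy_counts))
```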
ABNEY S. (1991). Parsing by chunks. In R. Berwick, S. Abney, and C. Tenny, editors, Principle-Based Parsing. Kluwer Academic Publishers.
ABEILLE A., CLEMENT L. et TOUSSENEL F. (2003). Building a treebank for French. In A. Abeillé, editor, Treebanks. Kluwer, Dordrecht.
ADDA-DECKER M., HABERT B., BARRAS C., ADDA G., BOULA DE MAREÜIL P., PAROUBEK P. (2003). A disfluency study for cleaning spontaneous speech automatic transcripts and improving speech language models. In Proceedings of the ISCA Tutorial and Research Workshop on Disfluency in Spontaneous Speech (DiSS'03), 67-70.
ANTOINE J-Y., GOULIAN J., VILLANEAU J. (2003). Quand le TAL robuste s'attaque au langage parlé : analyse incrémentale pour la compréhension de la parole spontanée. Actes de TALN 2003, 25-34.
ANTOINE J-Y., MOKRANE A. et FRIBURGER N. (2008). Automatic rich annotation of large corpus of conversational transcribed speech: the chunking task of the EPAC project. In Proceedings of LREC'2008.
BENZITOUN C. (2004). L'annotation syntaxique de corpus oraux constitue-t-elle un problème spécifique ? Actes de RÉCITAL.
BLANC O., CONSTANT M., DISTER A. et WATRIN P. (2008). Corpus oraux et chunking. Actes des Journées d'étude sur la parole (JEP), Avignon, France.
BLANC O., CONSTANT M., DISTER A. et WATRIN P. (2010). Partial parsing of spontaneous spoken French. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC'10).
BLANCHE-BENVENISTE C. (2005). Les aspects dynamiques de la composition sémantique de l'oral. Sémantique et corpus, A. Condamines (dir.), Londres, Hermes, 40-73.
BLANCHE-BENVENISTE C., JEANJEAN C. (1987). Le français parlé, transcription et édition. Paris, Didier Erudition.
BLANCHE-BENVENISTE C. (1997). Approches de la langue parlée en français. Paris, Ophrys.
BLANCHE-BENVENISTE C. (2000). Transcription de l'oral et morphologie. Romania Una et diversa, Philologische Studien für Theodor Berchem (Gille M. et Kiesler R. Eds). Tübingen : Gunter Narr, 61-74.
CONSTANT M., TELLIER I. (2012). Evaluating the impact of external lexical resources unto a CRF-based multiword segmenter and part-of-speech tagger. In Proceedings of LREC 2012.
CRABBE B., CANDITO M. (2008). Expériences d'analyse syntaxique du français. Actes de Traitement Automatique des Langues Naturelles (TALN 2008), Avignon.
DISTER A. (2007). De la transcription à l'étiquetage morphosyntaxique. Le cas de la banque de données textuelle orale VALIBEL. Thèse de Doctorat, Université de Louvain.
ESHKOL I., TELLIER I., TAALAB S., BILLOT S. (2010). Étiqueter un corpus oral par apprentissage automatique à l'aide de connaissances linguistiques. Actes des 10es Journées Internationales d'analyse statistique des données textuelles (JADT 2010).
ESHKOL-TARAVELLA I., BAUDE O., MAUREL D., HRIBA L., DUGUA C., TELLIER I. (2012). Un grand corpus oral « disponible » : le corpus d'Orléans 1968-2012. Dans Ressources linguistiques libres, TAL 52, n° 3, 17-46.
HENRY S. (2005). Quelles répétitions à l'oral ? Esquisse d'une typologie. G. Williams (Éd.), La Linguistique de corpus, Rennes, Presses universitaires de Rennes, 81-92.
LAVERGNE T., CAPPE O. et YVON F. (2010). Practical very large scale CRFs. In Proceedings of ACL'2010, 504-513.
LAFFERTY J., MCCALLUM A. et PEREIRA F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML 2001, 282-289.
SAGOT B. (2010). The Lefff, a freely available, accurate and large-coverage lexicon for French. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC'10).
SHA F., PEREIRA F. (2003). Shallow parsing with conditional random fields. In Proceedings of HLT-NAACL, 213-220.
TELLIER I., DUCHIER D., ESHKOL I., COURMET A., MARTINET M. (2012). Apprentissage automatique d'un chunker pour le français. Actes de Traitement Automatique des Langues Naturelles (TALN 2012).
TELLIER I., ESHKOL I., TAALAB S., PROST J-P. (2010). POS-tagging for Oral Texts with CRF and Category Decomposition. Research in Computer Science, special issue: Natural Language Processing and its Applications, 79-90.
VALLI A., VERONIS J. (1999). Etiquetage grammatical des corpus de parole : problèmes et perspectives. L'oral spontané. Revue Française de Linguistique Appliquée IV-2, 113-133. |
29,382,717 | SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets | Sentiment analysis is the process of identifying the opinion expressed in text. Recently it has been used to study behavioral finance, and in particular the effect of opinions and emotions on economic or financial decisions. SemEval-2017 task 5 focuses on the financial market as the domain for sentiment analysis of text; specifically, task 5, subtask 1 focuses on financial tweets about stock symbols. In this paper, we describe a machine learning classifier for binary classification of financial tweets. We used natural language processing techniques and the random forest algorithm to train our model, and tuned it for the training dataset of Task 5, subtask 1. Our system achieves the 7th rank on the leaderboard of the task. ACL 2017 Submission 116. Confidential review Copy. DO NOT DISTRIBUTE. | [
1957433
] | SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets
SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets

Narges Tabari ([email protected])
Armin Seyeditabari ([email protected])
Wlodek Zadrozny ([email protected])

Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), Vancouver, Canada, August 3-4, 2017
Sentiment analysis is the process of identifying the opinion expressed in text. Recently it has been used to study behavioral finance, and in particular the effect of opinions and emotions on economic or financial decisions. SemEval-2017 task 5 focuses on the financial market as the domain for sentiment analysis of text; specifically, task 5, subtask 1 focuses on financial tweets about stock symbols. In this paper, we describe a machine learning classifier for binary classification of financial tweets. We used natural language processing techniques and the random forest algorithm to train our model, and tuned it for the training dataset of Task 5, subtask 1. Our system achieves the 7th rank on the leaderboard of the task.
Introduction
The recent explosion of textual data creates an unprecedented opportunity for investigating people's emotions and opinions, and for understanding human behavior. Although there are several methods to do this, sentiment analysis is an especially effective method of text categorization that assigns emotions to text (positive, negative, neutral, etc.). Sentiment analysis methods have been used widely on blogs, news, documents and microblogging platforms such as Twitter.
Although social media and blogs are popular and widely used platforms for discussing many different topics, they are challenging to analyze. This is to a large extent due to the specifics of vocabulary and syntax, which are topic-dependent, with the same words possibly expressing different sentiments in different contexts. For example, a word in a casual context might have a positive or neutral sentiment (e.g., crush), while the same word generally has a negative sentiment in finance. Therefore, in the absence of general natural language understanding, context-dependent and domain-specific approaches allow us to increase the accuracy of sentiment analysis at a relatively low implementation cost.
Domain-specific sentiment analysis is being used to analyze or investigate various areas in finance, such as corporate finance and financial markets, investment and banking, asset and derivative pricing. Ultimately, the goal is to understand the impact of social media and news on financial markets and to predict the future prices of assets and stocks.
The proposed task in SemEval-2017 targets a sentiment analysis task in which we should identify a range of negative to positive affect toward the stock of certain companies. The objective of the task was to predict the sentiment associated with companies and stocks as floating point values in the interval from -1 to 1.
Previous research on textual analysis in a financial context has primarily relied on the use of bag-of-words methods to measure tone (Tetlock, 2007). (Loughran & McDonald, 2011), one of the prominent efforts to improve sentiment analysis in the financial domain, showed that using non-financial word lists for sentiment analysis will produce misclassifications and misleading results. To illustrate this, they used the Harvard-IV-4 list on financial reports and found that 73.8% of the negative word counts were attributable to words that were not actually negative in a financial context.
Recently, there has been increasing interest in the use of machine learning techniques to obtain better sentiment results; e.g., a naïve Bayesian classifier with various features (Saif, He and Alani, 2012) reached an accuracy of 83.90%. Other reported results include the use of support vector machines (SVMs) with an accuracy of 59.4% (O'Hare et al., 2009), and multiple-classifier voting systems with 72% accuracy (Das & Chen, 2007).
In this paper, we describe our approach to building a supervised classifier predicting the sentiment scores of financial tweets provided by SemEval-2017. The classifier is fed preprocessed tweets as input and predicts the binary labels of the tweets. Once tweets were preprocessed and features were extracted, various classification models were applied using the Weka tool (Hall et al., 2009). This environment contains a collection of machine learning algorithms for data mining tasks, such as classification, regression, clustering, association rules, and visualization. We ultimately used Random Forest as our classifier, as in our various tests it showed the best accuracy in classifying the tweets. After predicting the binary labels, we then use the probability of the tweets being correctly classified to create a range of predictions from -1 to 1, as requested in the task.
Method
Preprocessing the data
SemEval task 5, subtask 1 provided a training dataset with 1800 tweets. Every tweet had a sentiment score between -1 and 1, expressing its sentiment toward the stock symbol that was assigned to that tweet. Table 1 describes the variables in the training dataset we used for analyzing the tweets. To prepare the dataset for classification, we first converted the sentiment scores to -1, 0 and 1. Tweets with sentiments between -0.01 and 0.01 were labeled as zero, positive sentiments were labeled as 1, and negative tweets were labeled as -1. We then disregarded the tweets with neutral sentiment, which left us 1560 tweets to train our model. Some tweets had multiple spans describing the sentiment toward the cashtag. To keep things simple, we concatenated the spans of each tweet. Then, using the Python NLTK library, we deleted the punctuation, tokenized the spans, and deleted the stop words.
Since certain stop words can affect the sentiment of a tweet in a financial context, we excluded them from the stop word list: words like "up" and "down" were not removed from the tweets. We also removed the negations from the stop word list, as we later handle negations ourselves when creating the features.
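As an illustration of this preprocessing step, the sketch below uses NLTK in the way just described; the exact keep-list of stop words is our own guess and may differ in detail from the submitted system.

```python
# Sketch of the span preprocessing: tokenize, drop punctuation, and remove
# stop words except finance-relevant ones ("up", "down") and negations.
# Requires nltk.download("punkt") and nltk.download("stopwords").
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

KEEP = {"up", "down", "no", "not", "nor", "don", "didn", "doesn", "won", "isn", "aren"}
STOP = set(stopwords.words("english")) - KEEP

def preprocess(span):
    tokens = word_tokenize(span.lower())
    tokens = [t for t in tokens if t not in string.punctuation]
    return [t for t in tokens if t not in STOP]

print(preprocess("$GE is not going up today, sell the bounce!"))
```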
Feature Selection Process
To add features to our training dataset, we used the McDonald's wordlist (Loughran & McDonald, 2011). This is a list of positive and negative words for financial 10-K reports containing the summary of the company's performance.
We calculated the number of positive and negative words in each span, using the McDonald wordlist, as added features. There were some words, such as "short", that were not in any wordlist as negative words, yet shorting a stock expresses a negative sentiment toward that stock. For this reason, we manually added to each list positive or negative words that, to the best of our knowledge, carry those sentiments. Adding these words to the wordlist improved our results. We then realized that in the context of finance, the co-occurrence of certain words in one tweet changes the sentiment of the tweet completely. For example, "short" and "sell" are both negative words in the context of finance, but selling a short carries a positive sentiment in the stock market context. Other examples are the co-occurrence of "go" and "down", or "pull" and "back", in our tweets. In a similar fashion we also handled negations. Once we found these patterns, we normalized our data, i.e. we replaced the combinations of words in the tweet with a single positive or negative label, which we treated just as another positive or negative word. We then re-counted the number of positive and negative words in the tweet and updated our feature vectors. Table 3 shows examples of patterns we found to change the sentiment of a tweet. The normalization had the benefit of increasing the counts of rarely occurring expressions.

Table 4. Results of different Weka classifiers using 10-fold cross-validation and default settings.
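A rough sketch of these two count features and of the word-pair normalization is given below; the mini word lists and pair tables are small excerpts from Appendices A and B, and all function names are ours.

```python
# Sketch of the count features: normalize sentiment-flipping word pairs,
# then count positive/negative terms from the (extended) McDonald lists.
POSITIVE = {"profit", "long", "upside", "buy"}           # excerpt, Appendix A
NEGATIVE = {"short", "decay", "bearish", "sell"}         # excerpt, Appendix A
POS_PAIRS = {("exit", "short"), ("short", "cover"), ("not", "bad")}   # Appendix B
NEG_PAIRS = {("go", "down"), ("pull", "back"), ("not", "good")}       # Appendix B

def count_features(tokens):
    toks = set(tokens)
    norm = list(tokens)
    # Replace co-occurring pairs by a single positive/negative marker token.
    for a, b in POS_PAIRS:
        if a in toks and b in toks:
            norm = [t for t in norm if t not in (a, b)] + ["_POSPAIR_"]
    for a, b in NEG_PAIRS:
        if a in toks and b in toks:
            norm = [t for t in norm if t not in (a, b)] + ["_NEGPAIR_"]
    pos = sum(t in POSITIVE or t == "_POSPAIR_" for t in norm)
    neg = sum(t in NEGATIVE or t == "_NEGPAIR_" for t in norm)
    return {"positive_count": pos, "negative_count": neg}

print(count_features(["exit", "short", "position", "profit"]))
```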
Sentiment Prediction
After pre-processing our data and creating all our features (Tweet, Positive-Count, Negative-Count), we used WEKA to classify our tweets. Our feature vectors were the combination of document vectors generated by Weka's StringToWordVector filter, followed by the features extracted from the data as explained above. Among all the classification methods that we tried, Random Forest gave us the best result, with an accuracy of 91.2%. Table 4 shows results from various classifiers on our training data. The random forest model in WEKA provided both a class prediction and a class probability for each tweet in the training and test sets.
Since the final float score needed to be between -1 and 1, for tweets classified as negative we made the sentiment score the negative of the class probability; for positive classifications, the sentiment score was simply the class probability.
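The mapping from the binary output to the required score in [-1, 1] can be written in a few lines. The sketch below uses scikit-learn's Random Forest as a stand-in for the Weka model actually used, with made-up toy features; it is our reconstruction, not the submitted code.

```python
# Sketch: signed sentiment score from a binary classifier's class probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors (e.g. positive_count, negative_count) and labels.
X_train = np.array([[2, 0], [0, 3], [1, 0], [0, 2]])
y_train = np.array([1, -1, 1, -1])              # +1 bullish, -1 bearish

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def sentiment_score(x):
    proba = clf.predict_proba([x])[0]           # aligned with clf.classes_
    i = int(proba.argmax())
    label, confidence = clf.classes_[i], float(proba[i])
    return confidence if label == 1 else -confidence   # score in [-1, 1]

print(sentiment_score([1, 1]))
```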
Other Experiments
We have done several other experiments first to find a promising approach, and to gauge alternative methods of classification and data preprocessing.
In our initial experiment, after pre-processing the tweets, we first classified the tweets in WEKA using only the feature vector from WEKA's StringToWordVector, which is a term-document matrix. Random Forest and Logistic Regression had the highest accuracies, of 83.3% and 85.3% respectively. This experiment shows the impact of our additional features to be around 6%.
Before deciding on the final features of the model, we tried other types of features. Although many of them did not improve the model, we still think they are worth mentioning; a description of each follows:
Bigrams: In the first experiment, bigrams were used. (Kouloumpis, Wilson, & Moore, 2011) showed that using unigrams and bigrams is effective in improving sentiment analysis. (Dave et al., 2003) reported that bigrams and trigrams worked better than unigrams for polarity classification of product reviews. Unfortunately, bigrams reduced the accuracy of Random Forest and Logistic Regression to 76.7% and 73.9% respectively. We imagine that with a larger data set, bigrams might be valuable.
Feature selection using logistic regression: In another experiment, we used logistic regression to produce a list of words with the higher odds ratio. We then removed other words from tweets, in an attempt to amplify the stronger signals. However, applying filtered tweets, with various ranges of odds ratio did not help with improving the results. The best result was when words only with odds ratio of [-5, 5] stayed in our training set; this gave us the accuracy of 83.5%.
Using word embeddings (GloVe vectors): GloVe vectors (Pennington, Socher, & Manning, 2014) are vector representations of words. In two separate experiments, we used vectors based on Common Crawl (840B tokens, 2.2M vocab, cased, 300 dimensions) and the pre-trained word vectors for Twitter (2B tweets, 27B tokens, 1.2M vocab, 200 dimensions). We represented every word in each tweet by its corresponding vector and calculated the tweet vector as the mean of the word vectors of the tweet. In this experiment, McDonald's (Loughran & McDonald, 2011) positive and negative wordlists were again used; that is, we created a positive and a negative vector using the words in those lists. By comparing the cosine similarity of tweet vectors with the positive and negative vectors, we classified the tweets. The accuracy of this method was 72% and 73.8% for the Twitter and Common Crawl vectors respectively.
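The embedding-based experiment can be sketched as follows; loading the pre-trained Twitter GloVe vectors through gensim and the tiny word lists shown are our own illustrative choices, not the authors' exact setup.

```python
# Sketch of the GloVe experiment: average word vectors per tweet and compare,
# by cosine similarity, against mean "positive" and "negative" list vectors.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-twitter-200")           # pre-trained Twitter GloVe

def mean_vector(words):
    vecs = [glove[w] for w in words if w in glove]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

pos_vec = mean_vector(["profit", "gain", "upside"])      # illustrative list
neg_vec = mean_vector(["loss", "decline", "bearish"])    # illustrative list

def classify(tweet_tokens):
    v = mean_vector(tweet_tokens)
    return "positive" if cosine(v, pos_vec) >= cosine(v, neg_vec) else "negative"

print(classify(["stock", "falls", "after", "earnings"]))
```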
Conclusion
The purpose of this paper was to create a classification method for SemEval-2017 task 5, subtask 1. In our approach, after pre-processing the data, negation handling, and feature selection, we used Weka to classify our data with the Random Forest algorithm. Our classifier was ranked 7th and achieved an accuracy of 91.26%.
In the next step, we think it is important to capture more complex linguistic structure, irony, idioms, and poorly structured sentences in the financial domain. In this regard, we would like to apply dependency parse trees to tweets to see if that would improve our results; it might also be necessary to capture some of the idiomatic constructions in this domain.
Also, the SemEval-2017 training dataset was relatively small, which prevented us from implementing any neural network models for prediction. Therefore, we think a step towards a better model is to increase the size of the training dataset.
Label       Description
ID          Each tweet was assigned a unique ID
Span        Part of the tweet that was considered to carry the sentiment toward the company or stock
Sentiment   Score provided to us, with numbers between -1 and 1
Cashtag     Stock symbol that was the target of each tweet, e.g. $GE

Table 1. Attributes used to create the sentiment classification model.
Table 2 shows some of the words that were added to McDonald's wordlist:

Word     Sentiment
Profit   Positive
Long     Positive
Short    Negative
Decay    Negative

Table 2. Example of the words added to McDonald's wordlist. (See full list in Appendix A.)
Table 3. Example of the word couples and their replacements used to normalize the data (tweets). (See full list in Appendix B.)
http://www.nltk.org/
Appendix A. Words Added to McDonald's Wordlist.

Negative words: cult, brutal, fucked, suck, decay, bubble, bounce, bounced, low, lower, selloff, disgust, meltdown, downtrend, bullshit, shit, breakup, dropping, cry, dumped, torture, short, shorts, shorting, fall, falling, sell, selling, sells, bearish, slipping, slip, sink, sinked, sinking, pain, shortput, nervous, damn, downtrends, censored, toppy, scam, censor, garbage, risk, steal, retreat, retreats, sad, dirt, flush, dump, plunge, crush, crushed, crying, unhappy, drop, broke, overbought.

Positive words: epic, highs, recover, profit, long, upside, love, interesting, loved, dip, dipping, secure, longs, longput, rise, able, buy, buying.

Appendix B. Full List of Word Couples to Detect the Sentiment of a Tweet.

Positive word couples: (go, up), (short, trap), (exit, short), (sell, exhaust), (didnt, stop), (short, cover), (close, short), (short, break), (cant, risk), (not, sell), (dont, fall), (sold, call), (dont, short), (exit, bankruptcy), (not, bad), (short, nervous), (dont, underestimate), (not, slowdown), (aint, bad).

Negative word couples: (high, down), (lipstick, pig), (doesnt, well), (bounce, buy), (isnt, cheap), (fear, sell), (cant, down), (not, good), (wont, buy), (dont, trade), (buy, back), (didnt, like), (profit, exit), (go, down), (not, guaranteed), (not, profitable), (doesn't, upward), (not, dip), (pull, back), (not, optimistic).
Das, S. R., & Chen, M. Y. (2007). Yahoo! for Amazon: Sentiment Extraction from Small Talk on the Web. Management Science, 53(9), 1375-1388. http://doi.org/10.1287/mnsc.1070.0704
Dave, K., Lawrence, S., & Pennock, D. M. (2003). Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. Proceedings of the 12th International Conference on World Wide Web, 519-528. http://doi.org/10.1145/775152.775226
Kouloumpis, E., Wilson, T., & Moore, J. (2011). Twitter sentiment analysis: The good the bad and the omg! Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM 11), 538-541.
Loughran, T., & McDonald, B. (2011). When is a Liability not a Liability? Textual Analysis, Dictionaries, and 10-Ks. Journal of Finance, 66(1).
O'Hare, N., Davy, M., Bermingham, A., Ferguson, P., Sheridan, P., Gurrin, C., et al. (2009). Topic-Dependent Sentiment Analysis of Financial Blogs. International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion Measurement, 9-16. http://doi.org/10.1145/1651461.1651464
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 1532-1543. http://doi.org/10.3115/v1/D14-1162
Saif, H., He, Y., & Alani, H. (2012). Semantic sentiment analysis of twitter. Lecture Notes in Computer Science, 7649 LNCS (Part 1), 508-524. http://doi.org/10.1007/978-3-642-35176-1-32
Tetlock, P. C. (2007). Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3), 1139-1168. http://doi.org/10.1111/j.1540-6261.2007.01232.x |
7,864,490 | AUTOMATIC COMPILATION OF MODERN CHINESE CONCORDANCES | [] | AUTOMATIC COMPILATION OF MODERN CHINESE CONCORDANCES
Syunsuke Uemura
Electrotechnical Laboratory
1-1-4 Umezono, Sakura, Ibaraki 305, JAPAN
Yasuo Sugawara
Electrotechnical Laboratory
1-1-4 Umezono, Sakura, Ibaraki 305, JAPAN
Mantaro J. Hashimoto
Tokyo University of Foreign Studies
4-51-21 Nishigahara, Kita, Tokyo 114, JAPAN
Akihiro Furuya
Tokyo Metropolitan University
1-1-1 Yakumo, Meguro, Tokyo 152, JAPAN
AUTOMATIC COMPILATION OF MODERN CHINESE CONCORDANCES
INTRODUCTION
This paper describes an experiment to compile Chinese concordances automatically.
A very large volume of KWIC indexes for modern Chinese (one million lines per set) has been compiled successfully with a kanji printer for Japanese. This paper discusses the purposes of the experiment, selection and input of the Chinese data, some statistics on Chinese characters (vs. kanji) and the concordance compilation process.
Finally, examples from the computer-generated concordances are shown.
THE PURPOSES
The idea of machine-processing modern Chinese data originally came from Professor Yuen Ren Chao, Agassiz Professor Emeritus of Oriental Languages at the University of California at Berkeley, before one of the authors (Hashimoto) took over the directorship of the Princeton Chinese linguistics project.
Chao served as the chief of the advisory committee to the project since its foundation.
The idea, in short, was: so much has been said about the Chinese pai-hua-wen -- a written language of modern China -- yet nobody has ever clarified what it really was, i.e., what the basic vocabulary was, what the major syntactic structure was, etc.: in other words, every detail of the reality of pai-hua-wen.
Certain quantitative surveys were done before us, but even the most extensive one in those days was based on data consisting of no more than 100,000 characters.
In addition, the selection was very poorly done -- most of the materials were primary school textbooks.
We did not believe that school textbooks reflected the reality of the language, even in its written form. We chose one digit more than the previous one, namely 1,000,000 characters, though for various reasons, the actual data contained in our tape include several thousand more than one million [1, 2].
After completion of the computer input and editing of the million-character file at Princeton, research towards statistical aspects of the data has been conducted [4]. As stated in [4], tables of character frequency can tell us various aspects of the Chinese, such as the basic character set, transient states of character strings and so on. This can be summarized as the first step of computer-processing modern Chinese data. However, in order to understand the reality of a language, besides statistics, concordances are the necessities which illustrate the contexts where and how those characters are used.
On the other hand, computer applications to Chinese have a very limited background so far. No computer-generated concordances of Chinese have been reported yet. Thus the concordance generation project would not only be valuable to the understanding of Chinese pai-hua-wen, but also contribute to the development of the methodology to manipulate Chinese automatically. Consequently, a project to compile concordances of the Princeton million-character file was conducted at the Electrotechnical Laboratory during 1977-1979. This constitutes the second important stage of computer-processing modern Chinese.
THE CHINESE DATA
The Input of the Original Data
The first phase of the data input was done in Taiwan during 1969-1972 with a Chinese character keyboard designed by Cheng Chin Kao -- a Chinese teletype machine (manufactured by the Oki Denki Co., Ltd.). The code was converted into the Chinese standard telegraphic code in Waltham, Massachusetts, at a computer company. The greatest difficulty, in addition to ordinary proofreading, consisted in the conversion of the so-called "combination characters" of the C.C. Kao system: any character not found in the Kao keyboard was punched so that part of it (normally the "radical") was represented by a character having the same radical in the keyboard, and another by a character having the same "signific".
Necessary flags were of course attached to these "combination characters", yet the key punchers selected those constituent characters quite at random, sometimes disregarding the position of a radical within a character, so that the results were often a hopeless mess.
The Selection of the Data
It was tried, at the selection of the data, to cover every conceivable category and style of writings in China since her modernization, the so-called May 5 Movement period, from ordinary novels to philosophical writings, from political speeches to newspaper articles, etc. These categories and styles were classified and assigned appropriate marks to show the genre; for a complete list of all these writings and of the genre marks, see [3]. All the proper nouns were so marked, as they may not correctly contribute to any statistical measurement of the written language except for these proper nouns themselves. These nouns were marked in the original texts by research assistants with enough command of the language to make correct judgments. Anything else, including punctuation marks of all sorts, in the texts was properly processed.
Every sentence, including some vocative phrases, was numbered within the writing piece quite mechanically, though occasionally it was necessary for specialists to make certain judgments for segmenting sentences.
The Code System
The Chinese standard telegraphic code system includes some 9500 codes for Chinese characters. A code consists of a set of 4 digits, which represents one Chinese character. Among those 9500, 5231 have been used.
Statistics
Statistical analysis of this million-character file can be found in [4]. Some additional statistics are provided here. Fig. 1 shows the 10 most frequently used characters with their frequencies.
These 10 characters occupy 17.1% of the total amount. Fig. 2 is a table of character frequencies vs. the number of character types. Fig. 3 shows the cumulative percentage of character occurrences as a function of the number of character types (in descending order of frequency). It indicates, for example, that only 92 characters represent 47% of the entire data. There are 1170 characters each of which is used more than 100 times, and they occupy 92.8% of the whole data.
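Figures of this kind (character frequencies and the number of character types needed to cover a given share of the text) are easy to reproduce today; the short sketch below, written by us for illustration, computes such cumulative coverage for an arbitrary Chinese text. It is not the original 1970s tabulation code.

```python
# Sketch: character-type frequencies and cumulative token coverage,
# i.e. how many of the most frequent character types cover a given
# fraction of all character tokens.
from collections import Counter

def coverage(text, thresholds=(0.47, 0.928)):
    freqs = Counter(text)                      # frequency of each character type
    total = sum(freqs.values())
    running, out = 0, {}
    for rank, (_, f) in enumerate(freqs.most_common(), start=1):
        running += f
        for t in thresholds:
            if t not in out and running / total >= t:
                out[t] = rank                  # number of types needed for coverage t
    return out

print(coverage("一百万字的现代汉语语料" * 3))
```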
CHINESE CHARACTERS VS. KANJI
Chinese characters were imported into Japan sometime in the 5th century. Since then, they have been extensively used, with a few additional characters created in Japan (this modified set of Chinese characters is called "kanji"), although hiragana and katakana (two sets of pure Japanese characters with their origin also in the forms of Chinese characters) were invented early in the 9th century.
"Chinese characters for daily use", established by the Ministry of Education for modern Japanese, includes an 1850 kanji set; however, several thousand more are still in use, especially for proper nouns. The Japanese Industrial Standard (JIS) "Code of the Japanese Graphic Character Set for Information Exchange (C6226)", established in 1978, includes a 6349 kanji set, hiragana, katakana, the Roman alphabet, Greek letters, Russian letters and other symbols. The kanji set is grouped into 2 levels, the first level a 2965 kanji set and the second level a 3384 kanji set. This means some 3000 kanji are considered to be enough for basic information exchange in Japanese.
Fig. 3. Cumulative Percentage of Character Occurrences as a Function of the Number of Character Types

In this experiment, the kanji printer system T4100 (Syowa Zyoho, Co., Ltd.) was used. A total of 8182 characters was available for this printer, including 7360 kanji, hiragana, katakana, the Roman alphabet, and other miscellaneous symbols. The system was developed 5 years before the establishment of JIS C6226. As mentioned before, the million-character file included 5231 different Chinese characters. Among them, 295 were found to be unprintable (because they were not found in the T4100 system). The fonts of those 295 characters were designed and incorporated into the T4100 system. Later, when JIS C6226 was established, some of those 295 characters were found in the second level of the kanji set, namely eight characters with frequencies 773, 581, 563, 345, 343, 189, 178, and 158. Fig. 4 shows the frequency of the remaining 287 characters. Their total frequency numbers 1100, which is 0.1% of the million-character file. This fact indicates that Chinese characters and kanji still overlap closely in modern Chinese and Japanese.
(It should be noticed that the simplified Chinese characters are out of this scope since they did not exist at the so-called May 5 Movement period.)
THE CONCORDANCES
Besides the text itself, the Princeton million-character file contained information on the title, the author, the sentence numbers, and other miscellaneous editorial symbols (such as marks to indicate proper nouns). Extensive work had to be done to interpret and reform the editorial symbols. Fig. 5 shows the edited text sentences from the million-character file. After this editorial step and the incorporation of Chinese character fonts into the T4100 kanji printing system, the concordance compilation process was started. Since we have had experience with the automatic compilation of one-million line concordances in Japanese [5], not many technical difficulties were encountered, except some malfunctions of our old kanji printer. Discussions on the salient features of those Chinese concordances follow.
Key Words
KWIC index style has been adopted as the form of the Chinese concordances, since it is one of the most fundamental styles for computer-generated concordances.
Because there is no clear segmentation of words in Chinese, and because one character represents a fairly sizable amount of information, each character was chosen as a "key word". Furthermore, no elimination of "non-key words" was made. Every character (including punctuation) was chosen as a key character. In this sense, the concordance may be named an "All characters in context" index. Consequently, one million characters of data required one million lines of index.
Contexts
One of the deficiencies of the KWIC index style is that the context each line can show is limited to its line length. We could afford 55 characters for the context. Since one or two Chinese characters represent a word, this length can accommodate more than 30 words of information in English.
Reverse Sorted Index
Two types of KWIC index have been produced. One is the normal type, in which all lines are sorted in the ascending order of the Chinese standard telegraphic code of the key characters (plus 7 succeeding characters). Fig. 6 shows an example page from this type of index. The other is the so-called "reverse sorted" index. The major key for this type is the same as that of the normal type. The minor sort keys are the characters immediately preceding the key character. Thus all lines for one key character are listed in the ascending order of the code for the character immediately preceding the key character, and so on. Fig. 7 shows an example page from the reverse sorted concordance.
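A present-day reconstruction of the two index styles might look like the sketch below: every character is a key, and lines are sorted either by the key plus its following characters (normal style) or, within one key, by the immediately preceding characters (reverse sorted style). This is our illustration, not the original 1979 software.

```python
# Sketch of "all characters in context" KWIC indexing in the two styles
# described above.
def kwic_lines(text, window=10):
    for i, ch in enumerate(text):
        left = text[max(0, i - window):i]
        right = text[i + 1:i + 1 + window]
        yield (ch, left, right)

def normal_index(text):
    # sort by key character, then by the succeeding context
    return sorted(kwic_lines(text), key=lambda e: e[0] + e[2])

def reverse_sorted_index(text):
    # minor keys: characters immediately preceding the key, nearest first
    return sorted(kwic_lines(text), key=lambda e: (e[0], e[1][::-1]))

for key, left, right in normal_index("白话文是现代汉语的书面语")[:5]:
    print(f"{left:>10}［{key}］{right}")
```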
CONCLUDING REMARKS
The two sets of modern Chinese concordances can be reached at the National Inter-university Research Institute of Asian and African Languages and Cultures, Tokyo University of Foreign Studies. It should be noted that a concordance of one million lines amounts to over 25,000 pages (actually it counts 27,341), or 50 volumes of a 5cm-wide paper file. Before printing the whole index, engineers recommended that the linguists use the COM technique, but in vain. A microfiche version should have been produced for portability. Analysis of the concordances has just got off the ground. The resulting papers are expected to follow.
Fig. 4. Frequency Distribution of Chinese Characters which are not Found in the Kanji Set
Fig. 5. An Example from the Edited Text
Fig. 6. A Page Example from the Chinese Concordance (Normal Style)
Fig. 7. A Page Example from the Chinese Concordance (Reverse Sorted Style)

REFERENCES
[1] Kierman, F.A. and Barber, E.: "Computers and Chinese linguistics", Unicorn, No. 3 (1968)
[2] Boltz, W.G., Barber, E. and Kierman, F.A.: "Progress report on Pai-hua-wen computer count and analysis", Unicorn, No. 7, pp. 94-138 (1971)
[3] Hashimoto, M.J., et al.: "A grammatical analysis of the Princeton million-character computer file", Bulletin of the Chinese Language Society of Japan, No. 222, pp. 1-16, 36 (1975)
[4] Hashimoto, M.J.: "Computer count of modern Chinese morphemes", Computational Analysis of Asian and African Languages, No. 7, pp. 29-41 (1977)
[5] Uemura, S.: "Automatic Compilation and Retrieval of Modern Japanese Concordances", Journal of Information Processing, Vol. 1, No. 4, pp. 172-179 (1979) |
|
11,668,866 | Finite State Machines from Feature Grammars | A b stractThis paper describes the conversion of a set of feature grammar rules into a deterministic finite state machine that accepts the same language (or at least a well-defined related language). First the reasoning behind why this is an interesting thing to do within the Edinburgh speech recogniser project, is discussed. Then details about the compilation algorithm are given. Finally, there is some discussion of the advantages and disadvantages of this method of implementing feature based grammar formalisms.BackgroundReal-tim e continuous speech recognition is still not possible but is becom ing more possible each year. One of the many problems in recognition is doing sym bolic analysis in the higher levels of the system in a reasonable time.W ithin C STR , we are investigating analyses using high level G PSG -type formalisms (like that in [Gazdar85]) to describe the grammar of various restricted dom ains. This high level notation is then autom atically compiled into a basic feature grammar formalism called FBF ([Thompson89]) thus com piling out aliases, feature passing conventions etc. This FBF grammar is then used directly in the run-tim e recogniser within a chart parser.However, at run tim e, the many hypotheses predicted by the lower levels of the system give rise to many partial constituents in the chart. Thus a large am ount of tim e was spent in the chart doing unification. However, when we look at the real requirements of the lower level of the system (lexical access), we note that what is required in the majority of cases is merely a simple prediction of the next possible sym bol in a sentence from a given state.Consequently we started to think about ways to provide this information as quickly as possible. O bviously representing the grammar as a Finite State Machine would make lexical access prediction significantly faster. As we currently write our grammars in a high level formalism it seems wrong to throw that information away and start again, so we hope to find some form of com pilation from feature grammars to finite state grammars.Of course, the first theoretical point to note is that feature grammars are, in essence, contextfree thus allowing more com plex languages to be described than FSG s. For exam ple, there does | [] | Finite State Machines from Feature Grammars
Alan W Black
Centre for Speech Technology Research and Dept of Artificial Intelligence
University of Edinburgh
80 South Bridge, Edinburgh EH1 1HN
Finite State Machines from Feature Grammars
A b stractThis paper describes the conversion of a set of feature grammar rules into a deterministic finite state machine that accepts the same language (or at least a well-defined related language). First the reasoning behind why this is an interesting thing to do within the Edinburgh speech recogniser project, is discussed. Then details about the compilation algorithm are given. Finally, there is some discussion of the advantages and disadvantages of this method of implementing feature based grammar formalisms.BackgroundReal-tim e continuous speech recognition is still not possible but is becom ing more possible each year. One of the many problems in recognition is doing sym bolic analysis in the higher levels of the system in a reasonable time.W ithin C STR , we are investigating analyses using high level G PSG -type formalisms (like that in [Gazdar85]) to describe the grammar of various restricted dom ains. This high level notation is then autom atically compiled into a basic feature grammar formalism called FBF ([Thompson89]) thus com piling out aliases, feature passing conventions etc. This FBF grammar is then used directly in the run-tim e recogniser within a chart parser.However, at run tim e, the many hypotheses predicted by the lower levels of the system give rise to many partial constituents in the chart. Thus a large am ount of tim e was spent in the chart doing unification. However, when we look at the real requirements of the lower level of the system (lexical access), we note that what is required in the majority of cases is merely a simple prediction of the next possible sym bol in a sentence from a given state.Consequently we started to think about ways to provide this information as quickly as possible. O bviously representing the grammar as a Finite State Machine would make lexical access prediction significantly faster. As we currently write our grammars in a high level formalism it seems wrong to throw that information away and start again, so we hope to find some form of com pilation from feature grammars to finite state grammars.Of course, the first theoretical point to note is that feature grammars are, in essence, contextfree thus allowing more com plex languages to be described than FSG s. For exam ple, there does
not exist an equivalent finite state grammar for the (context-free) grammar
S → a S b
S → a b
which describes the language a^n b^n where n is greater than or equal to 1. However, if we set a finite limit on n then there does exist a (possibly very large but finite) FSM. Thus we could accept a^n b^n only where n is greater than or equal to one but less than some finite number d.
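To make the bounded-n point concrete, here is a small illustrative sketch (ours, not the paper's): once n is bounded by a finite d, the two counters below are bounded too, so the recogniser is effectively a finite state machine whose states are the pairs of counts.

```python
# Minimal sketch: a^n b^n with 1 <= n < d is a finite language, hence regular.
# States are the pairs (a_seen, b_seen); transitions exist only while the
# counts stay under the bound, and a state is final when the counts match.

def accepts_anbn(s, d):
    """Recognise a^n b^n for 1 <= n < d with an explicitly bounded state set."""
    a_seen, b_seen = 0, 0
    for ch in s:
        if ch == "a" and b_seen == 0 and a_seen < d - 1:
            a_seen += 1                      # still reading the a-prefix
        elif ch == "b" and 0 < a_seen and b_seen < a_seen:
            b_seen += 1                      # reading the matching b-suffix
        else:
            return False                     # transition undefined: reject
    return a_seen >= 1 and a_seen == b_seen  # final states: balanced counts

if __name__ == "__main__":
    assert accepts_anbn("aabb", d=4)
    assert not accepts_anbn("aaabb", d=4)
    assert not accepts_anbn("aaaabbbb", d=4)  # n = 4 exceeds the bound d - 1
```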
In terms of natural language, an equivalent example is the restriction that you can only have up to n levels of centre embedding within a language. This seems to be no less a restriction on a language than the restrictions you are imposing on that language when you try to write a grammar for it in the first place, irrespective of the grammar formalism.
Practically, there may be other problems in writing a compilation function from feature grammars to finite state grammars. There is of course the problem of the size of FSM created, as well as the time that is needed to generate it. Both these questions were open at the start of our investigation.
Because we hoped that this compilation need only be run occasionally and that the high level formalism could be debugged using a conventional chart parser, we feel that compilation time can be up to 12 hours without any problem. As for the resulting FSM, it seems that with today's workstations up to 100,000 transitions might be acceptable. But the question still remained: how big a feature grammar can be compiled within these constraints?
2 The Initial Structures
The grammarian first writes a grammar in the high level GPSG-like notation which is then translated to FBF. This translation is relatively simple: it merely converts the user-written form into an internal Lisp form, expanding aliases, feature passing conventions etc. The FBF formalism seemed like a good input to the FSM compiler as it is well defined and quite fixed within our system. FBF is effectively an assembly language for feature grammars. It is much in the spirit of PATR-II ([Shieber86]) but differs in that it uses term unification rather than graph unification as its basic operation, though that distinction is not important here.
The inputs to the FSM compilation are:
• a distinguished category
• a set of feature grammar rules
• a set of lexical entries
The lexicon consists of a mapping of atomic symbols to categories. In actual fact within our system these atoms are not words but preterminals. It is these preterminals which label the arcs of the generated finite state machine.
It should be added that FBF is not a prerequisite for this technique. Any feature grammar notation would be suitable (though the code would have to be changed).
3 The Compilation Process
The compilation takes place in five stages:
• conversion into internal structures for fast access. This consists of the conversion of categories in the grammar and lexicon into an internal form, consisting of an atomic type and a list of feature values, thus unification can be done more efficiently. Also, two indexes are created - one for the grammar and one for the lexicon - both indexed by category type, allowing efficient access to them.
• conversion of the grammar to a non-deterministic finite state machine. This is the main part - see the next section for details about this.
• removal of error states from the non-deterministic finite state machine. States can be created which cannot lead to final states; these are removed, as well as all arcs pointing to them.
• determinising. Standard determinising of the finite state machine (as described in [Hopcroft79 p. 22]).
• analysis to produce statistics; this finds the size, average and maximum branching rates.
The Actual Conversion
The conversion is done by building "agenda states" on an agenda and processing them until the agenda is empty. An "agenda state" consists of the following:
• a depth - the number of rewrites that are required to get the first category in the remainder
• a list of remaining categories - these are the categories (preterminal or otherwise) that have yet to be found before the end of a sentence is reached
• a set of variable bindings
• a state in the non-determinised machine
The basic loop starts with an initial "agenda state" with the following settings:
• a depth of 0
• a list containing only the distinguished category
• a set of empty bindings
• the initial state of the (non-deterministic) FSM
The processing is as follows:
Take an "agenda state" from the agenda and take its remainder. Rewrite the first category in the remainder, using the grammar, in all ways, recursively, until either the depth limit is met or a lexical category is found (i.e. a category which is in the lexicon).
Rewrites are made by replacing the first category with the right hand side of a grammar rule whose left hand side unifies with the first category. Thus a rewrite changes the first category, increments the depth, and possibly binds some variables [1]. Also, in addition to the right hand side, a special "end-subrule" marker (em) is added so that we can tell when to decrease the depth count. For example, S may rewrite as follows [2]:
S ==> NP VP em
  ==> Det Noun em VP em
Then for each rewrite, check the lexicon and find all entries that can match the first category. For each such entry, add a transition in the non-deterministic FSM, labelled with that lexical item, from the state in the current "agenda state" to a new state. This may be a (truly) new state or an already existing state. Each state in the non-deterministic FSM has a "state descriptor" which symbolizes which categories from this state would lead to a final state. The state descriptor is constructed by taking the remaining categories list and dereferencing the variables, removing the "end-subrule" markers, and replacing any unbound variables with a unique atom name representing a variable [3]. Thus no unification is required in searching; a simple Lisp EQUAL is adequate (actually a more complex indexing system is used).
When looking for a "new state", the state descriptor of the required state is constructed and a (rather large) index is checked to find if such a state already exists; if so, the new transition points to the state related to that "state descriptor".
If a truly new state is required a corresponding new "agenda state" is created. The "cdr" of the remaining categories list is taken: that is, the next category is found in the remainder list, any "end-subrule" markers which precede it are removed, and the depth is decremented.
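The following is a minimal sketch of the agenda loop as we read the description above. It is illustrative only (not the project's Common Lisp code): it is restricted to atomic categories so that unification reduces to plain equality, and it ignores variable bindings, per-state depths and the left-recursion treatment discussed later. The toy grammar at the end is one that is consistent with the worked example below.

```python
# Hypothetical sketch of the agenda-driven conversion described above,
# restricted to atomic categories.

EM = "em"  # the "end-subrule" marker

def rewrites(remainder, grammar, lexical_cats, depth=0, depth_limit=6):
    """Rewrite the first category in all ways until it is lexical
    or the depth limit is reached; yield the resulting remainders."""
    first = remainder[0]
    if first in lexical_cats:
        yield remainder
    elif depth < depth_limit:
        for lhs, rhs in grammar:
            if lhs == first:  # with feature categories this would unify
                yield from rewrites(list(rhs) + [EM] + remainder[1:],
                                    grammar, lexical_cats, depth + 1, depth_limit)

def descriptor(remainder):
    """State descriptor: the remaining categories with em markers stripped."""
    return tuple(c for c in remainder if c != EM)

def compile_nfsm(start, grammar, lexicon, depth_limit=6):
    """Convert a CFG plus lexicon into a non-deterministic FSM."""
    lexical_cats = set(lexicon.values())
    transitions, seen = set(), {descriptor([start])}
    agenda = [[start]]
    while agenda:
        remainder = agenda.pop()
        src = descriptor(remainder)
        for rewritten in rewrites(remainder, grammar, lexical_cats,
                                  depth_limit=depth_limit):
            first, rest = rewritten[0], rewritten[1:]
            while rest and rest[0] == EM:     # drop markers / decrease depth
                rest = rest[1:]
            dst = descriptor(rest)
            for word, cat in lexicon.items():
                if cat == first:              # lexicon entries matching first
                    transitions.add((src, word, dst))
            if rest and dst not in seen:      # a truly new state: queue it
                seen.add(dst)
                agenda.append(list(rest))
    return transitions, {()}                  # empty remainder = final state

grammar = [("S", ("NP", "VP")), ("NP", ("Det", "Noun")),
           ("NP", ("PropNoun",)), ("VP", ("Verb", "NP"))]
lexicon = {"the": "Det", "boy": "Noun", "Hanako": "PropNoun", "saw": "Verb"}
trans, finals = compile_nfsm("S", grammar, lexicon)
```

On this toy grammar, the state descriptors that appear should correspond to the states walked through in the example below (a1 = (S), a2 = (VP), a3 = (Noun VP)).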
An Example
For the sake of brevity the example grammar used here is only a standard context-free grammar with atomic categories rather than a feature grammar. Thus we use EQUAL as our test operator, while with feature grammars we would use unification, and record any resulting bindings. The lexicon includes entries such as Hanako → PropNoun and saw → Verb. Let us go through some of the steps. The first stage is an agenda state whose remainder contains only the distinguished category S (call its state a1). Rewriting S gives, among others, agenda states with depth: 2, remainder: (PropNoun em VP em) and depth: 2, remainder: (Det Noun em VP em). We then add transitions from a1 to two new states labelled with "the" and "Hanako", like so:
[FSM diagram: arcs labelled "the" and "Hanako" from state a1 to two new states]
We then create two new "agenda states" and add them to the agenda:
depth: 1, remainder: (VP em), state: a2
depth: 2, remainder: (Noun em VP em), state: a3
Now consider the second one. As Noun is already a lexical category, there is no need to rewrite it. We can add a transition from a3 to a "new state". To find the "state descriptor" of this "new state" we first remove the first category, and then remove any "em" markers, decrementing the depth accordingly. The resulting remainder and depth is
depth: 1, remainder: (VP em)
Then we create the "state descriptor" from this new remainder, which will give simply (VP), which is the same as the descriptor of a2. Thus this new arc labelled with "boy" will go from a3 to a2. Like this:
[FSM diagram: the arc labelled "boy" from a3 rejoins the existing state a2]
Thus we only need one occurrence of the VP despite there being two "types" of NP. Of course in larger grammars, we would probably have two parts of the FSM representing VPs, one dealing with singular subject VPs, and the other with plural VPs (actually there may be more depending on the distinctions made in the grammar). This of course means building a large FSM, but that is, in part, the object of this exercise, trading space (i.e. the size of the FSM) with time (reducing the number of unifications required).
[4] No bindings are shown as we are dealing with a simple atomic CFG.
3.1 Getting Loops from Recursion
Consider the following three rules in isolation:
NP → NP PP
NP → Det Noun
PP → Prep NP
If we can collapse recursion into loops, we can represent these three rules by the very simple FSM:
[FSM diagram: a loop labelled "prep" collapses the recursive PP rule]
We have two problems to deal with here, left recursion and right recursion. Left recursion is a lot harder to deal with than right recursion. With left recursion, during the rewrite stage we must check to see if we have already used the rule during this rewrite. If we detect this, we construct the new rewrite in a different way.
Instead of replacing the first category with its expansion, we find: what the non-recursive rewrites are; and the rules which introduce the rewrites. For the sake of description we will consider the case where there is only one non-recursive and one recursive rule, as in this example. Thus we have a "non-recursive rewrite" (Det Noun em) and a "non-recursive part of a recursive rule" (PP em - from the rule NP → NP PP). We then construct a new remainder (for an "agenda state"):
("non-recursive rewrite" ("non-recursive part of a recursive rule") "top remainder")
When there are multiple occurrences of the first two parts we must form remainders for the cross-product of them. However in our example, suppose we start with the remainder (NP VP em); the three parts are:
non-recursive rewrite: Det Noun em
non-recursive part of recursive rule: PP em
top remainder: VP em
Thus the complete rewrite is (Det Noun em (PP em) VP em). The "looping part" in brackets, (PP em), does not appear in the "state descriptor" and hence this state is treated the same as (Det Noun em VP em). The important feature is this: when the categories before the bracketed part have been dealt with and we have a remainder of the form ((PP em) VP em), we construct two new "agenda states", one with remainder (PP em VP em) and the other (VP em).
This of course is too general, as we are now treating the states with the "state descriptors" (Det Noun em VP em) and (Det Noun em (PP em) VP em) as the same, which may not be true. What we need to do is ensure that after the "looping part" we can get back to the same state which did not follow that part (assuming no variable bindings have made that join inappropriate).
Right recursion is a lot easier: having generated a state with the remainder (PP em VP em), we rewrite to (prep NP em em VP em). After removing the prep we will be left with a remainder of (NP em em VP em). Because we ignore "depth" and the "end-subrule" markers in generating "state descriptors", the "state descriptor" of (NP em em VP em) is the same as that of (NP em VP em), despite the different depths and number of "end-subrule" markers. Thus after the preposition we can return to the point in the FSM where we require an NP followed by a VP.
It is true that this NP is "different" from the other. One is an NP within a PP, the other is the subject of a sentence, but because we are merely doing recognition this is all we need.
Notice that this matching of states by a state descriptor is not guaranteed to merge similar states, since there may be cases where one remainder does not start with a lexical category and another does. These may represent the same state if the first category can be rewritten to a remainder the same as the other (and only that remainder). This means that we will not guarantee the most minimal FSM during compilation, but will collapse many states.
Complexity Results
It is not surprising that this is possible. The really interesting part is whether useful grammars can be converted to reasonably sized finite state machines in reasonable time. The code is written in Common Lisp and runs on a number of different machines. It had to be re-written a number of times to get the performance we wished. It has been true that the spectre of unacceptable computational complexity has been just round the corner a number of times, but so far we have kept it at bay.
Describing the size of a grammar is difficult, but to give some idea of the feasibility of this method of running feature grammars, one of our current grammars, which consists of 31 GPSG-like rules, describes declarative sentences with the following features: transitive and intransitive verbs; copula sentences; multiple adjectives and intensifiers in NPs; quantifiers; noun compounding; NP conjunction. The NP conjunction was quite a drastic addition, which increased the size of the resulting FSM by an order of magnitude.
The grammar described above can be converted to a non-deterministic FSM of about 9,000 states (without conjunction the FSM is less than 1,000 states) in around one hour on a Sun 4/260 with 32 Megabytes of memory. We feel this is well within our 12 hour / 100,000 state limit. But although this grammar is bigger than many "toy grammars", it is still rather small and not really large enough to cover a significant proportion of the domain we wish to cover.
It should be added that we have had problems in determinising some of the generated FSMs. Though the conversion stage has taken around an hour, determinising has failed to finish in 75 hours, producing a much larger FSM than its non-determinised equivalent. This does suggest that perhaps we should only produce non-deterministic FSMs as output.
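For reference, the determinising step is the standard subset construction cited from [Hopcroft79] earlier. The sketch below is ours (not the project's implementation), and it uses the (src, label, dst) transition format of the earlier sketch; it also makes the source of the blow-up visible, since each deterministic state is a set of non-deterministic ones, of which there can be exponentially many.

```python
# Hypothetical subset-construction sketch: determinise an FSM given as a set
# of (src, label, dst) transitions.

from collections import defaultdict

def determinise(transitions, initial, finals):
    by_src = defaultdict(lambda: defaultdict(set))
    for src, label, dst in transitions:
        by_src[src][label].add(dst)

    start = frozenset([initial])
    det_trans, seen, agenda = {}, {start}, [start]
    while agenda:
        state = agenda.pop()
        # per label, collect the union of all reachable original states
        moves = defaultdict(set)
        for q in state:
            for label, dsts in by_src[q].items():
                moves[label] |= dsts
        for label, dsts in moves.items():
            target = frozenset(dsts)
            det_trans[(state, label)] = target
            if target not in seen:
                seen.add(target)
                agenda.append(target)
    det_finals = {s for s in seen if s & finals}
    return det_trans, start, det_finals
```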
Comment
So the basic question is, "is it worth it?"
The major loss in moving from a chart parser using a feature grammar to a finite state machine is the loss of a parse tree. One of the reasons for adding a sentence grammar to a speech recogniser is to enable (eventually) some form of semantic analysis. There is an argument that because vast numbers of hypotheses have to be dealt with by a speech recogniser, perhaps running with a FSM as a grammar would be effective during recognition, and that post-processing of the few sentences found could be done with a chart parser.
Then again perhaps speed is not the real thing to worry about; a fast chart parser and unification algorithm might work almost as well (especially if machines are doubling in speed every year).
It is true that the technique is practically limited: no matter how fast machines get, there will always be grammars which cannot be converted in reasonable time and/or produce finite state machines with too many states.
And as noted before, the algorithm does produce a FSM which accepts the subset of the language described by the feature grammar where the "depth" is less than the given limit, plus some extra sentences not originally accepted by the feature grammar. These extras are because of two faults in the conversion algorithm, namely in joining the end of left recursive rules and in not constraining where variables have been co-indexed by another variable (and not an atomic value).
This over-generation seems to encourage the idea of using a real chart parser to post-process and correct the sentences accepted by the FSM (though the types of grammars which cause these problems are not common in our domain, so far). Within our working framework (speech recognition) this method does produce useful results, as we can still allow our grammarians to write a high level description, but still have a fast implementation of their grammar. So in spite of the shortcomings we will probably use this technique for the foreseeable future.
Footnotes
[1] Because variables are "uniquified" at each instantiation of a rule, the correct bindings are ensured throughout the conversion.
[2] Atomic symbols are used here as categories for brevity.
[3] This is actually over-general, as variables which have been bound to one variable, and hence co-referenced, but not (yet) bound to a literal, will still be treated as distinct by this method.

Acknowledgements
This work was supported by the UK Information Engineering Directorate / Science and Engineering Research Council as part of the IED/SERC Large Scale Integrated Speech Technology Demonstrator Project (SERC grants D/29604, D/29611, D/29628, F/10309, F/10316), in which Marconi Speech and Information Systems are the industrial partner.
[Gazdar85] G. Gazdar, E. Klein, G. Pullum and I. Sag. Generalized Phrase Structure Grammar. Blackwell, Oxford, 1985.
[Hopcroft79] J. Hopcroft and J. Ullman. An Introduction to Automata Theory, Languages and Computation. Addison Wesley, Reading, 1979.
[Shieber86] S. Shieber. An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes Number 4, 1986.
[Thompson89] H. Thompson. FBF - A Micro-formalism for Grammar: Syntax, Semantics and Metatheory. Dept of AI, University of Edinburgh, forthcoming. |
6,501,182 | Summarizing Textual Information about Locations In a Geo-Spatial Information Display System | This demo describes the summarization of textual material about locations in the context of a geo-spatial information display system. When the amount of associated textual data is large, it is organized and summarized before display. A hierarchical summarization framework, conditioned on the small space available for display, has been fully implemented. Snapshots of the system, with narrative descriptions, demonstrate our results. | [] | Summarizing Textual Information about Locations In a Geo-Spatial Information Display System
Congxing Cai [email protected]
Information Sciences Institute
University of Southern California
Marina del Rey, California 90292, USA
Eduard Hovy [email protected]
Information Sciences Institute
University of Southern California
Marina del Rey, California 90292, USA
Summarizing Textual Information about Locations In a Geo-Spatial Information Display System
This demo describes the summarization of textual material about locations in the context of a geo-spatial information display system. When the amount of associated textual data is large, it is organized and summarized before display. A hierarchical summarization framework, conditioned on the small space available for display, has been fully implemented. Snapshots of the system, with narrative descriptions, demonstrate our results.
Introduction
Geospatial display systems are increasingly gaining attention, given the large amounts of geospatial data and services available online. Although geospatial imagery and maps show geometric relations among entities, they cannot be used to present other kinds of knowledge about the temporal, topic, and other conceptual relations and entities. Given an entity on a map, a description of what happened there, in what order in time, when, and why, requires additional types of information, typically contained in text, in order to support varied search and decision tasks.
In this demo, we apply text summarization to a geo-spatial information display system with potentially large amounts of textual data. By summarizing the textual material linked to each location, we demonstrate the ways one can organize this material for optimal display and search.
Of the many different types of text-oriented resources available, some are structured and others unstructured. This textual data can be linked to locations based on different reasons (containing place names, addresses, real objects with geographical features, etc.). Appropriately grouping and presenting the different aspects of the textual information in summarization is a challenging task.
A second challenge stems from the huge amounts of web material related to some geographical objects. For example, one may find millions of pages for a famous place or event at a specific map location. Given the common limitations of display space in most geospatial display systems, one must also design the interface to support dynamic browsing and search.
All these challenges bring new problems to existing summarization techniques. In the following sections, we demonstrate a hierarchical summarization framework that reduces displayed text and fully utilizes the small display space available for textual information.
Related Work
Associating each news page individually to its location(s) may overwhelm the amount of information displayable at any point and thereby limit the scalability of the system. Existing systems presented in (Teitler et al., 2008) and GeoTracker (Chen et al, 2007) organize material (at the area level) by time instead of somehow aggregating over larger numbers of related content. Since frequently the associated news contents overlap at least in part, a natural solution is to aggregate the content somehow to remove duplication. Moreover, the aggregation of news provides a global view of the textual information about the specific location. Our system is the first available geospatial text aggregation system to our knowledge.
Within geospatial display systems, the space available to display textual information is often quite limited. We therefore need to summarize the most important and relevant information about each location, drawing from all the web pages linked to it. However, directly applying a multi-document summarization (Lin and Hovy, 2001) to the web pages will generate poor results, due to unrelated titles, duplicate articles, and noisy contents contained in web pages. When several different events have occurred at a location, more than one distinct summary may be needed. It is therefore important to deploy topic recognition (Lin and Hovy, 2000) and/or topic clustering (Osinski and Weiss, 2005) to identify and group relevant pieces of each text into single-topic 'chunks'. We develop a novel hierarchical summarization system to improve the interactivity and browsability.
Text Summarization
Content Extraction and Summarization
Multi-webpage summarization is different from traditional multi-doc summarization. First, most web pages are much more complex than pure text documents. Since the web contains a combination of types of information-static text, image, videos, dynamic layout, etc.-even a single page can be treated as multiple documents. Current linking functions are based on keywords, making the relevant content of each relevant web page only a limited block within the page. Second, our task is oriented to locations, and hence differs from general content summarization. Hence, we need to identify and extract the essential part(s) of the webpage linked to the geospatial imagery for summarization and display. In our work, we utilize two important features, layout and semantics, to identify and extract the relevant content.
By rendering each web page into a DOM tree, we segment the page into large blocks based on its layout, including header, footer, left bar, right bar, main block, etc. We implemented a rule-based extractor to extract the most relevant block from the web page based on the relevance to the location.
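As an illustration only (the actual extractor is a rule-based component over the rendered DOM, and its rules are not reproduced here), a crude approximation of the idea is to segment the parsed page into candidate blocks and keep the one with the highest density of mentions of the target location. The use of BeautifulSoup and the density score are our choices for the sketch.

```python
# Illustrative sketch: segment a page into blocks and keep the block that is
# most relevant to the location name, approximating the rule-based extractor.

from bs4 import BeautifulSoup

def most_relevant_block(html, location, min_words=30):
    soup = BeautifulSoup(html, "html.parser")
    # drop obvious non-content regions before scoring
    for tag in soup(["script", "style", "header", "footer", "nav"]):
        tag.decompose()
    best_text, best_score = "", 0.0
    for block in soup.find_all(["div", "td", "section", "article"]):
        text = block.get_text(" ", strip=True)
        words = text.split()
        if len(words) < min_words:
            continue
        # simple relevance rule: density of location mentions in the block
        mentions = text.lower().count(location.lower())
        score = mentions / len(words)
        if score > best_score:
            best_text, best_score = text, score
    return best_text
```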
Clustering
Given a list of text blocks relevant to a local point of interest, one can employ traditional text summarization techniques to produce a short summary for each one. This solution may not be helpful, however, since a long list of pages associated with each point of interest would be very hard for users to browse. Especially when the space allocated to text display by the geospatial system is also limited, a high compression ratio is typically required for the summarization system.
The solution we adopt is to deploy cluster-based multi-document summarization. Clustering must observe two criteria: first, the location of interest, and second, the text topic. Different clustering methods can be employed. To delimit topics, a simple heuristic is to introduce as additional criterion the event/article date: when the difference in document dates within a topical cluster is (far) larger than the actual duration of the topic event, we are probably dealing with multiple separate events at the same location. Better performance is obtained by using a topic detection module first, and then clustering documents based on the topics identified.
Unfortunately, documents usually contain multiple locations and multiple topics. The problem of 'topic drift' can cause confusion in a short summary. As in (Hearst, 1997), we segment each document into several 'mini-documents', each one devoted to a single topic, and then to perform location-and topic-based clustering over the (now larger) set of mini-documents.
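A rough sketch of the location- and topic-based clustering over mini-documents might look as follows. This is our illustration, using off-the-shelf tf.idf vectors and k-means in place of the system's own topic detection module, and the mini-document dictionary format is assumed for the example.

```python
# Sketch: group mini-documents by the location they mention, then cluster
# by topic within each location group.

from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_minidocs(minidocs, n_topics=5):
    """minidocs: list of dicts like {"location": str, "text": str}."""
    by_location = defaultdict(list)
    for m in minidocs:
        by_location[m["location"]].append(m["text"])

    clusters = {}
    for location, texts in by_location.items():
        k = min(n_topics, len(texts))
        vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
        groups = defaultdict(list)
        for text, label in zip(texts, labels):
            groups[label].append(text)
        clusters[location] = list(groups.values())
    return clusters
```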
Hierarchical Summary Generation
Whatever the clustering approach, the result is a potentially rather large set of individual topics associated with each location. Since screen space for the summaries may be very limited next to the maps / imagery, they have to be formatted and presented for maximal interpretability. To address this problem, we adopt a hierarchical structure to display incrementally longer summaries for each location of interest. At present we have found three levels of incrementally longer summaries to be most useful.
Thumbnail: a very short 'topic' that characterizes the (clusters of) documents or segments associated with each location. We present essentially one or two single keywords --the most informative words for each cluster. We implemented a new version of our topic signature technology, one that uses tf.idf instead of the entropy ratio, as scoring measure to rank each cluster's words.
Title: a headline-length phrase or short sentence (or two). The original titles of the web pages are often noisy or even unrelated to the current topic cluster. Sometimes, the title may be meaningless (it might for example contain the website's name "Pr Newswire"), or two different web pages may share the same title. We implemented a topicrelated headline generator based on our previous work (Lin and Hovy, 2000) by incorporating a topic-based selector.
Snippet: a paragraph-length excerpt characterizing the cluster. To produce paragraph-length summaries, we implemented an extraction-based text summarizer. We built a new version of previously investigated technology (Lin and Hovy, 2001), implementing several sentence scoring techniques and a score combination function.
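To make the levels concrete, here is a simplified sketch of the thumbnail and snippet steps only (the headline generator is not reproduced). It is our illustration of tf.idf-ranked cluster keywords and extractive sentence selection, not the released system's implementation.

```python
# Sketch: thumbnails are the cluster's top tf.idf terms; snippets are the
# sentences containing the most of those terms, kept in document order.

import re
from sklearn.feature_extraction.text import TfidfVectorizer

def thumbnail_and_snippet(cluster_texts, other_clusters, n_terms=2, n_sents=5):
    corpus = [" ".join(cluster_texts)] + [" ".join(c) for c in other_clusters]
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(corpus)
    terms = vec.get_feature_names_out()
    row = tfidf[0].toarray()[0]
    top_terms = [terms[i] for i in row.argsort()[::-1][:n_terms]]

    sentences = [s for t in cluster_texts for s in re.split(r"(?<=[.!?])\s+", t)]
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -sum(t in sentences[i].lower() for t in top_terms))
    keep = sorted(ranked[:n_sents])          # restore document order
    return top_terms, [sentences[i] for i in keep]
```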
Demonstration
Geospatial Interaction
The hierarchical summarization service is built upon the geo-spatial information display system, GeoXRAY 1 , a commercial product developed by Geosemble Technologies 2 . Figure 1 shows the system's display to support search and browsing of text content based on location of interest. The user can enter an address in the top search box, or search by business name. The system then centers the imagery at that address or business. Clicking on "Get Features" invokes the web services to get all features about the displayed image and displays the features in the "AREA: Features Found" list, and also draws them as points on the maps.
The user can explore the map using the navigation controller. On clicking the marker of an identified building, an information window pops up containing the associated structured web information (building name, business type, website, online images, and so on), as shown in Figure 2. Clicking on "Get News" retrieves all news related to the displayed features; features with associated news show a small newspaper icon (see next to "Sony Pictures Entertainment" in Figure 4). Clicking on the icon displays the news that was linked with the feature, sorted by date. The hierarchical summarization system, described in this paper extends the GeoXRAY system to show a summarized view of the news. The user can click on the "Cluster News" link. The results are displayed in a tree, showing the title of the cluster (thumbnail and title), under which appears a small summary of the cluster, under which appear links to all the news articles belonging to that cluster.
Summarization Example
We provide an example of our text summarization system performance in Figure 3. In this example, we have selected the location of Sony Film Studios in Culver City by clicking on the map. Figure 3(a) shows the titles and dates of some of the 126 news articles that contain the words "Sony Pictures Entertainment". As described above, these documents are clustered based on topics. Using our current parameter settings, 20 multi-result clusters are formed, leaving 34 results unclustered. (The size of clusters, or the number of clusters desired, can be varied by the user.) As mentioned above, each cluster is presented to the users by a minimal length thumbnail summary consisting of a few characteristic keywords; a partial list of these is shown in Figure 3(b). Figure 3(c) shows the result of selecting the cluster labeled "solar electrical system" (second from the bottom in Figure 3(b)), which contains two results. The summary contains the 5 top-ranked sentences from the two documents, presented in document order. In addition, the summary includes two hyperlinks to the two full texts for further inspection. The summary illustrates some of the strengths but also the shortcomings of the current system. It is clearly about a solar energy system installed in 2007 on top of the Jimmy Stewart Building by EI Solutions. This is enough detail for a user to determine whether or not to read the texts any further. However, two of the extracted sentences are not satisfactory: sentence 2 is broken off and sentence 3 should not be part of the news text at all. Premature sentence breaks result from inadequate punctuation and line break processing, which is still a research problem exacerbated by the complexity of web pages.
By showing the summary results, we merely demonstrate the improvement on browsability of the search system. We are relatively satisfied with the results. While the summaries are not always very good, they are uniformly understandable and completely adequate to prove that one can combine geospatial information access and text summarization in a usable and coherent manner.
Figure 1. Geospatial Information Display System
Figure 2. Navigating the Integrated Map
Figure 3. Document clustering and summarization for news relevant to Sony Pictures Entertainment: (a) list of the news articles linked to Sony Pictures Entertainment; (b) clustering results relevant to Sony Pictures Entertainment; (c) summarization from the news articles in cluster "Solar electricity system"
1 GeoXRAY: http://www.geosemble.com/products_geoxray.html
2 Geosemble Technologies: http://www.geosemble.com/
Acknowledgments
Thanks to Geosemble Technologies for providing support of the geospatial information system.
Yih-Farn Robin Chen, Giuseppe Di Fabbrizio, David Gibbon, Serban Jora, Bernard Renger and Bin Wei. Geotracker: Geospatial and temporal rss navigation. In WWW '07: Proceedings of the 16th International Conference on World Wide Web, 2007.
Marti A. Hearst. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64, 1997.
Chin-Yew Lin and Eduard Hovy. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th Conference on Computational Linguistics, 2000.
Chin-Yew Lin and Eduard Hovy. From single to multi-document summarization: A prototype system and its evaluation. In ACL '02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, 2001.
Stanislaw Osinski and Dawid Weiss. Carrot2: Design of a flexible and efficient web information retrieval framework. In AWIC, 2005.
Benjamin E. Teitler, Michael D. Lieberman, Daniele Panozzo, Jagan Sankaranarayanan, Hanan Samet and Jon Sperling. Newsstand: a new view on news. In GIS '08: Proceedings of the 16th ACM SIGSPATIAL international conference on Advances in geographic information systems, 2008. |
31,317,546 | The Usage of Various Lexical Resources and Tools to Improve the Performance of Web Search Engines | In this paper we present how resources and tools developed within the Human Language Technology Group at the University of Belgrade can be used for tuning queries before submitting them to a web search engine. We argue that the selection of words chosen for a query, which are of paramount importance for the quality of results obtained by the query, can be substantially improved by using various lexical resources, such as morphological dictionaries and wordnets. These dictionaries enable semantic and morphological expansion of the query, the latter being very important in highly inflective languages, such as Serbian. Wordnets can also be used for adding another language to a query, if appropriate, thus making the query bilingual. Problems encountered in retrieving documents of interest are discussed and illustrated by examples. A brief description of resources is given, followed by an outline of the web tool which enables their integration. Finally, a set of examples is chosen in order to illustrate the use of the lexical resources and tool in question. Results obtained for these examples show that the number of documents obtained through a query by using our approach can double and even quadruple in some cases. | [
701156
] | The Usage of Various Lexical Resources and Tools to Improve the Performance of Web Search Engines
Cvetana Krstev [email protected]
Faculty of Philology
Belgrade
Ranka Stanković
Faculty of Mining and Geology
Faculty of Mathematics
Faculty of Mining and Geology
Belgrade
Duško Vitas [email protected]
Ivan Obradović [email protected]
The Usage of Various Lexical Resources and Tools to Improve the Performance of Web Search Engines
In this paper we present how resources and tools developed within the Human Language Technology Group at the University of Belgrade can be used for tuning queries before submitting them to a web search engine. We argue that the selection of words chosen for a query, which are of paramount importance for the quality of results obtained by the query, can be substantially improved by using various lexical resources, such as morphological dictionaries and wordnets. These dictionaries enable semantic and morphological expansion of the query, the latter being very important in highly inflective languages, such as Serbian. Wordnets can also be used for adding another language to a query, if appropriate, thus making the query bilingual. Problems encountered in retrieving documents of interest are discussed and illustrated by examples. A brief description of resources is given, followed by an outline of the web tool which enables their integration. Finally, a set of examples is chosen in order to illustrate the use of the lexical resources and tool in question. Results obtained for these examples show that the number of documents obtained through a query by using our approach can double and even quadruple in some cases.
Introduction
When delivering a query to a web search engine the user is typically interested in information available on the web related to a particular topic. The result of this query is a selection of web pages the search engine determines as relevant to the query. The information the user is interested in can generally be expressed in terms of concepts, abstract ideas or mental symbols that denote objects in a given category or class of entities, interactions, phenomena, or relationships between them. On the other hand, concepts are lexicalized by one or more synonymous words (simple or compound). For example, the concept of a "housing that someone is living in" is lexicalized by the word "house", but also by "dwelling", "home", "domicile", "abode", "habitation" or "dwelling house". Hence, the concept a web query pertains to is in practice very often formalized by a Boolean OR combination of words, which the user believes best describe the concept in question, e.g. "house OR home OR domicile". It goes without saying that the choice of words used in a query are of crucial importance for the relevance of the results delivered by the search engine. At the first glance, the main problem lies in the fact that the user, when composing a query, might omit some words related to the concept, thus reducing system recall. A simple query expansion by adding the omitted words would seemingly resolve this problem. However, the expansion of the set of words describing a concept in a query, although contributing to the recall in general, has and adverse effect. Namely, due to the fact that many words are homonymous or polysemous, adding new words to the query might reduce precision. Given this trade-off between recall and precision, words used in a query have to be very carefully selected in order to attain an optimal balance between the two.
The problem is further complicated when searches are performed for highly inflective languages such as Serbian, which, moreover, equally uses two alphabets, Cyrillic and Latin. Some of the search engines, such as Google, have tackled the problem of inflection, and Google queries for Serbian are now expanded with the usage of some sort of a stemmer. However, this approach solves the inflection problem only partially and the solution is far from systematic. As is often the case with stemmers, Google expands the query by including not only (some) inflective forms but also related words. For example, a Google query with the Serbian word prevodilac 'translator' also offers web pages containing the word prevod 'translation', while the query with javno mnjenje 'public opinion' also offers pages containing the word javnost 'populace'. As it could be expected, this kind of approach works poorly for verbs. For instance, a query with slati poruku 'to send a message' returns only pages that contain the verb slati in the infinitive form, or the verbal noun slanje 'sending' and omits numerous pages on the Web containing other verb forms like, for instance, šaljem poruku '(I) send a message'. In some cases, unrelated results are obtained. As Google tries to be too smart it assumes that an occurrence of 's' in Serbian text can be replaced by 'š'. Thus, when searching for strasna nedelja 'Passion Week' the unrelated results for strašna nedelja 'horrible week' or 'horrible Sunday' are obtained as well.
2. Typical problems when retrieving documents using a web search engine
1. In general, when the concept the query relates to is lexicalized by one or more multi-word terms in a highly inflective language, the search engines are faced with a problem they are practically unable to cope with. For example, let us consider that we wish to search the web for the information on beli luk 'garlic'. When searching with the two constituent keywords beli 'white' AND luk 'onion' the search engine would typically return an irrelevant document based on the following content:
Sastojci za 10 porcija: 3 glavice crnog luka, 1 šoljica ulja, 1/2 čaša belog vina, 1 čaša soka od paradajza (The ingredients for 10 portions: 3 onions, 1 cup of oil, ½ glass of white wine, 1 glass of tomato juice.)
This false retrieval occurs because two constituents of the multi-word term are treated separately, and neither nearness conditions nor grammatical agreement conditions are taken into account, which reduces precision. Conversely, if a literal search is performed as with "beli luk" then inflected forms of this multi-word term are not taken into account, and this reduces recall. In this case the aforementioned irrelevant document would be omitted, but so would be many relevant results, for instance Gambori u maslacu sa belim lukom (Shrimps on butter with garlic (in the instrumental case)).
2. The simple keyword search is based on the lexical realization of a concept and not on the concept itself. Thus, it does not take into account the synonyms, unless the user himself remembers to include them in the search, for instance by adding the Serbian synonym češnjak to beli luk, which would improve recall. Even more relevant results could be obtained if the search is further expanded with the Latin name Allium sativum which many users probably would not even know. This is, however, the simplest conceptual expansion of a query. A more sophisticated one would be a web query on Amerindian languages (amerindijanski in Serbian). The user issuing such a query is most probably not looking only for the occurrences of the exact term with its possible synonyms - indijanski and amerindski - in all inflectional forms (amerindijanskog, indijanskog, amerindskog, etc.), but also for the occurrences of the specific languages belonging to that language class, for instance, atakapa, mozan, tupi-gvarani and many others that are derivationally unrelated to the original keyword, thus making any stemmer useless.
3. In some cases the user may wish to perform a bilingual search in order to find documents on the chosen subject in two languages, e.g. English and Serbian. In the case of garlic the appropriate query should be composed of the keywords beli luk, češnjak, Allium sativum, and garlic. It is not to be expected that a common user would normally possess the knowledge necessary to expand a query in this way.
The lexical resources used
In order to achieve an optimal balance between recall and precision in retrieving documents from the web we have developed WS4QE (Work Station for Query Expansion), which uses various language resources we have developed for Serbian (Krstev et al., 2008). These resources include morphological e-dictionaries and finite state transducers, which offer the possibilities for solving the problem of flections in queries, and electronic thesauri, ontologies and wordnets, which offer various possibilities for automatic or semi-automatic refinement of queries by adding new words to the set of words initially specified by the user.
1. Morphological dictionaries of simple words and compounds in the so-called LADL format (Courtois et al., 1990) basically consist of lemmas accompanied with inflectional class codes, which enables a precise production of all inflectional forms. The Serbian morphological dictionary of simple words contains 117,000 lemmas, which yields the production of approximately 1,400,000 different lexical words. More than 85,000 simple lemmas belong to general lexica, while the remaining 32,000 lemmas represent various kinds of simple proper names. The Serbian morphological dictionary of compounds contains approximately 2,700 lemmas (yielding more than 60,000 different forms) and it is constantly being upgraded.
2. Inflectional finite state transducers (FST) for the inflection of both simple and compound words have been developed for the Unitex system (http://www-igm.univ-mlv.fr/~unitex/). It is important to stress that WS4QE does not rely only on a simple list of word forms for Serbian simple and compound words, but on the inflectional transducers as well. This enables a more elaborate query expansion that can significantly improve retrieval performances. For instance, if a query is performed with the keyword beli luk, three inflectional transducers are used: one for inflection of the adjective beli 'white', one for inflection of the noun luk 'onion' and one for the compound as a whole, which takes care of agreement conditions. These transducers expand the query beli luk into beli luk AND belim lukom AND beli lukovi AND belih lukova AND belima lukovima AND belim lukovima AND bele lukove AND bela luka AND beloga luka AND belog luka AND belome luku AND belom luku. Due to the third inflectional transducer this query expands into only 12 combinations of an adjective form and a noun form, instead of 216 possible combinations, thus disabling false retrieval such as: Tako, posmatrano sa dna vidika, izgleda kao da iz širokih lukova belog mosta teče i razliva se ne samo zelena Drina… 'Thus, from a bottom view, it appears that not only green Drina flows and spills over under the wide arcs of the white bridge…'
3. Wordnets in XML format are used for query expansion with related words as well as for bilingual searches. The Serbian and English lexicalizations of the same (or similar) concepts in the Serbian wordnet (SWN, conceived within the Balkanet project (Tufiş, 2004) and presently encompassing 14,593 synsets) and the Princeton wordnet, which is publicly available, are connected via the Interlingual index (ILI) (Vossen, 1998).
4. In a similar way queries can be expanded by Prolex, a multilingual database of proper names which represents the implementation of an elaborate four-layered ontology of proper names (Krstev et al., 2005), organized around a conceptual proper name that represents the same concept in different languages. For instance, Prolex establishes the meronymy relation between the concepts 'New York' and 'United States of America', and automatically between their Serbian equivalents Njujork and Sjedinjene Američke Države. Various other relations are implemented as well.
The system options
Our system for query expansion allows the user to decide how his query will be expanded by choosing one or several of the offered options: 1. Alternate alphabet usage -for instance, the user can submit a keyword in Latin alphabet: štrajk 'strike' which will be expanded automatically by adding the keyword in Cyrillic: штрајк. 2. The inclusion of inflectional forms, for instance, štrajk, štrajka, štrajkovi, ... The inflection is done by Unitex procedures that use morphological dictionaries and inflectional FSTs for Serbian. The inflection works both for simple words and compounds.
3. The addition of synonyms -for instance, the synonym obustava rada 'work stoppage' can be added to the keyword štrajk. Synonyms are added on basis of the Serbian Wordnet (SWN). All the other relations included in SWN can also be used for the query expansion, for instance the keyword solarni sistem 'solar system' can be expanded by Merkur, Venera, Zemlja, Mars, etc. if meronymy is used for query expansion. 4. The expansion of proper names using Prolex which offers to the user the option of adding proper name aliases, its synonyms, but also other proper names which are semantically related to the initial proper name through holonym and meronym relations. Thus a query with the word Engleska 'England' can be expanded with Englez 'Englishman', Engleskinja, 'English woman' but also with Albion. 5. The inflection of free phrases by predicting their syntactic structure. Our presumption is that many free phrases used for search will have the same syntactic structure as a compound, and that the inflectional transducers for compounds that we have already developed can be applied to inflect them correctly. Our further presumption is that in many cases this structure can be predicted on the basis of morphological and syntactic features of the phrase components. These features can be obtained from the morphological e-dictionaries that are at our disposal during the query expansion process. The prediction of the phrase structure is also based on the frequencies of compound structures that we have obtained from our existing dictionary of compounds. This analysis shows that, not surprisingly, the most frequent structure for compounds with two components is adjective+noun, followed by the compounds with the structure X+noun, where X means "a word form that does not inflect within the compound". For compounds with three components the most frequent structure is noun+X+X. Data on frequencies can help in deciding which structure should be attributed to a free phrase when several options exist according to e-dictionaries. A nice example is the phrase Republika Francuska which, according to the dictionaries can be analyzed as a phrase of the form noun+noun or noun+adjective. Since the latter structure is not very frequent in Serbian, the former is chosen that is also the correct one. In this particular case the latter solution would not yield erroneous results either since for query expansion we need only correctly inflected forms and not grammatical categories. 6. In Serbian many compounds have a structure in which some of its components do not inflect (like X+noun or noun+X+X). When identifying the structure of a free phrase it may sometimes be difficult to decide which components inflect and which don't. One simple rule would be that word forms that are unknown (i.e. that do not have a corresponding entry in our e-dictionaries) do not inflect. It would yield correct examples in some cases (for instance, in šper ploče 'plywood' šper does not inflect and it is not in our e-dictionaries since it is not a valid Serbian word). In some other situations the prediction would be incorrect, as for Telecom Srbija 'Telecom Serbia' where Telecom is an unknown word but it inflects (e.g. the dative form is Telecomu Srbije). 
More sophisticated rules are also used to detect the components that do not inflect, one of them being "if the word that follows a noun is possibly a preposition and the next word is in the grammatical case that is required by that preposition, neither of the word forms following the noun will inflect". This rule would correctly determine that the free phrase kamatne stope na dinarsku štednju 'interest rates on savings in dinars 1 ' has the form adjective+noun+X+X+X due to the fact that the adjective form dinarsku is in the accusative case that is required by the preposition na. 7. In order to test our system we have used a log file of one of Serbian professional journals that deals with economic issues. The journal's web site is supported by a search engine that enables its readers to retrieve information from journal's archive. The used log file thus gives a good insight in users' queries. Many of the multi word queries are of no interest since they represent simple lists of key words, for instance Beograd, Gradska čistoća, privatizacija 'Belgrade, City Waste Disposal, privatization'. It is not expected that the user would be interested for inflections of such a list as a whole. Some phrases, as we have expected, had a structure not yet found among compounds, such as adjective+noun+conjunction+noun in Beogradski vodovod i kanalizacija 'Belgrade water supply and sewage system'. For many free phrases, especially those with fewer components, the structure was correctly detected and their inflected forms produced, e.g. smrznuto voće i povrće 'frozen fruits and vegetables'. As a by-product, the analysis of the log file detected some compounds that were not yet in the dictionary of compounds and which were subsequently added to it (the most frequent one being kursna lista 'the exchange rate list'). In order to be able to correctly inflect more free phrases we have produced some new inflectional transducers as for the structure adjective+conjunction+adjective+noun in ekonomska i monetarna unija 'economic and monetary union' 8. The bilingual search -for instance, to the keyword štrajk and its Serbian synonym obustava rada a corresponding English set of synonyms can be added: {strike, work stoppage}. The bilingual search is, however, done separately and the results are presented in two columns.
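Putting options 1-3 above together, a simplified sketch of the expansion step might look like the following. The transliteration table is abbreviated, and the form and synonym lookups are placeholder dictionaries standing in for the e-dictionary/FST and Serbian WordNet components (they are not the actual WS4QE interfaces). Alternative forms are joined here with OR, the Boolean disjunction used for concept queries in the introduction.

```python
# Hypothetical sketch of options 1-3: alternate alphabet, inflectional forms
# and synonyms, combined into one disjunctive web query.

LAT2CYR = {"š": "ш", "t": "т", "r": "р", "a": "а", "j": "ј", "k": "к",
           "o": "о", "v": "в", "i": "и", "e": "е", "d": "д", "u": "у",
           "b": "б", "s": "с", "n": "н"}  # abbreviated mapping for the example

def to_cyrillic(word):
    return "".join(LAT2CYR.get(ch, ch) for ch in word)

def expand_query(keyword, inflected_forms, synonyms,
                 use_alphabet=True, use_inflection=True, use_synonyms=True):
    """inflected_forms/synonyms: dicts mapping a lemma to its expansions,
    standing in for the dictionary/FST and SWN lookups."""
    terms = {keyword}
    if use_synonyms:
        terms |= set(synonyms.get(keyword, []))
    if use_inflection:
        terms |= {f for t in list(terms) for f in inflected_forms.get(t, [])}
    if use_alphabet:
        terms |= {to_cyrillic(t) for t in terms}
    # multi-word terms are quoted so they are matched as phrases
    quoted = [f'"{t}"' if " " in t else t for t in sorted(terms)]
    return " OR ".join(quoted)

# toy data reproducing part of the štrajk example from the options above
forms = {"štrajk": ["štrajka", "štrajkovi"], "obustava rada": ["obustave rada"]}
syns = {"štrajk": ["obustava rada"]}
print(expand_query("štrajk", forms, syns))
```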
Technical implementation
The developed web application receives the user query, and subsequently uses the local web service WS4QE to expand the query and forward it to the Google search engine using the Google AJAX Search API. Google AJAX Search API is a JavaScript library which enables the embedding of Google searches into personal web pages or web applications. This library is composed of simple web objects which perform "inline" search using numerous Google services (Web Search, Local Search, Video Search, Blog Search, News Search and Book Search). We have embedded a simple, dynamic search box and the search results are displayed within our own web pages for different types of query expansions, depending on the resources and type of expansion. Web service WS4QE uses classes from .NET dll components developed within WS4LR (WorkStation for Lexical Resources) (Krstev et al., 2006), which enable the usage of lexical resources for query expansion. The web service returns the required information in XML form, which is received and converted to appropriate application structures (string, array, table, ...). Some of the typical calls are: getObliciLeme(lema), which retrieves all inflective forms of a lemma; getSinonimiWN_WithFlex(lema), which retrieves all wordnet synonyms with inflective forms; getSinonimiWN_NoFlex(lema), which retrieves all wordnet synonyms without inflective forms; and getProlexTable(rec, jezikSearch, Inflect, ExpandWith), which retrieves all chosen proper name expansions according to the request specified by the user. We will now illustrate some of WS4QE features related to query expansion. Figure 1 depicts the home page of WS4QE, where the left hand side shows the menu with the functions offered and the right side the login part. Besides query expansion, WS4QE also offers functions for manipulation of aligned texts and wordnet management, as listed in the menu, but we will leave these functions aside here and concentrate on query expansion. The user can choose from several options for query expansion, the wordnet advanced search being the most complex. Figure 2 shows the page for this type of search with the word beli luk in Latin alphabet chosen as the initial search string. As semantic expansion was chosen, the appropriate synset was retrieved and two other synonyms for beli luk, namely češnjak (as 'cyesxnxak' in the Aurora 2 code) and Allium sativum, appeared in the list of words that can be used for composing the query. However, given that one of the synonyms is a Latin word, it was estimated that its introduction in the query would generate a great number of irrelevant documents in languages other than Serbian, so the option for removing some of the synonymous words was used and the word list was reduced to two Serbian words: beli luk and češnjak. In this particular case morphological expansion was omitted, and the query is further expanded only by including both chosen words in Cyrillic (Figure 3).
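For illustration, the client-side round trip could be sketched as below. The endpoint URL, parameter names and XML layout are assumptions made for the example; the real system calls the web service methods named above and renders Google results through the AJAX Search API in the browser.

```python
# Hypothetical client-side sketch of the expansion round trip. The WS4QE_URL,
# the query parameters and the XML layout are placeholders, not the real
# service interface.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

WS4QE_URL = "http://example.org/ws4qe"      # placeholder endpoint

def call_ws4qe(method, lemma):
    """Call one expansion method and return the list of strings it yields."""
    query = urllib.parse.urlencode({"op": method, "lema": lemma})
    with urllib.request.urlopen(f"{WS4QE_URL}?{query}") as response:
        tree = ET.parse(response)
    # assumed layout: <result><string>...</string>...</result>
    return [node.text for node in tree.getroot().iter("string") if node.text]

def build_expanded_query(lemma):
    forms = call_ws4qe("getObliciLeme", lemma)               # inflected forms
    synonyms = call_ws4qe("getSinonimiWN_WithFlex", lemma)   # SWN synonyms
    terms = dict.fromkeys([lemma, *forms, *synonyms])        # dedupe, keep order
    return " OR ".join(f'"{t}"' if " " in t else t for t in terms)

# the resulting string is what the page hands to the embedded Google search box
# print(build_expanded_query("istraživač"))
```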
Figure 2. Semantic expansion of a query
The query, now composed of two Latin and two Cyrillic strings was then submitted by WS4QE to Google and, as a result, a total of 92,700 documents were obtained. The same query submitted directly to Google with only the initial string beli luk returned a total of 54,900. Thus the expanded expansion, without the morphological expansion almost doubled the number of documents obtained. It could, however, be argued that this does not necessarily mean that all obtained documents are relevant. Figure 3. Finalized query to be submitted to Google A thorough inspection of all documents was not performed, for obvious reasons, but it is safe to say that it is most unlikely that any of the documents obtained is irrelevant because both words used are specific in that they are neither homonymous nor polysemous. Part of the results is depicted in Figure 4. On the left hand side results obtained by the direct, unexpanded query are given, while the right hand side shows the results of the expanded query. For illustration purposes, two additional queries were performed using the word istraživač 'researcher'. Since the word istraživač has no synonyms in Serbian wordnet, semantic expansion was performed by including the words from the hypernym of istraživač, namely naučnik and učenjak 'scientist'. The query was further expanded by including all words in Cyrillic alphabet, morphological expansion once more omitted. The result of the expanded query was a total of 160,000 documents as opposed to 66,600 obtained by the unexpanded query ( Figure 5). The expanded query once again doubled the number of documents obtained. Finally, a second query was performed for the word istraživač. This time a morphological expansion was performed and the semantic expansion omitted, but the extension to Cyrillic alphabet remained. As a result 285,000 documents were obtained, which means that the recall has been quadrupled. Thus we may conclude that a considerable increase of recall was obtained in all three examples. Figure 5. Results for expanded query for 'istraživač'
Conclusion
Given the rapidly growing number of documents on the web, the formulation of queries that are submitted to web search engines has become an increasingly sensitive matter. Queries often need to be 'fine tuned' in order to obtain an optimal balance between recall and precision. Lexical resources can be put to the aid of the user by offering him/her various possibilities of query expansion, with the ultimate aim of obtaining a better balanced query. We believe that the approach we have outlined in this paper supports this thesis. Needless to say, lexical resources are invaluable for many other tasks, and some of them can already be performed using the tool that we have described here in the context of query expansion. Our further endeavors will hence be twofold. On the one hand, we shall continue to develop our lexical resources, focusing in the next stage on dictionaries of compounds. On the other hand, we will strive to broaden the scope of tasks that can be solved with our tools. The existence of reliable lexical resources is already indispensable, but their importance, along with the tools for handling them, can only grow in the future.
Figure 1. WS4QE home page
Figure 4. Results for expanded query for 'beli luk'
Dinar is Serbian currency
For reasons of flexibility, letters specific to the Serbian language (ć, č, š, ž, đ, dž, lj and nj) are internally coded as cx, cy, sx, zx, dx, dy, lx and nx, respectively.
Courtois, Blandine; Max Silberztein (eds.) (1990). Dictionnaires électroniques du français. Langue française 87. Paris: Larousse. http://www-igm.univ-mlv.fr/~unitex/
Krstev, C., et al. (2008). Resources and Methods in the Morphosyntactic Processing of Serbo-Croatian. In Formal Description of Slavic Languages: The Fifth Conference, Leipzig 2003, Zybatow, Gerhild et al. (eds.), Peter Lang: Frankfurt am Main, pp. 3-17.
Krstev, C., Stanković, R., Vitas, D., Obradović, I. (2006). WS4LR: A Workstation for Lexical Resources. In Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC 2006, Genoa, Italy, May 2006, pp. 1692-1697.
Krstev, C., Vitas, D., Maurel, D., Tran, M. (2005). Multilingual Ontology of Proper Names. In Proceedings of the Second Language & Technology Conference, Poznań, Poland, April 21-23. Poznań: Wydawnictwo Poznańskie Sp. z o.o.
Tufiş, D. (ed.) (2004). Special Issue on BalkaNet Project. Romanian Journal on Information Science and Technology, Vol. 7, No. 1-2. Bucureşti: Publishing house of the Romanian Academy.
Vossen, P. (ed.) (1998). EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Dordrecht: Kluwer Academic Publishers. |
18,621,944 | Error Correcting Romaji-kana Conversion for Japanese Language Education | We present an approach to help editors of Japanese on a language learning SNS correct learners' sentences written in Roman characters by converting them into kana. Our system detects foreign words and converts only Japanese words even if they contain spelling errors. Experimental results show that our system achieves about 10 points higher conversion accuracy than traditional input method (IM). Error analysis reveals some tendencies of the errors specific to language learners. | [
5844380,
29358,
1308791
] | Error Correcting Romaji-kana Conversion for Japanese Language Education
November 13, 2011
Seiji Kasahara [email protected]††
Mamoru Komachi [email protected]††
Masaaki Nagata [email protected]
Yuji Matsumoto
Nara Institute of Science and Technology
8916-5 Takayama-cho, Ikoma-shi, Nara 630-0192, Japan
NTT Communication Science Laboratories
2-4 Hikari-dai, Seika-cho, Soraku-gun, Kyoto 619-0237, Japan
Error Correcting Romaji-kana Conversion for Japanese Language Education
Proceedings of the Workshop on Advances in Text Input Methods (WTIM 2011), Chiang Mai, Thailand, November 13, 2011
We present an approach to help editors of Japanese on a language learning SNS correct learners' sentences written in Roman characters by converting them into kana. Our system detects foreign words and converts only Japanese words even if they contain spelling errors. Experimental results show that our system achieves about 10 points higher conversion accuracy than traditional input method (IM). Error analysis reveals some tendencies of the errors specific to language learners.
Introduction
The Japan Foundation reports that more than 3.65 million people in 133 countries and regions were studying Japanese in 2009. Japanese is normally written in thousands of ideographic characters imported from Chinese (kanji) and about 50 unique syllabic scripts (kana). Because memorizing these characters is tough for people speaking European languages, many learners begin their study with romaji, or romanization of Japanese.
However, sentences written in kana are easier to edit for native Japanese than the ones in Roman characters. Converting Roman characters into kana helps Japanese editors correct learners' sentences, but naive romaji-kana conversion does not work well because there are spelling errors in learners' sentences. Even though traditional input methods have functionality to convert Roman characters into kana, existing IMs cannot treat learners' errors correctly since they are mainly designed for native Japanese speakers.
In this paper, we present an attempt to make the learner's sentences easier to read and correct for a native Japanese editor by converting erroneous text written in Roman characters into correct text written in kana while leaving foreign words unchanged. Our method consists of three steps: identification of language, spelling correction and converting text from Roman characters to kana. First, learners often write a word from their native language directly in a Japanese sentence. However, such words are not converted correctly into their kana counterparts since the original spelling is usually not equivalent to the Japanese transliteration. Thus it is better to leave these words unchanged for the readability of editors. Second, since erroneous words cannot be converted correctly, spelling correction is effective. We combined filtering with cosine similarities and edit distance to correct learners' spelling errors. Third, we greedily convert Roman characters to kana for manual correction by native Japanese teachers. We compared our proposed system with a standard IM and conducted error analysis of our system, showing the characteristics of the learners' errors.
Related Work
Our interest is mainly focused on how to deal with erroneous inputs. Error detection and correction on sentences written in kana with a kana character N-gram was proposed in (Shinnou, 1999). Our approach is similar to this, but our target is sentences in Roman characters and has the additional difficulty of language identification. Error-tolerant Chinese input methods were introduced in (Zheng et al., 2011; Chen and Lee, 2000). Though Roman-to-kana conversion is similar to pinyin-to-Chinese conversion, our target differs from them because our motivation is to help Japanese language teachers. Japanese commercial IMs such as Microsoft Office IME 1 , ATOK 2 , and Google IME 3 have a module of spelling correction, but their target is native Japanese speakers. (Ehara and Tanaka-Ishii, 2008) presented a high accuracy language detection system for text input. We perform error correction in addition to language identification. Correcting Japanese learners' errors is also proposed in (Mizumoto et al., 2011). They try to correct sentences written in mixed kana and kanji, whereas we aim at texts in Roman characters.
Romanization of Japanese
There are some different standards of romanization in Japanese. The three main ones are Hepburn romanization, Kunrei-shiki Romaji, and Nihonshiki Romaji. Most Japanese learners write in the Hepburn system, so we use this standard for our conversion system. Hepburn romanization generally follows English phonology with Romance vowels. It is an intuitive method of showing the pronunciation of a word in Japanese. The most common variant is to omit the macrons or circumflexes used to indicate a long vowel.
Romanized Japanese Learners Corpus from Lang-8
To our knowledge, there are no Japanese learners' corpora written in Roman characters. Therefore, we collected text for a romanized Japanese learners' corpus from Lang-8 4 , a language learning SNS. Since it does not officially distribute the data, we crawled the site in Dec 2010. It has approximately 75,000 users writing on a wide range of topics. There are 925,588 sentences written by Japanese learners and 763,971 (93.4%) are revised by human editors (Mizumoto et al., 2011). About 10,000 sentences of them are written in Roman characters. Table 1 shows some examples of sentences in Lang-8. As a feature of learners' sentences in Roman characters, most of them have delimiters between words, but verbs and their conjugational endings are conjoined. Another feature is the ambiguity of particle spelling. For example, "は" (topic marker) is assigned to ha by the conversion rule of Hepburn romanization, but it is pronounced as wa, so both of them are found in the corpus. Pairs of "を" wo (accusative case marker) and o, and "へ" he (locative-goal case marker) and e, also have the same ambiguity.
Error Tolerant Romaji-kana Conversion System
The system consists of three components: language identification, error correction with approximate matching, and Roman-to-kana conversion.

4 http://lang-8.com/
Language Identification
Language identification is done by exact matching of input sequences against an English dictionary and a romanized 5 Japanese dictionary. Learners sometimes directly write words in their native language without adapting to Japanese romaji style. Since we are not focusing on implementing full transliteration (Knight and Graehl, 1998), we would like to convert only Japanese words into kana. To achieve this, we use an English word dictionary because most foreign words found in learners' sentences are English words. By adding a dictionary, we can easily extend our system to another language. Those words matched with the dictionary are not converted. WordNet 2.1 6 is used as the dictionary. It has 155,287 unique words. We also use a Japanese word dictionary to decide whether a word goes to the approximate word matching phase or not. The Japanese word dictionary is IPADic 2.7.0. We also use a dictionary of Japanese verb conjugations, because verbs in learners' sentences are followed by conjugational endings but they are separated in our word dictionary. The conjugation dictionary is made of all the occurrences of verbs and their conjugations extracted from the Mainichi newspaper of 1991, using the Japanese dependency parser CaboCha 0.53 7 to find bunsetsu (phrases) containing at least one verb. The number of extracted unique conjugations is 243,663.
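A minimal sketch of this dictionary-based identification step could look as follows; the set names and return labels are illustrative, not the names used in the actual system, and the dictionaries are assumed to be loaded into plain Python sets.

```python
# Minimal sketch of the language identification step: exact dictionary lookup.
# english_words, japanese_words and japanese_conjugations are assumed to be
# sets loaded from WordNet, IPADic and the conjugation dictionary, respectively.

def classify_token(token, english_words, japanese_words, japanese_conjugations):
    """Decide how a token should be handled downstream."""
    t = token.lower()
    if t in english_words:
        return "foreign"          # leave unchanged (e.g., "musical")
    if t in japanese_words or t in japanese_conjugations:
        return "japanese"         # convert directly to kana
    return "unknown"              # pass to spelling correction

# Example
# classify_token("musical", {"musical"}, {"mitai"}, set())  -> "foreign"
# classify_token("mietai",  {"musical"}, {"mitai"}, set())  -> "unknown"
```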
Error Correction
Words which are not matched in either the English or the Japanese dictionary in the language identification step are corrected by the following method. Spelling error correction is implemented by approximate word matching with two different measures. One is the cosine similarity of character unigrams. The other is edit distance. We use only IPADic to get approximate words.
Candidate Generation with Approximate Word Matching
First, we would like to select candidates with the minimum edit distance (Wagner and Fischer, 1974). Edit distance is the minimum number of editing operations (insertion, deletion and substitution) required to transform one string into another. However, the computational cost of edit distance calculations can be a problem with a large vocabulary. 8 Therefore, we reduce the number of candidates using approximate word matching with cosine distance before calculating edit distance (Kukich, 1992). Cosine distance is calculated using character n-gram features. We set n = 1 because it covers most candidates in the dictionary and reduces the number of candidates appropriately. For example, when we retrieved the approximate words for packu in our dictionary with cosine distance, the number of candidates was reduced to 163; examples of retrieved words are kau, pakku, chikau, pachikuri, etc. Approximate word matching with cosine similarity can be performed very efficiently (Okazaki and Tsujii, 2010) 9 to get candidates from a large scale word dictionary.
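The two-stage filter can be sketched as below. The character-unigram cosine filter and the edit-distance threshold of 1 follow the description above, but the cosine cutoff value and helper names are our own illustrative choices, not those of the actual implementation (which uses SimString for the approximate matching stage).

```python
# Sketch of candidate generation: a cheap character-unigram cosine filter
# followed by an exact edit-distance check (threshold 1, as in footnote 8).
import math
from collections import Counter

def unigram_cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[ch] * cb[ch] for ch in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ch_a in enumerate(a, 1):
        cur = [i]
        for j, ch_b in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ch_a != ch_b)))
        prev = cur
    return prev[-1]

def candidates(word, dictionary, cosine_min=0.8, max_edits=1):
    rough = [w for w in dictionary if unigram_cosine(word, w) >= cosine_min]
    return [w for w in rough if edit_distance(word, w) <= max_edits]

# candidates("packu", {"pakku", "kau", "chikau", "pachikuri"}) -> ["pakku"]
```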
Selecting the Most Likely Candidate
The system selects the correct word by choosing the most likely candidate according to its N-gram cost normalized by word length. The cost is calculated with a romanized character 5-gram model built from the kakasi-romanized Mainichi newspaper corpus of 1991 using SRILM 1.5.12 10 with Witten-Bell smoothing. 11
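The ranking step can be sketched as follows. The per-n-gram log probabilities would come from the SRILM-trained model; here we stand them in with a plain dictionary, and the unseen-n-gram penalty is an assumption made only for the example.

```python
# Sketch of candidate selection: score each candidate with a character n-gram
# model and normalize by word length; the lowest cost (negative log-prob) wins.

def ngram_cost(word, logprob, n=5, bos="^", eos="$"):
    """Average negative log probability of the character n-grams in `word`.
    `logprob` maps an n-gram string to its log10 probability (stand-in for SRILM)."""
    padded = bos * (n - 1) + word + eos
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    total = sum(-logprob.get(g, 10.0) for g in grams)   # 10.0 = assumed penalty for unseen n-grams
    return total / len(word)

def best_candidate(cands, logprob):
    return min(cands, key=lambda w: ngram_cost(w, logprob))
```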
Converting Roman Characters into Kana
We greedily convert Roman characters into the longest matching kana characters. If a word includes a character with a circumflex, it is assumed to be two vowels marking a long sound (e.g., "kyôdai" is expanded as kyoudai, きょうだい "brother"). Characters not used in the Hepburn system are assumed to represent another character with a similar sound in English if possible. For example, ca, ci, cu, ce, co are treated as ka, shi, ku, se, ko respectively.
Most kanas correspond to a pair of a consonant and a vowel. Although most pairs of Roman characters are converted into kana unambiguously, some pairs have several possibilities. One of them is a pair of n and its following characters. For example, we can read the Japanese word kinyuu as "金融 /kin-yuu: finance" or "記入 /kinyuu: entry." The reason why this occurs is that n can form a syllable on its own. Solving this kind of ambiguity is out of the scope of this paper, and we hope it is not a problem in practice, because after manual correction we can translate kana back to Roman characters unambiguously.
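A toy version of the greedy longest-match conversion might look like this; the mapping table is only a small excerpt of a full Hepburn table, and unconvertible characters are simply passed through.

```python
# Toy greedy longest-match romaji-to-kana converter (excerpt of a Hepburn table).
ROMAJI_TO_KANA = {
    "kyo": "きょ", "shi": "し", "chi": "ち", "tsu": "つ",
    "ka": "か", "ki": "き", "ku": "く", "ke": "け", "ko": "こ",
    "ma": "ま", "mi": "み", "ta": "た", "da": "だ", "yo": "よ", "yu": "ゆ",
    "a": "あ", "i": "い", "u": "う", "e": "え", "o": "お", "n": "ん",
}
MAX_KEY = max(len(k) for k in ROMAJI_TO_KANA)

def romaji_to_kana(text):
    out, i = [], 0
    while i < len(text):
        for length in range(min(MAX_KEY, len(text) - i), 0, -1):  # longest match first
            chunk = text[i:i + length]
            if chunk in ROMAJI_TO_KANA:
                out.append(ROMAJI_TO_KANA[chunk])
                i += length
                break
        else:
            out.append(text[i])   # keep characters that cannot be converted
            i += 1
    return "".join(out)

# romaji_to_kana("kyoudai") -> "きょうだい"
# romaji_to_kana("kinyuu")  -> "きんゆう"  (one of the two possible readings)
```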
Experiments
We have evaluated our approach in converting Roman characters into kana after spelling error correction of sentences.
Evaluation Metrics
We evaluate the accuracy of word error correction. We also evaluate error correction performance with recall and precision. Recall and Precision are defined as follows:
Recall = N_t / N_w, Precision = N_t / N_e,
where N_t, N_w and N_e denote the number of words corrected from a wrong word to the right word by the system, the number of words that contain errors, and the number of words edited by the system.
Experimental Settings
For comparison, we use Anthy 7900 12 as a baseline, which is one of the de facto standard open source IMs. It does not use either language identification or approximate word matching. Note that Anthy is not particularly tailored for spelling error correction. To compare with another system which has an error correction function, we experimented with the Google CGI API for Japanese Input 13 . Since it does not have a Romaji-kana conversion module, the experiment was conducted using Romaji-kana conversion by Anthy and error correction by the Google API. We also compare our system with and without approximate word matching.
Data Set
We collected 500 sentences written in Roman characters from Lang-8. Although some of them had already been revised, we manually re-annotated gold standard answers to enhance the consistency of quality. While making the test set, we corrected only spellings even if they contained other types of error, because our main purpose is correcting spelling errors. 14

Experimental Results

Table 4 shows the spelling correction accuracy. The word accuracy of the proposed system is 85.0%, which is about 10 points higher than Anthy's 74.5%. The accuracy of our method without approximate word matching is 84.5%, showing that language identification is the crucial component of our method. 15 Examples of successfully corrected words are shown in Table 2. Underlined words are erroneous words and words underlined with a wavy line are foreign words. Spelling correction with approximate matching can improve precision without degrading recall. However, the low performance of the baseline system shows the difficulty of this task.
Discussion
Examples of uncorrected words are shown in Table 3. The three largest error categories are matching with a valid word (40%), too large an edit distance between the original word and the correct word (24%), and compound words (14%).
Matching with valid word: Matching with a valid word occurs when the input matches a word in the dictionary. For example, if a learner incorrectly writes renshou instead of renshuu, it is not corrected because renshou is found in the Japanese dictionary. This type of error cannot be corrected without context information, so a word-based language model is worth trying.

14 There are 3,274 words in the test data and 32 characters in a sentence on average.
15 The number of foreign words in the test data is 137, and 124 of them were correctly identified.
Too large edit distance: A word whose edit distance from the input is larger than the threshold is not selected as a candidate. For example, if the learner writes muzukashii as musugashi, the edit distance between the words is 3, which is larger than our threshold (=1). We can vary the threshold, but setting a larger threshold introduces dissimilar words into the candidate list. Table 5 shows error types with their percentage against all erroneous words and system accuracy (where L1 means the learner's native language). Learners tend to confuse vowels and write erroneous words such as domou instead of doumo. Setting a lower cost for edit operations on vowels than on consonants may fix this kind of phonetic error. A Japanese IM which lets us input kana and kanji by typing only consonants (Tanaka-Ishii et al., 2001) can be seen as a special case where the cost of edit operations on vowels is set to zero.
Compound words: Our system is effective when our dictionary and the learners' sentence use the same granularity of tokenization. For example, "nouryokushiken: capacity test" can be treated as two words, "nouryoku: capacity" and "shiken: test." In fact, IPADic does not have an entry for "nouryoku shiken." Therefore, the single word "nouryokushiken" does not hit when matching. To solve this problem, word segmentation techniques may be effective.
Table 1: Examples of learners' sentences in Lang-8. Spelling errors are underlined.

learners' sentence                      correct
yorushiku onegia shimasu.               yoroshiku onegai shimasu.
Muscle musical wo mietai.               Muscle musical wo mitai.
anatah wa aigo ga wakarimasu ka.        anata wa eigo ga wakarimasu ka.
Table 2: Examples of successfully corrected words

misspelled      correct
shuutmatsu      shuumatsu
do-yoobi        doyoubi
packu           pakku
Table 3: Examples of uncorrected words

misspelled        correct
renshou           renshuu
musugashi         muzukashii
noryoukushiken    nouryokushiken
Table 4: Performance of error correction

method                       Acc     P       R
Anthy (baseline)             74.5    66.7    69.7
Anthy w/ Google API          77.8    69.8    72.9
Proposed w/o word match      84.5    76.6    77.3
Proposed w/ word match       85.0    78.1    78.6
Table 5: Error types and system performance (percentage)

error type              number        corrected
Typo                    31 (13.1)     7 (22.6)
Due to L1 phonetics     62 (26.3)     4 (6.5)
Due to L1 writing       28 (11.9)     2 (7.1)
Confusing vowels        88 (37.3)     7 (8.0)
Others                  27 (11.4)     0.0 (0.0)
Total                   236           20 (8.5)
1 http://www.microsoft.com/japan/office/2010/ime/default.mspx
2 http://www.atok.com/
3 http://www.google.com/intl/ja/ime/
5 Romanization was performed by kakasi 2.3.4. http://kakasi.namazu.org/
6 http://wordnet.princeton.edu/
7 http://chasen.org/~taku/software/cabocha/
8 We set the maximum distance between input and candidate as 1, because it achieved the best accuracy in a preliminary experiment.
9 http://www.chokkan.org/software/simstring/
10 http://www-speech.sri.com/projects/srilm/
11 Witten-Bell smoothing works well compared to Kneser-Ney when data is very sparse.
12 http://anthy.sourceforge.jp/
13 http://www.google.com/intl/ja/ime/cgiapi.html
Acknowledgment
We would like to express our gratitude to our colleagues, Tomoya Mizumoto and Joseph Irwin, for their cooperation.
Zheng Chen and Kai-Fu Lee. 2000. A New Statistical Approach to Chinese Pinyin Input. In Proceedings of ACL, pages 241-247.
Yo Ehara and Kumiko Tanaka-Ishii. 2008. Multilingual Text Entry using Automatic Language Detection. In Proceedings of IJCNLP, pages 441-448.
Kevin Knight and Jonathan Graehl. 1998. Machine Transliteration. Computational Linguistics, 24(4):599-612.
Karen Kukich. 1992. Techniques for Automatically Correcting Words in Text. ACM Computing Surveys, 24(4):377-439.
Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining Revision Log of Language Learning SNS for Automated Japanese Error Correction of Second Language Learners. In Proceedings of IJCNLP.
Naoaki Okazaki and Jun'ichi Tsujii. 2010. Simple and Efficient Algorithm for Approximate Dictionary Matching. In Proceedings of COLING, pages 851-859.
Hiroyuki Shinnou. 1999. Detection and Correction for Errors in Hiragana Sequences by a Hiragana Character N-gram (in Japanese). Transaction of Information Processing Society of Japan, 40(6):2690-2698.
Kumiko Tanaka-Ishii, Yusuke Inutsuka, and Masato Takeichi. 2001. Japanese input system with digits - Can Japanese be input only with consonants? In Proceedings of HLT, pages 211-218.
Robert A. Wagner and Michael J. Fischer. 1974. The String to String Correction Problem. Journal of the ACM, 21(1):168-173.
Yabin Zheng, Chen Li, and Maosong Sun. 2011. CHIME: An Efficient Error-Tolerant Chinese Pinyin Input Method. In Proceedings of IJCAI, pages 2551-2556. |
259,266,012 | LT at SemEval-2023 Task 1: Effective Zero-Shot Visual Word Sense Disambiguation Approaches using External Knowledge Sources | The objective of the SemEval-2023 Task 1: Visual Word Sense Disambiguation (VWSD)(Raganato et al., 2023)is to identify the correct image illustrating the indented meaning of a target word and some minimal additional context.The omnipresence of textual and visual data in the task strongly suggests the utilization of the recent advances in multi-modal machine learning, i.e., pretrained visiolinguistic models (VLMs). Often referred to as foundation models due to their strong performance on many vision-language downstream tasks, these models further demonstrate powerful zero-shot capabilities. In this work, we utilize various pertained VLMs in a zero-shot fashion for multiple approaches using external knowledge sources to enrich the contextual information. Further, we evaluate our methods on the final test data and extensively analyze the suitability of different knowledge sources, the influence of training data, model sizes, multi-linguality, and different textual prompting strategies. Although we are not among the best-performing systems (rank 20 of 56), our experiments described in this work prove competitive results. Moreover, we aim to contribute meaningful insights and propel multi-modal machine learning tasks like VWSD. Jitsev. 2022. Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143. | [
337425,
208117506,
201646309,
216036089
] | LT at SemEval-2023 Task 1: Effective Zero-Shot Visual Word Sense Disambiguation Approaches using External Knowledge Sources
July 13-14, 2023
Florian Schneider [email protected]
Department of Informatics
Language Technology Group
Universität Hamburg
Germany
Chris Biemann [email protected]
Department of Informatics
Language Technology Group
Universität Hamburg
Germany
LT at SemEval-2023 Task 1: Effective Zero-Shot Visual Word Sense Disambiguation Approaches using External Knowledge Sources
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), July 13-14, 2023
The objective of the SemEval-2023 Task 1: Visual Word Sense Disambiguation (VWSD)(Raganato et al., 2023)is to identify the correct image illustrating the indented meaning of a target word and some minimal additional context.The omnipresence of textual and visual data in the task strongly suggests the utilization of the recent advances in multi-modal machine learning, i.e., pretrained visiolinguistic models (VLMs). Often referred to as foundation models due to their strong performance on many vision-language downstream tasks, these models further demonstrate powerful zero-shot capabilities. In this work, we utilize various pertained VLMs in a zero-shot fashion for multiple approaches using external knowledge sources to enrich the contextual information. Further, we evaluate our methods on the final test data and extensively analyze the suitability of different knowledge sources, the influence of training data, model sizes, multi-linguality, and different textual prompting strategies. Although we are not among the best-performing systems (rank 20 of 56), our experiments described in this work prove competitive results. Moreover, we aim to contribute meaningful insights and propel multi-modal machine learning tasks like VWSD. Jitsev. 2022. Reproducible scaling laws for contrastive language-image learning. arXiv preprint arXiv:2212.07143.
Introduction
This paper presents and analyses effective zero-shot approaches for the SemEval-2023 Task 1: Visual Word Sense Disambiguation (VWSD) (Raganato et al., 2023). In traditional word sense disambiguation (WSD), the context or sentence in which ambiguous words, i.e., words with multiple meanings, occur is used for disambiguation by identifying the correct sense in a sense inventory. Frequently used as sense inventories are dictionaries or knowledge bases such as WordNet or DBPedia. As opposed to traditional WSD, in the VWSD shared task, images are used to disambiguate a word given a context. Precisely, given a word and another word serving as context, the task is to identify the image that corresponds to or illustrates the correct meaning in a set of ten images. In the trial phase of the task, 12869 samples in English, including gold labels, were provided. However, besides 463 English samples, the final phase test data also contains 305 Italian and 200 Farsi samples. A random VWSD sample is illustrated in Figure 1.

Figure 1: An illustration of a random VWSD sample with the target word 'bonxie', the context 'bonxie skua', and the correct image highlighted by a golden border
Due to the multi-modal nature of the task, it requires methods or models to understand textual semantics contained in the target word and context word and visual semantics contained in the images. Therefore, our approach leverages state-of-the-art pretrained visiolinguistic models (VLMs) in a zero-shot fashion, i.e., we do not continue pretraining or finetune. This is motivated by several reasons based on the task's data: First, the samples are not restricted to a particular topic, e.g., animals or plants, but can belong to any topic (open-domain), which rules out domain adaptation strategies. Further, since current VLMs are trained on massive amounts of text-image pairs crawled from the internet, continuing pretraining on additional open-domain data will likely have no effect. Second, the textual context is minimal and contains only a single or, at most, two additional words, which are often rare English words like the Latin names of certain plants. Additionally, it frequently requires expert knowledge to identify the correct image because the set of ten images often contains images very similar to the gold image. Due to this, finetuning a VLM on the provided data is ineffective, which we also confirmed in finetuning experiments that we conducted but do not report here. Third, recent pretrained VLMs have proven capable of grasping textual and visual semantics out of the box by demonstrating strong zero-shot performance in many vision-language downstream tasks.
The central strategy of the approaches presented by this work is to utilize the given information to acquire additional context from external knowledge sources. A pretrained VLM then computes embeddings for the acquired context and all images to find the image with the maximum similarity. See Section 3 for algorithmic details and an illustrative overview.
Our code is publicly available on GitHub 1
Background
Pretrained Visio-Linguistic Models The combination of recent advances in Natural Language Processing and Computer Vision has greatly increased interest and performance in the emerging field of multi-modal machine learning, especially in visio-linguistic models (VLMs) with strong zero-shot performance on many downstream tasks (Long et al., 2022). In this work, we specifically focus on VLMs referred to as CLIP, as made available through OpenClip (Ilharco et al., 2021), which has an active and large community. Specifically, we evaluate the performance of our VWSD approaches using different sizes of the original model (Radford et al., 2021), models trained on the publicly available LAION datasets (Schuhmann et al., 2022), and multi-lingual versions from SentenceTransformers (Reimers and Gurevych, 2019, 2020) and OpenClip (Cherti et al., 2022). An overview with more details about the CLIP models employed in this work is given in Table 1.
External Knowledge Since the provided context in a VWSD sample is minimal, we use different external knowledge sources to acquire additional contextual information. One of the sources is Wikipedia, from which we retrieve article summaries using the target word and the additional context word(s). Another source is a large-scale corpus (Panchenko et al., 2018), containing 252B tokens based on English CommonCrawl data, which we have indexed using ElasticSearch (Gormley and Tong, 2015). The only multi-modal external knowledge source we employ is VisualSem (Alberts et al., 2021), a high-quality knowledge graph containing 90K nodes with 1.3M glosses in 14 languages and 930K images associated with the nodes. Unfortunately, our request to use the large-scale multimodal knowledge graph BabelNet (Navigli et al., 2021) was rejected. BabelNet arguably would have improved our results significantly since it contains 1.4M senses described by 135M glosses in 500 languages and illustrated by 51M images.
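As an illustration of how sentences could be retrieved from such an ElasticSearch-indexed corpus, the following sketch uses the official elasticsearch Python client; the index name, field name, and host are assumptions made for this example, not the actual configuration used in our system.

```python
# Illustrative sketch: retrieve Common Crawl sentences that contain the target
# word together with its context word, to serve as additional textual context.
# Index and field names ("commoncrawl", "text") are assumed for this example.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def retrieve_context_sentences(target, context, k=5):
    query = {
        "bool": {
            "must": [
                {"match_phrase": {"text": target}},
                {"match": {"text": context}},
            ]
        }
    }
    resp = es.search(index="commoncrawl", query=query, size=k)
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

# retrieve_context_sentences("bonxie", "skua") -> up to 5 sentences mentioning both words
```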
Approaches
This section provides details for the zero-shot approaches to VWSD presented by this work. Section 4 then analyzes and discusses the evaluation results. The general strategy, illustrated in Figure 2, comprises five primary steps: In (1), we acquire additional context from an external knowledge source (see Section 2) using the textual information provided in a VWSD sample. In (2) and (3), we leverage a pretrained CLIP model to compute embeddings from the acquired context and all ten images contained in the sample. In (4), we compute the cosine similarity of the text embedding and all image embeddings, and in (5) we select the image with the maximum similarity as the best matching image. Depending on the method, we use different external knowledge sources, employ different pretrained CLIP models, or compute the textual or visual embeddings differently.

Table 1: Details on different pretrained CLIP models evaluated in this work. The Alias column describes the alias for the model within this paper; # P is the number of parameters; H is the embedding dimension; # TS and BS are the number of text-image pairs and the batch size used during pretraining, respectively; ML indicates whether the model is multi-lingual or not. Note that the names are hyperlinks directing to huggingface for more information.

Name                                                      Alias   # P    H     # TS   BS    ML
sentence-transformers/clip-ViT-B-32-multilingual-v1       SBCM    28M    512   400M   32K   yes
openai/clip-vit-base-patch32                              OAIB    15M    512   400M   32K   no
openai/clip-vit-large-patch14                             OAIL    42M    768   400M   32K   no
laion/CLIP-ViT-L-14-laion2B-s32B-b82K                     LCL     42M    768   2B     82K   no
laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-lai...       LCH     119M   1024  5B     90K   yes
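Steps (2)-(5) can be sketched with the Hugging Face transformers CLIP interface as shown below. The checkpoint name corresponds to the OAIB row of Table 1, and the prompt template is just one of the templates we evaluate; this is a minimal illustration, not our full pipeline.

```python
# Sketch of steps (2)-(5): embed the (expanded) context and the ten candidate
# images with a pretrained CLIP model and pick the most similar image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # OAIB in Table 1
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pick_image(context_text, image_paths):
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[context_text], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(-1)     # cosine similarity, step (4)
    return image_paths[int(sims.argmax())]          # step (5)

# pick_image('An image of a "bonxie" as in "bonxie skua".', list_of_ten_image_paths)
```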
Baseline -No External Knowledge
Our baseline method does not use external knowledge but computes the textual embedding only from the target word and/or context in a VWSD sample. However, we test different template sentences or prompts to compute the textual embedding (see Table 2).
Wikipedia Summaries In this approach, we implemented a multi-stage algorithm to retrieve the summary of the best-matching Wikipedia article for the target word and the provided context. For more details on this algorithm, please refer to the implementation published on our GitHub repository. Although Wikipedia is available in many languages, we translate the Italian and Farsi samples to English using Google Translate 2 for library limitation reasons. If we cannot retrieve a summary for a given VWSD sample, we use a template sentence that contains the target word and the context. We then truncate too-long summaries and use the CLIP text encoder to compute the textual embedding. VisualSem Arguably the most sophisticated approaches are based on the multi-modal knowledge graph (KG) VisualSem (see Section 2). There, we first retrieve the best-matching node in the KG for the textual or visual information in a VWSD sample. To do so, we use a pretrained CLIP to compute node embeddings for each node in the KGs and use the FAISS (Johnson et al., 2019) for indexing and efficient similarity search. To compute the node embeddings, we tested four strategies: For the "single_image" and "single_gloss" strategies, one node in the KG has several embeddings, i.e., we compute an embedding for each associated image up to a maximum of 50 images, and each associated gloss in a particular language. For the "avg_image" and "avg_gloss" strategies, we compute a single embedding for each node in the KG, which is the average of the respective single embeddings. Then, to retrieve the best matching node(s) for a VWSD sample, we first use the same CLIP model used to compute the KG node embeddings and compute a query embedding from the sample's textual or visual information. When using textual information of a sample, we refer to it as "text_first"; when using visual information, i.e., the images, we refer to it as "image_first". Using the query embedding, we then perform an exhaustive similarity search over all nodes to find the best matching node(s). Finally, we find the most similar image, i.e., our prediction for the image with the intended meaning, using the embedding of the retrieved node and the "text_first" or "image_first" embedding. Since this algorithm has many possible parameters and combinations thereof, it is challenging to describe, hence, please refer to our GitHub repository for implementation details.
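The VisualSem node retrieval step could be sketched as follows; the "avg_gloss" node-embedding strategy with inner-product search is shown as one possible configuration, and the CLIP embedding function is assumed to be the same text/image encoder used elsewhere in our pipeline.

```python
# Sketch of VisualSem node retrieval: build a FAISS index over node embeddings
# (here the "avg_gloss" strategy) and query it with a sample's text embedding
# ("text_first" retrieval).
import faiss
import numpy as np

def build_node_index(node_embeddings):
    """node_embeddings: (num_nodes, dim) float32, one averaged gloss embedding per node."""
    emb = np.ascontiguousarray(node_embeddings, dtype=np.float32)
    faiss.normalize_L2(emb)                       # so inner product = cosine similarity
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    return index

def retrieve_nodes(index, query_embedding, k=1):
    q = np.ascontiguousarray(query_embedding.reshape(1, -1), dtype=np.float32)
    faiss.normalize_L2(q)
    scores, node_ids = index.search(q, k)         # FAISS returns (scores, ids)
    return node_ids[0], scores[0]

# best_nodes, _ = retrieve_nodes(build_node_index(node_embs), clip_text_embedding("bonxie skua"))
# (clip_text_embedding is a hypothetical helper wrapping the CLIP text encoder.)
```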
Evaluation and Analysis
In this section, we present and analyze the evaluation results of our approaches described in Section 3. The evaluation is based on the final multilingual evaluation data, including the gold labels released in the Google Group after the competition 3 . Evaluation results for the approaches discussed in this section are depicted in Figure 3.
Baseline -No External Knowledge
In our first experiments, we tested the performance of different pretrained CLIP models without external knowledge. From the results shown in the first row of Figure 3, it can be observed that all models show strong performance on the English test data. As expected, the largest model, LCH, outperforms the smallest model by a significant margin. Noticeable is also the linear decrease in performance with respect to the complexity of the model and the number of text-image pairs in the training data. When inspecting the baseline results for Italian and Farsi languages, a remarkable decrease in performance is noticeable. However, as expected, the multi-lingual CLIP variants significantly outperform English-only versions. Further, a pattern seen across all models and approaches is that the Hit@3 score is significantly higher than the Hits@1 score. This leads to the conclusion that the samples often contain a few very similar images, which are hard to disambiguate and require expert knowledge.
In another experiment, we measured the performance impact of the employed template string. Therefore, we used the 9 different template strings described in Table 2 (e.g., 'An image of a "WORD" as in "CONTEXT".') to compute textual embeddings using the LCH model and evaluated the performance of the baseline approach on the English test data. Note that we took inspiration for the template strings from (Radford et al., 2021). From the results depicted in Figure 4, we can see that the most influential parameter of our template strings is whether or not it contains the context information. Template strings containing context information work significantly better than template strings containing only the target word.

3 See the CodaLab competition page for details.
Wikipedia Summaries From the results in the second row of Figure 3, we can notice substantial improvements in performance for the Italian and Farsi data, often on par with English data that improved only slightly or even decreased. From this, we can conclude that the Italian and Farsi translation into English worked reasonably well and that Wikipedia is a promising resource for VWSD. We argue that this approach could be further improved when using Wikipedia in the respective languages directly and when additional information, such as images, is used.
Common Crawl Sentences From the third row of Figure 3, we can see that this approach substantially outperforms all other approaches regardless of the employed model or language of the VWSD samples. Especially for Farsi, the improvements compared to our baseline are significant and are in a similar range as the corresponding English samples. This again proves the effectiveness of our simple translation approach. We argue that the translation works so well because only a single or a few words need to be translated, which could be easily done by a dictionary lookup. Another reason this approach works well is arguably the web-scale size of the corpus.
VisualSem As shown by the results in the last two rows of Figure 3, the VisualSem approaches did not work. All results are worse than or equal to our baseline results, independent of the employed model and language. These are unexpected results since it is the only multi-modal knowledge source we employ, and they therefore need further investigation and error analysis in future work. Probable causes for the poor performance could be algorithmic flaws, the relatively small size of VisualSem, or ignoring meaningful but possibly important information, such as relations between the nodes, in our approaches.

Figure 3: Evaluation results of different zero-shot VWSD approaches presented in this paper. The y-Axis label of each row describes the name of the approach. The x-Axis label of each column indicates the language of the VWSD samples, whereas the x-Axis ticks refer to the alias of the CLIP model used in the experiment (see Table 1).

Figure 4: Evaluation results from the baseline approach using different template strings as described in Table 2.
Conclusion
This work presents various zero-shot Visual Word Sense Disambiguation approaches using different external knowledge sources. Across all approaches, we analyzed different pretrained versions of the CLIP model varying in size, training data, and multi-lingual capabilities. Further, we assessed the suitability of three external knowledge sources: Wikipedia, a large-scale English Common Crawl corpus, and the multi-modal knowledge graph Visu-alSem. Our best-performing approach involved the Common Crawl corpus which we queried for sentences containing the target word and context, serving as additional context. By translating Farsi and Italian samples into English, we achieved strong competitive results not only for English samples.
Figure 2: A schematic overview of the general strategy for the VWSD zero-shot approaches presented by this work.
Table 2: Different template strings for English samples. The Alias column defines the alias within this paper.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision. In International Conference on Machine Learning (ICML), pages 4904-4916, Online.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. 2022. A ConvNet for the 2020s. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11976-11986, New Orleans, LA, USA.
Siqu Long, Feiqi Cao, Soyeon Caren Han, and Haiqing Yang. 2022. Vision-and-Language Pretrained Models: A Survey. In International Joint Conference on Artificial Intelligence (IJCAI), Vienna, Austria.
Roberto Navigli, Michele Bevilacqua, Simone Conia, Dario Montagnini, and Francesco Cecconi. 2021. Ten Years of BabelNet: A Survey. In International Joint Conference on Artificial Intelligence (IJCAI), pages 4559-4567, Online.
Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, and Chris Biemann. 2018. Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), Miyazaki, Japan.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning Transferable Visual Models from Natural Language Supervision. In International Conference on Machine Learning (ICML), pages 8748-8763, Online.
Alessandro Raganato, Iacer Calixto, Asahi Ushio, Jose Camacho-Collados, and Mohammad Taher Pilehvar. 2023. SemEval-2023 Task 1: Visual Word Sense Disambiguation. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Toronto, Canada. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3973-3983, Hong Kong, China.
Nils Reimers and Iryna Gurevych. 2020. Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512-4525, Online.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. LAION-5B: An Open Large-Scale Dataset for Training Next Generation Image-Text Models. In Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, New Orleans, LA, USA.
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. FLAVA: A Foundational Language And Vision Alignment Model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 15617-15629, New Orleans, LA, USA.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems (NIPS), volume 30, pages 5998-6008, Long Beach, CA, USA.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 38-45, Online.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. CoCa: Contrastive Captioners are Image-Text Foundation Models. Transactions on Machine Learning Research (TMLR).
11,641,943 | Fast Inference in Phrase Extraction Models with Belief Propagation | Modeling overlapping phrases in an alignment model can improve alignment quality but comes with a high inference cost. For example, the model of DeNero and Klein (2010) uses an ITG constraint and beam-based Viterbi decoding for tractability, but is still slow. We first show that their model can be approximated using structured belief propagation, with a gain in alignment quality stemming from the use of marginals in decoding. We then consider a more flexible, non-ITG matching constraint which is less efficient for exact inference but more efficient for BP. With this new constraint, we achieve a relative error reduction of 40% in F 5 and a 5.5x speed-up. | [
16516994,
12313253,
16749512,
1734281,
1567400,
765547,
1319915,
303981,
1557806,
9820235,
1819664,
2727312,
2646100,
528246,
5994263,
503611,
912349
] | Fast Inference in Phrase Extraction Models with Belief Propagation
June 3-8, 2012.
David Burkett [email protected]
Computer Science Division
University of California
Berkeley, CA
Dan Klein [email protected]
Computer Science Division
University of California
Berkeley, CA
Fast Inference in Phrase Extraction Models with Belief Propagation
June 3-8, 2012. 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Montréal, Canada, pages 29-38.
Modeling overlapping phrases in an alignment model can improve alignment quality but comes with a high inference cost. For example, the model of DeNero and Klein (2010) uses an ITG constraint and beam-based Viterbi decoding for tractability, but is still slow. We first show that their model can be approximated using structured belief propagation, with a gain in alignment quality stemming from the use of marginals in decoding. We then consider a more flexible, non-ITG matching constraint which is less efficient for exact inference but more efficient for BP. With this new constraint, we achieve a relative error reduction of 40% in F 5 and a 5.5x speed-up.
Introduction
Modern statistical machine translation (MT) systems most commonly infer their transfer rules from word-level alignments (Koehn et al., 2007;Li and Khudanpur, 2008;Galley et al., 2004), typically using a deterministic heuristic to convert these to phrase alignments (Koehn et al., 2003). There have been many attempts over the last decade to develop model-based approaches to the phrase alignment problem (Marcu and Wong, 2002;Birch et al., 2006;DeNero et al., 2008;Blunsom et al., 2009). However, most of these have met with limited success compared to the simpler heuristic method. One key problem with typical models of phrase alignment is that they choose a single (latent) segmentation, giving rise to undesirable modeling biases (DeNero et al., 2006) and reducing coverage, which in turn reduces translation quality (DeNeefe et al., 2007;DeNero et al., 2008). On the other hand, the extraction heuristic identifies many overlapping options, and achieves high coverage.
In response to these effects, the recent phrase alignment work of DeNero and Klein (2010) models extraction sets: collections of overlapping phrase pairs that are consistent with an underlying word alignment. Their extraction set model is empirically very accurate. However, the ability to model overlapping -and therefore non-local -features comes at a high computational cost. DeNero and Klein (2010) handle this in part by imposing a structural ITG constraint (Wu, 1997) on the underlying word alignments. This permits a polynomial-time algorithm, but it is still O(n 6 ), with a large constant factor once the state space is appropriately enriched to capture overlap. Therefore, they use a heavily beamed Viterbi search procedure to find a reasonable alignment within an acceptable time frame. In this paper, we show how to use belief propagation (BP) to improve on the model's ITG-based structural formulation, resulting in a new model that is simultaneously faster and more accurate.
First, given the model of DeNero and Klein (2010), we decompose it into factors that admit an efficient BP approximation. BP is an inference technique that can be used to efficiently approximate posterior marginals on variables in a graphical model; here the marginals of interest are the phrase pair posteriors. BP has only recently come into use in the NLP community, but it has been shown to be effective in other complex structured classification tasks, such as dependency parsing (Smith and Eisner, 2008). There has also been some prior success in using BP for both discriminative (Niehues and Vogel, 2008) and generative (Cromières and Kurohashi, 2009) word alignment models.
By aligning all phrase pairs whose posterior under BP exceeds some fixed threshold, our BP approximation of the model of DeNero and Klein (2010) can achieve a comparable phrase pair F 1 . Furthermore, because we have posterior marginals rather than a single Viterbi derivation, we can explicitly force the aligner to choose denser extraction sets simply by lowering the marginal threshold. Therefore, we also show substantial improvements over DeNero and Klein (2010) in recall-heavy objectives, such as F 5 .
More importantly, we also show how the BP factorization allows us to relax the ITG constraint, replacing it with a new set of constraints that permit a wider family of alignments. Compared to ITG, the resulting model is less efficient for exact inference (where it is exponential), but more efficient for our BP approximation (where it is only quadratic). Our new model performs even better than the ITG-constrained model on phrase alignment metrics while being faster by a factor of 5.5x.
Extraction Set Models
An extraction set π is a set of aligned phrase pairs to be extracted from (e, f), shown in Figure 1 as green rounded rectangles. We represent π as a set of boolean variables π_ghkℓ, each of which has the value true when the target span [g, h] is phrase-aligned to the source span [k, ℓ]. Following previous work on phrase extraction, we limit the size of π by imposing a phrase length limit d: π only contains a variable π_ghkℓ if h − g < d and ℓ − k < d.
There is a deterministic mapping π(a) from a word alignment to the extraction set licensed by that word alignment. We will briefly describe it here, and then present our factorized model.
Extraction Sets from Word Alignments
The mapping from a word alignment to the set of licensed phrase pairs π(a) is based on the standard rule extraction procedures used in most modern statistical systems (Koehn et al., 2003;Galley et al., 2006;Chiang, 2007), but extended to handle possible links (DeNero and Klein, 2010). We start by using a to find a projection from each target word e i onto a source span, represented as blue vertical lines in Figure 1. Similarly, source words project onto target spans (red horizontal lines in Figure 1). π(a) contains a phrase pair iff every word in the target span projects within the source span and vice versa. Figure 1 contains an example for d = 2.
Formally, the mapping introduces a set of spans σ. We represent the spans as variables whose values are intervals, where σ^e_i = [k, ℓ] means that the target word e_i projects to the source span [k, ℓ]. The set of legal values for σ^e_i includes any interval with 0 ≤ k ≤ ℓ < |f| and ℓ − k < d, plus the special interval [−1, ∞] that indicates e_i is null-aligned. The span variables for source words σ^f_j have target spans [g, h] as values and are defined analogously.
For a set I of positions, we define the range function:

range(I) = [−1, ∞] if I = ∅, and [min_{i∈I} i, max_{i∈I} i] otherwise    (1)
For a fixed word alignment a we set the target span variable σ^e_i:

σ^e_{i,s} = range({j : a_ij = sure})    (2)
σ^e_{i,p} = range({j : a_ij ≠ off})    (3)
σ^e_i = σ^e_{i,s} ∩ σ^e_{i,p}    (4)
As illustrated in Figure 1, this sets σ e i to the minimal span containing all the source words with a sure link to e i if there are any. Otherwise, because of the special case for range(I) when I is empty, σ e i,s = [−1, ∞], so σ e i is the minimal span containing all poss-aligned words. If all word links to e i are off, indicating that e i is null-aligned, then σ e i is [−1, ∞], preventing the alignment of any phrase pairs containing e i .
Finally, we specify which phrase pairs should be included in the extraction set π. Given the spans σ based on a, π(a) sets π_ghkℓ = true iff every word in each phrasal span projects within the other:
σ^e_i ⊆ [k, ℓ]  ∀i ∈ [g, h]   and   σ^f_j ⊆ [g, h]  ∀j ∈ [k, ℓ]    (5)
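To make the mapping concrete, the following is a small illustrative sketch (not the authors' code) that computes π(a) by brute force. The representation of a as a dict keyed by (i, j) with values "sure"/"poss"/"off", and the helper names, are our own assumptions:

NULL_SPAN = (-1, float("inf"))

def span_range(positions):
    # range(I) from Eq. (1): the special null span for an empty set, else [min, max]
    return NULL_SPAN if not positions else (min(positions), max(positions))

def word_span(link_values):
    # sigma for one word: intersection of the sure-link span and the not-off span, Eqs. (2)-(4)
    sure = span_range([p for v, p in link_values if v == "sure"])
    poss = span_range([p for v, p in link_values if v != "off"])
    return (max(sure[0], poss[0]), min(sure[1], poss[1]))

def extraction_set(a, n_e, n_f, d):
    sigma_e = [word_span([(a[(i, j)], j) for j in range(n_f)]) for i in range(n_e)]
    sigma_f = [word_span([(a[(i, j)], i) for i in range(n_e)]) for j in range(n_f)]

    def inside(span, lo, hi):
        # a null-aligned word's span [-1, inf] never fits inside a phrase, blocking extraction
        return lo <= span[0] and span[1] <= hi

    pi = set()
    for g in range(n_e):
        for h in range(g, min(g + d, n_e)):            # h - g < d
            for k in range(n_f):
                for l in range(k, min(k + d, n_f)):    # l - k < d
                    if all(inside(sigma_e[i], k, l) for i in range(g, h + 1)) and \
                       all(inside(sigma_f[j], g, h) for j in range(k, l + 1)):
                        pi.add((g, h, k, l))           # Eq. (5)
    return pi

The brute-force loops only mirror the definition; they are not the factorized inference developed in the rest of the paper.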
Formulation as a Graphical Model
We score triples (a, π, σ) as the dot product of a weight vector w that parameterizes our model and a feature vector φ(a, π, σ). The feature vector decomposes into word alignment features φ a , phrase pair features φ π and target and source null word features
φ^e_∅ and φ^f_∅:¹

φ(a, π, σ) = ∑_{i,j} φ_a(a_ij) + ∑_{g,h,k,ℓ} φ_π(π_ghkℓ) + ∑_i φ^e_∅(σ^e_i) + ∑_j φ^f_∅(σ^f_j)    (6)
This feature function is exactly the same as that used by DeNero and Klein (2010). 2 However, while they formulated their inference problem as a search for the highest scoring triple (a, π, σ) for an observed sentence pair (e, f), we wish to derive a conditional probability distribution p(a, π, σ|e, f). We do this with the standard transformation for linear models: p(a, π, σ|e, f) ∝ exp(w·φ(a, π, σ)). Due to the factorization in Eq. (6), this exponentiated form becomes a product of local multiplicative factors, and hence our model forms an undirected graphical model, or Markov random field.
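As a small illustration of this factorization (with made-up feature names and weights, not the model's actual feature set), the unnormalized probability is simply a product of one local potential per soft scoring factor; hard constraint factors would contribute a multiplicative 0 or 1:

import math

def local_potential(w, local_features):
    # exp(w . phi_local) for one soft scoring factor
    return math.exp(sum(w.get(name, 0.0) * value for name, value in local_features.items()))

def unnormalized_score(w, local_feature_sets):
    # product over all factors in the Markov random field
    score = 1.0
    for feats in local_feature_sets:
        score *= local_potential(w, feats)
    return score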
In addition to the scoring function, our model also includes constraints on which triples (a, π, σ) have nonzero probability. DeNero and Klein (2010) implicitly included these constraints in their representation: instead of sets of variables, they used a structured representation that only encodes triples (a, π, σ) satisfying both the mapping π = π(a) and the structural constraint that a can be generated by a block ITG grammar. However, our inference procedure, BP, requires that we represent (a, π, σ) as an assignment of values to a set of variables. Therefore, we must explicitly encode all constraints into the multiplicative factors that define the model. To accomplish this, in addition to the soft scoring factors we have already mentioned, our model also includes a set of hard constraint factors. Hard constraint factors enforce the relationships between the variables of the model by taking a value of 0 when the constraints they encode are violated and a value of 1 when they are satisfied. The full factor graph representation of our model, including both soft scoring factors and hard constraint factors, is drawn schematically in Figure 2.
Soft Scoring Factors
The scoring factors all take the form exp(w · φ), and so can be described in terms of their respective local feature vectors, φ. Depending on the values of the variables each factor depends on, the factor can be active or inactive. Features are only extracted for active factors; otherwise φ is empty and the factor produces a value of 1.
SURELINK. Each word alignment variable a ij has a corresponding SURELINK factor L ij to incorporate scores from the features φ a (a ij ). L ij is active whenever a ij = sure. φ a (a ij ) includes posteriors from unsupervised jointly trained HMM word alignment models (Liang et al., 2006), dictionary and identical word features, a position distortion feature, and features for numbers and punctuation. PHRASEPAIR. For each phrase pair variable π ghk , scores from φ π (π ghk ) come from the factor R ghk , which is active if π ghk = true. Most of the model's features are on these factors, and include relative frequency statistics, lexical template indicator features, and indicators for numbers of words and Chinese characters. See DeNero and Klein (2010) for a more comprehensive list.
[Figure 2 graphics omitted: panel (a) shows the single ITG factor A connected to all word link variables a_ij and their SURELINK factors L_ij; panel (b) shows the SPAN, NULLWORD, EXTRACT, and PHRASEPAIR factors surrounding a phrase pair variable π_ghkℓ and the associated span variables.]
NULLWORD. We can determine if a word is null-aligned by looking at its corresponding span variable. Thus, we include features from φ e ∅ (σ e i ) in a factor N e i that is active if σ e i = [−1, ∞]. The features are mostly indicators for common words. There are also factors N f j for source words, which are defined analogously.
Hard Constraint Factors
We encode the hard constraints on relationships between variables in our model using three families of factors, shown graphically in Figure 2. The SPAN and EXTRACT factors together ensure that π = π(a). The ITG factor encodes the structural constraint on a.
SPAN. First, for each target word e i we include a factor S e i to ensure that the span variable σ e i has a value that agrees with the projection of the word alignment a. As shown in Figure 2b, S e i depends on σ e i and all the word alignment variables a ij in column i of the word alignment grid. S e i has value 1 iff the equality in Eq. (4) holds. Our model also includes a factor S f j to enforce the analogous relationship between each σ f j and corresponding row j of a.
EXTRACT. For each phrase pair variable π_ghkℓ we have a factor P_ghkℓ to ensure that π_ghkℓ = true iff it is licensed by the span projections σ. As shown in Figure 2b, in addition to π_ghkℓ, P_ghkℓ depends on the range of span variables σ^e_i for i ∈ [g, h] and σ^f_j for j ∈ [k, ℓ]. P_ghkℓ is satisfied when π_ghkℓ = true and the relations in Eq. (5) all hold, or when π_ghkℓ = false and at least one of those relations does not hold.
ITG. Finally, to enforce the structural constraint on a, we include a single global factor A that depends on all the word link variables in a (see Figure 2a). A is satisfied iff a is in the family of block inversion transduction grammar (ITG) alignments. The block ITG family permits multiple links to be on (a_ij ≠ off) for a particular word e_i via terminal block productions, but ensures that every word is in at most one such terminal production, and that the full set of terminal block productions is consistent with ITG reordering patterns (Zhang et al., 2008).
Relaxing the ITG Constraint
The ITG factor can be viewed as imposing two different types of constraints on allowable word alignments a. First, it requires that each word is aligned to at most one relatively short subspan of the other sentence. This is a linguistically plausible constraint, as it is rarely the case that a single word will translate to an extremely long phrase, or to multiple widely separated phrases. 3
The other constraint imposed by the ITG factor is the ITG reordering constraint. This constraint is imposed primarily for reasons of computational tractability: the standard dynamic program for bitext parsing depends on ITG reordering (Wu, 1997). While this constraint is not dramatically restrictive (Haghighi et al., 2009), it is plausible that removing it would permit the model to produce better alignments. We tested this hypothesis by developing a new model that enforces only the constraint that each word align to one limited-length subspan, which can be viewed as a generalization of the at-most-one-to-one constraint frequently considered in the word-alignment literature (Taskar et al., 2005;Cromières and Kurohashi, 2009).
Our new model has almost exactly the same form as the previous one. The only difference is that A is replaced with a new family of simpler factors:
ONESPAN. For each target word e_i (and each source word f_j) we include a hard constraint factor U^e_i (respectively U^f_j). U^e_i is satisfied iff |σ^e_{i,p}| < d (length limit) and either σ^e_{i,p} = [−1, ∞] or a_ij ≠ off for all j ∈ σ^e_{i,p} (no gaps), with σ^e_{i,p} as in Eq. (3). Figure 3 shows the portion of the factor graph from Figure 2a redrawn with the ONESPAN factors replacing the ITG factor. As Figure 3 shows, there is no longer a global factor; each U^e_i depends only on the word link variables from column i.
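A minimal sketch of checking this constraint for one target word (the naming is ours; the length-limit convention mirrors the phrase length condition ℓ − k < d used earlier, which may differ slightly from the paper's exact convention):

def onespan_satisfied(a, i, n_f, d):
    # positions with a non-off link to e_i determine sigma^e_{i,p}
    linked = [j for j in range(n_f) if a[(i, j)] != "off"]
    if not linked:
        return True                      # null-aligned word: sigma^e_{i,p} = [-1, inf]
    k, l = min(linked), max(linked)
    if l - k >= d:                       # length limit
        return False
    return all(a[(i, j)] != "off" for j in range(k, l + 1))   # no gaps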
Belief Propagation
Belief propagation is a generalization of the well-known sum-product algorithm for undirected graphical models. We will provide only a procedural sketch here, but a good introduction to BP for inference in structured NLP models can be found in Smith and Eisner (2008), and Chapters 16 and 23 of MacKay (2003) contain a general introduction to BP in the more general context of message-passing algorithms. At a high level, each variable maintains a local distribution over its possible values. These local distributions are updated via messages passed between variables and factors. For a variable V, N(V) denotes the set of factors neighboring V in the factor graph. Similarly, N(F) is the set of variables neighboring the factor F. During each round of BP, messages are sent from each variable to each of its neighboring factors:
q^(k+1)_{V→F}(v) ∝ ∏_{G∈N(V), G≠F} r^(k)_{G→V}(v)    (7)
and from each factor to each of its neighboring variables:
r^(k+1)_{F→V}(v) ∝ ∑_{X_F : X_F[V]=v} F(X_F) ∏_{U∈N(F), U≠V} q^(k)_{U→F}(X_F[U])    (8)
where X F is a partial assignment of values to just the variables in N (F ).
Marginal beliefs at time k can be computed by simply multiplying together all received messages and normalizing:
b^(k)_V(v) ∝ ∏_{G∈N(V)} r^(k)_{G→V}(v)    (9)
Although messages can be updated according to any schedule, generally one iteration of BP updates each message once. The process iterates until some stopping criterion has been met: either a fixed number of iterations or some convergence metric.
For our models, we say that BP has converged whenever ∑_{V,v} (b^(k)_V(v) − b^(k−1)_V(v))^2 < δ for some small δ > 0. 4 While we have no theoretical convergence guarantees, BP usually converges within 10 iterations in practice.
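The following sketch spells out Eqs. (7)-(9) for a generic discrete factor graph; the brute-force factor-to-variable update enumerates the factor's whole scope (the efficient propagators of the next section replace exactly this step), and the data structures and names are our own rather than the paper's implementation:

import math
from itertools import product

def normalize(d):
    z = sum(d.values()) or 1.0
    for k in d:
        d[k] /= z

def run_bp(variables, factors, max_iters=10, delta=1e-3):
    # variables: name -> list of values; factors: list of (scope, fn) where fn maps a
    # dict assignment over the scope to a nonnegative potential (0/1 for hard constraints)
    neighbors = {v: [fi for fi, (scope, _) in enumerate(factors) if v in scope] for v in variables}
    q = {(v, fi): {x: 1.0 for x in variables[v]} for v in variables for fi in neighbors[v]}
    r = {(fi, v): {x: 1.0 for x in variables[v]} for v in variables for fi in neighbors[v]}
    beliefs = {v: {x: 1.0 / len(vals) for x in vals} for v, vals in variables.items()}
    for _ in range(max_iters):
        for (v, fi), msg in q.items():                       # Eq. (7)
            for x in variables[v]:
                msg[x] = math.prod(r[(gj, v)][x] for gj in neighbors[v] if gj != fi)
            normalize(msg)
        for fi, (scope, fn) in enumerate(factors):           # Eq. (8), brute force over the scope
            for v in scope:
                others = [u for u in scope if u != v]
                for x in variables[v]:
                    total = 0.0
                    for combo in product(*(variables[u] for u in others)):
                        assign = dict(zip(others, combo))
                        assign[v] = x
                        total += fn(assign) * math.prod(q[(u, fi)][assign[u]] for u in others)
                    r[(fi, v)][x] = total
                normalize(r[(fi, v)])
        new_beliefs = {}
        for v, vals in variables.items():                    # Eq. (9)
            b = {x: math.prod(r[(fi, v)][x] for fi in neighbors[v]) for x in vals}
            normalize(b)
            new_beliefs[v] = b
        change = sum((new_beliefs[v][x] - beliefs[v][x]) ** 2 for v in variables for x in variables[v])
        beliefs = new_beliefs
        if change < delta:                                    # convergence test from the text
            break
    return beliefs

On the real model these dense factor updates would be intractable for the ITG and EXTRACT factors, which is why the dynamic-program propagators sketched below are needed.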
Efficient BP for Extraction Set Models
In general, the efficiency of BP depends directly on the arity of the factors in the model. Performed naïvely, the sum in Eq. (8) will take time that grows exponentially with the size of N(F). For the soft-scoring factors, which each depend only on a single variable, this isn't a problem. However, our model also includes factors whose arity grows with the input size: for example, explicitly enumerating all assignments to the word link variables that the ITG factor depends on would take O(3^(n^2)) time. 5 To run BP in a reasonable time frame, we need efficient factor-specific propagators that can exploit the structure of the factor functions to compute outgoing messages in polynomial time (Duchi et al., 2007;Smith and Eisner, 2008). Fortunately, all of our hard constraints permit dynamic programs that accomplish this propagation. Space does not permit a full description of these dynamic programs, but we will briefly sketch the intuitions behind them.
SPAN and ONESPAN. Marginal beliefs for S^e_i or U^e_i can be computed in O(nd^2) time. The key observation is that for any legal value σ^e_i = [k, ℓ], S^e_i and U^e_i require that a_ij = off for all j ∉ [k, ℓ]. 6 Thus, we start by computing the product of all the off beliefs:
b = ∏_j q_{a_ij}(off).

Then, for each of the O(nd) legal source spans [k, ℓ] we can efficiently find a joint belief by summing over consistent assignments to the O(d) link variables in that span.

EXTRACT. Marginal beliefs for P_ghkℓ can be computed in O(d^3) time. For each of the O(d) target words, we can find the total incoming belief that σ^e_i is within [k, ℓ] by summing over the O(d^2) values [k′, ℓ′] where [k′, ℓ′] ⊆ [k, ℓ].
Likewise for source words. Multiplying together these per-word beliefs and the belief that π ghk = true yields the joint belief of a consistent assignment with π ghk = true, which can be used to efficiently compute outgoing messages.
ITG. To build outgoing messages, the ITG factor A needs to compute marginal beliefs for all of the word link variables a_ij. These can all be computed in O(n^6) time by using a standard bitext parser to run the inside-outside algorithm. By using a normal form grammar for block ITG with nulls (Haghighi et al., 2009), we ensure that there is a 1-1 correspondence between the ITG derivations the parser sums over and word alignments a that satisfy A.
The asymptotic complexity for all the factors is shown in Table 1. The total complexity for inference in each model is simply the sum of the complexities of its factors, so the complexity of the ITG model is O(n^2 d^5 + n^6), while the complexity of the relaxed model is just O(n^2 d^5). The complexity of exact inference, on the other hand, is exponential in d for the ITG model and exponential in both d and n for the relaxed model.
Training and Decoding
We use BP to compute marginal posteriors, which we use at training time to get expected feature counts and at test time for posterior decoding. For each sentence pair, we continue to pass messages until either the posteriors converge, or some maximum number of iterations has been reached. 7 After running BP, the marginals we are interested in can all be computed with Eq. (9).
Training
We train the model to maximize the log likelihood of manually word-aligned gold training sentence pairs (with L 2 regularization). Because π and σ are determined when a is observed, the model has no latent variables. Therefore, the gradient takes the standard form for loglinear models:
∇LL = φ(a, π, σ) − ∑_{a′,π′,σ′} p(a′, π′, σ′ | e, f) φ(a′, π′, σ′) − λw    (10)
The feature vector φ contains features on sure word links, extracted phrase pairs, and null-aligned words. Approximate expectations of these features can be efficiently computed using the marginal beliefs b_{a_ij}(sure), b_{π_ghkℓ}(true), and b_{σ^e_i}([−1, ∞]) and b_{σ^f_j}([−1, ∞]), respectively. We learned our final weight vector w using AdaGrad (Duchi et al., 2010), an adaptive subgradient version of standard stochastic gradient ascent.
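A schematic AdaGrad ascent step on the regularized objective of Eq. (10); the sparse dict representation, step size, and regularization handling are illustrative assumptions rather than the exact training recipe:

import math

def adagrad_step(w, sq_sums, gold_feats, expected_feats, eta=0.1, lam=1e-4):
    # gradient: observed (gold) features minus expected features under BP marginals, minus L2 term
    grad = {}
    for k, v in gold_feats.items():
        grad[k] = grad.get(k, 0.0) + v
    for k, v in expected_feats.items():
        grad[k] = grad.get(k, 0.0) - v
    for k in set(grad) | set(w):
        g = grad.get(k, 0.0) - lam * w.get(k, 0.0)
        sq_sums[k] = sq_sums.get(k, 0.0) + g * g
        if sq_sums[k] > 0.0:
            w[k] = w.get(k, 0.0) + eta * g / math.sqrt(sq_sums[k])   # per-feature adaptive step
    return w, sq_sums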
Testing
We evaluate our model by measuring precision and recall on extracted phrase pairs. Thus, the decoding problem takes a sentence pair (e, f) as input, and must produce an extraction set π as output. Our approach, posterior thresholding, is extremely simple: we set π_ghkℓ = true iff b_{π_ghkℓ}(true) ≥ τ for some fixed threshold τ. Note that this decoding method does not require that there be any underlying word alignment a licensing the resulting extraction set π, 8 but the structure of the model is such that two conflicting phrase pairs are unlikely to simultaneously have high posterior probability.
Most publicly available translation systems expect word-level alignments as input. These can also be generated by applying posterior thresholding, aligning target word i to source word j whenever b a ij (sure) ≥ t. 9
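Decoding then reduces to thresholding the marginals. A sketch (names are ours; τ is whatever phrase threshold is being swept, and t = 0.2 follows the footnote):

def decode(phrase_beliefs, link_beliefs, tau, t=0.2):
    # phrase_beliefs: (g, h, k, l) -> b_pi(true); link_beliefs: (i, j) -> b_a(sure)
    extraction_set = {span for span, p in phrase_beliefs.items() if p >= tau}
    word_alignment = {link for link, p in link_beliefs.items() if p >= t}
    return extraction_set, word_alignment

Lowering tau directly yields the denser, recall-heavy extraction sets discussed above.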
Experiments
Our experiments are performed on Chinese-to-English alignment. We trained and evaluated all models on the NIST MT02 test set, which consists of 150 training and 191 test sentences and has been used previously in alignment experiments (Ayan and Dorr, 2006;Haghighi et al., 2009;DeNero and Klein, 2010). The unsupervised HMM word aligner used to generate features for the model was trained on 11.3 million words of FBIS newswire data. We test three models: the Viterbi ITG model of DeNero and Klein (2010), our BP ITG model that uses the ITG factor, and our BP Relaxed model that replaces the ITG factor with the ONESPAN factors. In all of our experiments, the phrase length d was set to 3. 10
Phrase Alignment
We tested the models by computing precision and recall on extracted phrase pairs, relative to the gold phrase pairs of up to length 3 induced by the gold word alignments. For the BP models, we trade off precision and recall by adjusting the decoding threshold τ. The Viterbi ITG model was trained to optimize F_5, a recall-biased measure, so in addition to F_1, we also report the recall-biased F_2 and F_5 measures. The maximum number of BP iterations was set to 5 for the BP ITG model and to 10 for the BP Relaxed model. The phrase alignment results are shown in Figure 4. The BP ITG model performs comparably to the Viterbi ITG model. However, because posterior decoding permits explicit tradeoffs between precision and recall, it can do much better in the recall-biased measures, even though the Viterbi ITG model was explicitly trained to maximize F_5 (DeNero and Klein, 2010). The BP Relaxed model performs the best of all, consistently achieving higher recall for fixed precision than either of the other models. Because of its lower asymptotic runtime, it is also much faster: over 5 times as fast as the Viterbi ITG model and over 10 times as fast as the BP ITG model. 11
Timing
BP approximates marginal posteriors by iteratively updating beliefs for each variable based on current beliefs about other variables. The iterative nature of the algorithm permits us to make an explicit speed/accuracy tradeoff by limiting the number of iterations. We tested this tradeoff by limiting both of the BP models to run for 2, 3, 5, 10, and 20 iterations. The results are shown in Figure 5. Neither model benefits from running more iterations than used to obtain the results in Figure 4, but each can be sped up by a factor of almost 1.5x in exchange for a modest (< 1 F 1 ) drop in accuracy. 11 The speed advantage of Viterbi ITG over BP ITG comes from Viterbi ITG's aggressive beaming.
Translation
We ran translation experiments using Moses (Koehn et al., 2007), which we trained on a 22.1 million word parallel corpus from the GALE program. We compared alignments generated by the baseline HMM model, the Viterbi ITG model, and the Relaxed BP model. 12 The systems were tuned and evaluated on sentences up to length 40 from the NIST MT04 and MT05 test sets. The results in Table 2 show that the BP Relaxed model achieves a 0.8 BLEU improvement over the HMM baseline, comparable to that of the Viterbi ITG model, but taking a fraction of the time, 13 making the BP Relaxed model a practical alternative for real translation applications.
12 Following a simplified version of the procedure described by DeNero and Klein (2010), we added rule counts from the HMM alignments to the extraction set aligners' counts.
13 Some of the speed difference between the BP Relaxed and Viterbi ITG models comes from better parallelizability due to drastically reduced memory overhead of the BP Relaxed model.
Conclusion
For performing inference in a state-of-the-art, but inefficient, alignment model, belief propagation is a viable alternative to greedy search methods, such as beaming. BP also results in models that are much more scalable, by reducing the asymptotic complexity of inference. Perhaps most importantly, BP permits the relaxation of artificial constraints that are generally taken for granted as being necessary for efficient inference. In particular, a relatively modest relaxation of the ITG constraint can directly be applied to any model that uses ITG-based inference (e.g. Zhang and Gildea, 2005;Cherry and Lin, 2007;Haghighi et al., 2009).
Figure 1: A schematic representation of part of a sentence pair. Solid grey squares indicate sure links (e.g. a_48 = sure), and hatched squares possible links (e.g. a_67 = poss). Rounded green rectangles are extracted phrase pairs (e.g. π_5667 = true). Target spans are shown as blue vertical lines and source spans as red horizontal lines. Because there is a sure link at a_48, σ^f_8 = [4, 4] does not include the possible link at a_38. However, f_7 only has possible links, so σ^f_7 = [5, 6] is the span containing those. f_9 is null-aligned, so σ^f_9 = [−1, ∞], which blocks all phrase pairs containing f_9 from being extracted.
Figure 2: A factor graph representation of the ITG-based extraction set model. For visual clarity, we draw the graph separated into two components: one containing the factors that only neighbor word link variables, and one containing the remaining factors.
Figure 3: ONESPAN factors.
Table 1: Asymptotic complexity for all factors.

Factor       Runtime    Count        Total
SURELINK     O(1)       O(n^2)       O(n^2)
PHRASEPAIR   O(1)       O(n^2 d^2)   O(n^2 d^2)
NULLWORD     O(nd)      O(n)         O(n^2 d)
SPAN         O(nd^2)    O(n)         O(n^2 d^2)
EXTRACT      O(d^3)     O(n^2 d^2)   O(n^2 d^5)
ITG          O(n^6)     1            O(n^6)
ONESPAN      O(nd^2)    O(n)         O(n^2 d^2)
Figure 4: Phrase alignment results. A portion of the precision/recall curve is plotted for the BP models, with the result from the Viterbi ITG model provided for reference. (Axes: precision vs. recall; curves for Viterbi ITG, BP ITG, and BP Relaxed.)
Best phrase alignment scores and decoding speed for each model:

Model         Best F_1   Best F_2   Best F_5   Sentences per Second
Viterbi ITG   71.6       73.1       74.0       0.21
BP ITG        71.8       74.8       83.5       0.11
BP Relaxed    72.6       75.2       84.5       1.15
Figure 5: Speed/accuracy tradeoff. The speed axis is on a logarithmic scale. From fastest to slowest, data points correspond to maximums of 2, 5, 10, and 20 BP iterations. F_1 for the BP Relaxed model was very low when limited to 2 iterations, so that data point is outside the visible area of the graph. (Axes: best F_1 vs. speed in sentences per second; curves for Viterbi ITG, BP ITG, and BP Relaxed.)
Table 2: Machine translation results.

Model         BLEU   Relative Improve.   Hours to Train/Align
Baseline      32.8   +0.0                5
Viterbi ITG   33.5   +0.7                831
BP Relaxed    33.6   +0.8                39
In addition to the arguments we write out explicitly, all feature functions have access to the observed sentence pair (e, f). 2 Although the null word features are not described in DeNero and Klein (2010), all of their reported results include these features (DeNero, 2010).
Short gaps can be accommodated within block ITG (and in our model are represented as possible links) as long as the total aligned span does not exceed the block size.
We set δ = 0.001. 5 For all asymptotic analysis, we define n = max(|e|, |f|). 6 For ease of exposition, we assume that all alignments are either sure or off ; the modifications to account for the general case are straightforward.
See Section 7.2 for an empirical investigation of this maximum.8 This would be true even if we computed posteriors exactly, but is especially true with approximate marginals from BP, which are not necessarily consistent.
For our experiments, we set t = 0.2. 10 Because the runtime of the Viterbi ITG model grows exponentially with d, it was not feasible to perform comparisons for higher phrase lengths.
Acknowledgements

This project is funded by an NSF graduate research fellowship to the first author and by BBN under DARPA contract HR0011-06-C-0022.
Going beyond AER: An extensive analysis of word alignments and their impact on MT. Bonnie J Necip Fazil Ayan, Dorr, ACL. Necip Fazil Ayan and Bonnie J. Dorr. 2006. Going be- yond AER: An extensive analysis of word alignments and their impact on MT. In ACL.
Constraining the phrase-based, joint probability statistical translation model. Alexandra Birch, AMTA. Chris Callison-Burch, and Miles OsborneAlexandra Birch, Chris Callison-Burch, and Miles Os- borne. 2006. Constraining the phrase-based, joint probability statistical translation model. In AMTA.
A gibbs sampler for phrasal synchronous grammar induction. Phil Blunsom, Trevor Cohn, ACL-IJCNLP. Chris Dyer, and Miles OsbornePhil Blunsom, Trevor Cohn, Chris Dyer, and Miles Os- borne. 2009. A gibbs sampler for phrasal synchronous grammar induction. In ACL-IJCNLP.
Inversion transduction grammar for joint phrasal translation modeling. Colin Cherry, Dekang Lin, NAACL Workshop on Syntax and Structure in Statistical Translation. Colin Cherry and Dekang Lin. 2007. Inversion transduc- tion grammar for joint phrasal translation modeling. In NAACL Workshop on Syntax and Structure in Statisti- cal Translation.
Hierarchical phrase-based translation. David Chiang, Computational Linguistics. 332David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.
An alignment algorithm using belief propagation and a structure-based distortion model. Fabien Cromières, Sadao Kurohashi, EACL. Fabien Cromières and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In EACL.
What can syntax-based MT learn from phrase-based MT?. Steve Deneefe, Kevin Knight, Wei Wang, Daniel Marcu, EMNLP-CoNLL. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In EMNLP-CoNLL.
Discriminative modeling of extraction sets for machine translation. John Denero, Dan Klein, ACL. John DeNero and Dan Klein. 2010. Discriminative mod- eling of extraction sets for machine translation. In ACL.
Why generative phrase models underperform surface heuristics. John Denero, Dan Gillick, James Zhang, Dan Klein, NAACL Workshop on Statistical Machine Translation. John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In NAACL Workshop on Statistical Machine Translation.
Sampling alignment structure under a Bayesian translation model. John Denero, Alexandre Bouchard-Côté, Dan Klein, EMNLP. John DeNero, Alexandre Bouchard-Côté, and Dan Klein. 2008. Sampling alignment structure under a Bayesian translation model. In EMNLP.
. John Denero, Personal CommunicationJohn DeNero. 2010. Personal Communication.
Using combinatorial optimization within max-product belief propagation. John Duchi, Danny Tarlow, Gal Elidan, Daphne Koller, John Duchi, Danny Tarlow, Gal Elidan, and Daphne Koller. 2007. Using combinatorial optimization within max-product belief propagation. In NIPS 2006.
Adaptive subgradient methods for online learning and stochastic optimization. John Duchi, Elad Hazan, Yoram Singer, In COLTJohn Duchi, Elad Hazan, and Yoram Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In COLT.
What's in a translation rule. Michel Galley, Mark Hopkins, Kevin Knight, Daniel Marcu, HLT-NAACL. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In HLT- NAACL.
Scalable inference and training of context-rich syntactic translation models. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve Deneefe, Wei Wang, Ignacio Thayer, COLING-ACL. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In COLING-ACL.
Better word alignments with supervised ITG models. Aria Haghighi, John Blitzer, John Denero, Dan Klein, ACL-IJCNLP. Aria Haghighi, John Blitzer, John DeNero, and Dan Klein. 2009. Better word alignments with supervised ITG models. In ACL-IJCNLP.
Moses: Open source toolkit for statistical machine translation. Philipp Koehn, Franz Josef Och, Daniel Marcu ; Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, ACL. Philipp Koehn, Hieu Hoang. Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan HerbstACLPhilipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In ACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Con- stantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL.
A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. Zhifei Li, Sanjeev Khudanpur, ACL SSST. Zhifei Li and Sanjeev Khudanpur. 2008. A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. In ACL SSST.
Alignment by agreement. Percy Liang, Ben Taskar, Dan Klein, HLT-NAACL. Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In HLT-NAACL.
Information theory, inference, and learning algorithms. J C David, Mackay, Cambridge Univ PressDavid J.C. MacKay. 2003. Information theory, infer- ence, and learning algorithms. Cambridge Univ Press.
A phrase-based, joint probability model for statistical machine translation. Daniel Marcu, Daniel Wong, EMNLP. Daniel Marcu and Daniel Wong. 2002. A phrase-based, joint probability model for statistical machine transla- tion. In EMNLP.
Discriminative word alignment via alignment matrix modeling. Jan Niehues, Stephan Vogel, ACL Workshop on Statistical Machine Translation. Jan Niehues and Stephan Vogel. 2008. Discriminative word alignment via alignment matrix modeling. In ACL Workshop on Statistical Machine Translation.
Dependency parsing by belief propagation. David A Smith, Jason Eisner, EMNLP. David A. Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In EMNLP.
A discriminative matching approach to word alignment. Ben Taskar, Simon Lacoste-Julien, Dan Klein, EMNLP. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word align- ment. In EMNLP.
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Dekai Wu, Computational Linguistics. 233Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.
Stochastic lexicalized inversion transduction grammar for alignment. Hao Zhang, Daniel Gildea, ACL. Hao Zhang and Daniel Gildea. 2005. Stochastic lexical- ized inversion transduction grammar for alignment. In ACL.
Bayesian learning of noncompositional phrases with synchronous parsing. Hao Zhang, Chris Quirk, Robert C Moore, Daniel Gildea, ACL:HLT. Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of non- compositional phrases with synchronous parsing. In ACL:HLT. |
218,610,502 | Nonparametric Bayesian Inference and Efficient Parsing for Tree-adjoining Grammars | In the line of research extending statistical parsing to more expressive grammar formalisms, we demonstrate for the first time the use of tree-adjoining grammars (TAG). We present a Bayesian nonparametric model for estimating a probabilistic TAG from a parsed corpus, along with novel block sampling methods and approximation transformations for TAG that allow efficient parsing. Our work shows performance improvements on the Penn Treebank and finds more compact yet linguistically rich representations of the data, but more importantly provides techniques in grammar transformation and statistical inference that make practical the use of these more expressive systems, thereby enabling further experimentation along these lines. | [
52800576,
6056834,
6961896,
14453288,
6684426,
218604523,
1868,
6818994,
8036724,
14190520
] | Nonparametric Bayesian Inference and Efficient Parsing for Tree-adjoining Grammars
August 4-9
Elif Yamangil
Harvard University Cambridge
MassachusettsUSA
Stuart M Shieber [email protected]
Harvard University Cambridge
MassachusettsUSA
Nonparametric Bayesian Inference and Efficient Parsing for Tree-adjoining Grammars
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics
the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August 4-9
In the line of research extending statistical parsing to more expressive grammar formalisms, we demonstrate for the first time the use of tree-adjoining grammars (TAG). We present a Bayesian nonparametric model for estimating a probabilistic TAG from a parsed corpus, along with novel block sampling methods and approximation transformations for TAG that allow efficient parsing. Our work shows performance improvements on the Penn Treebank and finds more compact yet linguistically rich representations of the data, but more importantly provides techniques in grammar transformation and statistical inference that make practical the use of these more expressive systems, thereby enabling further experimentation along these lines.
Introduction
There is a deep tension in statistical modeling of grammatical structure between providing good expressivity -to allow accurate modeling of the data with sparse grammars -and low complexity -making induction of the grammars (say, from a treebank) and parsing of novel sentences computationally practical. Tree-substitution grammars (TSG), by expanding the domain of locality of context-free grammars (CFG), can achieve better expressivity, and the ability to model more contextual dependencies; the payoff would be better modeling of the data or smaller (sparser) models or both. For instance, constructions that go across levels, like the predicate-argument structure of a verb and its arguments can be modeled by TSGs (Goodman, 2003).
Recent work that incorporated Dirichlet process (DP) nonparametric models into TSGs has provided an efficient solution to the daunting model selection problem of segmenting training data trees into appropriate elementary fragments to form the grammar (Cohn et al., 2009;Post and Gildea, 2009). The elementary trees combined in a TSG are, intuitively, primitives of the language, yet certain linguistic phenomena (notably various forms of modification) "split them up", preventing their reuse, leading to less sparse grammars than might be ideal (Yamangil and Shieber, 2012;Chiang, 2000;Resnik, 1992).
TSGs are a special case of the more flexible grammar formalism of tree adjoining grammar (TAG) (Joshi et al., 1975). TAG augments TSG with an adjunction operator and a set of auxiliary trees in addition to the substitution operator and initial trees of TSG, allowing for "splicing in" of syntactic fragments within trees. This functionality allows for better modeling of linguistic phenomena such as the distinction between modifiers and arguments (Joshi et al., 1975;XTAG Research Group, 2001). Unfortunately, TAG's expressivity comes at the cost of greatly increased complexity. Parsing complexity for unconstrained TAG scales as O(n^6), impractical as compared to CFG and TSG's O(n^3). In addition, the model selection problem for TAG is significantly more complicated than for TSG, since one must reason about many more combinatorial options with two types of derivation operators. This has led researchers to resort to manual (Doran et al., 1997) or heuristic techniques. For example, one can consider "outsourcing" the auxiliary trees (Shieber, 2007), use template rules and a very small number of grammar categories (Hwa, 1998), or rely on head-words and force lexicalization in order to constrain the problem (Xia et al., 2001;Chiang, 2000;Carreras et al., 2008). However, no approach has been put forward that seeks a model maximizing a principled probabilistic objective.
Recent work by Cohn and Blunsom (2010) argued that under highly expressive grammars such as TSGs where exponentially many derivations may be hypothesized of the data, local Gibbs sampling is insufficient for effective inference and global blocked sampling strategies will be necessary. For TAG, this problem is only more severe due to its mild context-sensitivity and even richer combinatorial nature. Therefore in previous work, Shindo et al. (2011) and Yamangil and Shieber (2012) used tree-insertion grammar (TIG) as a kind of expressive compromise between TSG and TAG, as a substrate on which to build nonparametric inference. However TIG has the constraint of disallowing wrapping adjunction (coordination between material that falls to the left and right of the point of adjunction, such as parentheticals and quotations) as well as left adjunction along the spine of a right auxiliary tree and vice versa.
In this work we formulate a blocked sampling strategy for TAG that is effective and efficient, and prove its superiority against the local Gibbs sampling approach. We show via nonparametric inference that TAG, which contains TSG as a subset, is a better model for treebank data than TSG and leads to improved parsing performance. TAG achieves this by using more compact grammars than TSG and by providing the ability to make finer-grained linguistic distinctions. We explain how our parameter refinement scheme for TAG allows for cubic-time CFG parsing, which is just as efficient as TSG parsing. Our presentation assumes familiarity with prior work on block sampling of TSG and TIG (Cohn and Blunsom, 2010;Shindo et al., 2011;Yamangil and Shieber, 2012).
Probabilistic Model
In the basic nonparametric TSG model, there is an independent DP for every grammar category (such as c = NP), each of which uses a base distribution P_0 that generates an initial tree by making stepwise decisions and a concentration parameter α_c that controls the level of sparsity (size) of the generated grammars:

G_c ∼ DP(α_c, P_0(· | c))

We extend this model by adding specialized DPs for auxiliary trees:

G^aux_c ∼ DP(α^aux_c, P^aux_0(· | c))

Therefore, we have an exchangeable process for generating auxiliary tree a_j given the j − 1 auxiliary trees previously generated:
p(a_j | a_{<j}) = (n_{c,a_j} + α^aux_c · P^aux_0(a_j | c)) / (j − 1 + α^aux_c)    (1)
as for initial trees in TSG (Cohn et al., 2009).
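A small sketch of this predictive probability (Eq. (1)); the count bookkeeping and names are ours:

def aux_tree_prob(tree, c, counts, num_prev, alpha_aux, p_aux0):
    # counts[c][tree] = n_{c, tree} over the previous num_prev = j - 1 draws for category c;
    # p_aux0 is the base probability P^aux_0(tree | c)
    n_prev = counts.get(c, {}).get(tree, 0)
    return (n_prev + alpha_aux * p_aux0) / (num_prev + alpha_aux)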
We must define base distributions for initial trees and auxiliary trees. P_0 generates an initial tree with root label c by sampling rules from an underlying CFG and making a binary decision at every node generated whether to leave it as a frontier node or further expand (with probability β_c) (Cohn et al., 2009). Similarly, our P^aux_0 generates an auxiliary tree with root label c by sampling a rule from the same CFG, flipping an unbiased coin to decide the direction of the spine (if more than a unique child was generated), making a binary decision at the spine whether to leave it as a foot node or further expand (with probability γ_c), and recurring into P_0 or P^aux_0 appropriately for the off-spine and spinal children respectively.
We glue these two processes together via a set of adjunction parameters µ_c. In any derivation, for every node labeled c that is not a frontier node or the root or foot node of an auxiliary tree, we determine the number (perhaps zero) of simultaneous adjunctions (Schabes and Shieber, 1994) by sampling a Geometric(µ_c) variable; thus k simultaneous adjunctions would have probability (µ_c)^k (1 − µ_c). Since we already provide simultaneous adjunction, we disallow adjunction at the root of auxiliary trees.
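For instance, the simultaneous adjunction count can be drawn as follows (a sketch, with mu_c as the only parameter):

import random

def sample_num_adjunctions(mu_c, rng=random):
    # P(k adjunctions) = mu_c^k * (1 - mu_c)
    k = 0
    while rng.random() < mu_c:
        k += 1
    return k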
Inference
Given this model, our inference task is to explore posterior derivations underlying the data. Since TAG derivations are highly structured objects, we design a blocked Metropolis-Hastings sampler that samples derivations per entire parse trees all at once in a joint fashion (Cohn and Blunsom, 2010;Shindo et al., 2011;Yamangil and Shieber, 2012). As in previous work, we use a Goodman-transformed TAG as our proposal distribution (Goodman, 2003) that incorporates additional CFG rules to account for the possibility of backing off to the infinite base distribution P aux 0 , and use the parsing algorithm described by Shieber et al. (1995) for computing inside probabilities under this TAG model.
Figure 1: Example used for illustrating blocked sampling with TAG. On the left hand side we have a partial training tree where we highlight the particular nodes (with node labels 0, 1, 2, 3, 4) that the sampling algorithm traverses in post-order. On the right hand side is the TAG grammar fragment that is used to parse these particular nodes: one initial tree and two wrapping auxiliary trees, where one adjoins into the spine of the other for full generality of our illustration. Grammar nodes are labeled with their Goodman indices (letters i, j, k, l, m). Greek letters α, β, γ, δ denote entire subtrees. We assume that a subtree in an auxiliary tree (e.g., α) parses the same subtree in a training tree.

The algorithm is illustrated in Table 1 along with Figure 1. Inside probabilities are computed in a bottom-up fashion and a TAG derivation is sampled top-down (Johnson et al., 2007). The sampler visits every node of the tree in post-order (O(n) operations, n being the number of nodes), visits every node below it as a potential foot (another O(n) operations), visits every mid-node in the path between the original node and the potential foot (if spine-adjunction is allowed) (O(log n) operations), and forms the appropriate chart items. The complexity is O(n^2 log n) if spine-adjunction is allowed, O(n^2) otherwise.
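As a simplified illustration of the bottom-up/top-down pattern, the sketch below samples a derivation from an inside chart for a plain (Goodman-transformed, binarized) CFG rather than the full TAG chart items of Table 1; the chart and rule representations are our own assumptions:

import random

def sample_tree(sym, i, j, inside, rules, words):
    # inside[(A, i, j)]: inside probability of A spanning words i..j (inclusive);
    # rules[A]: list of (rhs, prob) with rhs a 1-tuple (terminal) or 2-tuple (nonterminals)
    options, weights = [], []
    for rhs, prob in rules.get(sym, []):
        if len(rhs) == 1 and i == j and rhs[0] == words[i]:            # lexical rule
            options.append((rhs, None))
            weights.append(prob)
        elif len(rhs) == 2:                                            # binary rule
            left, right = rhs
            for split in range(i, j):
                w = prob * inside.get((left, i, split), 0.0) * inside.get((right, split + 1, j), 0.0)
                if w > 0.0:
                    options.append((rhs, split))
                    weights.append(w)
    rhs, split = random.choices(options, weights=weights)[0]           # sample proportional to inside mass
    if split is None:
        return (sym, rhs[0])
    left, right = rhs
    return (sym, sample_tree(left, i, split, inside, rules, words),
                 sample_tree(right, split + 1, j, inside, rules, words))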
Parameter Refinement
During inference, adjunction probabilities are treated simplistically to facilitate convergence. Only two parameters guide adjunction: µ_c, the probability of adjunction; and p(a_j | a_{<j}, c) (see Equation 1), the probability of the particular auxiliary tree being adjoined given that there is an adjunction. In all of this treatment, c, the context of an adjunction, is the grammar category label such as S or NP, instead of a unique identifier for the node at which the adjunction occurs, as was originally the case in the probabilistic TAG literature. However it is possible to experiment with further refinement schemes at parsing time. Once the sampler converges on a grammar, we can reestimate its adjunction probabilities. Using the O(n^6) parsing algorithm (Shieber et al., 1995) we experimented with various refinement schemes - ranging from full node identifiers, to Goodman index identifiers of the subtree below the adjunction (Hwa, 1998), to simple grammar category labels - and find that using Goodman index identifiers as c is the best performing option.
Interestingly, this particular refinement scheme also allows for fast cubic-time parsing, which we achieve by approximating the TAG by a TSG with little loss of coverage (no loss of coverage under special conditions which we find that are often satisfied) and negligible increase in grammar size, as discussed in the next section.
Cubic-time parsing
MCMC training results in a list of sufficient statistics of the final derivation that the TAG sampler converges upon after a number of iterations. Basically, these are the list of initial and auxiliary trees, their cumulative counts over the training data, and their adjunction statistics. An adjunction statistic is listed as follows. If α is any elementary tree, and β is an auxiliary tree that adjoins n times at node ν of α that is uniquely reachable at path p, we write α ←_p β (n times). We denote ν alternatively as α[p].

Figure 2: TAG to TSG transformation algorithm. By removing adjunctions in the correct order we end up with a larger yet adjunction-free TSG.
Now imagine that we end up with a small grammar that consists of one initial tree α and two auxiliary trees β and γ, and the following adjunctions occurring between them
α ←_p β (n times)
α ←_p γ (m times)
β ←_q γ (k times)
as shown in Figure 2. Assume that α itself occurs l > n + m times in total so that there is nonzero probability of no adjunction anywhere within α. Also assume that the node uniquely identified by α[p] has Goodman index i, which we denote as i = G(α[p]).
The general idea of this TAG-TSG approximation is that, for any auxiliary tree that adjoins at a node ν with Goodman index i, we create an initial tree out of it where the root and foot nodes of the auxiliary tree are both replaced by i. Further, we split the subtree rooted at ν from its parent and rename the substitution site that is newly created at ν as i as well. (See Figure 2.) We can separate the foot subtree from the rest of the initial tree since it is completely remembered by any adjoined auxiliary trees due to the nature of our refinement scheme. However this method fails for adjunctions that occur at spinal nodes of auxiliary trees that have foot nodes below them since we would not know in which order to do the initial tree creation. However when the spine-adjunction relation is amenable to a topological sort (as is the case in Figure 2), we can apply the method by going in this order and doing some extra bookkeeping: updating the list of Goodman indices and redirecting adjunctions as we go along. When there is no such topological sort, we can approximate the TAG by heuristically dropping low-frequency adjunctions that introduce cycles. 1 The algorithm is illustrated in Figure 2. In (1) we see the original TAG grammar and its adjunctions (n, m, k are adjunction counts). Note that the adjunction relation has a topological sort of α, β, γ. We process auxiliary trees in this order and iteratively remove their adjunctions by creating specialized initial tree duplicates. In (2) we first visit β, which has adjunctions into α at the node denoted α[p] where p is the unique path from the root to this node. We retrieve the Goodman index of this node i = G(α[p]), split the subtree rooted at this node as a new initial tree α i , relabel its root as i, and rename the newly-created substitution site at α[p] as i. Since β has only this adjunction, we replace it with initial tree version β i where root/foot labels of β are replaced with i, and update all adjunctions into β as being into β i . In (3) we visit γ which now has adjunctions into α and β i . For the α[p] adjunction we create γ i the same way we created β i but this time we cannot remove γ as it still has an adjunction into β i . We retrieve the Goodman index of the node of adjunction j = G(β i [q]), split the subtree rooted at this node as new initial tree β ij , relabel its root as j, and rename the newly-created substitution site at β i [q] as j. Since γ now has only this adjunction left, we remove it by also creating initial tree version γ j where root/foot labels of γ are replaced with j. At this point we have an adjunctionfree TSG with elementary trees (and counts) α(l), α i (l), β i (n), β ij (n), γ i (m), γ j (k) where l is the count of initial tree α. These counts, when they are normalized, lead to the appropriate adjunc- tion probability refinement scheme of µ c × p(a j | a <j , c) where c is the Goodman index. Although this algorithm increases grammar size, the sparsity of the nonparametric solution ensures that the increase is almost negligible: on average the final Goodman-transformed CFG has 173.9K rules for TSG, 189.2K for TAG. Figure 3 demonstrates the comparable Viterbi parsing times for TSG and TAG.
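The ordering step can be sketched with Kahn's algorithm, heuristically dropping the lowest-count spine adjunction whenever a cycle blocks the sort (as in footnote 1); the data representation here is our own, not the authors' implementation:

from collections import defaultdict

def spine_order(trees, adjunctions):
    # adjunctions: list of (adjoiner, host, count) meaning adjoiner adjoins count times
    # into the spine of host; returns hosts before their adjoiners, plus the kept adjunctions
    adjs = list(adjunctions)
    while True:
        children, in_deg = defaultdict(list), {t: 0 for t in trees}
        for adjoiner, host, _ in adjs:
            children[host].append(adjoiner)
            in_deg[adjoiner] += 1
        frontier = [t for t in trees if in_deg[t] == 0]
        order = []
        while frontier:
            t = frontier.pop()
            order.append(t)
            for c in children[t]:
                in_deg[c] -= 1
                if in_deg[c] == 0:
                    frontier.append(c)
        if len(order) == len(trees):
            return order, adjs                  # a topological sort exists
        remaining = set(trees) - set(order)     # trees caught in (or behind) a cycle
        cyclic = [a for a in adjs if a[0] in remaining and a[1] in remaining]
        adjs.remove(min(cyclic, key=lambda a: a[2]))   # drop the lowest-count offending adjunction

The returned order is then used to create the specialized initial trees and redirect adjunctions, as in the walkthrough above.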
Evaluation
We use the standard Penn treebank methodology of training on sections 2-21 and testing on section 23. All our data is head-binarized, all hyperparameters are resampled under appropriate vague gamma and beta priors. Samplers are run 1000 iterations each; all reported numbers are averages over 5 runs. For simplicity, parsing results are based on the maximum probability derivation (Viterbi algorithm).
In Table 4, we compare TAG inference schemes and TSG. TAG Gibbs operates by locally adding/removing potential adjunctions, similar to Cohn et al. (2009). TAG is the O(n 2 ) algorithm that disallows spine adjunction. We see that TAG has the best parsing performance, while TAG provides the most compact representation.
Conclusion
We described a nonparametric Bayesian inference scheme for estimating TAG grammars and showed the power of TAG formalism over TSG for returning rich, generalizable, yet compact representations of data. The nonparametric inference scheme presents a principled way of addressing the difficult model selection problem with TAG. Our sampler has near quadratic-time efficiency, and our parsing approach remains context-free allowing for fast cubic-time parsing, so that our overall parsing framework is highly scalable. 2 There are a number of extensions of this work: Experimenting with automatically induced adjunction refinements as well as incorporating substitution refinements can benefit Bayesian TAG (Shindo et al., 2012;Petrov et al., 2006). We are also planning to investigate TAG for more context-sensitive languages, and synchronous TAG for machine translation.
Figure 3: Nonparametric TAG (blue) parsing is efficient and incurs only a small increase in parsing time compared to nonparametric TSG (red).
Figure 5: Example wrapping trees from estimated TAGs.
Table 1: Computation of inside probabilities for TAG sampling. We create two types of chart items: (1) per-node, e.g., N_i[ν], denoting the probability of starting at an initial subtree that has Goodman index i and generating the subtree rooted at node ν, and (2) per-path, e.g., N_j[ν-η], denoting the probability of starting at an auxiliary subtree that has Goodman index j and generating the subtree rooted at ν minus the subtree rooted at η. Above, c denotes the context of adjunction, which is the nonterminal label of the node of adjunction (here, N), µ_c is the probability of adjunction, n_{c,a} is the count of the auxiliary tree a, and n_c = ∑_a n_{c,a} is the total number of adjunctions at context c. The function π(·) retrieves the inside probability corresponding to an item.
Figure 4: EVALB results. Note that the Gibbs sampler for TAG has poor performance and provides no grammar compaction due to its lack of convergence.

Model                        F measure   # initial trees   # auxiliary trees
TSG                          84.15       69.5K             -
TAG Gibbs                    82.47       69.9K             1.7K
TAG                          84.87       66.4K             1.5K
TAG (no spine adjunction)    84.82       66.4K             1.4K
Table 2: Grammar analysis for an estimated TAG, categorized by label. Only the most common top 10 are shown; binarization variables are denoted with an overline. A total number of 98 wrapping adjunctions (9 unique wrapping trees) and 118 spine adjunctions occur.

label              #adj (spine adj)   ave. depth   #lex. trees   #left trees   #right trees   #wrap trees
VP                 4532 (23)          1.06         45            22            65             0
NP                 2891 (46)          1.71         68            94            13             1
NN                 2160 (3)           1.08         85            16            110            0
NNP                1478 (2)           1.12         90            19            90             0
NNS                1217 (1)           1.10         43            9             60             0
VBN                1121 (1)           1.05         6             18            0              0
VBD                976 (0)            1.0          16            25            0              0
NP (binarized)     937 (0)            3.0          1             5             0              0
VB                 870 (0)            1.02         14            31            4              0
S                  823 (11)           1.48         42            36            35             3
total              23320 (118)        1.25         824           743           683            9
We found that, on average, about half of our grammars have a topological sort of their spine-adjunctions. (On average fewer than 100 spine adjunctions even exist.) When no such sort exists, only a few low-frequency adjunctions have to be removed to eliminate cycles.
An extensive report of our algorithms and experiments will be provided in the PhD thesis of the first author(Yamangil, 2013). Our code will be made publicly available at code.seas.harvard.edu/˜elif.
TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing. Xavier Carreras, Michael Collins, Terry Koo, Proceedings of the Twelfth Conference on Computational Natural Language Learning, CoNLL '08. the Twelfth Conference on Computational Natural Language Learning, CoNLL '08Stroudsburg, PA, USAAssociation for Computational LinguisticsXavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, dynamic programming, and the percep- tron for efficient, feature-rich parsing. In Proceed- ings of the Twelfth Conference on Computational Natural Language Learning, CoNLL '08, pages 9- 16, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.
|
5,509,327 | A causal framework for explaining the predictions of black-box sequence-to-sequence models | We interpret the predictions of any blackbox structured input-structured output model around a specific input-output pair. Our method returns an "explanation" consisting of groups of input-output tokens that are causally related. These dependencies are inferred by querying the black-box model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus the general approach on sequence-tosequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks. | [
1918428,
7205805
] | A causal framework for explaining the predictions of black-box sequence-to-sequence models
Association for Computational Linguistics. Copyright Association for Computational Linguistics. September 7-11, 2017. 2017
David Alvarez-Melis [email protected]
CSAIL
MIT
Tommi S Jaakkola
CSAIL
MIT
A causal framework for explaining the predictions of black-box sequence-to-sequence models
Natural Language Processing
Copenhagen, Denmark. Association for Computational Linguistics. September 7-11, 2017. 2017
We interpret the predictions of any blackbox structured input-structured output model around a specific input-output pair. Our method returns an "explanation" consisting of groups of input-output tokens that are causally related. These dependencies are inferred by querying the black-box model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus the general approach on sequence-tosequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks.
Introduction
Interpretability is often the first casualty when adopting complex predictors. This is particularly true for structured prediction methods at the core of many natural language processing tasks such as machine translation (MT). For example, deep learning models for NLP involve a large number of parameters and complex architectures, making them practically black-box systems. While such systems achieve state-of-the-art results in MT (Bahdanau et al., 2014), summarization (Rush et al., 2015) and speech recognition (Chan et al., 2015), they remain largely uninterpretable, although attention mechanisms (Bahdanau et al., 2014) can shed some light on how they operate.
Stronger forms of interpretability could offer several advantages, from trust in model predictions, error analysis, to model refinement. For example, critical medical decisions are increasingly being assisted by complex predictions that should lend themselves to easy verification by human experts. Without understanding how inputs get mapped to the outputs, it is also challenging to diagnose the source of potential errors. A slightly less obvious application concerns model improvement (Ribeiro et al., 2016) where interpretability can be used to detect biases in the methods.
Interpretability has been approached primarily from two main angles: model interpretability, i.e., making the architecture itself interpretable, and prediction interpretability, i.e., explaining particular predictions of the model (cf. (Lei et al., 2016)). Requiring the model itself to be transparent is often too restrictive and challenging to achieve. Indeed, prediction interpretability can be more easily sought a posteriori for black-box systems including neural networks.
In this work, we propose a novel approach to prediction interpretability with only oracle access to the model generating the prediction. Following (Ribeiro et al., 2016), we turn the local behavior of the model around the given input into an interpretable representation of its operation. In contrast to previous approaches, we consider structured prediction where both inputs and outputs are combinatorial objects, and our explanation consists of a summary of operation rather than a simpler prediction method.
Our method returns an "explanation" consisting of sets of input and output tokens that are causally related under the black-box model. Causal dependencies arise from analyzing perturbed versions of inputs that are passed through the black-box model. Although such perturbations might be available in limited cases, we generate them automatically. For sentences, we adopt a variational autoencoder to produce semantically related sentence variations. The resulting inferred causal dependencies (interval estimates) form a dense bipartite graph over tokens from which explanations can be derived as robust min-cut k-partitions.
We demonstrate quantitatively that our method can recover known dependencies. As a starting point, we show that a grapheme-to-phoneme dictionary can be largely recovered if given to the method as a black-box model. We then show that the explanations provided by our method closely resemble the attention scores used by a neural machine translation system. Moreover, we illustrate how our summaries can be used to gain insights and detect biases in translation systems. Our main contributions are:
• We propose a general framework for explaining structured black-box models
• For sequential data, we propose a variational autoencoder for controlled generation of input perturbations required for causal analysis
• We evaluate the explanations produced by our framework on various sequence-to-sequence prediction tasks, showing they can recover known associations and provide insights into the workings of complex systems.
Related Work
There is a wide body of work spanning various fields centered around the notion of "interpretability". This term, however, is underdetermined, so the goals, methods and formalisms of these approaches are often non-overlapping (Lipton, 2016). In the context of machine learning, perhaps the most visible line of work on interpretability focuses on medical applications (Caruana et al., 2015), where trust can be a decisive factor on whether a model is used or not. With the ever-growing success and popularity of deep learning methods for image processing, recent work has addressed interpretability in this setting, usually requiring access to the method's activations and gradients (Selvaraju et al., 2016), or directly modeling how influence propagates (Bach et al., 2015). For a broad overview of interpretability in machine learning, we refer the reader to the recent survey by Doshi-Velez and Kim (2017).
Most similar to this work are the approaches of Lei et al. (2016) and Ribeiro et al. (2016). The former proposes a model that justifies its predictions in terms of fragments of the input. This approach formulates explanation generation as part of the learning problem, and, as most previous work, only deals with the case where predictions are scalar or categorical. On the other hand, Ribeiro et al. (2016) propose a framework for explaining the predictions of black-box classifiers by means of locally-faithful interpretable models. They focus on sparse linear models as explanations, and rely on local perturbations of the instance to explain. Their model assumes the input directly admits a fixed size interpretable representation in euclidean space, so their framework operates directly on this vector-valued representation.
Our method differs from, and can be thought of as generalizing, these approaches in two fundamental aspects. First, our framework considers both inputs and outputs to be structured objects, thus extending beyond the classification setting. This requires rethinking the notion of explanation to adapt it to variable-size combinatorial objects. Second, while our approach shares the locality and model-agnostic view of Ribeiro et al. (2016), generating perturbed versions of structured objects is a challenging task by itself. We propose a solution to this problem in the case of sequence-to-sequence learning.
Interpreting structured prediction
Explaining predictions in the structured input-structured output setting poses various challenges. As opposed to scalar or categorical prediction, structured predictions vary in size and complexity. Thus, one must decide not only how to explain the prediction, but also what parts of it to explain. Intuitively, the "size" of an explanation should grow with the size of the input and output. A good explanation would ideally also decompose into cognitive chunks (Doshi-Velez and Kim, 2017): basic units of explanation which are a priori bounded in size. Thus, we seek a framework that naturally decomposes an explanation into (potentially several) explaining components, each of which justifies, from the perspective of the black-box model, parts of the output relative to the parts of the input.
Formally, suppose we have a black-box model F : X → Y that maps a structured input x ∈ X to a structured output y ∈ Y. We make no assumptions on the spaces X, Y, except that their elements admit a feature-set representation x = {x_1, x_2, ..., x_n}, y = {y_1, y_2, ..., y_m}. Thus, x and y can be sequences, graphs or images. We refer to the elements x_i and y_j as units or "tokens" due to our motivating application of sentences, though everything in this work holds for other combinatorial objects.
For a given input-output pair (x, y), we are interested in obtaining an explanation of y in terms of x. Following (Ribeiro et al., 2016), we seek explanations via interpretable representations that are both i) locally faithful, in the sense that they approximate how the model behaves in the vicinity of x, and ii) model agnostic, that is, that do not require any knowledge of F. For example, we would like to identify whether token x_i is a likely cause for the occurrence of y_j in the output when the input context is x. Our assumption is that we can summarize the behavior of F around x in terms of a weighted bipartite graph G = (V_x ∪ V_y, E), where the nodes V_x and V_y correspond to the elements in x and y, respectively, and the weight of each edge E_ij corresponds to the influence of the occurrence of token x_i on the appearance of y_j. The bipartite graph representation suggests naturally that the explanation be given in terms of explaining components. We can formalize these components as subgraphs G^k = (V^k_x ∪ V^k_y, E^k), where the elements in V^k_x are likely causes for the elements in V^k_y. Thus, we define an explanation of y as a collection of such components:

E_{x→y} = {G^1, ..., G^k}.
Our approach formalizes this framework through a pipeline (sketched in Figure 1) consisting of three main components, described in detail in the following section: a perturbation model for exercising F locally, a causal inference model for inferring associations between inputs and predictions, and a selection step for partitioning and selecting the most relevant sets of associations.
We refer to this framework as a structured-output causal rationalizer (SOCRAT).
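The explanation object itself is simple to represent. As a rough illustration (not the authors' code, and with names invented for the sketch), the dependency graph G = (V_x ∪ V_y, E) and its explanation components G^k could be encoded as follows:

```python
# Illustrative sketch only: a minimal representation of the bipartite dependency
# graph and the explanation chunks derived from it. Names are not from the paper.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ExplanationGraph:
    source_tokens: List[str]                      # V_x, e.g. input words
    target_tokens: List[str]                      # V_y, e.g. output words
    # E: edge weights theta_ij indexed by (source position, target position)
    weights: Dict[Tuple[int, int], float] = field(default_factory=dict)

@dataclass
class ExplanationComponent:
    """One explanation chunk G^k: a group of source tokens that jointly
    explains a group of target tokens."""
    source_idx: List[int]
    target_idx: List[int]
    score: float = 0.0   # importance, e.g. negative cut capacity

# An explanation E_{x->y} is then simply a list of components, typically sorted
# by decreasing importance before being shown to a user.
Explanation = List[ExplanationComponent]
```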
A note on alignment models When the inputs and outputs are sequences such as sentences, one might envision using an alignment model, such as those used in MT, to provide an explanation. This differs from our approach in several respects. Specifically, we focus on explaining the behavior of the "black box" mapping F only locally, around the current input context, not globally. Any global alignment model would require access to substantial parallel data to train and would have varying coverage of the local context around the specific example of interest. Any global model would likely also suffer from misspecification in relation to F . A more related approach to ours would be an alignment model trained locally based on the same perturbed sentences and associated outputs that we generate.
Building blocks
Perturbation Model
The first step in our approach consists of obtaining perturbed versions of the input: semantically similar to the original but with potential changes in elements and their order. This is a major challenge with any structured inputs. We propose to do this using a variational autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014). VAEs have been successfully used with fixed-dimensional inputs such as images (Rezende and Mohamed, 2015; Sønderby et al., 2016) and recently also adapted to generating sentences from continuous representations (Bowman et al., 2016). The goal is to introduce the perturbation in the continuous latent representation rather than directly on the structured inputs.
Figure 1: A schematic representation of the proposed prediction interpretability method.

A VAE is composed of a probabilistic encoder ENC : X → R^d and a decoder DEC : R^d → X. The encoder defines a distribution over latent codes q(z|x), typically by means of a two-step procedure that first maps x → (µ, σ) and then samples z from a Gaussian distribution with these parameters. We can leverage this stochasticity to obtain perturbed versions of the input by sampling repeatedly from this distribution, and then mapping these back to the original space using the decoder. The training regime for the VAE ensures approximately that a small perturbation of the hidden representation maintains similar semantic content while introducing small changes in the decoded surface form. We emphasize that the approach would likely fail with an ordinary autoencoder, where small changes in the latent representation can result in large changes in the decoded output. In practice, we ensure diversity of perturbations by scaling the variance term σ and sampling points z̃ at different resolutions. We provide further details of this procedure in the supplement. Naturally, we can train this perturbation model in advance on (unlabeled) data from the input domain X, and then use it as a subroutine in our method. After this process is complete, we have N pairs of perturbed input-output pairs:

{(x̃_i, ỹ_i)}, i = 1, ..., N,
which exercise the mapping F around semantically similar inputs.
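A minimal sketch of this perturbation step is shown below. It assumes a pre-trained sentence VAE exposed through hypothetical encode(x) -> (mu, sigma) and decode(z) -> sentence functions returning numpy arrays, and a callable black_box implementing F; the scaling factors are illustrative values, not settings reported in the paper.

```python
# Hedged sketch of the perturbation model: sample latent codes around q(z|x)
# at several noise scales and run the black-box on each decoded variant.
import numpy as np

def perturb(x, encode, decode, black_box, n_samples=100, scales=(0.5, 1.0, 2.0)):
    """Return pairs (x_tilde_i, y_tilde_i) obtained from perturbed versions of x."""
    mu, sigma = encode(x)                    # parameters of q(z | x)
    pairs = [(x, black_box(x))]              # keep the original pair as well
    for i in range(n_samples):
        s = scales[i % len(scales)]          # vary the resolution of the noise
        z = mu + s * sigma * np.random.randn(*np.shape(mu))
        x_tilde = decode(z)                  # semantically similar variant of x
        pairs.append((x_tilde, black_box(x_tilde)))
    return pairs
```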
Causal model
The second step consists of using the perturbed input-output pairs {(x̃_i, ỹ_i)}, i = 1, ..., N, to infer causal dependencies between the original input and output tokens. A naive approach would consider 2x2 contingency tables representing presence/absence of input/output tokens together with a test statistic for assessing their dependence. Instead, we incorporate all input tokens simultaneously to predict the occurrence of a single output token via logistic regression. The quality of these dependency estimators will depend on the frequency with which each input and output token occurs in the perturbations. Thus, we are interested in obtaining uncertainty estimates for these predictions, which can be naturally done with a Bayesian approach to logistic regression. Let φ_x(x̃) ∈ {0, 1}^|x| be a binary vector encoding the presence of the original tokens x_1, ..., x_n from x in the perturbed version x̃. For each target token y_j ∈ y, we estimate a model:

P(y_j ∈ ỹ | x̃) = σ(θ_j^T φ_x(x̃))    (1)

where σ(z) = (1 + exp(−z))^(−1). We use a Gaussian approximation for the logarithm of the logistic function together with the prior p(θ) = N(θ_0, H_0^(−1)) (Murphy, 2012). Since in our case all tokens are guaranteed to occur at least once (we include the original example pair as part of the set), we use θ_0 = α1, H_0 = βI, with α, β > 0. Upon completion of this step, we have dependency coefficients between all original input and output tokens {θ_ij}, along with their uncertainty estimates.
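The following sketch illustrates the spirit of this step. The paper fits a Bayesian logistic regression and derives interval estimates via a Gaussian approximation; for brevity, this illustration only computes point estimates with L2-regularized logistic regression (which roughly corresponds to a MAP solution under a Gaussian prior) and omits the uncertainty computation. The tokenization by whitespace is an assumption of the sketch.

```python
# Rough sketch: one regularized logistic regression per output token y_j over
# binary presence features of the original input tokens.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dependency_estimates(x_tokens, y_tokens, perturbed_pairs):
    """Return a matrix theta[i, j]: influence of input token x_i on output token y_j."""
    # Binary design matrix: which original input tokens survive in each perturbation
    X = np.array([[int(tok in set(x_p.split())) for tok in x_tokens]
                  for x_p, _ in perturbed_pairs])
    theta = np.zeros((len(x_tokens), len(y_tokens)))
    for j, y_tok in enumerate(y_tokens):
        targets = np.array([int(y_tok in set(y_p.split())) for _, y_p in perturbed_pairs])
        if targets.min() == targets.max():      # token always / never appears
            continue
        clf = LogisticRegression(C=1.0).fit(X, targets)
        theta[:, j] = clf.coef_[0]
    return theta
```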
Explanation Selection
The last step in our interpretability framework consists of selecting a set of explanations for (x, y). The steps so far yield a dense bipartite graph between the input and output tokens. Unless |x| and |y| are small, this graph itself may not be sufficiently interpretable. We are interested in selecting relevant components of this dependency graph, i.e., partitioning the vertex set of G into disjoint subsets so as to minimize the weight of omitted edges (i.e. the k-cut value of the partition).
Graph partitioning is a well-studied NP-complete problem (Garey et al., 1976). The usual setting assumes deterministic edge weights, but in our case we are interested in incorporating the uncertainty of the dependency estimates (resulting from their finite-sample estimation) into the partitioning problem. For this, we rely on the approach of Fan et al. (2012) designed for interval estimates of edge weights. At a high level, this is a robust optimization formulation which seeks to minimize worst-case cut values, and can be cast as a Mixed Integer Programming (MIP) problem.

Algorithm 1 Structured-output causal rationalizer
1: procedure SOCRAT(x, y, F)
2:    (µ, σ) ← ENCODE(x)
3:    for i = 1 to N do                     ▷ Perturbation Model
4:        z̃_i ← SAMPLE(µ, σ)
5:        x̃_i ← DECODE(z̃_i)
6:        ỹ_i ← F(x̃_i)
7:    end for
8:    G ← CAUSAL(x, y, {x̃_i, ỹ_i}, i = 1..N)
9:    E_{x→y} ← BIPARTITION(G)
10:   E_{x→y} ← SORT(E_{x→y})               ▷ By cut capacity
11:   return E_{x→y}
12: end procedure

Specifically, for a bipartite graph G = (U, V, E) with edge weights given as uncertainty intervals θ_ij ± θ̃_ij, the partitioning problem is given by

min_{(x^u_ik, x^v_jk, y_ij) ∈ Y}  Σ_{i=1}^{n} Σ_{j=1}^{m} θ_ij y_ij + max_{S: S ⊆ V, |S| ≤ ⌊Γ⌋, (i_t,j_t) ∈ V∖S} [ Σ_{(i,j) ∈ S} θ̃_ij y_ij + (Γ − ⌊Γ⌋) θ̃_{i_t,j_t} y_{i_t,j_t} ]    (2)

where x^u_ik, x^v_jk are binary variables indicating subset belonging for elements of U and V respectively, y_ij are binary auxiliary variables indicating whether i and j are in different partitions, and Y is a set of constraints that ensure the K-partition is valid. Γ is a parameter in [0, |V|] which adjusts the robustness of the partition (the number of deviations from the mean edge values). See the supplement for further explanation of this objective.
If |x| and |y| are small, the number of clusters K will also be small, so we can simply return all the partitions (i.e. the explanation chunks) E^k_{x→y} := (V^k_x ∪ V^k_y). However, when K is large, one might wish to entertain only the κ most relevant explanations. The graph partitioning framework provides us with a natural way to score the importance of each chunk. Intuitively, subgraphs that have few high-valued edges connecting them to other parts of the graph (i.e. low cut-capacity) can be thought of as self-contained explanations, and thus more relevant for interpretability. We can therefore define the importance score of an atom as:

importance(E^k_{x→y}) := − Σ_{(i,j) ∈ X_k} θ_ij    (3)

where X_k is the cut-set implied by E^k_{x→y}:

X_k = {(i, j) ∈ E | i ∈ E^k_{x→y}, j ∈ V ∖ E^k_{x→y}}

The full interpretability method is succinctly expressed in Algorithm 1.
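The importance score of equation (3) is straightforward to compute once the dependency matrix and a candidate component are available. The sketch below is purely illustrative, with variable names that are not from the paper, and it sums over all edges with exactly one endpoint inside the component.

```python
# Illustrative computation of Eq. (3): the negative total weight of edges
# crossing the boundary of an explanation component.
import numpy as np

def importance(component_src, component_tgt, theta):
    """component_src / component_tgt: index sets of the chunk's input / output tokens;
    theta: dependency matrix of shape (n_source_tokens, n_target_tokens)."""
    src_in = np.zeros(theta.shape[0], dtype=bool); src_in[list(component_src)] = True
    tgt_in = np.zeros(theta.shape[1], dtype=bool); tgt_in[list(component_tgt)] = True
    # cut-set: edges with exactly one endpoint inside the component
    cut_mask = np.logical_xor.outer(src_in, tgt_in)
    return -float(theta[cut_mask].sum())
```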
Experimental Framework
Training and optimization
For the experiments involving sentence inputs, we train in advance the VAE described in Section 4.1. We use symmetric encoder-decoders consisting of recurrent neural networks with an intermediate variational layer. In our case, however, we use L stacked RNNs on both sides, and a stacked variational layer. Training variational autoencoders for text is notoriously hard. In addition to dropout and KLD annealing (Bowman et al., 2016), we found that slowly scaling the variance sampled from the normal distribution from 0 to 1 made training much more stable.
For the partitioning step we compare the robust formulation described above with two classical approaches to bipartite graph partitioning which do not take uncertainty into account: the co-clustering method of Dhillon (2001) and the biclustering method of Kluger et al. (2003). For these two, we use off-the-shelf implementations, 1 while we solve the MIP problem version of (2) with the optimization library gurobi. 2
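For reference, the scikit-learn co-clustering baseline can be run in a few lines. This is a hedged illustration of the off-the-shelf usage, not the authors' exact configuration: the number of clusters and the random stand-in matrix are placeholders.

```python
# Sketch of the co-clustering baseline (Dhillon, 2001) via scikit-learn.
import numpy as np
from sklearn.cluster import SpectralCoclustering

theta = np.random.rand(6, 8)                  # stand-in dependency matrix (6 source x 8 target tokens)
model = SpectralCoclustering(n_clusters=3, random_state=0).fit(theta)
source_groups = model.row_labels_             # cluster id for every source token
target_groups = model.column_labels_          # cluster id for every target token
```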
Recovering simple mappings
Before using our interpretability framework in real tasks where quantitative evaluation of explanations is challenging, we test it in a simplified setting where the "black-box" is simple and fully known. A reasonable minimum expectation on our method is that it should be able to infer many of these simple dependencies. For this purpose, we use the CMU Dictionary of word pronunciations, 3 which is based on the ARPAbet symbol set and consists of about 130K word-to-phoneme pairs. Phonemes are expressed as tokens of 1 to 3 characters. An example entry in this dictionary is the pair vowels → V AW1 AH0 L Z. Though the mapping is simple, it is not one-to-one (a group of characters can correspond to a single phoneme) nor deterministic (the same character can map to different phonemes depending on the context). Thus, it provides a reasonable testbed for our method. The setting is as follows: given an input-output pair from the cmudict "black-box", we use our method to infer dependencies between characters in the input and phonemes in the output. Since locality in this context is morphological instead of semantic, we produce perturbations by selecting n words randomly from the intersection of the cmudict vocabulary and the set of words with edit distance at most 2 from the original word.
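A simple way to implement this perturbation strategy is sketched below; loading the dictionary and the exact choice of n are left out, and the helper names are invented for the example.

```python
# Sketch: sample neighbour words from the CMU dictionary vocabulary within
# edit distance 2 of the original word. `cmudict` is assumed to be a dict
# mapping words to phoneme strings.
import random

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def perturb_word(word, cmudict, n=50):
    neighbours = [w for w in cmudict if w != word and edit_distance(word, w) <= 2]
    sample = random.sample(neighbours, min(n, len(neighbours)))
    return [(w, cmudict[w]) for w in sample]   # perturbed (input, output) pairs
```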
To evaluate the inferred dependencies, we randomly selected 100 key-value pairs from the dictionary and manually labeled them with character-to-phoneme alignments. Even though our framework is not geared to produce pairwise alignments, it should nevertheless be able to recover them to a certain extent. To provide a point of reference, we compare against a (strong) baseline that is tailored to such a task: a state-of-the-art unsupervised word alignment method based on Monte Carlo inference (Tiedemann and Östling, 2016). The results in Figure 2 show that the version of our method that uses the uncertainty clustering performs remarkably close to the alignment system, with an alignment error rate only ten points above an oracle version of this system that was trained on the full arpabet dictionary (dashed line). The raw and partitioned explanations provided by our method for an example input-output pair are shown in Table 1, where the edge widths correspond to the estimated strength of dependency. Throughout this work we display the nodes in the same lexical order of the inputs/outputs to facilitate reading, even if that makes the explanation chunks less visibly discernible. Instead, we sometimes provide an additional (sorted) heatplot of dependency values to show these partitions.

Table 1: Inferred dependency graphs before (left) and after (right) explanation selection for the prediction boolean → B UW0 L IY1 AH0 N, in independent runs with large (top) and small (bottom) clustering parameter k.
Machine Translation
In our second set of experiments we evaluate our explanation model in a relevant and popular sequence-to-sequence task: machine translation. As black-boxes, we use three different methods for translating English into German: (i) Azure's Machine Translation system, (ii) a Neural MT model, and (iii) a human (native speaker of German). We provide details on all three systems in the supplement. We translate the same English sentences with all three methods, and explain their predictions using SOCRAT. To be able to generate sentences with similar language and structure as those used to train the two automatic systems, we use the monolingual English side of the WMT14 dataset to train the variational autoencoder described in Section 4.1. For every explanation instance, we sample S = 100 perturbations and use the black-boxes to translate them. In all cases, we use the same default SOCRAT configurations, including the robust partitioning method.
In Figure 3, we show the explanations provided by our method for the predictions of each of the three systems on the input sentence "Students said they looked forward to his class". Although the three black-boxes all provided different translations, the explanations show a mostly consistent clustering around the two phrases in the sentence, and in all three cases the cluster with the highest cut value (i.e. the most relevant explanative chunk) is the one containing the subject. Interestingly, the dependency coefficients are overall higher for the human than for the other systems, suggesting more coherence in the translations (potentially because the human translated sentences in context, while the two automatic systems carry over no information from one example to the next).
The NMT system, as opposed to the other two, is not truly a black-box. We can open the box to get a glimpse of the true dependencies on the inputs used by the system at prediction time (the attention weights) and compare them to the explanation graph. The attention matrix, however, is dense and not normalized over target tokens, so it is not directly comparable to our dependency scores. Nevertheless, we can partition it with the co-clustering method described in Section 4.3 to enforce group structure and make it easier to compare. Figure 4 shows the attention matrix and the explanation for an example sentence of the test set. Their overall cluster structure agrees, though our method shows conservatism with respect to the dependencies of the function words (to, for). Interestingly, our method is able to figure out that the <unk> token was likely produced by the word "appeals", as shown by the explanation graph.
It must be emphasized that although we display attention scores in various experiments in this work, we do so only for qualitative evaluation purposes. Our model-agnostic framework can be used on top of models that do not use attention mechanisms or for which this information is hard to extract. Even in cases where it is available, the explanation provided by SOCRAT might be complementary or even preferable to attention scores because: (a) being normalized in both directions (as opposed to only over source tokens) and partitioned, it is often more interpretable than a dense attention matrix, and (b) it can be retrieved chunk-by-chunk in decreasing order of relevance, which is especially important when explaining large inputs and/or outputs.
A (mediocre) dialogue system
So far we have used our method to explain (mostly) correct predictions of meaningful models. But we can use it to gain insights into the workings of flawed black-box systems too. To test this, we train a simple dialogue system on the OpenSubtitle corpus (Tiedemann, 2009), consisting of ∼14M two-step movie dialogues. As before, we use a sequence-to-sequence model with attention, but now we constrain the quality of the model, using only two layers, hidden state dimension of 1000 and no hyper-parameter tuning.

Figure 5: Explanation with S = 50 (left) and attention (right) for the first prediction in Table 2.
Although most of the predictions of this model are short and repetitive (Yes/No/<unk> answers), some of them are seemingly meaningful, and might, if observed in isolation, lead one to believe the system is much better than it actually is. For example, the predictions in Table 2 suggest a complex use of the input to generate the output.
To better understand this model, we rationalize its predictions using SOCRAT. The explanation graph for one such "good" prediction, shown in Figure 5, suggests that there is little influence of anything except the tokens What and you on the output. Thus, our method suggests that this model is using only partial information of the input and has probably memorized the connection between question words and responses. This is confirmed upon inspecting the model's attention scores for this prediction (same figure, right pane).
Bias detection in parallel corpora
Natural language processing methods that derive semantics from large corpora have been shown to incorporate biases present in the data, such as archaic stereotypes of male/female occupations (Caliskan et al., 2017) and sexist adjective associations (Bolukbasi et al., 2016). Thus, there is interest in methods that can detect and address those biases. For our last set of experiments, we use our approach to diagnose and explain biased translations of MT systems, first on a simplistic but verifiable synthetic setting, where we inject a pre-specified spurious association into an otherwise normal parallel training corpus, and then on an industrial-quality black-box system.
We simulate a biased corpus as follows. Starting from the WMT14 English-French dataset, we identify French sentences written in the informal register (e.g. containing the singular second person tu) and prepend their English translation with the word However. We obtain about 6K examples this way, after which we add an additional 1M examples that do not contain the word however on the English side. The purpose of this is to attempt to induce a (false) association between this adverb and the informal register in French. We then train a sequence-to-sequence model on this polluted data, and we use it to translate adversarially chosen sentences containing the contaminating token. For example, given the input sentence "However, you might think this is good", the method predicts the translation "Tu peux penser qu ' il est bon que tu <unk>", which, albeit far from perfect, seems reasonable. However, using SOCRAT to explain this prediction (cf. Figure 6) raises a red flag: there is an inexplicable strong dependency between the function word however and tokens in the output associated with the informal register (tu, peux), and a lack of dependency between the second tu and the source-side pronoun you. The model's attention for this prediction (shown in Figure 7, left) confirms that it has picked up this spurious association. Indeed, translating the English sentence now without the prepended adverb results in a switch to the formal register, as shown in the second plot in Figure 7.
Although somewhat contrived, this synthetic setting works as a litmus test to show that our method is able to detect known artificial biases from a model's predictions. We now move to a real setting, where we investigate biases in the predictions of an industrial-quality translation system. We use Azure's MT service to translate into French various simple sentences that lack gender specification in English, but which require gender-declined words in the output. We choose sentences containing occupations and adjectives previously shown to exhibit gender biases in linguistic corpora (Bolukbasi et al., 2016). After observing the choice of gender in the translation, we use SOCRAT to explain the output.
In line with previous results, we observe that this translation model exhibits a concerning preference for the masculine grammatical gender in sentences containing occupations such as doctor, professor or adjectives such as smart, talented, while choosing the feminine gender for charming, compassionate subjects who are dancers or nurses. The explanation graphs for two such examples, shown in Figure 8 (left and center), suggest strong associations between the gender-neutral but stereotype-prone source tokens (nurse, doctor, charming) and the gender-carrying target tokens (i.e. the feminine-declined cette, danseuse, charmante in the first sentence and the masculine ce, médecin, talenteux in the second). While it is not unusual to observe interactions between multiple source and target tokens, the strength of dependence in some of these pairs (charming→danseuse, doctor→ce) is unexplained from a grammatical point of view. For comparison, the third example, a sentence in the plural form that does not involve choice of grammatical gender in French, shows comparatively much weaker associations across words in different parts of the sentence.
Discussion
Our model-agnostic framework for prediction interpretability with structured data can produce reasonable, coherent, and often insightful explanations. The results on the machine translation task demonstrate how such a method yields a partial view into the inner workings of a black-box system. Lastly, the results of the last two experiments also suggest potential for improving existing systems, by questioning seemingly correct predictions and explaining those that are not.
The method admits several possible modifications. Although we focused on sequence-to-sequence tasks, SOCRAT generalizes to other settings where inputs and outputs can be expressed as sets of features. An interesting application would be to infer dependencies between textual and image features in image-to-text prediction (e.g. image captioning). Also, we used a VAE-based sampling for object perturbations but other approaches are possible depending on the nature of the domain or data.
Figure 2: Arpabet test results as a function of number of perturbations used. Shown are mean plus confidence bounds over 5 repetitions. Left: Alignment Error Rate; Right: F1 over edge prediction.
Figure 3: Explanations for the predictions of three Black-Box translators: Azure (top), NMT (middle) and human (bottom). Note that the rows and columns of the heatmaps are permuted to show explanation chunks (clusters).
Figure 4: Top: Original and clustered attention matrix of the NMT system for a given translation. Bottom: Dependency estimates and explanation graph generated by SOCRAT with S = 100.
Figure 6: Explanation with S = 50 for the prediction of the biased translator.
Figure 7: Attention scores on similar sentences by the biased translator.
Figure 8: Explanations for biased translations of similar gender-neutral English sentences into French generated with Azure's MT service. The first two require gender declination in the target (French) language, while the third one, in plural, does not. The dependencies in the first two shed light on the cause of the biased selection of gender in the output sentence.
Table 2: "Good" dialogue system predictions (among them the input "What do you mean it doesn't matter?" with the predicted response "I don't know.").
1 http://scikit-learn.org/stable/modules/biclustering.html
2 http://www.gurobi.com/
3 www.speech.cs.cmu.edu/cgi-bin/cmudict
Acknowledgments
We thank the anonymous reviewers for their helpful suggestions regarding presentation and additional experiments, and Dr. Chantal Melis for valuable feedback. DAM gratefully acknowledges support from a CONACYT fellowship and the MIT-QCRI collaboration.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS One, 10(7):1-46.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. ICLR 2015, pages 1-15.
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NIPS, pages 4349-4357.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating Sentences from a Continuous Space. ICLR, pages 1-13.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In Proc. 21st ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (KDD '15), pages 1721-1730.
William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint, pages 1-16.
Inderjit S. Dhillon. 2001. Co-clustering documents and words using bipartite spectral graph partitioning. In Proc. 7th ACM SIGKDD Conf., pages 269-274.
Finale Doshi-Velez and Been Kim. 2017. A Roadmap for a Rigorous Science of Interpretability. ArXiv e-prints, pages 1-12.
Neng Fan, Qipeng P. Zheng, and Panos M. Pardalos. 2012. Robust optimization of graph partitioning involving interval uncertainty. Theor. Comput. Sci., 447:53-61.
M. R. Garey, D. S. Johnson, and L. Stockmeyer. 1976. Some simplified NP-complete graph problems. Theor. Comput. Sci., 1(3):237-267.
Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. ICLR, pages 1-14.
G. Klein, Y. Kim, Y. Deng, J. Senellert, and A. M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints.
Yuval Kluger, Ronen Basri, Joseph T. Chang, and Mark Gerstein. 2003. Spectral biclustering of microarray data: Coclustering genes and conditions.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proc. 2016 Conf. Empir. Methods Nat. Lang. Process. (EMNLP 2016), pages 107-117.
Zachary C. Lipton. 2016. The Mythos of Model Interpretability. ICML Workshop on Human Interpretability in Machine Learning (WHI).
Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective.
D. J. Rezende, S. Mohamed, and D. Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. Proc. 31st Int. Conf. Mach. Learn., 32:1278-1286.
Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational Inference with Normalizing Flows. Proc. 32nd Int. Conf. Mach. Learn., 37:1530-1538.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proc. 22nd ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (KDD '16), pages 1135-1144, New York, NY, USA. ACM.
Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proc. Conf. Empir. Methods Nat. Lang. Process., pages 379-389.
Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. 2016. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. pages 1-5.
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder Variational Autoencoders. NIPS.
Jörg Tiedemann. 2009. News from OPUS - A Collection of Multilingual Parallel Corpora with Tools and Interfaces. In N. Nicolov, G. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, pages 237-248. John Benjamins, Amsterdam/Philadelphia.
Jörg Tiedemann and Robert Östling. 2016. Efficient Word Alignment with Markov Chain Monte Carlo. Prague Bull. Math. Linguist., 106:125-146.
17,536,397 | Information Theoretical and Statistical Features for Intrinsic Plagiarism Detection | In this paper we present some information theoretical and statistical features including function word skip n-grams for detecting plagiarism intrinsically. We train a binary classifier with different feature sets and observe their performances. Basically, we propose a set of 36 features for classifying plagiarized and non-plagiarized texts in suspicious documents. Our experiment finds that entropy, relative entropy and correlation coefficient of function word skip n-gram frequency profiles are very effective features. The proposed feature set achieves F-Score of 85.10%. | [
17649769,
10398193,
3643309
] | Information Theoretical and Statistical Features for Intrinsic Plagiarism Detection
September 2015
Rashedur Rahman [email protected]
IRT-SystemX & LIMSI-CNRS Paris-Sud University
Information Theoretical and Statistical Features for Intrinsic Plagiarism Detection
Proceedings of the SIGDIAL 2015 Conference
the SIGDIAL 2015 ConferencePrague, Czech RepublicSeptember 2015
In this paper we present some information theoretical and statistical features including function word skip n-grams for detecting plagiarism intrinsically. We train a binary classifier with different feature sets and observe their performances. Basically, we propose a set of 36 features for classifying plagiarized and non-plagiarized texts in suspicious documents. Our experiment finds that entropy, relative entropy and correlation coefficient of function word skip n-gram frequency profiles are very effective features. The proposed feature set achieves F-Score of 85.10%.
Introduction
Extrinsic plagiarism detection attempts to detect whether a document is plagiarised relative to reference documents. IPD (intrinsic plagiarism detection), which is relatively new, detects the plagiarised section(s) in a suspicious document without using any reference document. The basic hypothesis behind IPD is that different writers have their own styles and they maintain these in their writings consciously or subconsciously. Sometimes it is very difficult to define the reference set for the task of external plagiarism detection. Additionally, the source of the plagiarized text may not be available in digitized format. Therefore, researchers are trying to answer whether it is possible to detect plagiarism without using any reference.
In this paper, we investigate some information theoretical and statistical measurements for IPD as a binary classification task. A set of 36 features has been proposed for classifying plagiarized and non-plagiarized segments in the suspicious documents. We use the PAN-PC-11 (Potthast et al., 2010) corpus compiled for the IPD task. The PAN corpus is artificially plagiarised and it provides a meta-file mentioning the offsets of plagiarised and non-plagiarized parts for each suspicious document. We consider that each suspicious document is written by a single author and it is either partially plagiarised or not plagiarised, and we try to identify the text-segments that differ in writing style compared to the whole document. We train an SMO (Platt, 1998) classifier in Weka 3.6 (Hall et al., 2009) using 10-fold cross-validation. Then the classification performances are observed with different feature sets according to the standard precision, recall and F-score.
The next sections are organized as follows: section 2 discusses related works and section 3 briefly describes information theoretical and statistical features. The text segmentation and windowing process is summarized in section 4 while the experimental framework and baseline feature sets are discussed in section 5. Section 6 compares the classification performances with different feature sets and finally, the paper concludes in section 7.
Related Work
A series of regular studies on plagiarism detection were started following the first international competition for plagiarism detection, the PAN 1 workshop in 2009. Potthast et al. (2009) provides an overview on PAN'09 including the corpus design for plagiarism detection, quality measurements and the methods of plagiarism detection developed by the participants.
Zu Eissen and Stein (2006) proposed the first method for IPD and presented a taxonomy of plagiarism with methods for analysis. They also proposed some features including average sentence length, part-of-speech features, average stopword number and averaged word frequency class for quantifying the writing style. Some researchers used character n-gram profiles for the task of IPD (Stamatatos, 2009; Kestemont et al., 2011). Oberreuter et al. (2011) proposed a word n-gram based method; they assumed that different writers use different sets of words that they repeat frequently. Tschuggnall and Specht (2012) proposed the Plag-Inn algorithm that finds plagiarized sentences in a suspicious document by comparing grammar trees of the sentences. Stamatatos (2009) introduced a sliding window and proposed a distance function for calculating the dissimilarity between two texts based on a character tri-gram profile. Stamatatos (2011) employed n-grams of function word sequences with different lengths and found them to have a significant impact in distinguishing between plagiarised and non-plagiarized texts. We employ function words differently, as skip n-gram profiles for measuring entropy, relative entropy and correlation coefficient, as discussed in Section 5.2. Stein et al. (2011) employed an unmasking technique and proposed a set of features of different types, for example POS and function words, for intrinsic plagiarism analysis. Seaward and Matwin (2009) and Chudá and Uhlík (2011) proposed compression-based methods for IPD. They measured the Kolmogorov complexity of the distributions of different parts-of-speech and word classes in the sentences. For calculating the complexity, a binary string is generated for each distribution and later the string is compressed by a compression algorithm.
Information Theoretical and Statistical Features
Shannon Entropy (Shannon, 1948) has a great impact on communication theory, or the theory of information transmission; it measures the uncertainty of a random variable. Mathematically, entropy is defined as in equation (1).

H(X) = − Σ_{i=1}^{n} p(x_i) log_2 p(x_i)    (1)

KLD(p || q) = Σ_{x∈X} p(x) log_2 (p(x) / q(x))    (2)

r = (1 / (n − 1)) Σ_{i=1}^{n} ((X_i − X̄) / s_X) ((Y_i − Ȳ) / s_Y)    (3)

We measure the entropy of the n-gram frequency profile generated from each text-window (X) for quantifying the writing style. Manning and Schütze (1999) measured the distance between two probability distributions by using relative entropy or Kullback-Leibler divergence (KLD), which is calculated by equation (2). The Pearson correlation coefficient (Pearson, 1920), or simply correlation coefficient, measures the linear correlation between two samples and is calculated by equation (3). Since the task of IPD does not use any reference document, we require a robust method for comparing small sections of the document relative to the whole document under question. Measuring the relative entropy and correlation coefficient between a small section and the rest of the document are possible methods. We use the frequency profiles of n-grams generated from the individual text-window (X) and the complete suspicious document (Y) separately for calculating relative entropy and correlation coefficient. The probability distributions of n-gram frequencies (P and Q) are calculated from the n-gram frequency profiles (from X and Y) for measuring the relative entropy.
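To make the three measures concrete, the sketch below computes them over function-word skip bi-gram profiles of a text window and of the whole document. The short function-word list and the smoothing constant are assumptions made for the illustration, not values from the paper.

```python
# Illustrative computation of Eqs. (1)-(3) over n-gram frequency profiles,
# here using function-word skip bi-grams (up to 1 skipped token) as an example.
import math
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "a", "to", "in", "that", "it", "is", "was"}

def fw_skip_bigrams(sentences, skip=1):
    """Skip bi-gram profile over the function-word sequence of each sentence."""
    profile = Counter()
    for sent in sentences:
        fws = [w for w in sent.lower().split() if w in FUNCTION_WORDS]
        for i in range(len(fws)):
            for j in range(i + 1, min(i + 2 + skip, len(fws))):
                profile[(fws[i], fws[j])] += 1
    return profile

def entropy(profile):
    total = sum(profile.values())
    return -sum((c / total) * math.log2(c / total) for c in profile.values())

def relative_entropy(window, document, eps=1e-9):
    keys = set(window) | set(document)
    wt, dt = sum(window.values()), sum(document.values())
    return sum((window[k] / wt) * math.log2((window[k] / wt) / (document[k] / dt + eps))
               for k in keys if window[k] > 0)

def correlation(window, document):
    keys = sorted(set(window) | set(document))
    xs, ys = [window[k] for k in keys], [document[k] for k in keys]
    n = len(keys)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
```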
Text Segmentation and windowing
To define the small sections of text for comparison to the rest of the document, we experiment with windows of different lengths (1000, 2000, 5000 characters). To prepare the corpus for training and testing to support this additional experimentation, we separate plagiarised and non-plagiarized sections of the documents in the corpus according to the offsets (as indicated in the meta-file). By doing this we can guarantee that the smaller texts we generate are still accurately annotated as to whether the content is plagiarised or not. The whole procedure is illustrated in figure 1.
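A minimal sketch of the windowing step (assuming the plagiarised and non-plagiarised spans have already been separated using the meta-file offsets) might look as follows; dropping the short final remainder is a simplification of this sketch, not necessarily what the authors do.

```python
# Cut a (plagiarised or non-plagiarised) text span into fixed-length character windows.
def make_windows(text, window_len=5000):
    """Non-overlapping character windows; a short final remainder is dropped."""
    return [text[i:i + window_len]
            for i in range(0, len(text) - window_len + 1, window_len)]
```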
Experimental Framework and Feature Sets
This section illustrates the experimental framework of the IPD task by combining the preprocessing and classification tools; the framework is graphically described in figure 2. After extracting and windowing the corpus, we calculate different feature values for generating the feature vectors. Before calculating the features, several text preprocessing tasks, for example tokenizing, sentence detection and POS-tagging, are employed. We generate several feature vectors for the different baseline feature sets and the proposed feature set. Then a classifier model is trained with the feature sets; we train an SMO classifier with 10-fold cross-validation in the Weka 3.6 explorer interface. An equal number of plagiarized and non-plagiarized text samples is used for training. We train the classifier with 8,100 text segments from each class, where each segment initially contains 5,000 characters. Finally, the classification performances are observed for the different feature sets.
Figure 2: Experimental framework
Baseline feature sets
We used three different baseline feature sets for the experiment which are listed below:
• Baseline-1 (feature set used by Stein et al. (2011)): 30 features that include lexical and syntactic features, surface features, vocabulary richness and readability measurement-based features, n-gram-based features, POS-based features, etc.
• Baseline-2 (feature set used by Seaward and Matwin (2009)): calculated the Kolmogorov complexity of function words and different parts-of-speech.
• Baseline-3 (distance function proposed by Stamatatos (2009)): measured the distance function or style-change score of the text windows with respect to the whole suspicious document by using their character tri-gram profiles.
Proposed feature set
We propose 36 features for IPD, including entropy, relative entropy, correlation coefficient, skip n-grams of function words, etc. Lavergne et al. (2008) and Zhao et al. (2006) used relative entropy for fake content detection and authorship attribution, respectively. Islam et al. (2012) classified readability levels of texts by using both entropy and relative entropy. Stamatatos (2011) used function word n-grams for extrinsic plagiarism detection, but here we generate several skip n-grams of function words instead of simple n-grams. Guthrie et al. (2006) used 1 to 4 skip n-grams for modelling unseen sequences of words in text. Here we summarize the proposed feature set:
• Character tri-gram frequency profile: we measure the entropy of each text window's profile, and the relative entropy and correlation coefficient between the profiles of the text window and the whole document. Additionally, we calculate the average n-gram frequency class by using the equation of the average word frequency class proposed by Zu Eissen and Stein (2006). Here we have 4 features: entropy, relative entropy, correlation coefficient and n-gram frequency class, calculated from the character tri-gram frequency profiles of the text windows and the complete document.
• Function-word bi-gram and tri-gram frequency profiles with 1, 2, 3 and 4 skips: we measure the entropy, relative entropy and correlation coefficient of function-word bi-gram and tri-gram frequency profiles with 1, 2, 3 and 4 skips. Additionally, we calculate the style-change scores with these frequency profiles using the distance function proposed by Stamatatos (2009). For generating the skip n-gram profiles of function words we extract the function words sequentially from each sentence. We generate the function-word skip n-gram profiles of the text segments by considering only the function words at the sentence level instead of the passage level, as Stamatatos (2011) did. Here we have 32 features: entropy, relative entropy, correlation coefficient and style-change score, calculated from 8 function-word skip n-gram frequency profiles (a sketch of how such profiles can be built follows this list).
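As an illustration of the skip n-gram profiles mentioned above, the following sketch extracts sentence-level function-word sequences and counts their skip n-grams; anchoring each n-gram at its first token is one common reading of k-skip-n-grams, and the helper names and the function-word list are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def function_word_sequence(sentence_tokens, function_words):
    """Keep only the function words of one sentence, in order."""
    return [t.lower() for t in sentence_tokens if t.lower() in function_words]

def skip_ngrams(tokens, n, k):
    """All n-grams that span at most n + k consecutive positions (up to k skips),
    anchored at their first token so each n-gram is produced exactly once."""
    grams = []
    for start in range(len(tokens)):
        window = tokens[start:start + n + k]
        if len(window) < n:
            continue
        for idx in combinations(range(1, len(window)), n - 1):
            grams.append((window[0],) + tuple(window[i] for i in idx))
    return grams

def skip_ngram_profile(sentences, function_words, n, k):
    """Frequency profile of function-word skip n-grams, built per sentence."""
    profile = Counter()
    for sent in sentences:
        seq = function_word_sequence(sent, function_words)
        profile.update(skip_ngrams(seq, n, k))
    return profile
```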
Experimental Results
We observe that the proposed feature set achieves the highest F-Score compared to the baseline feature sets, as illustrated in figure 3. All the feature sets together obtain a promising F-Score of 91%, while the three baselines combined result in an F-Score around 89%. The proposed feature set achieves an 85% F-Score, which is the highest compared to the three baseline feature sets. Baseline-1 and baseline-2 obtain F-Scores around 68% and 62%, while baseline-3 surprisingly results in an 84% F-Score as a single feature. We pair feature sets and observe their performances; figure 4 shows that the proposed feature set increases the F-Score in combination with the baseline feature sets.
Figure 5 depicts separate observations of the entropy, relative entropy, correlation coefficient and distance function of function word skip n-gram frequency profiles. Here we notice that relative entropy achieves a very good F-Score of 72%; entropy and the correlation coefficient also obtain better F-Scores than the distance function. Though the distance function results in a very good F-Score with the character tri-gram frequency profile, it does not perform well enough with the function word skip n-gram frequency profile. The distance function with the function word skip n-gram frequency profile obtains around a 35% F-Score, which is the lowest compared to the other functions with the function word skip n-gram frequency profile. We also observe the effect of different window lengths (discussed in section 4) on classification performance: the classification performance increases for each feature set if the window length is increased. All the feature sets combined result in F-Scores of 82% and 87% for window lengths of 1000 and 2000 characters respectively, while a 91% F-Score is achieved with the window length of 5000 characters.
Conclusion
In this paper we proposed a set of new features for intrinsic plagiarism detection that support arguments for continued research on IPD. In the future we would like to evaluate these features on human-plagiarized corpora and corpora from different domains. We are also interested in expanding the IPD task by considering the case in which a suspicious document is written by multiple authors.
Figure 1: Text segmentation and windowing
Figure 5: Performance observation of function word skip n-gram based features
http://pan.webis.de/
Acknowledgement: This paper is part of my master thesis work, carried out while I studied at Frankfurt University of Applied Sciences. I am very thankful to my thesis supervisor Dr. Alexander Mehler, and my special thanks go to IRT-SystemX for enabling me to attend the SIGdial conference. I also thank my SIGdial mentor and the reviewers for their feedback and guidance.
Daniela Chudá and Martin Uhlík. The plagiarism detection by compression method. In Proceedings of the 12th International Conference on Computer Systems and Technologies, pages 429-434. ACM, 2011.
David Guthrie, Ben Allison, Wei Liu, Louise Guthrie, and Yorick Wilks. A closer look at skip-gram modelling. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC-2006), pages 1-4, 2006.
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. The Weka data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10-18, 2009.
Zahurul Islam, Alexander Mehler, and Rashedur Rahman. Text readability classification of textbooks of a low-resource language. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation (accepted), 2012.
Mike Kestemont, Kim Luyckx, and Walter Daelemans. Intrinsic plagiarism detection using character trigram distance scores. Proceedings of the PAN, 2011.
Thomas Lavergne, Tanguy Urvoy, and François Yvon. Detecting fake content with relative entropy scoring. In PAN, 2008.
Christopher D. Manning and Hinrich Schütze. Foundations of Statistical Natural Language Processing, volume 999. MIT Press, 1999.
Gabriel Oberreuter, Gaston L'Huillier, Sebastián A. Ríos, and Juan D. Velásquez. Approaches for intrinsic and external plagiarism detection. Proceedings of the PAN, 2011.
Karl Pearson. Notes on the history of correlation. Biometrika, 13(1):25-45, 1920.
John C. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical report, Advances in Kernel Methods - Support Vector Learning, 1998.
Martin Potthast, Benno Stein, Andreas Eiselt, Alberto Barrón-Cedeño, and Paolo Rosso. Overview of the 1st international competition on plagiarism detection. In 3rd PAN Workshop: Uncovering Plagiarism, Authorship and Social Software Misuse, 2009.
Martin Potthast, Benno Stein, Alberto Barrón-Cedeño, and Paolo Rosso. An evaluation framework for plagiarism detection. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China, August 2010. Association for Computational Linguistics.
Leanne Seaward and Stan Matwin. Intrinsic plagiarism detection using complexity analysis. In Proc. SEPLN, pages 56-61, 2009.
Claude Elwood Shannon. A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5(1):3-55, 1948.
Efstathios Stamatatos. Intrinsic plagiarism detection using character n-gram profiles. Proceedings of the PAN, pages 38-46, 2009.
Efstathios Stamatatos. Plagiarism detection based on structural information. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 1221-1230. ACM, 2011.
Benno Stein, Nedim Lipka, and Peter Prettenhofer. Intrinsic plagiarism analysis. Language Resources and Evaluation, 45(1):63-82, 2011.
Michael Tschuggnall and Günther Specht. Plag-Inn: intrinsic plagiarism detection using grammar trees. In Natural Language Processing and Information Systems, pages 284-289. Springer, 2012.
Ying Zhao, Justin Zobel, and Phil Vines. Using relative entropy for authorship attribution. In Information Retrieval Technology, pages 92-105. Springer, 2006.
Sven Meyer zu Eissen and Benno Stein. Intrinsic plagiarism detection. In Advances in Information Retrieval, pages 565-569. Springer, 2006. |
524,122 | Translation Knowledge Recycling for Related Languages | An increasing interest in multi-lingual translation systems demands a reconsideration of the development costs of machine translation engines for language pairs. This paper proposes an approach that reuses the existing translation knowledge resources of high-quality translation engines for translation into different, but related languages. The lexical information of the target representation is utilized to generate the corresponding translation in the related language by using a transfer dictionary for the mapping of words and a set of heuristic rules for the mapping of structural information. Experiments using a Japanese-English translation engine for the generation of German translations show a minor decrease of up to 5% in the acceptability of the German output compared with the English translation of unseen Japanese input. | [] | Translation Knowledge Recycling for Related Languages
Michael Paul [email protected]
ATR Spoken Language Translation Research Laboratories
2-2-2 Hikaridai, Seika-cho, Soraku-gun619-0288KyotoJapan
Translation Knowledge Recycling for Related Languages
reuse of knowledge resourcesmulti-lingual extensionsrelated languages
An increasing interest in multi-lingual translation systems demands a reconsideration of the development costs of machine translation engines for language pairs. This paper proposes an approach that reuses the existing translation knowledge resources of high-quality translation engines for translation into different, but related languages. The lexical information of the target representation is utilized to generate the corresponding translation in the related language by using a transfer dictionary for the mapping of words and a set of heuristic rules for the mapping of structural information. Experiments using a Japanese-English translation engine for the generation of German translations show a minor decrease of up to 5% in the acceptability of the German output compared with the English translation of unseen Japanese input.
Introduction
One of the biggest problems for the development of high-quality, multi-lingual translation engines is the high cost of adapting the underlying translation algorithm to multiple language pairs. In particular, the lack of resources (dictionaries, bilingual corpora, etc.) for "uncommon" language pairs forms a bottleneck for multi-lingual extensions.
The basic idea of this paper, as described in Section 2, is to devote efforts to the development of translation engines between the main linguistically different languages and to reuse the translation knowledge of these systems for translation into languages closely related to the target language. These languages must have similar grammatical characteristics so that the linguistic information contained in the target representation, e.g. a parse tree, can be mapped to a corresponding representation for the related language and so that this information can be used to generate the translation output.
Our approach does not depend on any specific language or translation engine, but simply requires an internal target representation containing structural and word information. Section 3 describes the translation engine and the knowledge resources used for the evaluation of our approach. The results of generating German output based on the translation of Japanese spoken-language utterances into English are summarized in Section 4.
In this approach, the English parse tree is mapped to a corresponding German one by substituting word phrases according to an English-to-German transfer dictionary and applying heuristic rules defining grammatical equivalences in both languages. Some comments on the feasibility of our approach and future perspectives are given in Section 5.
Translation Knowledge Recycling
Our aim is to find an inexpensive way to provide multilingual extensions of existing translation engines. A simple way to achieve this goal is the concatenation of the respective engines by using a text-based interface as illustrated in Figure 1. In this system, the source language (SL) is translated into an intermediate language (IL) by the first translation engine (TE1), and this translated text becomes the input of the second engine (TE2) that generates the output in the target language (TL).
Figure 1: Text-to-text interface
The drawbacks of this scenario are the high costs of developing both translation engines as well as error magnification due to the isolated translation steps. However, if we could find a way to reprocess the translation knowledge of the intermediate language to directly generate the output in the target language, the costs would be drastically reduced.
The risk of such a recycling step is the lack of linguistic knowledge required in the target language. Therefore, the reuse of translation knowledge makes sense only for closely related languages that share similar grammatical characteristics.
Language representatives
According to their historical relatedness, languages can be grouped into so-called language families as illustrated in Figure 2. The families are marked in bold face. The languages on the same "branch" share certain features not shared by languages of other families. Based on the identification of similar characteristics between languages within the same language family, we might be able to find a representative for each family.
Examples for such representatives could be English for the Germanic languages, Russian for Slavic languages, or Mandarin Chinese for Sino-Tibetan languages. This would enable us to focus development efforts on high-quality translation engines between representative languages. Accordingly, we could concentrate on complex translation problems between completely different languages, whereas related languages could be dealt with in a more ad-hoc way. Moreover, there are several languages such as Japanese and Korean that do not belong to the same family but are grammatically similar, and thus potential candidates for the recycling of translation knowledge.
Mapping of translation knowledge
The basic units required for the generation of the translation output are words, their inflectional characteristics, and the features determining the grammatical context in which they occur. On the word level, bilingual dictionaries can be used to define equivalent expressions. However, single words of the first language might correspond to more complex word phrases of the second language, and vice versa. Therefore, a word phrase dictionary is required to reduce word selection ambiguity caused by 1:1 word translation. Moreover, a morphological dictionary for the definition of the inflectional attributes of the target words is necessary for generating the translation output.
On the sentence structure level, the grammatical role of a specific word phrase is defined in the linguistic representation of the first language. Due to the relatedness of the languages, grammatical functions should be marked in a similar way in both languages. The identification of corresponding generation markers allows us to define rules mapping the linguistic knowledge from the first to the second representation. However, even if the grammatical functions are similar, the realization of the grammatical role during generation might differ between the languages, e.g., word order variations. This kind of language-dependent information has to be encoded in the generation process.
Recycling effect
Our approach of reusing existing translation knowledge leads to reduced costs for multi-lingual extension, because we can limit the number of language pairs to language representatives.
Moreover, the costs of multiple full-scale translation engines can be reduced to those of developing a transfer dictionary and a generation dictionary, and these knowledge resources are already frequently available, at least for common languages like English.
The most difficult part of the translation process is carried out within the translation engine, e.g., a Japaneseto-English translation engine has to deal with problems like the recovery of the sentence subject, which is frequently omitted in Japanese but required in English (Yamamoto & Sumita, 1998). Similar to English, German is also a language that requires a subject. Thus we could benefit from the Japanese-to-English efforts by simply mapping and reusing the recovered subject for the generation of German translations. Furthermore, the number of generation markers utilized in a specific language is limited. Therefore the compilation of mapping rules for related languages becomes inexpensive.
The disadvantage of knowledge resource recycling is the possible lack of translation knowledge required in the target language, i.e., grammatical information of the target language omitted or without any equivalence in the source language. How far this phenomenon limits the feasibility of our approach will be discussed in Section 4.3.
Framework
In Section 3.1 we give an overview of the translation engine. The knowledge resources and mapping algorithm of the proposed system are described in Section 3.2. Finally, an example of the reuse of translation knowledge is given in Section 3.3.
Translation Engine
The translation engine used for our experiments consists of a spoken-language machine translation system capable of bilingual translations between Japanese/English (JE). This transfer-driven translation system (TDMT) uses a constituent boundary parsing method (CBP) in an example-based framework. The input sentence is incrementally parsed by matching meaningful units of linguistic structure (patterns) with a chart-parsing algorithm. Given a set of translation examples, TDMT tries to find the "closest" examples to the structured input by using a semantic distance calculation (SDC) (Sumita et al., 1999).
By simulating the translation of the closest examples, the empirical transfer knowledge is applied to the source structure, resulting in a corresponding target structure, that can be used to generate the translation (cf. Figure 3).
Recycling system
The input of the proposed system (JeG) consists of the linguistic knowledge contained in the target representation of the JE system. The mapping algorithm for recycling the English translation knowledge is introduced in Section 3.2.3. First, it substitutes the English words in the English parse tree with corresponding German words by using a transfer dictionary (cf. Section 3.2.1). In the second step, the generation markers at each node of the parse tree are mapped by using a set of heuristic rules (cf. Section 3.2.2). The resulting German parse tree is then utilized to generate the translation output as described in Section 3.2.4.
Transfer Dictionary
The EG transfer dictionary for mapping English word compounds to corresponding German ones is created automatically from existing resources.
In order to reduce costs, we reused available Japanese-to-English (JE) and Japanese-to-German (JG) dictionaries created for the domain of our evaluation data by simply joining both dictionaries while using Japanese as the pivot language.
In general, any available EG dictionary could be used, but the joining of the JE (J=1 to E=n words) and the JG (J=1 to G=m words) dictionaries results in a word phrase dictionary for EG (E=n to G=m). Each entry consists of one or more part-of-speech tagged source words assigned to one or more target expressions, as illustrated in Figure 4.
Additional costs for hand-checking automatically created dictionaries cannot be avoided, but research efforts are already under way to minimize these costs (Bond, 2001).
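To make the pivot-based construction concrete, a sketch of joining a JE and a JG dictionary on the Japanese pivot entry is given below; the data structures and the toy entries are illustrative assumptions, not the actual dictionaries used here.

```python
from collections import defaultdict

def join_via_pivot(je_dict, jg_dict):
    """Join a Japanese->English and a Japanese->German dictionary on the
    Japanese entry, yielding an English-phrase -> German-phrase dictionary.
    Both inputs map a Japanese entry to a list of POS-tagged target phrases."""
    eg_dict = defaultdict(set)
    for ja_entry, english_phrases in je_dict.items():
        for german_phrase in jg_dict.get(ja_entry, []):
            for english_phrase in english_phrases:
                eg_dict[english_phrase].add(german_phrase)
    return dict(eg_dict)

# Hypothetical toy entries (not taken from the actual resources):
je = {"ドイツの": [("German", "ADJ")]}
jg = {"ドイツの": [("deutsch", "ADJEKTIV")]}
print(join_via_pivot(je, jg))   # {('German', 'ADJ'): {('deutsch', 'ADJEKTIV')}}
```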
Mapping Rules
The heuristic mapping rules are defined by hand. First, we extracted all grammatical markers used in the JE training data and assigned German equivalents, e.g., the English direct object marker OBJ is mapped to the accusative complement marker AKK-OBJ, as illustrated in Figure 5. In the second step, the created rules were verified by using a subset of the JE training data. One thousand utterances were translated by the JeG system, and the mapping rules were adjusted for translation errors in the context of the training sentences.
Figure 5: Mapping rules
An investigation into the resulting rule set revealed the following rule clustering according to their functionality.
• sentence structure, e.g., type of subordinated sentences • phrasal structure, e.g., word order within a phrase • inflectional marker, e.g., number or tense • omission, e.g., E markers without G equivalent
Mapping Algorithm
The mapping algorithm of the translation knowledge consists of two steps, as described in Figure 6.
Figure 6: Mapping algorithm

Step 1: map word sequence
(1) tree ← parse-tree(translate_JE(input));
(2) E-words ← extract_words(tree);
(3) until E-words = Ø do
(4)   (E-phrase, G-phrase) ← look_up_longest_match(E-words);
(5)   tree ← substitute(G-phrase, E-phrase, tree);
(6)   E-words ← remove(E-phrase, E-words);
(7) end(until);

Step 2: map generation marker
/* LHS - left_hand_side of rule */  /* RHS - right_hand_side of rule */
(8) depth-first(tree)
(9) foreach node in tree do
(10)   rule ← match_LHS(rules, node);
(11)   tree ← substitute(RHS, LHS, tree);
(12) end(foreach);
(13) end(depth-first);
(14) return tree;
First, the source words in the parse tree were replaced with corresponding target ones according to the word phrase dictionary. The sequence of words contained in the nodes was extracted from the parse tree. If this sequence could be matched in the dictionary, the respective source words were replaced with the corresponding target words. Otherwise, the word sequence was reduced from right to left by one word and the dictionary look-up was repeated until a match was found.
In the second step, the parse tree was traversed depth-first, substituting source with target generation markers according to the defined mapping rules. The left-hand side of each mapping rule was applied at each node, and in the case of a match the substructure was modified according to the right-hand side of the selected mapping rule.
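The following sketch mirrors this two-step procedure, with a greedy longest-match replacement of English word sequences followed by a depth-first pass that rewrites generation markers; the tree representation and the rule format are simplified assumptions for illustration only.

```python
def substitute_words(words, eg_dict):
    """Greedy longest-match replacement of English word sequences with
    German phrases, shrinking the candidate span from the right."""
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):          # longest span first
            phrase = tuple(words[i:j])
            if phrase in eg_dict:
                out.append(eg_dict[phrase])
                i = j
                break
        else:                                       # no dictionary match: keep the word
            out.append(words[i])
            i += 1
    return out

def map_markers(node, rules):
    """Depth-first traversal replacing source generation markers with the
    target marker of a matching rule, e.g. OBJ -> AKK-OBJ."""
    if isinstance(node, list):                      # node = [marker, child, child, ...]
        marker, *children = node
        marker = rules.get(marker, marker)
        return [marker] + [map_markers(c, rules) for c in children]
    return node                                     # leaf: a word

rules = {"OBJ": "AKK-OBJ"}                          # illustrative subset of the rule set
```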
Generation
In contrast to more configurational languages like English, languages with partially free word order like German impose some additional burden on the generation of the linguistic knowledge contained in the target representation. We utilize an approach to the clause syntax of the target language that employs the notion of topological fields, whereby sentence patterns are described as combinations of structural units. The linearization of these fields determines the clausal word order within the respective sentence pattern. The constituents of the target representation are assigned to these fields according to their grammatical role marked in the mapped parse tree (Paul et al., 1998).
In order to generate the mapped German translation knowledge and to take into account word order variations of German, we only had to extend the topological field definitions used for English.
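A minimal sketch of such a field-based linearization is shown below; the field inventory and its ordering are illustrative simplifications of the actual topological field definitions.

```python
# Illustrative field order for a German declarative clause whose subject
# fills the prefield; the real definitions cover more sentence patterns.
FIELD_ORDER = ["SUB", "V-FIN", "TIME", "PLACE", "V-INF"]

def linearize(constituents):
    """Order (field, phrase) pairs according to a topological field order."""
    rank = {field: i for i, field in enumerate(FIELD_ORDER)}
    ordered = sorted(constituents, key=lambda c: rank.get(c[0], len(FIELD_ORDER)))
    return " ".join(phrase for _, phrase in ordered)

print(linearize([("V-INF", "übernachten"), ("PLACE", "im Nara Hotel"),
                 ("TIME", "bis morgen"), ("V-FIN", "werde"), ("SUB", "Ich")]))
# -> Ich werde bis morgen im Nara Hotel übernachten
```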
On the word level, we used a morphological dictionary, automatically extracted from the CELEX database (Piepenbrock, 1995), to generate the surface words based on the grammatical context of the respective phrases.
Recycling Example
An example of the mapping of an English parse tree and the generation of the corresponding German translation is given in Figure 7.
Some generation markers, like the subject marker <1sg> (first person, singular), are simply passed along without any modification. Others, like the English inflection marker {ING+} (progressive form), do not have any equivalence in German, and thus are omitted. The majority of rules, however, assign markers for corresponding grammatical functions, e.g., the prepositional phrase (PP "until" …) is converted to a temporal expression TIME and (PP "at" …) is mapped as a locative expression PLACE in order to take into account word order variations of the underlying topological structure of the German target representation.
Evaluation
In order to prove the feasibility of our approach, we applied our system to the same data set (cf. Section 4.1) using the same criteria (cf. Section 4.2) as for the evaluation of the JE system (Sumita et al., 1999).
Evaluation data
The data used for our experiments consist of the ATR-ITL Speech and Language Database containing Japanese-English spoken-language dialogs in the travel domain (Takezawa, 1999).
Figure 8: Evaluation data sets
To evaluate our approach we utilize three different data sets as illustrated in Figure 8. Each set consists of 100 randomly selected utterances. The TRAIN data was used for the training of the JE translation engine and the JeG system. In contrast, the TEST set consists of JE training utterances unseen by the JeG system. Finally, the utterances of the OPEN set were used for the open evaluation of both systems.
Evaluation criteria
The JeG system was applied to all three data sets and the translation results were evaluated by two German natives with knowledge of Japanese based on the same guidelines used to evaluate the JE system.
First, the evaluators read only the translation and retained the information they gathered. Then they referred to the Japanese input and identified the main information that has to be expressed in the translation. The extent to which the main information in the Japanese input corresponds to the translation output is evaluated based on the following four ranking options.
(A) complete and accurate translation: all of the main information is covered and expressed naturally, and the translation is immediately understandable.
(B) fair translation: the information is partially lacking or incorrect. There are some grammar mistakes or missing or misleading parts. However, the main information in the Japanese input can be easily obtained from the translation.
(C) acceptable translation: at first glance, it is difficult to obtain the information in the Japanese input from the output. However, based on the context, it is possible to reconstruct parts of the information from the input.
(D) invalid translation: the primary information is lacking, seriously incorrect, or one cannot make any sense out of what is being said.
Evaluation results
The results of our experiments are summarized in Table 1. For the TRAIN set we achieved an acceptability of 100%, but the existence of 9% of rank C sentences shows that not all of the target phenomena could be covered accurately.
The TEST results show a large drop in accurate translations for correct English input sentences unseen by the system, but 79% are still at least fair and only 8% of the data were not acceptable.
Comparing the results of the OPEN test set, we see only a minor performance drop of up to 5% between the JE and JeG system, proving the feasibility of our recycling approach for related languages like English and German.
Furthermore, the implementation of the JeG system took only several months. Most of the time was spent on hand-checking the EG dictionary and verifying the heuristic rules. However, compared to the development costs for the translation engine, the results are quite promising.
Discussion
The evaluation results show that our approach still has to deal with translation problems like word disambiguation or structural differences even for related languages. However, the minor decrease in performance for open test data and low development costs demonstrates the feasibility of our approach to recycling translation knowledge for related languages.
The performance and coverage of our system depends on the utilized translation engine. However, the resources used for the mapping of translation knowledge are easy to extend and language-dependent target information, e.g. word inflection, can be handled with appropriate generation models for the target language. Therefore, upscaling the system to other domains should not lead to a tremendous increase in costs or decrease in the system performance.
In our experiments, we used English as the representative language for the Germanic language family, even if German would be a better choice due to its lexical richness. In that scenario, the omission of translation knowledge could be eased by mapping a richer representation to a poorer one. However, considering the available resources for other languages, English seems to be the most obvious choice.
Furthermore, we applied our recycling approach to the output language of our translation engine. However, we might also be able to apply our approach for languages related to the source language of an engine. Given a parser for the related language, we could map the internal representation to the source language and reuse our engine for the translation into the target language, e.g. from German to English to Japanese.
We also plan to apply our system to related languages outside the same language family, e.g. translation between Japanese and Italian through English.
References
Figure 2: Language families
Figure 3: JE translation knowledge
Figure 4: Word phrase dictionary
1:1  (ADJ "German") → (ADJEKTIV "deutsch");  (CN "hotel") → (NOMEN "Hotel")
1:m  (CN "vacancy") → (NP (ADJEKTIV "freies") (NOMEN "Zimmer"));  (V "hurry") → (VP (REFLEXIVPRONOMEN "sich") (VERB "beeilen"))
n:1  (ADJ "additional") (CN "charge") → (NOMEN "Aufpreis");  (CN "baggage") (CN "claim") (CN "area") → (NOMEN "Gepäckausgabe")
n:m  (BEV "be") (ADJ "different") → (VP (ADJEKTIV "verschieden") (HILFSVERB "sein"));  (V "turn") (ADV "left") → (PP (PRAEP "nach") (ADVERB "links")) (VERB "abbiegen")
Figure 7: Recycling example
"
Ich werde bis morgen im Nara Hotel übernachten" "I'll be staying at the Nara Hotel until tomorrow"
Table 1: Evaluation of the JeG system

rank    TRAIN   TEST   OPEN   OPEN (JE)
A        76%    58%    56%    56%
B        15%    21%    18%    23%
C         9%    13%    13%    11%
D         0%     8%    13%    10%
A+B      91%    79%    74%    79%
A+B+C   100%    92%    87%    90%
Bond, F., Sulong, R.B., Yamazaki, T., and Ogura, K. (2001). Design and Construction of a machine-tractable Japanese-Malay Dictionary. In Proc. of the Machine Translation Summit VIII (to appear). Santiago de Compostella, Spain.
Paul, M., Sumita, E., and Iida, H. (1998). Field Structure and Generation in Transfer-Driven Machine Translation. In Proc. of the 4th Annual Meeting of the NLP (pp. 504-507). Fukuoka, Japan.
Piepenbrock, R. (1995). CELEX Lexical Database (Dutch, English, German), Version 2.5. Max Planck Institute of Psycholinguistics. Nijmegen, Netherlands.
Sumita, E., Yamada, S., Yamamoto, K., Paul, M., Kashioka, K., Ishikawa, K., and Shirai, S. (1999). Solutions to Problems Inherent in Spoken-language Translation: The ATR-MATRIX approach. In Proc. of the Machine Translation Summit VII (pp. 229-235). Singapore.
Takezawa, T. (1999). Building a bilingual travel conversation database for speech translation research. In Proc. of Oriental COCOSDA Workshop.
Yamamoto, K. and Sumita, E. (1998). Feasibility Study for ellipsis resolution in dialogues by machine-learning techniques. In Proc. of the 17th COLING (pp. 1428-1434). Montreal, Canada. |
259,108,368 | Actively Supervised Clustering for Open Relation Extraction | Current clustering-based Open Relation Extraction (OpenRE) methods usually adopt a two-stage pipeline. The first stage simultaneously learns relation representations and assignments. The second stage manually labels several instances and thus names the relation for each cluster. However, unsupervised objectives struggle to optimize the model to derive accurate clustering assignments, and the number of clusters has to be supplied in advance. In this paper, we present a novel setting, named actively supervised clustering for OpenRE. Our insight lies in that clustering learning and relation labeling can be alternately performed, providing the necessary guidance for clustering without a significant increase in human effort. The key to the setting is selecting which instances to label. Instead of using classical active labeling strategies designed for fixed known classes, we propose a new strategy, which is applicable to dynamically discover clusters of unknown relations. Experimental results show that our method is able to discover almost all relational clusters in the data and improve the SOTA methods by 10.3% and 5.2%, on two datasets respectively. | [
53080736,
7658338,
184486746,
214802812,
195477534
] | Actively Supervised Clustering for Open Relation Extraction
Long Papers, July 9-14, 2023
Jun Zhao
School of Computer Science
Fudan University
Yongxin Zhang [email protected]
School of Computer Science
Fudan University
Qi Zhang
†
Tao Gui [email protected]
School of Computer Science
Fudan University
Institute of Modern Languages and Linguistics
Fudan University
†
Zhongyu Wei [email protected]
School of Data Science
Fudan University
Minlong Peng
Cognitive Computing Lab Baidu Research
Mingming Sun
Cognitive Computing Lab Baidu Research
Actively Supervised Clustering for Open Relation Extraction
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), July 9-14, 2023
Current clustering-based Open Relation Extraction (OpenRE) methods usually adopt a two-stage pipeline. The first stage simultaneously learns relation representations and assignments. The second stage manually labels several instances and thus names the relation for each cluster. However, unsupervised objectives struggle to optimize the model to derive accurate clustering assignments, and the number of clusters has to be supplied in advance. In this paper, we present a novel setting, named actively supervised clustering for OpenRE. Our insight lies in that clustering learning and relation labeling can be alternately performed, providing the necessary guidance for clustering without a significant increase in human effort. The key to the setting is selecting which instances to label. Instead of using classical active labeling strategies designed for fixed known classes, we propose a new strategy, which is applicable to dynamically discover clusters of unknown relations. Experimental results show that our method is able to discover almost all relational clusters in the data and improve the SOTA methods by 10.3% and 5.2%, on two datasets respectively.
* Equal Contributions. † Corresponding authors.
Figure 2: Overview of the training pipeline for our actively supervised clustering setting. In each iteration, a few key points are selected for relation labeling. The rest instances are clustered to the nearest key points. Some highly reliable cluster assignments are used as pseudo-labels for relation representation learning.
Introduction
Relation extraction (RE) aims to detect and extract the potential relation between the given entity pair in unstructured text. The extracted relation facts play a vital role in many downstream applications, such as knowledge base population (Ji and Grishman, 2011), search engine (Schlichtkrull et al., 2018), and question answering (Yu et al., 2017). To deal with the emerging unknown relational types in the real world, Open Relation Extraction (OpenRE) has been widely studied.
The clustering-based unsupervised relation discovery is a classical paradigm for OpenRE (Yao et al., 2011; Marcheggiani and Titov, 2016; Elsahar et al., 2017). It can discover potential relations, by grouping several instances into relational clusters, and then manually labeling a few instances to name the relation of each cluster. Recently, Hu et al. (2020) introduced a deep clustering framework (Caron et al., 2018) into OpenRE. They iteratively cluster the relation representations that are produced by large pretrained models and use the cluster assignments as pseudo-labels to refine the representations. Unfortunately, the above unsupervised methods struggle to learn good enough representations, and the cluster assignments are error-prone. When multiple relations are mixed in a cluster, it becomes difficult to name the cluster. Hence, instead of regarding OpenRE as a totally unsupervised task, researchers leverage the labeled data of predefined relations to provide explicit supervision signals for clustering learning (Wu et al., 2019; Zhao et al., 2021), and achieve superior results. Different from the above two-stage methods, in this work, we present a new setting named actively supervised clustering for OpenRE (ASCORE). As shown in fig. 1, our insight lies in that clustering learning (i.e., deep clustering) and relation labeling can be alternately performed. In an iteration, a small number of key instances are selected for labeling. The unknown relations expressed by these instances are correspondingly discovered. More importantly, these labeled instances can provide explicit supervisory signals for clustering learning. The improved relation representations form a better cluster structure, which in turn is able to benefit the discovery of the neglected relations. Since potential relations are dynamically discovered in iterations, the number of clusters does not need to be provided in advance.
Figure 1: Compared with the existing unsupervised two-stage methods, our method can provide explicit supervision for clustering by alternately performing clustering learning and relation labeling. Note that the human effort of the two settings is comparable.
Along with this setting, we design an active labeling strategy tailored for clustering. First, all instances are encoded to points in the representation space, where the clustering is performed. The goal of the strategy is to select the most informative points for labeling. Intuitively, two points that are far from each other in representation space usually express different relations. To discover as many relations as possible, we introduce a distance regularization to the strategy, so that diversified relation discovery can be facilitated. To prevent over-fitting caused by training with limited active labeled instances, all the selected key points are required to be the points of maximum local density. By doing so, a large number of high-quality pseudo-labels can be obtained, by assigning active labels to unlabeled data in a small neighborhood. To mitigate the error propagation issue, different loss functions are assigned to active labels and pseudo-labels with different reliability for clustering learning. Experimental results show that (1) the actively supervised method improves the SOTA two-stage methods by a large margin without a significant increase in human effort. (2) the proposed active strategy can discover more relational clusters, compared with the classical active strategy.
To summarize, the main contributions of this work are as follows: (1) We present a new setting named actively supervised clustering for OpenRE, providing the necessary guidance for clustering without a significant increase in human effort. (2) Design of a new active labeling strategy tailored for clustering, that can effectively discover potential relational clusters in unlabeled data. (3) This method improves the SOTA two-stage methods by 10.3% and 5.2% on two well-known datasets, respectively.
Related Work
Clustering-based OpenRE: The clustering-based paradigm considers relation discovery as a two-stage pipeline, which clusters relational data first, and then manually labels relational semantics for each cluster. Conventional methods cluster instances by human-defined linguistic features (Yao et al., 2011; Marcheggiani and Titov, 2016; Elsahar et al., 2017), such as entity words/types, dependency paths, trigger words, and context POS tags. Recently, many studies have shown that pretrained models learn diversified linguistic knowledge (Jawahar et al., 2019; Clark et al., 2019; Goldberg, 2019; Zhao et al., 2022). Hu et al. (2020) leverage the self-supervised signals provided by the pretrained model to iteratively learn relation representations and optimize clustering. Due to the lack of strong supervision, it is difficult for the above methods to produce satisfactory clustering results. Although some works (Wu et al., 2019; Zhao et al., 2021) try to use the labeled data of predefined relations to complete the missing supervision, the semantic gap between predefined and open relations leads to a negative clustering bias, especially when these relations come from different domains (Zhao et al., 2021). By performing clustering learning and relation labeling alternately, our actively supervised method can provide strong supervision and improve the two-stage methods by a large margin. In the main results (sec. 5), we achieve this improvement at the cost of only two active labels for each relation on average. For two-stage methods, relation labeling for each cluster requires at least one (usually more) instance to be manually observed. Therefore, there is no significant increase in human effort.
Active Learning: Active learning is a research field with high relevance to the proposed methods. In this research field, a classical method is uncertainty-based sampling (Roth and Small, 2006; Wang and Shang, 2014a; Tong and Koller, 2001). The uncertainty can be defined based on the posterior probability of a predicted class or the distances to the decision boundaries. In the context of deep learning, MC Dropout (Gal et al., 2017) is an effective way for uncertainty estimation, but its computational inefficiency limits its application to large-scale datasets. Recently, representative sampling is attracting lots of attention (Sener and Savarese, 2018; Ash et al., 2019a), which selects data points that represent the distribution of an unlabeled pool. Unlike the classical labeling strategies designed for fixed classes, the proposed strategy is encouraged to discover new relational clusters while improving the clustering of relations that have been discovered.
Approach
In this work, we present a new setting named actively supervised clustering for OpenRE (ASCORE), which fuses the isolated two-stage pipeline to guide clustering learning. Fig. 2 illustrates the training pipeline. The OpenRE problem addressed in this work is formally stated as follows. Given as input an open relational dataset D = {x_i | i = 1, ..., N}, the goal is to discover and label the potential relations R = {r_i | i = 1, ..., K} in the open data and cluster the corresponding instances. Note that the number of relations K in D is unknown.
Overview
The ASCORE setting is based on deep clustering (Caron et al., 2018), a common practice for OpenRE (Hu et al., 2020; Zhao et al., 2021) that iteratively clusters the representations of input instances and uses the cluster assignments as pseudo-labels to learn the relation representations. We introduce explicit supervision to deep clustering by alternately performing clustering learning and relation labeling. The actively labeled points can serve as a basis to facilitate accurate pseudo-label estimation and improve representation learning. The improved relational representation in turn benefits relation labeling to discover more relations. As illustrated in Figure 2, the training pipeline of ASCORE consists of the following steps:
Encoding Step: This step aims to obtain the relation representation h_i of each input instance x_i ∈ D, laying the groundwork for clustering and relation discovery. First, the contextual information of x_i is encoded into the entity pair representation h^ent_i, using a pretrained BERT (Devlin et al., 2018) encoder. To avoid data sparsity and the low efficiency of clustering in high-dimensional space, an autoencoder is used to transform h^ent_i into a low-dimensional clustering-friendly representation h_i.
Labeling Step: This step aims to discover potential relations in the open dataset D and guide clustering. At the s-th iteration, a set D*_s ⊂ D of B key points is actively labeled. These key points are required to be local maxima in density, so a large number of high-quality pseudo-labels can be obtained by assigning active labels to unlabeled data in a small neighborhood. Instead of focusing only on improving the clustering of the discovered relations, all key points in D* = D*_1 ∪ ... ∪ D*_s are required to be far away from each other to facilitate the discovery of new relations.
Learning Step: This step aims to learn to cluster the relational data, using the actively labeled set D*. Specifically, each unlabeled point x_i ∈ D is clustered to the nearest key point x*_j ∈ D* and the pseudo-label is ŷ_i = y*_j. The reliability of ŷ_i increases as the distance in representation space between x_i and x*_j decreases. Cross-entropy loss (resp. divergence-based contrastive loss) is used for pseudo-labels with high (resp. moderate) reliability, to optimize relation representations and thus to improve clustering. With the help of active supervision, separated sub-clusters expressing the same relation approach each other, while mixed sub-clusters expressing different relations are separated. It is inherently difficult for existing unsupervised methods to handle such errors.
The above three steps are performed iteratively to gradually improve the model performance. In the following sections, we will elaborate on the model structure, labeling strategy, and training methods involved in the above three steps.
Relation Representation Encoder
Given an input instance x_i, the encoder f maps x_i into a fixed-length representation h^ent_i = f(x_i) ∈ R^d. The encoder f is implemented as BERT. Specifically:
h_1, ..., h_n = BERT(w_1, ..., w_n)    (1)
h^ent = ⟨h_[E1] | h_[E2]⟩    (2)
where (h_[E1], h_[E2]) are the hidden states at the positions of the entity markers and ⟨·|·⟩ is the concatenation operator. However, clustering data in high-dimensional space is time-consuming and the data sparsity leads to sub-optimal clustering results. Therefore, an autoencoder is trained by a reconstruction loss L_rec and the encoder part is retained to transform the high-dimensional h^ent into a low-dimensional clustering-friendly relation representation h.
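A minimal PyTorch-style sketch of this encoder is given below; the marker-position indexing, the hidden sizes and the single linear layer used for each half of the autoencoder are simplifying assumptions, not the exact architecture of the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class RelationEncoder(nn.Module):
    """Entity-pair representation h_ent = [h_[E1] | h_[E2]], followed by an
    autoencoder whose bottleneck gives the clustering-friendly h."""
    def __init__(self, hidden=768, low_dim=64):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.enc = nn.Linear(2 * hidden, low_dim)   # autoencoder: encoder part
        self.dec = nn.Linear(low_dim, 2 * hidden)   # autoencoder: decoder part

    def forward(self, input_ids, attention_mask, e1_pos, e2_pos):
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(input_ids.size(0), device=input_ids.device)
        h_ent = torch.cat([states[batch, e1_pos], states[batch, e2_pos]], dim=-1)
        h = self.enc(h_ent)                              # low-dimensional relation repr.
        rec_loss = ((self.dec(h) - h_ent) ** 2).mean()   # reconstruction loss L_rec
        return h_ent, h, rec_loss
```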
Key Point Selection Module
In this section, the proposed key point selection method will be explained, including the labeling strategy and the conditions for stopping annotation. Labeling strategy. The labeling strategy is based on the following criteria. First, the selected key points are local maxima in density. Generally, labels do not change drastically within a small neighborhood; therefore, this first criterion makes it possible to find a lot of unlabeled data within a small neighborhood of each key point and to accurately estimate their pseudo-labels. To find these local maxima, we calculate the euclidean distance between the relation representations {h_i}, i = 1, 2, ..., N, obtained in the encoding step, and construct the distance matrix D ∈ R^{N×N} as follows:
D_ij = ||h_i − h_j||_2^2,    (3)
where D_ij is the distance between two relational instances x_i and x_j. The potentially high computational cost of processing large-scale datasets can be addressed by sampling a small subset. Based on the distance matrix D, a density ρ_i is further defined for each relation instance x_i; a larger ρ_i indicates a larger number of instances around x_i:
ρ_i = Σ_{j=1}^{N} sign(D_c − D_ij),    (4)
where sign() is the sign function and D c is a threshold.
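The distance and density computation of Eqs. (3) and (4) can be sketched as follows; the quantile rule for D_c mirrors the top-40% choice reported in the implementation details, and the subsampling of large datasets mentioned above is omitted.

```python
import numpy as np

def density_scores(H, top_ratio=0.4):
    """H: (N, d) relation representations from the encoding step."""
    # Pairwise squared Euclidean distances, Eq. (3).
    sq = (H ** 2).sum(axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * H @ H.T
    np.fill_diagonal(D, 0.0)
    # D_c: the distance value ranked within the top `top_ratio` of D, largest first.
    D_c = np.quantile(D, 1.0 - top_ratio)
    # Density rho_i = sum_j sign(D_c - D_ij), Eq. (4).
    rho = np.sign(D_c - D).sum(axis=1)
    return D, rho, D_c
```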
To avoid the problem that all the labeled points are concentrated in several high-density areas, missing most long-tail relations, the second criterion is to keep the key points away from each other in the clustering space. Specifically, a sparsity index ξ_i is defined for each instance x_i ∈ D.
ξ_i = min_{j: ρ_j > ρ_i} D_ij,  if ρ_i < ρ_max
    = max_j D_ij,               if ρ_i = ρ_max    (5)
Intuitively, a larger ξ_i indicates that the instance x_i is a local maximum of density within a larger radius. Based on the density ρ_i and sparsity index ξ_i of each instance, the labeling strategy can be formally stated as follows: in each iteration, choose the B points with the highest density among those whose sparsity index exceeds ξ_c.
D*_s = TopB_ρ {x_i | ξ_i > ξ_c, x_i ∈ D}    (6)
To effectively support iterative labeling and maintain the diversity of key points, in the s-th iteration each new key point x_i should be as far as possible from the existing key points in D* = D*_1 ∪ ... ∪ D*_{s−1}. Therefore, for each instance x_i, the sparsity index is modified as follows:
d = min_{x_j ∈ D*} ||h_i − h_j||_2^2    (7)
ξ_i = min(ξ_i, d).    (8)
After the s-th iteration, the result is the new actively labeled set D* = D* ∪ D*_s. Conditions for stopping annotation. Too few queries will lead to missing some relations, while too many queries will lead to unnecessary costs. Here we give a simple strategy to determine when to stop labeling. (1) First, users can determine the maximum number of actively labeled instances, N*, based on their annotation budget. (2) Since our labeling strategy takes the diversity of key points into account, new relations are constantly discovered during the initial iterations. When no new relations are discovered in two or more consecutive iterations (meaning that most relations have already been found), labeling should be stopped.
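A sketch of one labeling iteration, combining the density, the sparsity index of Eqs. (5), (7) and (8), and the Top-B selection of Eq. (6), is given below; the default values of B and ξ_c are placeholders and the helper name is ours.

```python
import numpy as np

def select_key_points(D, rho, H, existing_keys, B=20, xi_c=1.0):
    """One labeling iteration (Eqs. 5-8): pick B dense, mutually distant key points.
    D: (N, N) distances, rho: (N,) densities, existing_keys: indices labeled in earlier iterations."""
    N = len(rho)
    xi = np.empty(N)
    for i in range(N):
        denser = np.where(rho > rho[i])[0]
        # Eq. (5): distance to the nearest denser point, or the max distance for the densest point.
        xi[i] = D[i, denser].min() if len(denser) else D[i].max()
    if len(existing_keys):
        # Eqs. (7)-(8): new key points must also stay far from previously labeled key points.
        diff = H[:, None, :] - H[np.asarray(existing_keys)][None, :, :]
        xi = np.minimum(xi, (diff ** 2).sum(-1).min(axis=1))
    candidates = np.where(xi > xi_c)[0]
    # Eq. (6): among sufficiently sparse candidates, take the B densest ones.
    return candidates[np.argsort(-rho[candidates])[:B]]
```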
Training Methods
Pseudo Label Estimation. Given the actively labeled set D*, each of the remaining unlabeled points x_i ∈ D is clustered to the nearest key point x*_j ∈ D*, and ŷ_i is estimated as y*_j. Intuitively, the accuracy of the pseudo label decreases as the distance between x_i and x*_j increases. The reliability r of a pseudo label is defined as follows:
r_i = ||h_i − h*_j||_2^{-1},    (9)
where h_i and h*_j denote the representations of x_i and x*_j, respectively, and || · ||_2^{-1} denotes the reciprocal of the L2 norm. Model Optimization. Given the pseudo label ŷ_i and its reliability r_i for each unlabeled instance x_i ∈ D, the relation representation is refined to improve clustering in the next iteration. Specifically, we first select a highly reliable subset D_h = {(x_i, ŷ_i) | r_i > r_h} and use a softmax classifier to convert the entity pair representation h^ent_i into a probability distribution over the discovered relations (denoted as P_i). The model is optimized with the cross-entropy loss for fast convergence:
L_ce = CrossEntropy(ŷ_i, P_i).    (10)
Note that the number of instances in D h is small.
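The pseudo-label estimation of Eq. (9) and the cross-entropy loss of Eq. (10) on the highly reliable subset can be sketched as follows; the reliability threshold r_h is a placeholder and the helper names are ours.

```python
import torch
import torch.nn.functional as F

def pseudo_labels(H, H_key, y_key):
    """Assign each point to its nearest key point and score reliability, Eq. (9)."""
    dist = torch.cdist(H, H_key, p=2)                         # (N, |D*|) Euclidean distances
    nearest = dist.argmin(dim=1)
    y_hat = y_key[nearest]                                    # pseudo labels
    r = 1.0 / dist.gather(1, nearest.unsqueeze(1)).squeeze(1).clamp(min=1e-8)
    return y_hat, r

def high_reliability_ce(logits, y_hat, r, r_h):
    """Cross-entropy on the highly reliable subset D_h, Eq. (10)."""
    mask = r > r_h
    if mask.sum() == 0:
        return logits.new_zeros(())
    return F.cross_entropy(logits[mask], y_hat[mask])
```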
To avoid the model learning only simple features, the threshold is broadened, and a moderately reliable subset D_m = {(x_i, ŷ_i) | r_i > r_m} containing more instances is built.
To mitigate the negative impact of noise in D m , a binary contrastive loss is further introduced:
L_bce = L_{ŷ_i=ŷ_j} + L_{ŷ_i≠ŷ_j}    (11)
L_{ŷ_i=ŷ_j} = D_kl(P*_i || P_j) + D_kl(P_i || P*_j)    (12)
L_{ŷ_i≠ŷ_j} = H_σ(D_kl(P*_i || P_j)) + H_σ(D_kl(P_i || P*_j))    (13)
H_σ(x) = max(0, σ − x),    (14)
Algorithm 1: ASCORE
Input: An open dataset D = {x_i}_{i=1}^N
1 repeat
2   Perform the encoding step (sec. 3.2). Get relation representations {h_i}_{i=1}^N for the instances in D;
3   Perform the labeling step (sec. 3.3). Get the s-th actively labeled set D* = D* ∪ D*_s;
4   Perform the learning step (sec. 3.4). Estimate pseudo labels and reliability scores, and use the corresponding loss for representation learning;
5 until convergence;
6 Return the discovered relation set R and the cluster assignment ŷ_i ∈ R of each instance x_i ∈ D.
where σ is a hyperparameter and P* denotes that P is treated as a constant, which makes the loss asymmetric. D_kl denotes the KL divergence. The probability distributions P are pulled closer together or pushed farther apart depending on whether the labels of the sample pair are the same. In each iteration, if the annotator finds new relations, the parameters of the softmax classifier are reinitialized to handle the new relations. Alg. 1 summarizes the proposed method.
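A sketch of the binary contrastive loss of Eqs. (11)-(14) is shown below; treating the detached distribution as the constant P* is our reading of the asymmetry described above, and σ is a placeholder value.

```python
import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-8):
    """Row-wise KL divergence between two probability distributions."""
    return (p * ((p + eps) / (q + eps)).log()).sum(dim=-1)

def binary_contrastive_loss(P_i, P_j, same_label, sigma=2.0):
    """Eqs. (11)-(14): pull pairs with the same pseudo label together, push the rest apart.
    P_i, P_j: (B, C) softmax outputs of a pair; detached copies play the role of the constant P*."""
    pos = kl(P_i.detach(), P_j) + kl(P_i, P_j.detach())                                  # Eq. (12)
    neg = F.relu(sigma - kl(P_i.detach(), P_j)) + F.relu(sigma - kl(P_i, P_j.detach()))  # Eqs. (13)-(14)
    return torch.where(same_label, pos, neg).mean()                                      # Eq. (11)
```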
Experimental Setup
Datasets
Experiments are conducted on two standard datasets and one constructed dataset. Note that the compared baselines follow different settings: as will be described in sec. 4.2, RSN and RoCORE leverage labeled data of predefined relations, while RW-HAC and SelfORE follow an unsupervised setting. To fairly compare all methods in a uniform setting, the first half of the relations in each dataset is held out as predefined relations; specifically, 21 relations in TACRED and 40 in FewRel and FewRel-LT.
TACRED (Zhang et al., 2017). TACRED is a large-scale manually annotated RE dataset covering 41 relations. Following Wu et al. (2019), Hu et al. (2020) and Zhao et al. (2021), the instances labeled as no_relation are removed and the rest are used for training and evaluation.
FewRel (Han et al., 2018). FewRel is a manually annotated dataset that contains 80 types of relations, each of which has 700 instances. However, in real-world OpenRE scenarios, unseen relations in unlabeled data usually follow a long-tailed distribution. To eliminate this inconsistency and accurately evaluate model performance in real-world scenarios, we construct a long-tail FewRel dataset as follows.
FewRel-LT. The FewRel-LongTail dataset. We number the 40 unseen relations in FewRel from 0 to 39 and set the number of samples of relation id according to y = 700 / (0.5 · id + 1). The number of samples in the predefined subset remains unchanged.
Compared Methods
To evaluate the effectiveness of the actively supervised clustering, the following state-of-the-art two-stage OpenRE methods are used for comparison. HAC with Re-weighted Word Embeddings (RW-HAC) (Elsahar et al., 2017). A clustering-based OpenRE method. The model constructs relational features based on entity types and the weighted sum of pretrained word embeddings. Relational Siamese Network (RSN) (Wu et al., 2019). This method learns similarity metrics of relations from labeled data of predefined relations and then transfers the relational knowledge to identify novel relations in unlabeled data. Self-supervised Feature Learning for OpenRE (SelfORE) (Hu et al., 2020). SelfORE exploits weak, self-supervised signals in a pretrained language model for adaptive clustering on contextualized relational features. A Relation-oriented Clustering Method (RoCORE) (Zhao et al., 2021). RoCORE leverages the labeled data of predefined relations to learn a clustering-friendly representation, which is used for new relation discovery.
To show the superiority of the proposed labeling strategy, the actively supervised clustering is combined with the following classical active learning strategies for comparison: RANDOM, CONFIDENCE (Wang and Shang, 2014b), MARGIN (Roth and Small, 2006), ENTROPY (Wang and Shang, 2014a) and GRADIENT (Ash et al., 2019b). We provide a brief introduction to these methods in appendix A.1.
Implementation Details
Following Hu et al. (2020) and Wu et al. (2019), 20% of the data in each dataset are held out for validation and hyperparameter selection. We use Adam (Kingma and Ba, 2014) as the optimizer, with a learning rate of 1e−4 and a batch size of 100 for all datasets. The threshold D_c is given by the value of the element ranked in the top 40% of D from large to small. In each iteration, we label B = 20 samples. ξ_c is set to the value at which the number of candidates is 1.2B. The important hyperparameters r_h and r_m are analyzed in sec. 6.3. For a fair comparison, all active strategies select the same number of key points for labeling: 40, 80 and 80 key points are labeled on TACRED, FewRel, and FewRel-LT, respectively. All experiments are conducted with PyTorch 1.7.0, using an NVIDIA GeForce RTX 3090 with 24GB memory.

Main Results

Table 1 shows the model performance on the three datasets. In this section, the analysis focuses on the following two questions.
Does inaccurate estimation of the number of relations have an impact on clustering?
One drawback of most existing two-stage OpenRE methods is that the number of clusters K has to be given in advance, which is impractical in real applications. When K is underestimated, the clustering performance of the SOTA unsupervised method, SelfORE, on the three datasets decreases by an average of 7.13%, while the same metric for RoCORE, the SOTA supervised method, drops by 18.10%. Furthermore, an extremely unbalanced precision-recall trade-off is observed in the B³ metric (much lower precision and higher recall), which indicates that the model tends to mix multiple relations in the same cluster. Such clustering results obviously have a negative impact on relation labeling: it is difficult to determine which relation a mixed cluster corresponds to. When K is overestimated (due to space limitations, please see table 4 for the overestimation results), the same relation tends to be clustered into multiple subclusters, and repeatedly labeling these subclusters brings a significant increase in human effort. In contrast, ASCORE dynamically discovers relational clusters through active iteration, breaking the impractical assumption that K is known in advance.

Table 1: Main results on three relation extraction datasets. The two markers represent that the number of relations is known and unknown, respectively (please see appendix A.2 for more details). U, P and A respectively indicate the Unsupervised setting, the setting supervised by Predefined relations, and the Actively supervised setting. The proposed method outperforms the SOTA methods and does not need the number of clusters to be specified in advance.
Is the actively supervised setting better than the two-stage setting?
The two settings are compared, from the following two perspectives.
In terms of clustering performance, the actively labeled data provide valuable supervision signals for clustering learning. Compared with RoCORE, a strong baseline supervised by predefined relations, the proposed method improves the four metrics by an average of 10.3% and 5.2% on the long-tail TACRED and FewRel-LT, respectively. Long-tail relation distributions are very common in the real world. On the uniform FewRel dataset, ASCORE achieves comparable results. It is worth noting that the number of clusters K has to be supplied to RoCORE; when K is unknown, the improvement becomes even larger.
Regarding labeling costs, both settings are comparable. Note that in the main results, only two instances per relation are labeled on average. For the two-stage methods, in order to label the relational semantics of a cluster, the annotator has to observe at least one sample. Thus, ASCORE does not lead to a significant increase in human effort.
The Effect of Numbers of Actively Labeled Instances
In order to evaluate the labeling strategies more comprehensively, experiments are conducted that compare them while varying the number of actively labeled instances, N*. Figure 3 shows the effect of N*. Surprisingly, the random strategy turns out to be a very competitive baseline that beats most of the classical labeling strategies for different N*. This suggests that classical labeling strategies may be better suited to tasks with known and fixed categories. Although the proposed strategy consistently outperforms all baselines, it has clearly not been fully optimized; we believe it is nevertheless sufficient to serve as a reasonable baseline for the actively supervised clustering setting and to provide useful guidance for future research in this field. Additionally, as N* increases, the performance of the model improves, but the growth rate gradually slows, meaning that the return on human effort gradually decreases. Therefore, for users with limited budgets, discovering the primary relations through only a few queries is also a good choice.
Hyperparameter Analysis
In this section, we study the effects of the reliability thresholds r_h and r_m on optimization. Their values are given by the values of the elements ranked at θ_ce% and θ_bce% from small to large. From Figure 4 we can see that: (1) When θ_ce and θ_bce gradually increase from a small value, more training data are used for model optimization, and the performance of the model gradually improves. (2) When the values exceed a certain threshold, further increasing θ_ce and θ_bce introduces more errors into the optimization, which degrades the performance of the model. (3) Compared with L_bce, the L_ce loss makes the model converge faster, so the optimal threshold of L_ce should be smaller than that of L_bce to prevent overfitting to wrong labels.
Limitations
Considering that the gold labels of all instances are given in the datasets, we directly use these labels as manual labels without performing the manual labeling process. This practice implicitly assumes that all the manual labels are correct. However, as the labeling scale increases, problems such as (1) inconsistent labeling granularity across annotators and (2) noise in manual labels gradually emerge. How to effectively improve labeling quality and the robustness of the clustering model is worthy of further attention.
A Appendix
A.1 Compared Active Labeling Strategy
To show the superiority of the proposed labeling strategy, the actively supervised clustering is combined with the following classical active learning strategies for comparison.
RANDOM: the naive baseline of randomly selecting k samples to query labels.
CONFIDENCE (Wang and Shang, 2014b): an uncertainty-based active learning algorithm that selects the k samples with the smallest predicted class probability max{f_θ(x)_i}_{i=1,...,C}.
MARGIN (Roth and Small, 2006): an uncertainty-based active learning algorithm that selects the bottom k examples sorted by the multiclass margin, defined as f_θ(x)_ŷ − f_θ(x)_y′, where ŷ and y′ are the indices of the largest and second largest entries of f_θ(x).
ENTROPY (Wang and Shang, 2014a): an uncertainty-based active learning algorithm that selects the top k samples according to the entropy of the sample's class distribution.
GRADIENT (Ash et al., 2019b): a loss-based active learning algorithm, in which uncertainty is measured as the gradient magnitude with respect to the parameters in the output layer.
A.2 Additional Results
In this section, more detailed experimental settings and results are given. Specifically, in the TACRED dataset, 21 relations are held out as open relations to be discovered; in the underestimation (resp. overestimation) setting, we assume that the number of clusters is 10 (resp. 40). In the FewRel and FewRel-LT datasets, 40 relations are held out as open relations, and we assume that the number of clusters is 20 (resp. 80) for the underestimation (resp. overestimation) setting. The results of the two settings are listed in tab. 4. When K is underestimated, the precision of B³ is far lower than the recall, which indicates that the model tends to mix multiple relations in the same cluster. When K is overestimated, the recall is far lower than the precision, which indicates that the same relation tends to be clustered into multiple subclusters. Although the F1 of the B³ metric seems tolerable, such imbalanced clustering assignments cause great difficulties in relation labeling. If a cluster contains more than one relation, labeling the cluster as any single relation will lead to the misidentification of the other relations. If a relation is clustered into multiple sub-clusters, the annotators have to label the same relation repeatedly, which leads to a significant increase in labeling costs.

Table 4: Main results on three relation extraction datasets. The two markers represent that the number of relation types in unlabeled data is correctly and incorrectly estimated, respectively. In addition, − and + denote underestimation and overestimation, respectively.
ACL 2023 Responsible NLP Checklist
A For every submission:
A1. Did you describe the limitations of your work?
we discuss the limitations in the Limitations section.
A2. Did you discuss any potential risks of your work?
we discuss the risks in the Limitations section.
A3. Do the abstract and introduction summarize the paper's main claims?
In the abstract and introduction section.
A4. Have you used AI writing assistants when working on this paper?
Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? section 4
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? section 4
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? section 4,5
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? section 4 D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
Figure 3: Performance with different numbers of actively labeled instances.
Figure 4: Performance with different hyperparameter settings.
B. Did you use or create scientific artifacts?
section 4
B1. Did you cite the creators of artifacts you used?
section 4
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
section 4
Table 2: The number of relations discovered by the labeling strategies (GRAD. as a reference point).

Analysis and Discussions

Analysis on Labeling Strategy

The main results (sec. 5) have shown the advantages of the proposed actively supervised clustering setting over the two-stage setting. However, some readers may still consider the comparison across settings unfair. To further address this concern and show the effectiveness of our labeling strategy, we combine the actively supervised clustering setting with the various active labeling strategies and compare them in terms of relation discovery and clustering performance. Note that the number of key points selected by each strategy is the same (two points per relation on average). The results are shown in tab. 2 and tab. 3. It can be seen from tab. 2 that the proposed labeling strategy finds the most relations. Different from the classical strategies, which focus only on improving the recognition of relations that have already been discovered, the proposed strategy appropriately explores new relations through distance regularization, which is particularly beneficial for long-tail relation discovery in real applications. Additionally, tab. 3 shows that this strategy is also the best in terms of clustering performance. Benefitting from assigning reasonable loss functions to pseudo labels with different reliability, more pseudo labels can be used for learning without significantly increasing the risk of over-fitting to noise.

Table 3: Comparison results of labeling strategies on three datasets.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank.C Did you run computational experiments?
section 4
Conclusions
In this work, we present a new setting, named actively supervised clustering for OpenRE, which provides the necessary guidance for clustering without a significant increase in human effort. Along with this setting, a labeling strategy tailored for clustering is proposed, maximizing clustering performance while discovering as many relations as possible. Different loss functions are assigned to pseudo labels with different reliability, which mitigates the risk of over-fitting to noise in pseudo labels. Experimental results show that this method significantly outperforms the existing two-stage methods for OpenRE.
Acknowledgements
The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by the National Natural Science Foundation of China (No. 62076069, 62206057, 61976056), the Shanghai Rising-Star Program (23QA1400200), and the Natural Science Foundation of Shanghai (23ZR1403500).
Deep batch active learning by diverse, uncertain gradient lower bounds. T Jordan, Chicheng Ash, Akshay Zhang, John Krishnamurthy, Alekh Langford, Agarwal, arXiv:1906.03671arXiv preprintJordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2019a. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv preprint arXiv:1906.03671.
Deep batch active learning by diverse. Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, Alekh Agarwal, uncertain gradient lower bounds. CoRR, abs/1906.03671Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2019b. Deep batch active learning by diverse, uncertain gradient lower bounds. CoRR, abs/1906.03671.
Deep clustering for unsupervised learning of visual features. Mathilde Caron, Piotr Bojanowski, Armand Joulin, Matthijs Douze, Proceedings of the European conference on computer vision (ECCV). the European conference on computer vision (ECCV)Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In Proceedings of the European conference on computer vision (ECCV), pages 132-149.
What does BERT look at? an analysis of BERT's attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D Manning, 10.18653/v1/W19-4828Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPFlorence, ItalyAssociation for Computational LinguisticsKevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.
BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, abs/1810.04805CoRRJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Unsupervised open relation extraction. Hady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, Frederique Laforest, European Semantic Web Conference. SpringerHady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. 2017. Unsupervised open relation extraction. In European Semantic Web Conference, pages 12-16. Springer.
Deep bayesian active learning with image data. Yarin Gal, Riashat Islam, Zoubin Ghahramani, PMLRInternational Conference on Machine Learning. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR.
Assessing bert's syntactic abilities. Yoav Goldberg, abs/1901.05287ArXiv. Yoav Goldberg. 2019. Assessing bert's syntactic abilities. ArXiv, abs/1901.05287.
FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun, 10.18653/v1/D18-1514Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsXu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803-4809, Brussels, Belgium. Association for Computational Linguistics.
SelfORE: Self-supervised relational feature learning for open relation extraction. Xuming Hu, Lijie Wen, Yusong Xu, Chenwei Zhang, Philip Yu, 10.18653/v1/2020.emnlp-main.299Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Xuming Hu, Lijie Wen, Yusong Xu, Chenwei Zhang, and Philip Yu. 2020. SelfORE: Self-supervised relational feature learning for open relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3673-3682, Online. Association for Computa- tional Linguistics.
What does BERT learn about the structure of language. Ganesh Jawahar, Benoît Sagot, Djamé Seddah, 10.18653/v1/P19-1356Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsGanesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.
Knowledge base population: Successful approaches and challenges. Heng Ji, Ralph Grishman, Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies. the 49th annual meeting of the association for computational linguistics: Human language technologiesHeng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 1148-1158.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Discrete-State Variational Autoencoders for Joint Discovery and Factorization of Relations. Diego Marcheggiani, Ivan Titov, 10.1162/tacl_a_00095Transactions of the Association for Computational Linguistics. 4Diego Marcheggiani and Ivan Titov. 2016. Discrete- State Variational Autoencoders for Joint Discovery and Factorization of Relations. Transactions of the Association for Computational Linguistics, 4:231- 244.
Margin-based active learning for structured output spaces. Dan Roth, Kevin Small, Machine Learning: ECML 2006. Berlin, Heidelberg; Berlin HeidelbergSpringerDan Roth and Kevin Small. 2006. Margin-based active learning for structured output spaces. In Machine Learning: ECML 2006, pages 413-424, Berlin, Heidelberg. Springer Berlin Heidelberg.
Modeling relational data with graph convolutional networks. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den, Ivan Berg, Max Titov, Welling, The Semantic Web. ChamSpringer International PublishingMichael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web, pages 593-607, Cham. Springer International Publishing.
Active learning for convolutional neural networks: A core-set approach. Ozan Sener, Silvio Savarese, Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach.
Matching the blanks: Distributional similarity for relation learning. Livio Baldini, Nicholas Soares, Jeffrey Fitzgerald, Tom Ling, Kwiatkowski, arXiv:1906.03158arXiv preprintLivio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. arXiv preprint arXiv:1906.03158. |
259,376,768 | Steno AI at SemEval-2023 Task 6: Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks | A legal document is usually long and dense requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A (Modi et al., 2023). We experiment with graph based approaches like Graph Convolutional Networks and Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents. | [
234469858,
52967399,
258212587,
222141043,
246430932
] | Steno AI at SemEval-2023 Task 6: Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
July 13-14, 2023
Anshika Gupta
Shaz Furniturewala
Vijay Kumari
Yashvardhan Sharma
BITS Pilani, Pilani, Rajasthan
Steno AI at SemEval-2023 Task 6: Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
Proceedings of the The 17th International Workshop on Semantic Evaluation (SemEval-2023)
the The 17th International Workshop on Semantic Evaluation (SemEval-2023)July 13-14, 2023(f20200111,f20200025)@pilani.bits-pilani.ac.in (p20190065,yash)@pilani.bits-pilani.ac.in
A legal document is usually long and dense requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A (Modi et al., 2023). We experiment with graph based approaches like Graph Convolutional Networks and Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents.
Introduction
Rhetorical Role Labelling for Legal Documents refers to the task of classifying sentences from court judgements into various categories depending on their semantic function in the document. This task is important as it not only has direct applications in the legal industry but also has the ability to aid several other tasks on legal documents such as summarization and legal search. This task is still in its early stages, with huge scope for improvement over the current state-of-the-art.
To facilitate automatic interpretation of legal documents by dividing them into topic coherent components, a rhetorical role corpus was created for Task 6, sub-task A of The International Workshop on Semantic Evaluation (Modi et al., 2023). Several applications of legal AI, including judgment summarizing, judgment outcome prediction, precedent search, etc., depend on this classification.
Related Works with Comparison
The predominant technique used in Rhetorical Role Labeling over large datasets is based on the use of transformer-based models like LEGAL-BERT (Chalkidis et al., 2020) and ERNIE 2.0 (Sun et al., 2020), augmented by various heuristics or neural network models. The accuracy of these approaches has remained low over the years. The results are summarized in Table 1. The dataset (Parikh et al., 2022) used to implement the above approaches is relatively small, consisting only of a few hundred annotated documents and 7 sentence classes.

Model, F1 score:
LEGAL-BERT, 0.557
LEGAL-BERT + Neural Net, 0.517
ERNIE 2.0, 0.505
Table 1: Summary of related works on the task of rhetorical role labelling on legal text (Parikh et al., 2022).
Dataset
The dataset (Kalamkar et al., 2022) is made up of publicly available Indian Supreme Court Judgements. It consists of 244 train documents, 30 validation documents and 50 test documents making a total of 36023 sentences. For every document, each sentence has been categorized into one of 13 semantic categories as follows:
1. PREAMBLE: The initial sentences of a judgement mentioning the relevant parties
2. FAC: Sentences that describe the events that led to the filing of the case
3. RLC: Judgments given by the lower courts based on which the present appeal was made to the present court
4. ISSUE: Key points mentioned by the court upon which the verdict needs to be delivered
5. ARG_PETITIONER: Arguments made by the petitioner
6. ARG_RESPONDENT: Arguments made by the respondent
7. ANALYSIS: Court discussion of the facts, and evidence of the case
8. STA: Relevant statute cited
9. PRE_RELIED: Sentences where the precedent discussed is relied upon
10. PRE_NOT_RELIED: Sentences where the precedent discussed is not relied upon
11. Ratio: Sentences that denote the rationale/reasoning given by the Court for the final judgement
12. RPC: Sentences that denote the final decision given by the Court for the case
13. None: A sentence not belonging to any of the 12 categories
Proposed Techniques and Algorithms
We try several different approaches for the task at hand. All our models use LEGAL-BERT as their base, and use various methods for further processing and refining of results. The LEGAL-BERT family of models is a modified pretrained model based on the architecture of BERT (Devlin et al., 2019). The variant used in this paper is LEGAL-BERT-BASE, a model with 12 layers, 768 hidden units, and 12 attention heads. It has a total of 110M parameters and is pretrained for 40 epochs on a corpus of 12 GB worth of legal texts.
This model was fine-tuned on the task dataset for 2 epochs with a learning rate of 1e-5, using the Adam optimizer and cross-entropy loss.
Direct Classification of CLS tokens
First, we used the default classifier of LEGAL-BERT to find the first set of predictions, to establish a baseline for our further experiments. Our next step used the CLS tokens extracted from the final hidden layer of this trained model.
Similar to the methodology of Gao et al. (2020) and Furniturewala et al. (2021), we utilised the CLS tokens from LEGAL-BERT for further classification models. This CLS token is a 768-dimensional semantic feature that represents BERT's understanding of the text input. It is a fixed embedding present as the first token in BERT's output to the classifier and contains all the useful extracted information present in the input text.
We tried directly applying various multi-layer neural networks to the extracted CLS tokens. These two models served as a baseline to assess the efficacy of our methods.
Graph-Based Approaches
We implemented classification systems based on graph architectures. We modeled the data as a graph using cosine similarity on the CLS tokens generated by LEGAL-BERT. An edge was created between two sentences if and only if their CLS tokens had cosine similarity greater than 0.5, with the cosine similarity acting as the edge weight. The threshold was included to minimize the presence of noise-heavy edges in the graph.
cos(x, y) = ( Σ_{i=1}^{n} x_i y_i ) / ( √(Σ_{i=1}^{n} x_i²) · √(Σ_{i=1}^{n} y_i²) )    (1)
The cosine similarity between two nodes, X and Y, is defined in equation (1), where x and y are the CLS tokens for nodes X and Y respectively, and n is the length of the CLS token, i.e. 768 in this case. The function for the final adjacency matrix is defined equation (2).
A_XY = cos(x, y) if cos(x, y) > 0.5, and 0 otherwise.    (2)
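A sketch of this graph construction on top of precomputed CLS embeddings, following Eqs. (1) and (2); dropping self-similarity on the diagonal is our simplification.

```python
import numpy as np

def build_adjacency(cls_tokens, threshold=0.5):
    """cls_tokens: (N, 768) LEGAL-BERT CLS embeddings of the sentences.
    Returns the weighted adjacency matrix of Eqs. (1)-(2)."""
    X = cls_tokens / np.linalg.norm(cls_tokens, axis=1, keepdims=True)
    sim = X @ X.T                              # cosine similarity between every pair of sentences
    A = np.where(sim > threshold, sim, 0.0)    # keep an edge only if similarity exceeds 0.5
    np.fill_diagonal(A, 0.0)                   # drop self-similarity in this sketch
    return A
```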
On this graph, we performed the label diffusion algorithm (Zhou et al., 2003) to establish a graph-based baseline for our system. Random walk label diffusion assigns labels to an unlabeled node using the average of its neighbours, weighted by their distance from the node.
F_{t+1} = α · P · F_t + (1 − α) · Y    (3)
P = D^{−1/2} · A · D^{−1/2}    (4)
F* = (1 − α) · (I − αP)^{−1} · Y    (5)
To implement it, we combined the train and validation label arrays, one-hot encoded them, and masked the validation labels. We then used equation (5) to generate predictions for each sentence. Here P is the normalised adjacency matrix, Y is the array of one-hot encoded labels, α is a hyper-parameter, D is the degree matrix, and F* is the array of predicted labels.
The matrix P is obtained via equation (4), normalizing the adjacency matrix A using the square root inverse of the degree matrix D. For our experimentation, we used α = 0.5.
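The closed-form diffusion of Eqs. (4) and (5) can be sketched as follows; masked validation rows of Y are assumed to be zero vectors, and the matrix inverse is used directly as in the equation.

```python
import numpy as np

def label_diffusion(A, Y, alpha=0.5):
    """Closed-form label propagation, Eqs. (4)-(5).
    A: (N, N) adjacency matrix; Y: (N, C) one-hot labels with validation rows masked to zero."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    P = D_inv_sqrt @ A @ D_inv_sqrt                                        # Eq. (4)
    F_star = (1 - alpha) * np.linalg.inv(np.eye(len(A)) - alpha * P) @ Y   # Eq. (5)
    return F_star.argmax(axis=1)                                           # predicted class per sentence
```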
Furthermore, we used a two-layer Graph Convolution Network (GCN) (Kipf and Welling, 2016) to perform classifications on the data. Inspired by the methodology of BERTGCN (Lin et al., 2021), we used the LEGAL-BERT embeddings of each sentence as the node representation for our graph, and then performed graph convolutions on it.
The GCN architecture uses trainable weights to identify the optimal weightage that each neighbour of each node should have on its label. The use of two layers allows us to incorporate the context of one-hop neighbours into the label of a particular node.
Z = f(X, A)    (6)
  = softmax(Â · ReLU(Â X W^(0)) W^(1))    (7)
We used equation (7) to predict the labels of the validation set. Here, Â represents the symmetrically normalized adjacency matrix, X is the feature vector which in this case is the LEGAL-BERT embeddings of the nodes, W i is the matrix of trainable weights in layer i.
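A sketch of the two-layer GCN of Eq. (7); the hidden width is an assumption (the paper does not specify it), and the softmax is folded into the training loss.

```python
import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    """Sketch of Eq. (7): Z = softmax(A_hat * ReLU(A_hat X W0) W1)."""

    def __init__(self, in_dim=768, hidden_dim=384, num_classes=13):
        super().__init__()
        self.W0 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.W1 = nn.Linear(hidden_dim, num_classes, bias=False)

    def forward(self, A_hat, X):
        # A_hat: (N, N) symmetrically normalized adjacency; X: (N, 768) LEGAL-BERT embeddings.
        H = torch.relu(A_hat @ self.W0(X))   # first convolution: one-hop context
        return A_hat @ self.W1(H)            # second convolution; softmax applied in the loss
```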
The calculations required for this approach were extremely computationally expensive, so we were not able to train the model on the entire training set on a V100 server. We used half of the training documents for graph building and the prediction of labels. However, the LEGAL-BERT embeddings were generated by fine-tuning the model on all training documents.
Context-Based LEGAL-BERT
Our final approach was a Context-Based LEGAL-BERT. We cleaned each sentence by removing all stopwords (such as 'a', 'an', 'the') present using the NLTK library. Then we created a 5-sentence input corresponding to any given input by concatenating its two preceding sentences and its two succeeding sentences in order. These 5 sentences were separated using LEGAL-BERT's separator token </s>. Sentences at the beginning or end of a document were padded using a string of <pad> tokens.
These 5 sentence inputs were then tokenized using LEGAL-BERT's tokenizer and fed into the model using the baseline parameters. We used the default classifier to perform classification on these context-based inputs.
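A sketch of this five-sentence context construction; here a single <pad> string stands in for the padding at document boundaries, and stop-word removal is assumed to happen beforehand.

```python
def build_context_input(sentences, i, sep="</s>", pad="<pad>"):
    """Five-sentence context input for sentence i of one document: the two preceding and
    two succeeding sentences joined with separator tokens, padded at document boundaries."""
    window = [sentences[j] if 0 <= j < len(sentences) else pad
              for j in range(i - 2, i + 3)]
    return f" {sep} ".join(window)

# Example for the third sentence of a five-sentence document:
# build_context_input(doc, 2) -> "s1 </s> s2 </s> s3 </s> s4 </s> s5"
```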
Results
We trained the models and tested them on the validation set. The accuracy scores have been reported in Table 2.
We see that the performance of these models is significantly better than the previous attempts at this problem. The improvement of the results of previously studied models can be attributed to the increase in dataset size, along with other changes in the structure of the task.
However, our Context-based LEGAL-BERT approach outperforms the other frameworks by a significant margin. This exhibits that the context of each sentence is critically important in determining its label, and that we are successful in incorporating the context of each sentence into its representation.
We saw that graph-based approaches did not significantly improve performance compared to the current state-of-the-art models. However, it is important to note that we were unable to run the Graph Convolution Network using the entire train dataset due to compute constraints.
Despite such constraints, there might be other reasons for the mediocre performance of graph-based models. One possible reason is that the representation of the sentences used for building the model was not able to capture the information necessary to make better predictions. This also explains how the Context-based LEGAL-BERT performed so much better: it improved the quality of the sentence representation, successfully capturing a wider range of features pertaining to the task at hand.
Conclusion and Future Work
In this paper, we tried several different techniques to perform a sentence classification task on legal documents. Through our experiments, we show that incorporating context into the CLS tokens of sentences offers a significant improvement of 5.5 percentage points over LEGAL-BERT. Moreover, through our experiments on graph-based models, we show that improving the CLS tokens results in a better classification, compared to the regular CLS tokens used in a variety of different ways. The Context-based LEGAL-BERT model was not only more accurate but also less resource intensive.
For future improvements on these models, we could try the Graph Convolutional Network approach on the complete dataset. We could also try the various methods of classification, such as a custom neural network or label diffusion, on the context-based CLS tokens.
Moreover, we could further try to incorporate more sentences as context for each target sentence. This would require the use of a Longformer-style model, since the total number of tokens passed into the model would increase.
Figure 1: Extracting CLS Tokens (Furniturewala, 2021).
Figure 2: GCN Architecture (Kipf and Welling, 2016).
Table 2: Summary of results obtained by the models on the validation dataset.
LEGAL-BERT: The muppets straight out of law school. 10.18653/v1/2020.findings-emnlp.261Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. Findings of the Association for Computational Linguistics: EMNLP 2020Ilias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 2898- 2904, Online. Association for Computational Lin- guistics.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Legal text classification and summarization using transformers and joint text features. Shaz Furniturewala, Racchit Jain, Vijay Kumari, Yashvardhan Sharma, Shaz Furniturewala, Racchit Jain, Vijay Kumari, and Yashvardhan Sharma. 2021. Legal text classification and summarization using transformers and joint text features.
Legal text classification model based on text statistical features and deep semantic features. Jiaming Gaoa, Hui Ninga, Zhongyuan Han, Leilei Kongb, Haoliang Qib, Jiaming Gaoa, Hui Ninga, Zhongyuan Han, LeiLei Kongb, and Haoliang Qib. 2020. Legal text clas- sification model based on text statistical features and deep semantic features.
Vivek Raghavan, and Ashutosh Modi. 2022. Corpus for automatic structuring of legal documents. Prathamesh Kalamkar, Aman Tiwari, Astha Agarwal, Saurabh Karn, Smita Gupta, Proceedings of the Thirteenth Language Resources and Evaluation Conference. the Thirteenth Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationPrathamesh Kalamkar, Aman Tiwari, Astha Agarwal, Saurabh Karn, Smita Gupta, Vivek Raghavan, and Ashutosh Modi. 2022. Corpus for automatic struc- turing of legal documents. In Proceedings of the Thirteenth Language Resources and Evaluation Con- ference, pages 4420-4429, Marseille, France. Euro- pean Language Resources Association.
Semi-supervised classification with graph convolutional networks. Thomas Kipf, Max Welling, Thomas Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks.
BertGCN: Transductive text classification by combining GNN and BERT. Yuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, Fei Wu, 10.18653/v1/2021.findings-acl.126Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online. Association for Computational LinguisticsYuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, and Fei Wu. 2021. BertGCN: Transductive text classification by combining GNN and BERT. In Findings of the Association for Com- putational Linguistics: ACL-IJCNLP 2021, pages 1456-1462, Online. Association for Computational Linguistics.
Sachin Malhan, and Vivek Raghavan. 2023. SemEval-2023 Task 6: LegalEval: Understanding Legal Texts. Ashutosh Modi, Prathamesh Kalamkar, Saurabh Karn, Aman Tiwari, Abhinav Joshi, Shouvik Sai Kiran Tanikella, Guha, Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). the 17th International Workshop on Semantic Evaluation (SemEval-2023)Toronto, CanadaACLAshutosh Modi, Prathamesh Kalamkar, Saurabh Karn, Aman Tiwari, Abhinav Joshi, Sai Kiran Tanikella, Shouvik Guha, Sachin Malhan, and Vivek Ragha- van. 2023. SemEval-2023 Task 6: LegalEval: Un- derstanding Legal Texts. In Proceedings of the 17th International Workshop on Semantic Evalua- tion (SemEval-2023), Toronto, Canada. Association for Computational Linguistics (ACL).
Association for Computing Machinery. Vedant Parikh, Upal Bhattacharya, Parth Mehta, Ayan Bandyopadhyay, Paheli Bhattacharya, Kripa Ghosh, Saptarshi Ghosh, 10.1145/3503162.3506571Arindam Pal, Arnab Bhattacharya, and Prasenjit Majumder. 2022. Aila 2021: Shared task on artificial intelligence for legal assistance. FIRE '21. New York, NY, USAVedant Parikh, Upal Bhattacharya, Parth Mehta, Ayan Bandyopadhyay, Paheli Bhattacharya, Kripa Ghosh, Saptarshi Ghosh, Arindam Pal, Arnab Bhattacharya, and Prasenjit Majumder. 2022. Aila 2021: Shared task on artificial intelligence for legal assistance. FIRE '21, page 12-15, New York, NY, USA. As- sociation for Computing Machinery.
Ernie 2.0: A continual pre-training framework for language understanding. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hua Hao Tian, Haifeng Wu, Wang, 10.1609/aaai.v34i05.6428Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie 2.0: A continual pre-training framework for language un- derstanding. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8968-8975.
Learning with local and global consistency. Dengyong Zhou, Olivier Bousquet, Thomas Lal, Jason Weston, Bernhard Schölkopf, Advances in Neural Information Processing Systems. MIT Press16Dengyong Zhou, Olivier Bousquet, Thomas Lal, Jason Weston, and Bernhard Schölkopf. 2003. Learning with local and global consistency. In Advances in Neural Information Processing Systems, volume 16. MIT Press. |
250,390,553 | Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification | Many linguistic expressions have idiomatic and literal interpretations, and the automatic distinction of these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification. This indicates that contextualized word embedding alone contains information about whether the word is being used in a literal sense or not. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs; uncontextualized token embeddings and masked token embeddings in addition to the standard contextualized word embeddings and show that the newly added embeddings significantly improve idiom token classification for both English and Japanese datasets. | [
3626819,
52967399,
1999816,
202781416,
51873576,
227230507,
15607400,
235248108,
227231665,
3549266,
67856404,
233189591,
2390655
] | Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification
July 14-15, 2022
Ryosuke Takahashi [email protected]
Graduate School of Informatics
Nagoya University
Ryohei Sasano
Graduate School of Informatics
Nagoya University
Koichi Takeda [email protected]
Graduate School of Informatics
Nagoya University
Leveraging Three Types of Embeddings from Masked Language Models in Idiom Token Classification
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics
the 11th Joint Conference on Lexical and Computational SemanticsJuly 14-15, 2022
Many linguistic expressions have idiomatic and literal interpretations, and the automatic distinction of these two interpretations has been studied for decades. Recent research has shown that contextualized word embeddings derived from masked language models (MLMs) can give promising results for idiom token classification. This indicates that contextualized word embedding alone contains information about whether the word is being used in a literal sense or not. However, we believe that more types of information can be derived from MLMs and that leveraging such information can improve idiom token classification. In this paper, we leverage three types of embeddings from MLMs; uncontextualized token embeddings and masked token embeddings in addition to the standard contextualized word embeddings and show that the newly added embeddings significantly improve idiom token classification for both English and Japanese datasets.
Introduction
Potentially idiomatic phrases are often used both in the idiomatic and literal sense. For example, "blew whistle" in (1) is used in the literal sense, whereas that in (2) is used in the idiomatic sense, that is, the meaning of the phrase has shifted and in this case it means accuse. Deciding whether each occurrence of a potentially idiomatic phrase is a literal or idiomatic usage is an essential process for text understanding. We call this processing idiom token classification following Salton et al. (2016).
(1) The referee blew the whistle to end the match.
(2) I blew the whistle on government corruption.
Recently, contextualized word embeddings have been shown to be useful for word sense disambiguation (Hadiwinoto et al., 2019). Furthermore, Shwartz and Dagan (2019) showed that contextualized embeddings, including those of BERT (Devlin et al., 2019), are useful for recognizing meaning shifts of words in idioms. However, they only used contextualized embeddings, even though comparing them with the standard embeddings of the target word can be beneficial for precise detection of meaning shifts. Thus, in this paper, we propose a method to improve a BERT-based idiom token classifier by leveraging uncontextualized word embeddings. (* Ryosuke Takahashi is currently at SB Technology Corp.)
Specifically, we use the token embedding of BERT, which is the uncontextualized embedding that is input to BERT and the same vector as is used for the prediction in the task of masked language model. Our assumption can be explained using (1) and (2) as follows: since "whistle" in (2) is used as a part of an idiomatic phrase, its contextualized embedding differs more from the uncontextualized embedding of "whistle" than in the case of (1).
Furthermore, we also leverage the masked token embedding of the target word in BERT, which is generated when the target phrase constituents are masked. This embedding can be considered to represent the meaning inferred from its context, and we assume that if the target phrase is used in the literal sense, as in (1), the output embedding will not significantly differ from the original embedding and thus the differences between the BERT embeddings without masking and those with masking are expected to be small.
Task and Baseline
Datasets and Settings
We focus on the idiom token classification of phrases consisting of verb-noun pairs in English and Japanese. As the English dataset, we use the VNC-Tokens dataset 1 (Cook et al., 2008). This dataset consists of 2,984 sentences containing 53 different potentially idiomatic verb-noun pairs in English, where each sentence is labeled with "I" (idiomatic), "L" (literal), or "Q" (unknown). We use 28 out of the 53 idioms that have similar numbers of idiomatic and literal occurrences and only those sentences labeled as "I" or "L" following Salton et al. (2016).
As the Japanese dataset, we use the OpenMWE Corpus 2 (Hashimoto and Kawahara, 2008). This dataset consists of 102,846 sentences containing 146 different potentially idiomatic verb-noun pairs in Japanese, where each sentence is labeled with "I" (idiomatic) or "L" (literal). We use 90 out of the 146 idioms for which more than 50 examples for both idiomatic and literal usages are available following Hashimoto and Kawahara (2008).
In this study, we adopt the zero-shot setting because we are interested in detecting meaning shifts of words that are not included in the training data. Specifically, we employ the one-versus-rest scheme with the fully zero-shot setting. That is, we build a classifier for each phrase, which is trained on the phrases that contain neither the verb nor the noun that makes up the target phrase. For example, when building a classifier for blew whistle, we exclude phrases whose verb is blew or whose noun is whistle from the training data. We take one fifth of each training dataset as development data.
Baseline Systems
As the baseline system, we adopted a minimal Embed-Encode-Predict model (Shwartz and Dagan, 2019) that uses only contextualized embeddings of the constituent words of the target phrase as input. The reason for adopting a relatively simple model as a baseline is that the purpose of this study is to confirm the effectiveness of the newly added embeddings. Figure 1 shows the outline of the model, which consists of an input layer, a hidden layer, and an output layer. The output layer predicts whether the input phrase is idiomatic or literal. The size of the hidden layer is half of the input embedding size in all models in the paper. We applied dropout on the input embeddings and hidden layer. The dropout rates are both 50%.
As the input, we used [v_V; v_N], a concatenation of the contextualized embeddings of the verb and noun that comprise the target phrase. We used the pre-trained models BERT-Base, Uncased 3 for English and BERT-Base, WWM 4 for Japanese. Both models have 12 layers and 768 hidden dimensions per token. Japanese sentences were tokenized by Juman++ 5 in advance. We used the development data to determine the number of training epochs and to determine which BERT hidden layer to use as the input embeddings of the Embed-Encode-Predict model. We refer to this model as BERT[v_V; v_N]. In addition, we developed models that each leverage only one of the contextualized embeddings v_V and v_N to confirm the importance of each embedding. We refer to them as BERT[v_V] and BERT[v_N], respectively.
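A minimal PyTorch sketch of such an Embed-Encode-Predict classifier is given below; the hyperparameters follow the description above, while the class name and the choice of activation are ours.

```python
import torch.nn as nn

class EmbedEncodePredict(nn.Module):
    """Input embeddings -> hidden layer (half the input size) -> binary output.
    Dropout with p=0.5 is applied to the input embeddings and to the hidden layer,
    as described in the paper; the ReLU activation is an assumption."""

    def __init__(self, input_dim):
        super().__init__()
        hidden_dim = input_dim // 2
        self.encode = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
        )
        self.predict = nn.Linear(hidden_dim, 2)  # idiomatic vs. literal

    def forward(self, x):
        return self.predict(self.encode(x))

# BERT[v_V; v_N] concatenates two 768-dimensional contextualized embeddings.
model = EmbedEncodePredict(input_dim=2 * 768)
```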
For reference, we also implemented support vector machine (SVM) based models with the features used in previous work. For English, we employed Salton et al. (2016)'s model that leveraged Skip-Thought Vectors (Kiros et al., 2015) as features. For Japanese, we implemented the features used by Hashimoto and Kawahara (2008), consisting of POS, lemma, token n-gram, hypernym, domain, voice, negativity, modality, adjacency, and adnominal information. Table 1 lists the macro-averaged accuracy for each baseline model, along with the accuracy of the majority baseline. Each accuracy is the average of 5 runs with different random seeds. For both the English and the Japanese datasets, BERT[v_V; v_N] achieved the highest accuracy, which demonstrates that BERT embeddings are useful for idiom token classification even in a zero-shot setting and supposedly capture general characteristics of idiomaticity. We measured the statistical significance between BERT[v_V; v_N] and the other models with an approximate randomization test (Chinchor, 1992) with 99,999 iterations and significance level α = 0.05 after Bonferroni correction. We found significant differences against the Majority Baseline and Salton et al. (2016) with respect to English and against the Majority Baseline and Hashimoto and Kawahara (2008) with respect to Japanese.
Leveraging Additional Embeddings
The relatively high performance of BERT[v_V; v_N] in a zero-shot setting indicates that the standard BERT embeddings contain information about how much the meaning differs from the standard meaning of the words that comprise the phrase. However, the performance of idiom token classification can be improved by explicitly incorporating the standard meaning of the constituent words and the meaning inferred from its context.
Additional embeddings
We add two types of embeddings to BERT[v_V; v_N]: uncontextualized token embeddings and masked token embeddings of the phrase constituents.
Uncontextualized token embeddings We use the token embedding of BERT, which is the uncontextualized embedding that is input to BERT and the same vector that is used for prediction in BERT's masked language modeling task. This embedding can be considered to represent the standard meaning of the word, and thus, if the target phrase is used in the literal sense, the BERT embeddings, which are contextualized, should be similar to the token embeddings. We refer to the uncontextualized token embeddings of a verb and a noun as v_V_t and v_N_t, respectively.
Masked token embeddings We use the hidden layer of BERT when the target token is replaced with a special token [MASK]. This embedding can be considered to represent the meaning inferred from its context. If the target phrase is used in the literal sense, the differences between the BERT embeddings without masking and those with masking are expected to be small. We refer to the masked token embeddings of a verb and a noun as v_V_m and v_N_m, respectively. Figure 2 shows the overview of the proposed model. When a sentence containing the target phrase is given, a masked sentence, in which the verb and noun that comprise the phrase are masked, is generated and input to BERT in addition to the original sentence. Then, v_V, v_V_t, v_V_m, v_N, v_N_t, and v_N_m are extracted and their concatenation is input to the Embed-Encode-Predict model.
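The following sketch shows how the three kinds of embeddings could be extracted with the Hugging Face transformers library; this is not the authors' released code, the layer index and the single-WordPiece assumption are simplifications, and in the paper both constituents are masked in the same input sentence.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

@torch.no_grad()
def constituent_embeddings(sentence, target_word, layer=-1):
    """Return (v, v_t, v_m) for one constituent of the target phrase.

    v   : contextualized embedding (hidden state of `layer` at the word position)
    v_t : uncontextualized token embedding (row of BERT's input embedding matrix)
    v_m : masked token embedding (hidden state when the word is replaced by [MASK])

    For brevity the constituent is assumed to map to a single WordPiece token;
    a real implementation would average sub-word pieces and mask verb and noun together.
    """
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"][0]
    target_id = tokenizer.convert_tokens_to_ids(target_word)
    pos = (ids == target_id).nonzero()[0].item()

    v = model(**enc).hidden_states[layer][0, pos]          # contextualized
    v_t = model.get_input_embeddings().weight[target_id]   # uncontextualized

    masked_ids = ids.clone()
    masked_ids[pos] = tokenizer.mask_token_id
    v_m = model(input_ids=masked_ids.unsqueeze(0)).hidden_states[layer][0, pos]

    return v, v_t, v_m

# The classifier input is the concatenation
# [v_V; v_V_t; v_V_m; v_N; v_N_t; v_N_m] obtained from the verb and the noun.
```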
Experiments and analysis
We performed the idiom token classification experiments with the additional embeddings. Table 2 lists the macro-averaged accuracy for different combinations of input embeddings. We can confirm that leveraging uncontextualized token embeddings and masked token embeddings in addition to the standard BERT embeddings is beneficial for idiom token classification. The statistical significance test shows that the differences between the accuracy of BERT[v_V; v_V_t; v_V_m; v_N; v_N_t; v_N_m] and that of BERT[v_V; v_N] are significant for both the English and the Japanese datasets. The accuracy of BERT[v_V; v_V_t; v_N; v_N_t] was slightly better than that of BERT[v_V; v_V_m; v_N; v_N_m]. We can say that the difference between the standard BERT embeddings and the uncontextualized token embeddings should be a good indicator of idiomaticity.

We assumed that when the target phrase is used in the literal sense, the uncontextualized token embeddings and the masked token embeddings tend to be similar to the standard BERT embeddings. To verify this assumption, we calculated the means of their cosine similarities for the literal and idiomatic cases, respectively. Table 3 lists the means of the cosine similarities. For the English dataset, the mean of the cosine similarities between the uncontextualized token embeddings and the standard BERT embeddings for the literal cases was 0.157, which was larger than that for the idiomatic cases, 0.122. Similarly, the mean of the cosine similarities between the masked token embeddings and the standard BERT embeddings for the literal cases was 0.593, which was larger than that for the idiomatic cases, 0.517. The same trend can be observed for the Japanese dataset. It has been confirmed that all the differences are statistically significant. These results support our assumption.
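A small sketch of how such mean cosine similarities could be computed from the embeddings extracted above (the pairing of tensors is illustrative):

```python
import torch.nn.functional as F

def mean_cosine(pairs):
    """Mean cosine similarity over a list of (embedding, reference) tensor pairs,
    e.g. (contextualized, uncontextualized) or (contextualized, masked)."""
    sims = [F.cosine_similarity(a, b, dim=0).item() for a, b in pairs]
    return sum(sims) / len(sims)

# Computed separately over the literal and the idiomatic occurrences,
# the literal mean is expected to be the larger of the two (cf. Table 3).
```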
Related Work
Several researchers have tackled the task of idiom token classification. Hashimoto and Kawahara (2008) is one of the earliest works: they created annotated Japanese data for idiom token classification and proposed an SVM-based model with a set of features commonly used for WSD. Fazly et al. (2009) proposed statistical measures that quantify the degree of lexical, syntactic, and overall fixedness of a verb-noun combination. Sporleder and Li (2009) proposed a model for unsupervised idiom token classification based on the observation that literally used expressions typically exhibit cohesive ties with the surrounding discourse, while idiomatic expressions do not. Li and Sporleder (2010) explored various features, such as global lexical context, discourse cohesion, syntactic structure, and local lexical features. They reported that global lexical context and discourse cohesion were most effective for idiom token classification. Peng et al. (2014) treated idiom token identification as a problem of outlier detection. They extracted topics from paragraphs containing idioms and from paragraphs containing literals by using Latent Dirichlet Allocation (LDA).
A broad range of neural network-based models have been proposed in recent years. Gharbieh et al. (2016) obtained phrase representations by averaging skip-gram (Mikolov et al., 2013) vectors of words that appear around the target phrase and applied them to idiom token classification. Salton et al. (2016) constructed an SVM-based classifier using the distributed representation of sentences generated by the Skip-Thought model (Kiros et al., 2015). King and Cook (2018) improved the performance of word embedding-based methods by incorporating syntactic and lexical patterns of idiomatic expressions.
More recently, methods using contextualized word embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have been proposed. Shwartz and Dagan (2019) showed that the contextualized embeddings of constituent words were useful for recognizing meaning shifts of phrases. Hashempour and Villavicencio (2020) and Kurfalı and Östling (2020) worked on the idiom token classification task using BERT embeddings and reported that the BERT-based model achieved high accuracy in a phrase-specific setting. Garcia et al. (2021) proposed probing measures to examine how accurately idiomaticity in noun compounds is captured in vector space models and concluded that idiomaticity is not yet accurately represented by contextualized word embeddings.
Studies that used multiple types of embeddings in BERT, similar to our method, include the work by Zhang et al. (2020) and Yamada et al. (2021). Zhang et al. used the weighted sum of the input embedding and the mask embedding for spelling error correction whereas Yamada et al. used the weighted sum of the input embedding and the mask embedding for semantic frame induction.
Conclusion
We demonstrate that leveraging uncontextualized token embeddings and masked token embeddings in addition to the standard contextualized word embeddings significantly improves idiom token classification in a zero-shot setting. We also show that the results of investigating the similarities of these embeddings for the literal and idiomatic cases support our assumption that the uncontextualized token embeddings and the masked token embeddings tend to be similar to the standard BERT embeddings when the target phrase is used in the literal sense. One of the advantages of the proposed method is that it does not require training a new model, because it extracts and uses embeddings with different properties from the same language model. We believe that the three types of embeddings introduced in this study can be applied to other natural language tasks.
Figure 1: The Embed-Encode-Predict model.
2 http://openmwe.sourceforge.jp/Idiom/corpus/OpenMWE-Corpus-0.02.tar.bz2

Table 1: Macro-averaged accuracy for baseline systems.
Models                            English   Japanese
Majority Baseline                 0.672     0.629
Salton et al. (2016)              0.780     -
Hashimoto and Kawahara (2008)     -         0.740
BERT[v_V]                         0.829     0.816
BERT[v_N]                         0.836     0.821
BERT[v_V; v_N]                    0.840     0.823
Figure 2: Overview of the proposed model.
Table 2: Macro-averaged accuracy for different combinations of input embeddings.
Embeddings                                 English   Japanese
v_V; v_N                                   0.840     0.823
v_V; v_V_t; v_N; v_N_t                     0.859     0.842
v_V; v_V_m; v_N; v_N_m                     0.852     0.829
v_V; v_V_t; v_V_m; v_N; v_N_t; v_N_m       0.865     0.847
Table 3: Means of the cosine similarities of standard BERT embeddings (v) against uncontextualized token embeddings (v_t) and masked token embeddings (v_m) for literal and idiomatic cases, respectively.
1 https://people.eng.unimelb.edu.au/paulcook/English_VNC_Cook.zip
3 https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
4 http://nlp.ist.i.kyoto-u.ac.jp/nl-resource/JapaneseBertPretrainedModel/Japanese_L-12_H-768_A-12_E-30_BPE_WWM.zip
5 https://github.com/ku-nlp/jumanpp
Acknowledgements
This work was supported by JSPS KAKENHI Grant Number 21K12012.
Nancy Chinchor. 1992. The statistical significance of the MUC-4 results. In Proceedings of the 4th Message Understanding Conference (MUC), pages 30-50.
Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2008. The VNC-tokens dataset. In Proceedings of the LREC Workshop on Towards a Shared Task for Multiword Expressions (MWE), pages 19-22.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171-4186.
Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.
Marcos Garcia, Tiago Kramer Vieira, Carolina Scarton, Marco Idiart, and Aline Villavicencio. 2021. Probing for idiomaticity in vector space models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 3551-3564.
Waseem Gharbieh, Virendra Bhavsar, and Paul Cook. 2016. A word embedding approach to identifying verb-noun idiomatic combinations. In Proceedings of the 12th Workshop on Multiword Expressions (MWE), pages 112-118.
Christian Hadiwinoto, Hwee Tou Ng, and Wee Chung Gan. 2019. Improved word sense disambiguation using pre-trained contextualized word representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5297-5306.
Reyhaneh Hashempour and Aline Villavicencio. 2020. Leveraging contextual embeddings and idiom principle for detecting idiomaticity in potentially idiomatic expressions. In Proceedings of the Workshop on the Cognitive Aspects of the Lexicon (CogALex), pages 72-80.
Chikara Hashimoto and Daisuke Kawahara. 2008. Construction of an idiom corpus and its application to idiom identification based on WSD incorporating idiom-specific features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 992-1001.
Milton King and Paul Cook. 2018. Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 345-350.
Ryan Kiros, Yukun Zhu, Russ R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Proceedings of Advances in Neural Information Processing Systems 28 (NIPS), pages 3294-3302.
Murathan Kurfalı and Robert Östling. 2020. Disambiguation of potentially idiomatic expressions with contextual embeddings. In Proceedings of the Joint Workshop on Multiword Expressions and Electronic Lexicons (MWE), pages 85-94.
Linlin Li and Caroline Sporleder. 2010. Linguistic cues for distinguishing literal and non-literal usages. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 683-691.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS), pages 3111-3119.
Jing Peng, Anna Feldman, and Ekaterina Vylomova. 2014. Classifying idiomatic and literal expressions using topic models and intensity of emotions. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2019-2027.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227-2237.
Giancarlo Salton, Robert Ross, and John Kelleher. 2016. Idiom token classification using sentential distributed semantics. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 194-204.
Vered Shwartz and Ido Dagan. 2019. Still a pain in the neck: Evaluating text representations on lexical composition. Transactions of the Association for Computational Linguistics, 7:403-419.
Caroline Sporleder and Linlin Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 754-762.
Kosuke Yamada, Ryohei Sasano, and Koichi Takeda. 2021. Semantic frame induction using masked word embeddings and two-step clustering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 811-816.
Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 882-890. |
255,928,513 | Gradual Modifications and Abrupt Replacements: Two Stochastic Lexical Ingredients of Language Evolution | The evolution of the vocabulary of a language is characterized by two different random processes: abrupt lexical replacements, when a complete new word emerges to represent a given concept (which was at the basis of the Swadesh foundation of glottochronology in the 1950s), and gradual lexical modifications that progressively alter words over the centuries, considered here in detail for the first time. The main discriminant between these two processes is their impact on cognacy within a family of languages or dialects, since the former modifies the subsets of cognate terms and the latter does not. The automated cognate detection, which is here performed following a new approach inspired by graph theory, is a key preliminary step that allows us to later measure the effects of the slow modification process. We test our dual approach on the family of Malagasy dialects using a cladistic analysis, which provides strong evidence that lexical replacements and gradual lexical modifications are two random processes that separately drive the evolution of languages. | [
14244160,
52012037,
12759814,
6880619,
16228781,
7641265,
15251605,
8247738,
490045
] | Gradual Modifications and Abrupt Replacements: Two Stochastic Lexical Ingredients of Language Evolution
Michele Pasquini [email protected]
Maurizio Serva [email protected]
Davide Vergni [email protected]
Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica
Istituto per le Applicazioni del Calcolo "Mauro Picone" -CNR
Rome, Italy
Istituto per le Applicazioni del Calcolo "Mauro Picone" -CNR
Università dell'Aquila
Rome / L'Aquila, Italy
Gradual Modifications and Abrupt Replacements: Two Stochastic Lexical Ingredients of Language Evolution
10.1162/coli
The evolution of the vocabulary of a language is characterized by two different random processes: abrupt lexical replacements, when a complete new word emerges to represent a given concept (which was at the basis of the Swadesh foundation of glottochronology in the 1950s), and gradual lexical modifications that progressively alter words over the centuries, considered here in detail for the first time. The main discriminant between these two processes is their impact on cognacy within a family of languages or dialects, since the former modifies the subsets of cognate terms and the latter does not. The automated cognate detection, which is here performed following a new approach inspired by graph theory, is a key preliminary step that allows us to later measure the effects of the slow modification process. We test our dual approach on the family of Malagasy dialects using a cladistic analysis, which provides strong evidence that lexical replacements and gradual lexical modifications are two random processes that separately drive the evolution of languages.
The origins of lexicostatistics can be traced back to the 19th century, when French admiral Jules Dumont d'Urville collected comparative word lists from several languages of the Pacific area during the naval expedition aboard the Astrolabe, the corvette he commanded from 1826 to 1829. Although he mainly dealt with the geographical aspects of the expedition, in his account of the voyage the idea of comparing terms from different languages with the same meaning clearly emerges (D'Urville 1832).
Glottochronology, the application of lexicostatistical methods with the goal of establishing when a language separated into derived languages, was introduced by Morris Swadesh in the 1950s (Swadesh 1950, 1951, 1952, 1954, 1955). Swadesh's approach, as its main formula clearly reveals, was inspired by the success of the carbon-14 dating technique developed at that time: Swadesh's hypothesis was that, just as the radioactive decay of a carbon-14 atom into a more stable one occurs at a constant rate, the replacement of a term with a synonym in a language is a rare but observable event whose probability rate is constant along the centuries.
In a formula:

M(t) = M e^(−λt)    (1)
where M is the initial number of words in a basic list and M(t) the number of words not yet replaced at time t. Swadesh's estimate for the substitution rate was λ ≈ 0.14 per millennium (Swadesh 1955). Following the analogy, the logarithm of the fraction of unchanged terms is proportional to the temporal separation between ancestor and descendant languages
t = −(1/λ) ln(M(t)/M)    (2)
just as the logarithm of the ratio of carbon-14 to total carbon is proportional to the age of an archaeological finding or a fossil. Starting from this assumption, Swadesh only needed a way to determine the number of unreplaced words. For this purpose, he introduced a list of M universal concepts (named after him) and prepared word-lists for different languages corresponding to the same concepts (Swadesh 1950). By means of an accurate linguistic analysis, he was able to count the number of cognate pairs in two languages and by a simple probabilistic reasoning he was able to estimate the number M(t) of unreplaced words in a single language. More often the comparisons concern coeval languages with synchronous lists of M concepts, where experts try to evaluate the number M(t) of cognate pairs between the two languages; therefore a factor 1/2 has to be added in formula (2) to obtain the temporal separation between the two coeval languages and the first common ancestor.
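As a toy numerical illustration of formula (2) with the factor 1/2 (the counts below are invented for the example):

```python
import math

lam = 0.14          # Swadesh's estimate: replacements per millennium
M, M_t = 205, 150   # hypothetical: 150 of 205 concept pairs are still cognates

# formula (2) with the extra factor 1/2 for two coeval languages
t = -1.0 / (2 * lam) * math.log(M_t / M)
print(f"time depth of the common ancestor: {t:.2f} millennia")  # about 1.1
```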
After Swadesh, many scholars in the following decades continued along his approach (see, for example, Embleton 1986 and McMahon and McMahon 2005), and the most sensitive point was surely the linguistic comparison between terms of the same concept from different languages: As is well known, there are many mechanisms that induce changes in a language, even if we limit ourselves to the lexical sphere. Certainly this is a key issue, especially from an epistemological point of view: Could a scientific investigation rely on deep, thoughtful, but ultimately personal and subjective judgments about the cognacy of words? Can linguists with different origin, training, and experience ensure the reproducibility of a linguistic measure?
However, criticisms of Swadesh's method have been directed mainly to other aspects. In particular, it has often been argued that sets of cognates should be cleaned of linguistic loans (see, e.g., Starostin 2000); the rate of replacement λ is not universal and depends on concepts (van der Merwe 1966;Dyen and Cole 1967); the probability of retention of older words diminishes (rate of replacement λ decreases over time) (Starostin 2000), and, more recently, the stability ranking of concepts varies for different families of languages (Pasquini and Serva 2021).
Apart from strictly linguistic aspects, the remaining tasks of glottochronology consist of methodologies that computational mathematics had already successfully applied in evolutionary biology: matrices of distances, cladistics, family trees. The only appreciable new proposal has been the introduction of automated algorithms: Instead of deciding whether two words are cognates or not, the words are treated as strings and their similarity is fixed by an algorithm, or a well-defined procedure; for instance, the Levenshtein distance (LD) (Levenshtein 1966), also known as edit distance, which corresponds to the minimum number of operations (insertions, deletions, or substitutions) required to obtain one string from another, can be used. This is clearly a first response to what we believe is the most important weakness of Swadesh's method: its lack of objectivity, and hence of reproducibility by independent researchers. Actually, the search for automated methods in comparative linguistics has aroused increasing interest in recent years. In Section 3 the reader can find a quick review of the most popular ones.
A linguist's decision about the cognacy between two words is equivalent to assigning a dichotomic distance, 0 if the two terms are cognates, 1 otherwise. By normalizing the LD between words (Nerbonne and Heeringa 1997) to a real number between 0 and 1, we can measure the distance between words (it is a measure in a strictly mathematical sense). In this way an efficient and automated procedure to construct language distance that obtains very accurate phylogenetic reconstructions and family trees has been introduced (Petroni and Serva 2008, 2010a, 2010b, 2011).
The state of the art suggests that two issues are ready to be further explored: first, an automated cognacy decision may be taken by using more reliable algorithms; second, the model for the process of word change can be improved with respect to the simple Poisson model historically associated with the radioactive decay. Actually, the standard Swadesh approach does not consider all the random events that lead two cognate terms of different languages to slowly diverge from each other through small changes century after century. But, clearly, the random process of gradual lexical modifications beyond the well-known random process of abrupt lexical replacements also plays a crucial role in language evolution.
In this work we study the random process of gradual lexical modification and we introduce a simple but extremely efficient algorithm able to identify the subset of cognate words among a well-represented family of languages or dialects (the collection of lists must cover all varieties of the family). In this way we keep distinct the replacement random process à la Swadesh, which induces the separation into different cognate subsets, from the other lexical random process acting inside the subsets of cognates. We are able to quantify the effects of this second process and by means of a statistic and phylogenetic analysis we show that both processes are fundamental ingredients for language evolution. This means that the lexical modifications process also has the potential to reveal enlightening information for understanding the evolution of a language family.
Linguistic Context, Data, and Mathematical Tools
Figure 1: The map of Madagascar with the names of the towns/villages where each Swadesh list was collected. The names of the ethnicities are missing. Colors correspond to the classification proposed in Serva and Pasquini (2020). In both Toliara (south-west coast) and Morondava (west coast), two different dialects coexist, but in Morondava they are close to each other (both are blue), whereas in Toliara they are distant (one is yellow and unreported, the other is blue).

Madagascar (Figure 1) is a wide island (about three times the area of Great Britain), colonized by Indonesian sailors about 1,400 years ago (an essential review of the literature on this point can be found in Serva and Pasquini [2020]). Since then, the Austronesian language of the settlers has begun to differentiate into regional varieties as their descendants spread all over the island (Dahl 1938, 1951; Dyen 1953; Dez 1963; Vérin, Kottak, and Gorlin 1969; Hudson 1967; Adelaar 1995, 2006; Beaujard 2003; Blench 2007; Blench and Walsh 2009; Serva 2012). There are two main reasons for choosing Madagascar for our analysis:
(1) The Malagasy language has evolved into dozens of dialects with little external contamination (only rare traces of Bantu words can be found [Dahl 1954; Adelaar 2012; Blench 2008; Serva and Pasquini 2022]);
(2) we have a complete database of Malagasy dialects. Version 1.0 of the database was published in Serva and Pasquini (2020); however, a small number of entries have been updated in the meantime (0.28% of the total, almost all in the Sakalava [Mahajanga] list). The name of each list shows the ethnicity followed by the location in parentheses. We emphasize that in each list every concept has a single corresponding entry, chosen by the interviewer as the most commonly used by the respondents. For the sake of completeness, we also mention that we reduced the Swadesh lists to 205 concepts, neglecting the items ice and snow, too often misleading for Malagasy speakers.
In this article, N is the number of languages/dialects and M is the number of concepts (N = 60 and M = 205 for the Malagasy database, respectively); Greek letters α, β, ... will always be associated with languages, while roman letters i, j, ... will point to concepts. If we think of Swadesh lists as a series of side-by-side columns, then the database is an M × N matrix whose generic entry:
W_{α,i}    (α = 1, ..., N; i = 1, ..., M)    (3)
is the word used in the language α to represent the i-th concept. The α-th column {W_{α,i}}_{i=1,...,M} is the whole Swadesh list for the language α, while the i-th row {W_{α,i}}_{α=1,...,N} shows all the different ways in which the i-th concept is expressed in the different dialects. Following Nerbonne and Heeringa (1997), we use Normalized Levenshtein Distance (NLD) to measure the degree of similarity between two words W_{α,i} and W_{β,j}:

NLD(W_{α,i}, W_{β,j}) = LD(W_{α,i}, W_{β,j}) / max length(W_{α,i}, W_{β,j})    (4)
that is, the ratio of the LD(W_{α,i}, W_{β,j}) between the two words (i.e., the number of insertions, deletions, or substitutions to transform W_{α,i} into W_{β,j} or vice versa) and the length of the longest of them. In this way we obtain a real number between 0 (two identical words) and 1 (two completely different words), with all shades in between.
The definition of a distance, D^NLD_{α,β}, for two languages α and β immediately follows from (4), considering only pairs of words belonging to the same concept (j = i) and then averaging over all M concepts:

D^NLD_{α,β} = (1/M) Σ_{i=1}^{M} NLD(W_{α,i}, W_{β,i})    (5)
This definition of language distance based on NLD has been systematically used since 2008 (Petroni and Serva 2008; Bakker et al. 2009; Petroni and Serva 2010a, 2010b; Ciobanu and Dinu 2018). This lexical distance can be associated with a genealogical distance T^NLD_{α,β}, as has been typically done in cladistic studies since the seminal works of Swadesh (Swadesh 1950, 1955), which is the time depth from the last common ancestor of α and β:
T^NLD_{α,β} = −(τ/2) ln(1 − D^NLD_{α,β})    (6)
where τ, measured in millennia, has to be fixed by external information (by historical facts, or taken from another linguistic context [Petroni and Serva 2008]). Notice that λ = 1/τ corresponds to the Swadesh number of substitutions per millennium; the extra factor 1/2 is due to the fact that we are comparing contemporary languages.
Actually, the parameter τ is relevant only if one is interested in determining the time depth of a language family, but can be neglected if the focus is only on phylogenetic reconstruction.
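A compact sketch of equations (4)-(6) in Python (the default value of τ used here is only illustrative):

```python
import math

def levenshtein(a, b):
    """Classic edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def nld(a, b):
    """Normalized Levenshtein Distance, Equation (4)."""
    return levenshtein(a, b) / max(len(a), len(b))

def language_distance(list_a, list_b):
    """Equation (5): average NLD over the M aligned concepts."""
    return sum(nld(wa, wb) for wa, wb in zip(list_a, list_b)) / len(list_a)

def time_depth(d_nld, tau=1.0 / 0.14):
    """Equation (6): genealogical distance in millennia; tau must be fixed externally."""
    return -tau / 2 * math.log(1 - d_nld)
```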
Automated Cognate Detection
Attempts to make some aspects of the linguist's work automatic have greatly increased in recent years, especially concerning the detection of cognate pairs. Since the pioneering work of Covington (1996), many combinations of metrics for word similarity and criteria for cognate set partitioning have been proposed (e.g., see Hauer and Kondrak 2011; List 2012; Ciobanu and Dinu 2014a, 2014b, 2015; List, Lopez, and Bapteste 2016; Jäger, List, and Sofroniev 2017; Rama et al. 2018; Rama and List 2019; and also see List 2014 for a historical overview). Since 2008 NLD distance has been repeatedly used as a metric tool (see List, Greenhill, and Gray 2017 for a recent example), but NLD as well as other metrics are often coupled with partitioning strategies that do not perform particularly well. Some authors limit the decision about cognacy only to the pair under consideration, neglecting how other languages express the same concept; with this approach there is no way to find out that the words leche (Spanish) and γάλα (Greek) are actually cognates. Other scholars, more cleverly, take all the decisions about cognacy for the same concept as a single step using partitioning algorithms, such as UPGMA-like variants, where a global threshold or a maximal threshold is used to create the subsets of cognates. Our approach, as will be clear in the following, is different, since the threshold is only used to create links, without any restriction on the average distance or the maximal distance in a subset. In this way the detection of leche and γάλα as cognates only needs a sufficiently large number of intermediate words. This is why a broad collection of the varieties of a language family is a crucial aspect of our methodology.
We also think that NLD is the right metric to be used since it is the only one sufficiently sensitive and precise to grasp the gradual modifications and not confuse them with abrupt replacements. NLD distance deals with deletions, insertions, and substitutions of elements in a string, which are the genuine essence of lexical modifications; therefore, it is the natural tool to handle this kind of process. On the other hand, when a replacement occurs it is reasonable to expect, at least from a probabilistic point of view, that the new synonym is not a cognate of the previous one. We can provide a couple of examples: classical Latin caput (head) replaced by testa in late Latin (original meaning: clay pot); classical Latin caseus (cheese) replaced by formaticum in late Latin, from forma (mold) + the noun-forming −aticum suffix. In both cases the replacing synonym and the original term are not cognates.
To show how NLD deals with pairs of non-cognate words, we use the Malagasy database. We randomly choose pairs of words associated with different concepts from different dialects that surely are not cognate, we compute the NLD distances between these words, and we display the distance statistics. Figure 2 (purple bars) shows the result: Almost all the distribution is concentrated in the right half of the NLD axis (0.5 ≤ NLD ≤ 1). The NLD distribution for pairs of words corresponding to the same concept in different languages (which can or cannot be cognate) is plotted in the inset of Figure 2 (red bars): Apart from about 30% of identical words, the distribution covers more or less uniformly the NLD axis. This distribution is the superposition of the distribution of distances between cognate pairs (still unknown) and the distribution of distances between non-cognate pairs (represented by purple bars in Figure 2). The last remark suggests a different approach in introducing a threshold for automated cognacy: Instead of using the threshold to identify which pairs are surely not cognate, since a certain value has overcome the threshold (the usual policy), we can use it just to confirm which pairs are surely cognates. By comparing the two distributions in Figure 2, we can conclude that if a pair has an NLD less than about 0.5, the two words almost certainly are cognates. At the end of this section, a novel, objective methodology for obtaining the optimal threshold for identifying two direct cognate terms will be presented.

Figure 2: Main picture (purple bars): Percentage frequency distribution of NLD out of more than 600,000 random draws of pairs of terms from the Malagasy database. Each pair is composed of two words, W_{α,i} and W_{β,j}, which belong to different languages (α ≠ β) and to different concepts (i ≠ j). The first half of NLD values (NLD < 0.5) has a probability less than 1.5%. Inset (red bars): Percentage frequency distribution of NLD for pairs of words of the same concept (α ≠ β, i = j).
Here, an algorithm for automated cognacy detection is introduced. Given a cognacy threshold D_T and a given concept i, all possible N(N − 1)/2 pairs of words are checked and a direct cognacy link, L^i_{α,β}, is established for those pairs whose NLD is below the threshold:

L^i_{α,β} = 1 if NLD(W_{α,i}, W_{β,i}) < D_T (direct cognacy link), and L^i_{α,β} = 0 if NLD(W_{α,i}, W_{β,i}) ≥ D_T.    (7)
In this way, the N words associated with the concept i are split into a certain number of non-overlapping subsets or, more precisely, a certain number of disjoint subgraphs.
In fact, an unweighted undirected graph, G_i = (V_i, E_i), can be defined for each given concept i, where V_i, the set of vertices, are the N words, {W_{α,i}}_{α=1,...,N}, associated with the i-th concept in the different dialects α, and E_i, the set of undirected edges, is composed of those paired words, (W_{α,i}, W_{β,i}), with a direct cognacy link (L^i_{α,β} = 1; i.e., their NLD is below the threshold D_T).
With this simple link-generation rule, the graph G_i is naturally divided into connected subgraphs whose vertices are naturally identified as the subsets of cognates. In other words, we define cognates as all the words that belong to each given subgraph and, conversely, two words belonging to different subgraphs are not cognate. It is very important to stress that, even if two words are in the same subgraph (that is, are cognates), their NLD is not necessarily less than the cognacy threshold D_T (see the following example of the concept ash and Figure 3).
The connection with graph theory gives us a powerful context to formalize the interesting quantities to be studied and it will prove to be very useful in the next sections of the article. For example, we can easily handle the cognacy relation between a pair of words W_{α,i} and W_{β,i} exploiting definition (7) to introduce a discrete variable C^i_{α,β}, whose value is 1 if the words are cognates (i.e., if vertices α and β belong to the same subgraph), 0 otherwise:

C^i_{α,β} = 1 if there exist {γ_j}_{j=1,...,n} such that L^i_{α,γ_1} · L^i_{γ_1,γ_2} · ... · L^i_{γ_{n−1},γ_n} · L^i_{γ_n,β} = 1, and C^i_{α,β} = 0 otherwise.    (8)
That is, two words are cognates if they are linked by a sequence of direct cognates.
An example of well-known words can help to fix the idea. Consider the concept ash in the languages listed in Table 1. Using the above-discussed algorithm, the words are split into two disjoint subgraphs (Germanic family and Romance family); therefore in Figure 3 we have two subsets of cognates: as, asche, aschen, ash, aska, aske, jiske (blue) and cender, cendra, cendre, cenere, ceniza, cenusa, cinza (red). Each line indicates a direct cognacy link, that is, the NLD distance of the pair of words is below the cognacy threshold D_T = 0.5, which results in an undirected edge between the two terms. Figure 3 shows the subgraphs resulting from the application of the automated cognacy procedure, and, in particular, it is important to stress that words belonging to the same subgraph are considered cognates but they do not necessarily have a direct link: For example, aschen (Luxembourgish) and jiske (Frisian) have NLD = 5/6 ≈ 0.83 > D_T, and cenere (Italian) and cinza (Portuguese) have NLD = 4/6 ≈ 0.67 > D_T. However, their cognacy relation is assured by the fact that they belong to the same subgraph, given the presence of the intermediate words asche (German) and aske (Danish) in the Germanic family, and cendra (Catalan) and ceniza (Spanish) in the Romance family. This small example clearly highlights the peculiarities of our algorithm: It is fast and it has a single parameter D_T, but it needs a large representation of languages. If we did not know the existence of ceniza (Spanish) we would have lost the cognate connection between cinza (Portuguese) and the rest of the Romance subset. This is the most important reason to use the Malagasy dataset, which fully covers all varieties.

Figure 3: Example of cognacy subgraphs, one for the Germanic family (on the left, in blue) and one for the Romance family (on the right, in red), associated with the concept ash (D_T = 0.5).
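A minimal sketch of the whole detection step, reusing the nld() function from the sketch above and grouping words into connected components with a small union-find structure (the word list reproduces the ash example; the exact grouping of course depends on the spellings used):

```python
from itertools import combinations

def cognate_subsets(words, threshold=0.5):
    """Split the N words of one concept into cognate subsets.

    A direct cognacy link is created when NLD < threshold (Equation (7));
    cognate subsets are the connected components of the resulting graph
    (Equation (8))."""
    parent = list(range(len(words)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in combinations(range(len(words)), 2):
        if nld(words[i], words[j]) < threshold:
            parent[find(i)] = find(j)       # union

    groups = {}
    for i, w in enumerate(words):
        groups.setdefault(find(i), []).append(w)
    return list(groups.values())

ash = ["as", "asche", "aschen", "ash", "aska", "aske", "jiske",
       "cender", "cendra", "cendre", "cenere", "ceniza", "cenusa", "cinza"]
print(cognate_subsets(ash))   # two subsets, Germanic and Romance, are expected
```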
An objective method for determining the optimal threshold, depending on the language family under consideration, will now be discussed. Choosing a value for D_T, the automated algorithm returns the cognate subsets for each concept i, and the resulting NLD distribution for all non-cognate pairs, g(NLD), can be easily computed. We have previously obtained an example of the distribution g(NLD), simulating it in Figure 2 by means of f(NLD) (purple bars), the distribution of pairs of terms taken from different concepts and different languages (which are definitely not cognates). A strategy to fix the only parameter of our approach immediately follows: The optimal D_T minimizes the distance between g(NLD) and f(NLD).
In order to compare these two distributions, we need to slightly modify them. First, we go from percentage frequency to absolute frequency (by simply dividing by 100); then, we split the range of values of NLD (0 ≤ NLD ≤ 1) into a certain number of successive intervals labeled by k, computing the cumulative absolute frequencies {g_k} and {f_k} for both distributions in all intervals. The interval width is not fixed, but has been made variable so that an adequate number of distances between non-cognate pairs falls in each interval; in other words, we always have values f_k that are not too small to carry out a statistically significant computation. Just to fix the idea, in the first half 0 ≤ NLD < 0.5 we have few large intervals, while in the second half 0.5 ≤ NLD ≤ 1 the intervals are much more dense and smaller. Our criterion is that f_k reaches at least the minimal value of 0.5% in each interval (f_k ≥ 0.005 ∀k, since the {f_k} are absolute frequencies).
The distance between the distribution {f_k} (different languages, different concepts, forcibly non-cognate pairs resulting from the data collected in the field) and the distribution {g_k} (different languages, same concepts, non-cognate pairs resulting from our algorithm) can be measured with the Le Cam distance (Le Cam 1986), a normalized statistical distance derived from the symmetrized version of the Pearson χ² divergence, defined as:

χ_LC = (1/2) Σ_k (f_k − g_k)² / (f_k + g_k)    (9)
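In code, equation (9) amounts to a short sum; scanning a grid of thresholds and keeping the D_T that minimizes it reproduces the selection procedure described below (a sketch, with the binning assumed to be done already):

```python
def le_cam_distance(f, g):
    """Le Cam distance between two binned absolute-frequency distributions
    {f_k} and {g_k}, Equation (9)."""
    return 0.5 * sum((fk - gk) ** 2 / (fk + gk)
                     for fk, gk in zip(f, g) if fk + gk > 0)
```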
The results for the Malagasy database can be appreciated in Figure 4, where the χ_LC distance is plotted as the threshold D_T varies. As was already possible to see from Figure 2, the best reconstruction of the non-cognate NLD distribution is obtained by automated cognate detection around the value 0.5 for the threshold D_T. Indeed, the numerical data show that the objective choice is precisely D_T = 0.5. Finally, the minimum of the Le Cam distance, χ_LC ≈ 0.18, means a good agreement between the real non-cognate NLD distribution and our equivalent automated reconstruction.
In conclusion, the unique parameter D_T is objectively chosen to be 0.5. Using this value, for each concept we can identify all cognate pairs and measure their NLD distances, as we can do with the complementary set of non-cognate pairs. These distances are the ingredients of all the analyses and results in the next sections. Finally, we would like to mention the work of Rama et al. (2018), where the threshold is automatically inferred for each meaning by a graph approach.
Madagascar: Analysis of the Results
The application of our automated cognate detection algorithm to the Malagasy dataset with the optimal threshold D_T = 0.5 proves to be a source of considerable non-trivial information.
As a first test, we have plotted the NLD distribution of all cognate pairs, which is the counterpart of the non-cognate NLD distribution of Figure 2 (main figure). The result can be found in Figure 5, where the purple bars show that a non-negligible portion of cognates (about 19%) have NLD equal to or greater than the threshold. This means that the algorithm provides meaningful connections within the set of words associated with the same concept, revealing non-trivial cognate pairs. Notably, some of the cognate terms are totally different (NLD=1), which is the same peculiar result we have seen for the cognate words leche and γάλα.
Because all varieties of our dataset are also identified by the geodesic coordinates of the town or village where the list was collected, we have the opportunity to put the cognate subgraphs in a geographical context, and this procedure can provide significant information. We are able, in fact, to detect a significant relation between geographical and linguistic proximity. This is a consequence of the osmosis among the vocabulary of populations that live nearby. This phenomenon has been studied and quantified in Serva et al. (2017) and, in perspective, it can be used to eventually detect migration events that have occurred in the past history of Madagascar.
Some clarifying examples can be found in Figure 6, where the resulting subsets of cognates of four concepts ( fish, guts, tree, and woman) are represented: Each color corresponds to a different subset of cognates and two towns/villages are joined only if the corresponding terms have a NLD below the threshold D T (direct cognacy link). Checking the figure carefully, it can be noted that some dots are not directly connected to all other dots of the same color since the NLD distance between the corresponding words is above the threshold, despite belonging to the same cognate subgraph.
The phenomenology shown by the figures is further evidence of the quality of our automated cognate detection technique; in fact, divisions into linguistic subsets for words have a clear geographical equivalent for the corresponding locations on the map. This can be visually perceived for many concepts; a more accurate quantitative test of this correspondence could be a topic for future research.
Quantitative linguistic analysis concerning Malagasy dialects is rare in the literature and limited to a small number of varieties and a few words. Our database is by far the most reliable resource of knowledge about the Malagasy language from the point of view of lexicostatistics. We hope that the sharing of our free dataset arouses the interest of linguistic experts in this kind of investigation, so as to compare our automated detection tool with detection performed by experts with traditional techniques. In the meanwhile, the best we can do is to compare our approach to the LexStat-Infomap algorithm (List, Greenhill, and Gray 2017; Rama et al. 2018), one of the most popular and efficient tools for the task of cognate detection, setting its threshold to 0.55, as reported in the above-quoted papers. To evaluate the similarity of the two groups of cognate subsets we compute the B-cubed F-score (Amigó 2009); this turns out to be 0.94, indicating a high degree of agreement between the two procedures. The comparison between our algorithm and the LexStat-Infomap method will be further explored in the last section, devoted to phylogenetic reconstruction.
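For reference, this is a minimal sketch of the standard B-cubed F-score for two clusterings of the words of one concept; how the scores are aggregated over concepts is our assumption, not a detail taken from the paper.

```python
def b_cubed_f(pred, gold):
    """B-cubed F-score between two clusterings given as lists of cluster labels,
    one label per item (same item order in `pred` and `gold`)."""
    n = len(pred)
    precisions, recalls = [], []
    for i in range(n):
        pred_cluster = {j for j in range(n) if pred[j] == pred[i]}
        gold_cluster = {j for j in range(n) if gold[j] == gold[i]}
        overlap = len(pred_cluster & gold_cluster)
        precisions.append(overlap / len(pred_cluster))
        recalls.append(overlap / len(gold_cluster))
    p, r = sum(precisions) / n, sum(recalls) / n
    return 2 * p * r / (p + r)
```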
The Random Process of Gradual Lexical Modifications
As we have already mentioned, Swadesh and subsequent scholars have always exclusively used the lexical replacement process to determine the composition of language families, their temporal depth, and the moments of internal separations. The gradual modification of cognate terms has never been considered as a possible source of information for language evolution.
Figure 5: Percentage frequency NLD distribution for cognate pairs according to our automated detection procedure. The red and purple bars distinguish pairs below and over the threshold D_T = 0.5. The total amount of percentage frequency over the threshold is about 19%.

Figure 6: Result of the automated cognate detection for the fish, guts, tree, woman concepts of the Malagasy database with threshold D_T = 0.5. Each color identifies a different subset of cognates (terms are in the legend). The dots on the map geographically locate where a word of that subset has been collected, while the lines indicate pairs of words whose NLD is less than the direct cognacy threshold D_T.

Let's consider the typical situation of two languages that begin to differentiate due to an elementary cause, physical distance, as for example has happened since the 9th century to the Old Norse spoken in Scandinavia and the Old Norse of the first Scandinavian settlers of Iceland. The classic approach of glottochronology is to identify those terms of the common original language that along the following centuries were replaced in Norway or in Iceland. The timing of a replacement is not a priori predictable; every year there is a very small probability that it takes place for a given concept. Therefore, the random replacements are well described by a stochastic process (specifically, a Poisson process).
Lexical replacement can be considered an event of a certain relevance; but minor changes, which we call gradual lexical modifications and which frequently occur in the lexicon (modification of a vowel, truncations or final additions, etc.), also have a central role, as we will show later in this article. Modifications may be due to different causes, such as vowel reduction, consonant and vowel shift, morphological truncation, consonant lenition, and other causes that are not always identifiable. This phenomenon is the counterpart of gradual random genomic modification in biology, and it may happen simply because the transfer of vocabulary from one generation to the next is necessarily imperfect.
The term associated with a concept, initially identical for the two populations, can, over the centuries, be altered by little changes either in Scandinavia or in Iceland (or both, almost surely in a different way!): A linguist has almost no difficulty in recognizing the close relationship of the two new words and classifying them as cognates. Even these small changes can be considered a stochastic process, which it is reasonable to assume occurs at a constant rate: The effects are smaller, but they occur at a faster rate than lexical replacements.
In conclusion, we have shown that from the statistical analysis of the differences between cognates it is reasonable to expect qualitatively similar information to that obtained from the analysis of lexical replacements. But there is an important difference: in lexical replacement, the change takes place (if it occurs) in a definitive way, and two words either remain cognates or not. In this context, the only reasonable measure to assign to their distance is the dichotomic one: 0 for cognate words, 1 otherwise. Conversely, in gradual lexical modifications between cognates, the changes can repeat and add up, and a quantitative measure can be introduced to quantify the degree of similarity: in this case, NLD is the right measure to use. Our idea is first to compare the two different lexical distances (dichotomic distance and NLD) for the two different random processes (respectively, abrupt replacements and gradual modifications), and then to use both of them in order to get more information than that furnished by the lexical replacement distance alone.
The lexical replacement distance $D^R_{\alpha,\beta}$ between two languages α and β is simply the ratio between the number of non-cognate terms and the total number of concepts. In fact, for each concept i the distance of the pair $W_{\alpha,i}$ and $W_{\beta,i}$ is fixed to 0 if they are cognates and 1 otherwise; $D^R_{\alpha,\beta}$ follows immediately by averaging over all the M concepts. Recalling that $C^i_{\alpha,\beta}$ carries the information about the cognacy of two words (see Equation (8)), the above dichotomic distance is $(1 - C^i_{\alpha,\beta})$, which implies

$$D^R_{\alpha,\beta} = \frac{1}{M} \sum_{i=1}^{M} \left( 1 - C^i_{\alpha,\beta} \right) = 1 - \frac{M_{\alpha,\beta}}{M} \qquad (10)$$

where

$$M_{\alpha,\beta} = \sum_{i=1}^{M} C^i_{\alpha,\beta} \qquad (11)$$

is the number of concepts for which the terms of the languages α and β are cognates ($0 \le M_{\alpha,\beta} \le M$). Equation (10) is the traditional lexical distance between languages used in glottochronology since Swadesh's works in the 1950s. The distance $D^M_{\alpha,\beta}$ associated with the random process of gradual lexical modifications, which uses NLD when words are cognates, reads as follows:

$$D^M_{\alpha,\beta} = \frac{1}{M_{\alpha,\beta}} \sum_{i=1}^{M} C^i_{\alpha,\beta} \cdot \mathrm{NLD}(W_{\alpha,i}, W_{\beta,i}) \qquad (12)$$

Let us stress that the presence of $C^i_{\alpha,\beta}$, both explicit as a multiplicative factor of NLD and implicit in the definition of $M_{\alpha,\beta}$, has the effect of restricting the average to those concepts for which the two languages have cognate terms, while lexical replacement is not considered at all. This choice is consistent when considering only gradual lexical modifications, acting within a subset of cognates.
Two different language distances, given by two different mechanisms (lexical replacements and gradual lexical modifications), have been introduced, and a test of their mutual consistency is required. In Figure 7 the distance $D^M_{\alpha,\beta}$ is shown as a function of the distance $D^R_{\alpha,\beta}$ for all the pairs of languages. At a glance, the data reveal a good proportionality between the two distances, well confirmed by the statistical analysis (correlation 0.73). This is a very interesting result: although lexical modification and lexical replacement are very different mechanisms of linguistic evolution, they both contribute, in a well-correlated way, to the modification of a language. Therefore we can safely affirm that the random process of gradual modification of the language is a real phenomenon and that it is reasonable to expect that a cladistic reconstruction of the evolution of Malagasy from the $D^M_{\alpha,\beta}$ distance will also provide reasonable results. Because our hypothesis is that the two stochastic processes of gradual lexical modifications and abrupt lexical replacements occur in parallel, it is natural to introduce a new overall measure that merges the two previously introduced ones. Let's take a step back to the definition of the distance between two words, $W_{\alpha,i}$ and $W_{\beta,i}$, related to the same concept i. If these terms are cognates ($C^i_{\alpha,\beta} = 1$), then the best definition of distance is clearly the NLD, because it is gradual and sensitive to small variations; if they are not ($C^i_{\alpha,\beta} = 0$), there is no relationship between the two words, so it is natural to assign the maximum possible distance, that is, 1.

In other words, a new measure of similarity between words can be introduced by merging $D^M_{\alpha,\beta}$ and $D^R_{\alpha,\beta}$, choosing the NLD for the cognates and the dichotomic distance otherwise. All that remains is to average over all the concepts to define a new distance between languages, $D^{MR}_{\alpha,\beta}$. In symbols:

$$D^{MR}_{\alpha,\beta} = \frac{1}{M} \sum_{i=1}^{M} \left[ C^i_{\alpha,\beta} \cdot \mathrm{NLD}(W_{\alpha,i}, W_{\beta,i}) + \left( 1 - C^i_{\alpha,\beta} \right) \right] \qquad (13)$$

Comparing this last distance with the initial $D^{NLD}_{\alpha,\beta}$ in (5), the improvement obtained is evident. When two words are cognates ($C^i_{\alpha,\beta} = 1$), the element in the sum in Equation (5) coincides with the corresponding element in (13); but when they are not, $D^{NLD}_{\alpha,\beta}$ keeps using NLD, causing a loss of information: in that case we are handling two unrelated words, and their typical 0.5 < NLD < 1 value (see Figure 2) is somewhat incidental, due to spurious lexical coincidences, while logically it has to be 1 (as it is according to $D^{MR}_{\alpha,\beta}$).
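To make Equations (10)-(13) concrete, the sketch below computes the three language distances from two aligned wordlists and a per-concept cognacy flag. The edit-distance routine, the function names, and the toy word forms are our own additions for illustration; they are not the authors' scripts.

```python
def levenshtein(a, b):
    """Plain edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def nld(a, b):
    """Normalized Levenshtein distance: edit distance over the longer length."""
    return levenshtein(a, b) / max(len(a), len(b))

def language_distances(words_a, words_b, cognate):
    """words_a, words_b: words of two languages for the same M concepts.
    cognate: list of 0/1 flags C^i (1 when the pair is cognate).
    Returns (D^R, D^M, D^MR) following Equations (10), (12), (13)."""
    M = len(words_a)
    M_ab = sum(cognate)                               # number of cognate pairs
    d_r = 1 - M_ab / M                                # Equation (10)
    d_m = (sum(c * nld(wa, wb)                        # Equation (12)
               for wa, wb, c in zip(words_a, words_b, cognate)) / M_ab
           if M_ab else 1.0)                          # guard: no cognates at all
    d_mr = sum(c * nld(wa, wb) + (1 - c)              # Equation (13)
               for wa, wb, c in zip(words_a, words_b, cognate)) / M
    return d_r, d_m, d_mr

# Toy example with three concepts; the word forms are invented.
wa = ["rano", "lalana", "vato"]
wb = ["ranu", "lalam", "fasika"]
c  = [1, 1, 0]                  # the third pair counts as a replacement
print(language_distances(wa, wb, c))
```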
Figure 7
Language distance $D^M_{\alpha,\beta}$ as a function of the language distance $D^R_{\alpha,\beta}$ for all the N(N − 1)/2 = 1,770 pairs of dialects of the Malagasy database (cognacy threshold D_T = 0.5). The data exhibit a very significant correlation of 0.73, confirming the working hypothesis.
The Definitive Test: The Comparison on Cladistics
Every linguistic model, every hypothesis on the temporal evolution of languages, must eventually be verified through a realistic application: cladistics, or phylogenetic reconstruction. Up to now we have examined the language distances associated with the processes of lexical replacements ($D^R_{\alpha,\beta}$) and gradual lexical modifications ($D^M_{\alpha,\beta}$), while for the combination of the two ($D^{MR}_{\alpha,\beta}$) and for the NLD language distance ($D^{NLD}_{\alpha,\beta}$) we have so far only given the definitions.
However, regardless of the chosen distance, the distance values for each language pair (α, β) have so far been treated individually, and no comparative study of the various languages has been made, which is exactly what cladistics does. Cladistics examines the family of N(N − 1)/2 distances $D^X = \{D^X_{\alpha,\beta}\}_{1 \le \alpha < \beta \le N}$ as a whole (where X stands for R, M, MR, or NLD), checking whether there is an actual overall coherence, a structural relationship between different language families. The test is significant because the phylogenetic reconstruction must be consistent with other information available about the populations involved (from history, geography, anthropology, etc.).
In order to perform this analysis, the four families of lexical distances D R , D M , D MR , and D NLD , are first transformed into their equivalent genealogical distances T R , T M , T MR , and T NLD , as in Equation (6). We have chosen to calculate Unweighted Pair Group Method Average (UPGMA) trees, which are best suited for temporal analysis. The cladograms are drawn in Figure 8, where each branch (each Swadesh list of the Malagasy database) is identified by the ethnicity and the location (in parentheses) where the list was collected (the geographical positions can be found in Figure 1).
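As an aside for readers who want to reproduce this step, UPGMA corresponds to the "average" linkage of standard agglomerative clustering. A minimal sketch with SciPy is shown below; the glue code, the variety names, and the toy distance matrix are our own, not the authors' scripts.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, to_tree

# Toy symmetric matrix of genealogical distances between four varieties.
labels = ["variety_A", "variety_B", "variety_C", "variety_D"]
T = np.array([[0.00, 0.10, 0.40, 0.42],
              [0.10, 0.00, 0.38, 0.40],
              [0.40, 0.38, 0.00, 0.12],
              [0.42, 0.40, 0.12, 0.00]])

# UPGMA = average linkage on the condensed form of the distance matrix.
Z = linkage(squareform(T), method="average")
root = to_tree(Z)
print(Z)           # merge history of the clustering
print(root.dist)   # depth of the root node of the resulting tree
```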
The NLD case is used as a term of comparison; the associated tree was already published in Serva and Pasquini (2020) and the colors of the branches are maintained here (the same as in the map in Figure 1). The confidence we place in the quality of the NLD tree is confirmed by its excellent agreement with other external information: the main divisions exactly correspond to the geographical locations (red in the north, green in the center and east, blue in the west and south-west, yellow in the south); the ethnic groups are all preserved and even minor details are correct, such as, for instance, the partially secluded position of the Mikea (Ampoakafo) leaf, which corresponds to the most isolated hunter-gatherer population of Madagascar; the total depth of the tree fixes the root around 650 CE (borrowing for the parameter τ the value for the Romance family; see Serva and Pasquini [2020] for details), in remarkable agreement with estimates from genetics and archeology. However, there is a minor difference between the T^NLD cladogram of Figure 8 and the analogue published in Serva and Pasquini (2020): the Sakalava (Mahajanga) branch fits a little differently. This is due to the few corrections introduced in the database since the publication of Serva and Pasquini (2020). Although Sakalava (Mahajanga) now seems more linked to the northern group, it continues to maintain an intermediate position between the red and green blocks (not surprisingly it was classified with an exclusive color, orange, as Mahajanga is historically a town of maritime trade, inhabited for a long time by different ethnic groups, many of whom arrived from the hinterland of the island).
Returning to our main hypothesis, that the language separation process consists of two distinct independent random mechanisms (gradual lexical modifications and abrupt lexical replacements), let us take a look at T R and T M trees, keeping the T NLD cladogram in mind.
At a glance, it is clear that both reconstructions substantially maintain the main geographical structure and almost always correctly bring together lists belonging to the same ethnic group. The most significant result is that the cladogram based on gradual lexical modifications, T M , turns out to be surprisingly accurate and this is clear evidence that the stochastic process of gradual lexical modifications within cognates is an essential element in language evolution. Conversely, the T R -based UPGMA tree, while maintaining a substantial internal coherence, shows more imperfections. Let us remember that the T R tree relies on lexical replacements, which is the only random process taken into account by Swadesh and subsequent scholars. Surprisingly, the neglected (until now) stochastic process of gradual lexical modifications provides better results.
In summary, cladistic analysis confirms the dual nature of the language evolution process proposed here. Combining both random processes in the T^MR tree, the cladogram turns out to be almost identical to the T^NLD one, made only of NLD distances. This is not that surprising: when words are cognates, both use the NLD distance; when they are not, the combined measure shifts to 1 while the NLD distance gives a value between 0.5 and 1, as in Figure 2. Nevertheless, the relevant point is not that combining the information of lexical replacements and gradual lexical modifications better describes the cladistics of varieties, but that both random processes can be separately and successfully used to build good quality trees. Incidentally, this is rather an a posteriori explanation of why the NLD-based language distance gives such good results: it reads the random process involving cognate words well and loses only a small amount of information concerning lexical replacements. In our opinion, this last point is particularly relevant. Comparisons are often reported in the literature in which the NLD distance is not always competitive compared to other types of lexical metrics. The reason can be found here: NLD has a high sensitivity to small changes, and is therefore the perfect tool when words are cognates (lexical modifications); but when they are not (lexical replacements), its sensitivity picks up spurious coincidences that alter the result. If the cognacy relation is known, the perfect distance is D^MR.
Finally, we use again the results of the LexStat-Infomap algorithm (see Section 4) to build the equivalent of our T^R cladogram (see the upper left corner of Figure 8). A direct comparison can be appreciated in Figure 9, where it is evident that the two phylogenetic reconstructions are close to each other, confirming the reliability of our approach. However, both contain a few wrong placements which, on the contrary, are correct in both the T^MR and T^NLD trees.
Figure 9
UPGMA trees of the Malagasy database reconstructed from genealogical distances T^R, inferred both with our algorithm (left, the same as the one in the upper left corner of Figure 8) and with the LexStat-Infomap method (right). The two cladograms are very similar, showing only minor imperfections. Both trees are less accurate than the two generated by the NLD and MR distances (bottom panels of Figure 8).
This observation can be quantified by means of the generalized quartet distance (GQD) (Pompei, Loreto, and Tria 2011) between pairs of trees as follows (LSI = LexStat-Infomap): GQD(T^M, LSI) = GQD(T^MR, LSI) = GQD(T^NLD, LSI) = 0.11, and GQD(T^R, LSI) = 0.45.
Conclusions
The characterization of the stochastic process of gradual lexical modifications is the main innovation introduced in this article. We have seen that this random process is able to give significant information about a family of languages, for example by providing a phylogenetic reconstruction at least as accurate as those à la Swadesh obtained with the classic tools of glottochronology. The reason for this good performance is simple to understand: given the lexicon of two languages from the same family, it is easy to find pairs of very different words for the same concept (probably not cognate words), but also a comparable or greater number of pairs that partially look alike (probable cognate candidates). In other terms, gradual lexical modifications usually provide larger statistics than lexical replacements, and the increased accuracy in phylogenetic reconstruction is an obvious consequence.
It is worth noting that lexical modifications can be successfully described using NLD distance, a sensitive tool for small changes, while lexical replacements are better evaluated by a dichotomic distance 0/1. We have therefore shown how the right combination of these two metrics gives the appropriate distance for evaluating language similarity.
An appropriate tool for cognate detection is essential to distinguish the effects of lexical replacement, which modifies the lexicon of a language by separating words referring to the same concept into different subsets of cognates, from the effects of lexical modification, which continues to modify cognates within their subsets. We have thus introduced an automated procedure inspired by graph theory for this task, an extremely fast algorithm with a single easy-to-quantify parameter that returns cognate subsets very close to those of the LexStat-Infomap method in the case of the Malagasy dataset. The necessary condition for such good results is to have a very rich database, with a large representation of languages that belong to the family under investigation. At the moment this is still a limitation, but in the last few years the rapid diffusion of digital resources has led to a growing amount of collected data, and we expect that good and large datasets suitable for our approach will become more and more available in the near future.
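To give an idea of how light such a graph-based procedure can be, the sketch below groups the words collected for one concept into cognate subsets by linking every pair whose NLD falls below the threshold and taking connected components. This is our own simplified reading of the idea, not a transcription of the authors' code (whose exact graph construction may differ), and it reuses the nld helper from the earlier sketch; the word forms are invented.

```python
def cognate_subsets(words, threshold=0.5):
    """Partition the words given for one concept into cognate subsets:
    connect two words when nld(w1, w2) < threshold, then return the
    connected components of the resulting graph (via union-find)."""
    n = len(words)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if nld(words[i], words[j]) < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i, w in enumerate(words):
        groups.setdefault(find(i), []).append(w)
    return list(groups.values())

# Invented forms for a single concept: two plausible cognate subsets.
print(cognate_subsets(["rano", "ranu", "drano", "tsiranoka", "tsiranuka"]))
```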
The separation into inferred subsets of cognates shows a non-trivial linguistic structure for many concepts of the Malagasy dataset, well supported by a coherent geographical distribution of the corresponding localities. In other cases, the nature of a dialect family reveals itself in a single predominant subset. Actually, this leads us to believe that our automated cognate detection can be a very versatile tool, and it could be used to find a new, objective way to answer the old question of whether a pair of varieties are two separate languages or two dialects of the same language. In the first case, the abrupt replacement process is the principal explanation of the lexical differences, while in the second it should be the gradual modification process.
These promising results deserve to be tested and confirmed in a different, articulated context, such as a wide family of languages, much more open to external influences, with much more differentiation and where, additionally, loanwords play a non-negligible role. We think that such an analysis could be the right development for future research.
Supplementary Material
• Dataset Malagasy Swadesh lists version 1.1 -October 2021: The complete dataset of 207-item Swadesh lists for 60 Malagasy variants in text format. Version 1.0 -November 2019 has already been published (Serva and Pasquini 2020).
• A Python version of our code that requires the LingPy 2.6.9 package (https://lingpy.org/), complete with dataset Malagasy Swadesh list 1.1 in cldf format.
In addition, code and data are available for download via GitHub at https://github.com/michelepasquini/LexMod LexRepl.
Figure 1
The map of Madagascar with the names of the towns/villages where each Swadesh list was collected. The names of the ethnicities are missing. Colors correspond to the classification proposed in Serva and Pasquini (2020). In both Toliara (south-west coast) and Morondava (west coast), two different dialects coexist, but in Morondava they are close to each other (both are blue), whereas in Toliara they are distant (one is yellow and unreported, the other is blue).
Swadesh lists version 1.1 -October 2021; see Supplementary Material section), covering the entire island with all ethnic groups and all major inhabited centers, consisting of 60 Swadesh lists of 207 terms, entirely collected by one of us (MS) in a two-year span (2018-2019).
Figure 4
χ_LC distance between the NLD distributions of forcefully non-cognate pairs {f_k} and automated non-cognate pairs {g_k}, as a function of the threshold D_T. The range 0.45 ≤ D_T ≤ 0.55 is almost flat, but numerical data show that the minimum is reached at D_T = 0.5.
Figure 8
UPGMA trees of Malagasy from the genealogical distances T^R, T^M, T^MR, and T^NLD, respectively related to lexical replacements, gradual lexical modifications, the combination of both stochastic processes, and pure NLD distances. Both the T^R and T^M trees give substantially correct phylogenetic reconstructions; only a few misplacements can be seen in both of them. It should be noted that these two trees are generated by the effect of completely separated random phenomena. The remarkable similarity between T^MR and T^NLD is expected due to the closeness of their definitions.
Table 1
Concept ash expressed in some Germanic and Romance languages commonly spoken in western Europe.

Language   Word     Language        Word
Catalan    cendra   Italian         cenere
Danish     aske     Ladin           cender
Dutch      as       Luxembourgish   aschen
English    ash      Portuguese      cinza
French     cendre   Romanian        cenusa
Frisian    jiske    Spanish         ceniza
German     asche    Swedish         aska
AcknowledgmentsMichele Pasquini acknowledges the financial support from CNR, Istituto per le Applicazioni del Calcolo "Mauro Picone," Rome, with grant "Studio di linguistica quantitativa e lessicostatistica." Davide Vergni acknowledges the financial support from CNR project DIT.AD021.161.001 "Analisi probabilistica di dataset biologici e network dynamics."
Borneo as a cross-roads for comparative Austronesian linguistics. K Adelaar, Alexander, The Austronesians in History. Australian National University. Peter Bellwood, James Fox, and Darrell TryonANU E PressAdelaar, K. Alexander. 1995. Borneo as a cross-roads for comparative Austronesian linguistics. In Peter Bellwood, James Fox, and Darrell Tryon, editors, The Austronesians in History. Australian National University, ANU E Press, pages 75-95.
The Indonesian migrations to Madagascar: Making sense of the multidisciplinary evidence. K Adelaar, Alexander, Austronesian Diaspora and the Ethnogenesis of People in Indonesian Archipelago. Truman Simanjuntak, Ingrid H. E. Pojoh, and Muhammad HisyamJakartaLipi PressAdelaar, K. Alexander. 2006. The Indonesian migrations to Madagascar: Making sense of the multidisciplinary evidence. In Truman Simanjuntak, Ingrid H. E. Pojoh, and Muhammad Hisyam, editors, Austronesian Diaspora and the Ethnogenesis of People in Indonesian Archipelago. Lipi Press, Jakarta, pages 205-232.
K Adelaar, Alexander, 10.1353/ol.2012.0003Malagasy phonological history and Bantu influence. Oceanic Linguistics. 51Adelaar, K. Alexander. 2012. Malagasy phonological history and Bantu influence. Oceanic Linguistics, 51:123-159. https:// doi.org/10.1353/ol.2012.0003
A comparison of extrinsic clustering evaluation metrics based on formal constraints. Enrique Amigó, Julio Gonzalo, Javier Artiles, Felisa Verdejo, 10.1007/s10791-008-9066-8Information Retrieval. 124Amigó, Enrique, Julio Gonzalo, Javier Artiles, and Felisa Verdejo. 2009. A comparison of extrinsic clustering evaluation metrics based on formal constraints. Information Retrieval, 12(4):461-486. https://doi.org/10 .1007/s10791-008-9066-8
Bakker, Dik, André Müller, Viveka Velupillai, Søren Wichmann, Cecil H. Brown, Pamela Brown, Dmitry Egorov, Robert Mailhammer, Anthony Grant, and Eric W. Holman. 2009. Adding typology to lexicostatistics: A combined approach to language classification. Linguistic Typology, 13:167-179. https://doi.org/10.1515/LITY.2009.009
Philippe Beaujard, Les arrivées Austronésiennesà Madagascar: Vagues ou continuum?Études Océan Indien. Beaujard, Philippe. 2003. Les arrivées Austronésiennesà Madagascar: Vagues ou continuum?Études Océan Indien, 35-36:59-147.
New palaeozoogeographical evidence for the settlement of Madagascar. Roger Blench, Marsh, 10.1080/00672700709480451Azania: Archaeological Research in Africa. 42Blench, Roger Marsh. 2007. New palaeozoogeographical evidence for the settlement of Madagascar. Azania: Archaeological Research in Africa, 42:69-82. https://doi.org/10.1080 /00672700709480451
The Austronesians in Madagascar and their interaction with the Bantu of the East African Coast: Surveying the linguistic evidence for domestic and translocated animals. Roger Blench, Marsh, Studies in Philippine Languages and Cultures. 18Blench, Roger Marsh. 2008. The Austronesians in Madagascar and their interaction with the Bantu of the East African Coast: Surveying the linguistic evidence for domestic and translocated animals. Studies in Philippine Languages and Cultures, 18:18-43.
Faunal names in Malagasy: Their etymologies and implications for the prehistory of the East African Coast. Roger Blench, Martin Marsh, Walsh, Eleventh International Conference on Austronesian Linguistics (11 ICAL). 31Blench, Roger Marsh and Martin Walsh. 2009. Faunal names in Malagasy: Their etymologies and implications for the prehistory of the East African Coast. In Eleventh International Conference on Austronesian Linguistics (11 ICAL), 31 pages.
Automatic detection of cognates using orthographic alignment. Alina Ciobanu, Liviu P Maria, Dinu, 10.3115/v1/P14-2017Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational Linguistics2Ciobanu, Alina Maria and Liviu P. Dinu. 2014a. Automatic detection of cognates using orthographic alignment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 99-105. https://doi .org/10.3115/v1/P14-2017
An etymological approach to cross-language orthographic similarity. Application on Romanian. Alina Ciobanu, Liviu P Maria, Dinu, 10.3115/v1/D14-1112Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Ciobanu, Alina Maria and Liviu P. Dinu. 2014b. An etymological approach to cross-language orthographic similarity. Application on Romanian. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1047-1058. https://doi.org/10 .3115/v1/D14-1112
Automatic discrimination between cognates and borrowings. Alina Ciobanu, Liviu P Maria, Dinu, 10.3115/v1/P15-2071Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingShort PapersCiobanu, Alina Maria and Liviu P. Dinu. 2015. Automatic discrimination between cognates and borrowings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 431-437. https://doi.org/10 .3115/v1/P15-2071
Simulating language evolution: A tool for historical linguistics. Alina Ciobanu, Liviu P Maria, Dinu, Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations. the 27th International Conference on Computational Linguistics: System DemonstrationsAssociation for Computational LinguisticsCiobanu, Alina Maria and Liviu P. Dinu. 2018. Simulating language evolution: A tool for historical linguistics. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, Association for Computational Linguistics, pages 68-72.
An algorithm to align words for historical comparison. Michael A Covington, Computational Linguistics. 224Covington, Michael A. 1996. An algorithm to align words for historical comparison. Computational Linguistics, 22(4):481-496.
Dahl, Otto Christian. 1938. Le systeme phonologique du proto-malgache. Norsk Tidsskrift for Sprogvidenskap, 10:189-235.
Dahl, Otto Christian. 1951. Malgache et Maanjan: Une Comparaison Linguistique. Egede Instituttet (Arne Gimnes Forlag), Oslo.
Le substrat Bantou en Malgache. Otto Dahl, Christian, Norsk Tidsskrift for Sprogvidenskap. 17Dahl, Otto Christian. 1954. Le substrat Bantou en Malgache. Norsk Tidsskrift for Sprogvidenskap, 17:325-362.
Apersus pour une dialectologie de langue malgache. Jacques Dez, Bulletin de Madagascar. Dez, Jacques. 1963. Apersus pour une dialectologie de langue malgache. Bulletin de Madagascar, 204, 205, 206, 210.
Sur lesîles du Grand Océan. Jules D'urville, Dumont, Bulletin de la Société de Góegraphie. 17D'Urville, Jules Dumont. 1832. Sur lesîles du Grand Océan. Bulletin de la Société de Góegraphie, 17:1-21.
Language divergence and estimated word retention rate. Language. Isidore Dyen, A T James, J W L Cole, 10.2307/41139043Dyen, Isidore, A. T. James, and J. W. L. Cole. 1967. Language divergence and estimated word retention rate. Language, 43:150-171. https://doi.org/10.2307/411390
Review of Otto Dahl. Isidore Dyen, 10.2307/409983Malgache et Maanjan: Une comparaison linguistique. Language. 29Dyen, Isidore. 1953. Review of Otto Dahl, Malgache et Maanjan: Une comparaison linguistique. Language, 29(4):577-590. https://doi.org/10.2307/409983
Sheila M Embleton, Statistics in Historical Linguistics. Bochum30Embleton, Sheila M. 1986. Statistics in Historical Linguistics, volume 30. Studienverlag Brockmeyer, Bochum.
Clustering semantically equivalent words into cognate sets in multilingual lists. Bradley Hauer, Grzegorz Kondrak, Proceedings of the 5th International Joint Conference on Natural Language Processing. the 5th International Joint Conference on Natural Language ProcessingHauer, Bradley and Grzegorz Kondrak. 2011. Clustering semantically equivalent words into cognate sets in multilingual lists. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 865-873.
The Barito Isolects of Borneo: A Classification Based on Comparative Reconstruction and Lexicostatistics. Alfred B Hudson, Ithaca, New YorkCornell UniversityHudson, Alfred B. 1967. The Barito Isolects of Borneo: A Classification Based on Comparative Reconstruction and Lexicostatistics. Cornell University, Ithaca, New York.
Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. Gerhard Jäger, Johann-Mattis List, Pavel Sofroniev, 10.18653/v1/E17-1113Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Long Papers). the 15th Conference of the European Chapter of the Association for Computational Linguistics (Long Papers)Jäger, Gerhard, Johann-Mattis List, and Pavel Sofroniev. 2017. Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Long Papers), pages 1204-1215. https://doi.org/10.18653/v1/E17 -1113
Asymptotic Methods in Statistical Decision Theory. Le Cam, Lucien , Springer-VerlagBerlinLe Cam, Lucien. 1986. Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, Berlin.
Binary codes capable of correcting deletions, insertions, and reversals. Vladimir I Levenshtein, Soviet Physics Doklady. 108Levenshtein, Vladimir I. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707-710.
List, Johann-Mattis. 2012. Automatic detection of cognates in multilingual wordlists. In Proceedings of the EACL 2012 Joint Workshop of Visualization of Linguistic Patterns and Uncovering Language History from Multilingual Resources, pages 117-125.
List, Johann-Mattis. 2014. Sequence Comparison in Historical Linguistics. Düsseldorf University Press, Düsseldorf.
List, Johann-Mattis, Simon J. Greenhill, and Russell D. Gray. 2017. The potential of automatic word comparison for historical linguistics. PLoS ONE, 12(1):e0170046. https://doi.org/10.1371/journal.pone.0170046, PubMed: 28129337
Using sequence similarity networks to identify partial cognates in multilingual wordlists. Johann - List, Philippe Mattis, Eric Lopez, Bapteste, 10.18653/v1/P16-2097Proceedings of the Association for Computational Linguistics. the Association for Computational Linguistics2List, Johann-Mattis, Philippe Lopez, and Eric Bapteste. 2016. Using sequence similarity networks to identify partial cognates in multilingual wordlists. Proceedings of the Association for Computational Linguistics, 2:599-605. https://doi.org/10.18653 /v1/P16-2097
Language Classification by Numbers. April Mcmahon, Robert Mcmahon, Oxford University PressMcMahon, April and Robert McMahon. 2005. Language Classification by Numbers. Oxford University Press.
Measuring dialect distance phonetically. John Nerbonne, Wilbert Heeringa, Proceedings of SIGPHON-97: 3rd Meeting of the ACL Special Interest Group in Computational Phonology. SIGPHON-97: 3rd Meeting of the ACL Special Interest Group in Computational PhonologyNerbonne, John and Wilbert Heeringa. 1997. Measuring dialect distance phonetically. In Proceedings of SIGPHON-97: 3rd Meeting of the ACL Special Interest Group in Computational Phonology, pages 11-18.
Stability of meanings versus rate of replacement of words: An experimental test. Michele Pasquini, Maurizio Serva, 10.1080/09296174.2019.1647754Journal of Quantitative Linguistics. 28Pasquini, Michele and Maurizio Serva. 2021. Stability of meanings versus rate of replacement of words: An experimental test. Journal of Quantitative Linguistics, 28:95-116. https://doi.org/10.1080 /09296174.2019.1647754
Languages distance and tree reconstruction. Filippo Petroni, Maurizio Serva, 10.1088/1742-5468/2008/08/P08012Journal of Statistical Mechanics: Theory and Experiment. 8012Petroni, Filippo and Maurizio Serva. 2008. Languages distance and tree reconstruction. Journal of Statistical Mechanics: Theory and Experiment, page P08012. https://doi.org/10.1088 /1742-5468/2008/08/P08012
Lexical evolution rates derived from automated stability measures. Filippo Petroni, Maurizio Serva, 10.1088/1742-5468/2010/03/P03015Journal of Statistical Mechanics: Theory and Experiment. 3015Petroni, Filippo and Maurizio Serva. 2010a. Lexical evolution rates derived from automated stability measures. Journal of Statistical Mechanics: Theory and Experiment, 2010:P03015. https://doi.org/10.1088 /1742-5468/2010/03/P03015
Measures of lexical distance between languages. Filippo Petroni, Maurizio Serva, 10.1016/j.physa.2010.02.004Physica A. 389Petroni, Filippo and Maurizio Serva. 2010b. Measures of lexical distance between languages. Physica A, 389:2280-2283. https://doi.org/10.1016/j.physa .2010.02.004
Automated world stability and language phylogeny. Filippo Petroni, Maurizio Serva, 10.1080/09296174.2011.533589Journal of Quantitative Linguistics. 18Petroni, Filippo and Maurizio Serva. 2011. Automated world stability and language phylogeny. Journal of Quantitative Linguistics, 18:53-62. https://doi.org /10.1080/09296174.2011.533589
On the accuracy of language trees. Simone Pompei, Vittorio Loreto, Francesca Tria, 10.1371/journal.pone.0020109PLoS ONE. 6621674034Pompei, Simone, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. PLoS ONE, 6(6):e20109. https://doi.org/10.1371/journal .pone.0020109, PubMed: 21674034
An automated framework for fast cognate detection and Bayesian phylogenetic inference in computational historical linguistics. Taraka Rama, Johann-Mattis List, 10.18653/v1/P19-162757th Annual Meeting of the Association for Computational Linguistics. Rama, Taraka and Johann-Mattis List. 2019. An automated framework for fast cognate detection and Bayesian phylogenetic inference in computational historical linguistics. In 57th Annual Meeting of the Association for Computational Linguistics, pages 6225-6235. https://doi.org/10 .18653/v1/P19-1627
Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics?. Taraka Rama, Johann-Mattis List, Johannes Wahle, Gerhard Jäger, 10.18653/v1/N18-2063Proceedings of the North American Chapter of the Association for Computational Linguistics. the North American Chapter of the Association for Computational LinguisticsRama, Taraka, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 393-400. https://doi.org/10 .18653/v1/N18-2063
The settlement of Madagascar: What dialects and languages can tell us. Maurizio Serva, 10.1371/journal.pone.0030666PLoS ONE. 7222363465Serva, Maurizio. 2012. The settlement of Madagascar: What dialects and languages can tell us. PLoS ONE, 7(2):e30666. https://doi.org/10.1371/journal .pone.0030666, PubMed: 22363465
Dialects of Madagascar. Maurizio Serva, Michele Pasquini, 10.1371/journal.pone.0240170PLoS ONE. 151033007011Serva, Maurizio and Michele Pasquini. 2020. Dialects of Madagascar. PLoS ONE, 15(10):e0240170. https://doi.org/10 .1371/journal.pone.0240170, PubMed: 33007011
Linguistic clues suggest that the Indonesian colonizers directly sailed to Madagascar. Maurizio Serva, Michele Pasquini, 10.1016/j.langsci.2022.101497Language Sciences. 93101497Serva, Maurizio and Michele Pasquini. 2022. Linguistic clues suggest that the Indonesian colonizers directly sailed to Madagascar. Language Sciences, 93:101497. https://doi.org/10.1016/j.langsci .2022.101497
Indo-European languages tree by Levenshtein distance. Maurizio Serva, Filippo Petroni, 10.1209/0295-5075/81/68005EuroPhysics Letters. 8168005Serva, Maurizio and Filippo Petroni. 2008. Indo-European languages tree by Levenshtein distance. EuroPhysics Letters, 81:68005. https://doi.org/10.1209 /0295-5075/81/68005
Malagasy dialects and the peopling of Madagascar. Maurizio Serva, Filippo Petroni, Dima Volchenkov, Søren Wichmann, 10.1098/rsif.2011.0228Journal of the Royal Society Interface. 921632612Serva, Maurizio, Filippo Petroni, Dima Volchenkov, and Søren Wichmann. 2012. Malagasy dialects and the peopling of Madagascar. Journal of the Royal Society Interface, 9:54-67. https://doi.org/10 .1098/rsif.2011.0228, PubMed: 21632612
Recovering geography from a matrix of genetic distances. Maurizio Serva, Davide Vergni, Dima Volchenkov, Aneglo Vulpiani, 10.1209/0295-5075/118/48003Europhysics Letters. 11848003Serva, Maurizio, Davide Vergni, Dima Volchenkov, and Aneglo Vulpiani. 2017. Recovering geography from a matrix of genetic distances. Europhysics Letters, 118:48003. https://doi.org/10.1209 /0295-5075/118/48003
Comparative-historical linguistics and lexicostatistics. Sergei A Starostin, 10.1515/9781474473316-019Time Depth in Historical Linguistics, v. 1. The McDonald Institute for Archaeological Research. CambridgeStarostin, Sergei A. 2000. Comparative-historical linguistics and lexicostatistics. In Time Depth in Historical Linguistics, v. 1. The McDonald Institute for Archaeological Research, Cambridge, pages 223-265. https://doi.org/10 .1515/9781474473316-019
Salish internal relationships. Morris Swadesh, 10.1086/464084International Journal of American Linguistics. 16Swadesh, Morris. 1950. Salish internal relationships. International Journal of American Linguistics, 16:157-167. https://doi.org/10.1086/464084
Diffusional cumulation and archaic residue as historical explanations. Morris Swadesh, 10.1086/soutjanth.7.1.3628647Southwestern Journal of Anthropology. 7Swadesh, Morris. 1951. Diffusional cumulation and archaic residue as historical explanations. Southwestern Journal of Anthropology, 7:1-21. https:// doi.org/10.1086/soutjanth.7.1.3628647
Lexicostatistic dating of prehistoric ethnic contacts. Morris Swadesh, Proceedings of the American Philosophical Society. 96Swadesh, Morris. 1952. Lexicostatistic dating of prehistoric ethnic contacts. Proceedings of the American Philosophical Society, 96:452-463.
Perspectives and problems of Amerindian comparative linguistics. Morris Swadesh, 10.1080/00437956.1954.1165953010Swadesh, Morris. 1954. Perspectives and problems of Amerindian comparative linguistics. Word, 10:306-332. https:// doi.org/10.1080/00437956.1954.11659530
Swadesh, Morris. 1955. Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics, 21:121-137. https://doi.org/10.1086/464321
van der Merwe, Nikolaas J. 1966. New mathematics for glottochronology. Current Anthropology, 7:485-500. https://doi.org/10.1086/200754
The glottochronology of Malagasy speech communities. Pierre Vérin, Conrad P Kottak, Peter Gorlin, 10.2307/3622902Oceanic Linguistics. 8Vérin, Pierre, Conrad P. Kottak, and Peter Gorlin. 1969. The glottochronology of Malagasy speech communities. Oceanic Linguistics, 8:26-83. https://doi.org/10 .2307/3622902 |
15,818,631 | Deep Lexical Segmentation and Syntactic Parsing in the Easy-First Dependency Framework | We explore the consequences of representing token segmentations as hierarchical structures (trees) for the task of Multiword Expression (MWE) recognition, in isolation or in combination with dependency parsing. We propose a novel representation of token segmentation as trees on tokens, resembling dependency trees. Given this new representation, we present and evaluate two different architectures to combine MWE recognition and dependency parsing in the easy-first framework: a pipeline and a joint system, both taking advantage of lexical and syntactic dimensions. We experimentally validate that MWE recognition significantly helps syntactic parsing. | [
11608038,
2647976,
815755,
2809518,
12408112,
3146611,
39375316,
2866454,
5723735,
10831519
] | Deep Lexical Segmentation and Syntactic Parsing in the Easy-First Dependency Framework
June 12-17, 2016
Matthieu Constant [email protected]
Université Paris-Est
LIGM
Champs-sur-Marne, France
Alpage
INRIA
Université Paris Diderot
Paris, France
Joseph Le Roux [email protected]
LIPN
UMR 7030
Université Paris Nord
CNRS
Villetaneuse, France
Nadi Tomeh [email protected]
LIPN
UMR 7030
Université Paris Nord
CNRS
Villetaneuse, France
Deep Lexical Segmentation and Syntactic Parsing in the Easy-First Dependency Framework
Proceedings of NAACL-HLT 2016
NAACL-HLT 2016, San Diego, California, June 12-17, 2016
We explore the consequences of representing token segmentations as hierarchical structures (trees) for the task of Multiword Expression (MWE) recognition, in isolation or in combination with dependency parsing. We propose a novel representation of token segmentation as trees on tokens, resembling dependency trees. Given this new representation, we present and evaluate two different architectures to combine MWE recognition and dependency parsing in the easy-first framework: a pipeline and a joint system, both taking advantage of lexical and syntactic dimensions. We experimentally validate that MWE recognition significantly helps syntactic parsing.
Introduction
Lexical segmentation is a crucial task for natural language understanding as it detects semantic units of texts. One of the main difficulties comes from the identification of multiword expressions [MWE] (Sag et al., 2002), which are sequences made of multiple words displaying multidimensional idiomaticity (Nunberg et al., 1994). Such expressions may exhibit syntactic freedom and varying degrees of compositionality, and many studies show the advantages of combining MWE identification with syntactic parsing (Savary et al., 2015), for both tasks (Wehrli, 2014). Indeed, MWE detection may help parsing, as it reduces the number of lexical units, and in turn parsing may help detect MWEs with syntactic freedom (syntactic variations, discontinuity, etc.).
In the dependency parsing framework, some previous work incorporated MWE annotations within syntactic trees, in the form of complex subtrees either with flat structures (Nivre and Nilsson, 2004;Eryigit et al., 2011;Seddah et al., 2013) or deeper ones (Vincze et al., 2013;Candito and Constant, 2014). However, these representations do not capture deep lexical analyses like nested MWEs. In this paper, we propose a two-dimensional representation that separates lexical and syntactic layers with two distinct dependency trees sharing the same nodes 1 . This representation facilitates the annotation of complex lexical phenomena like embedding of MWEs (e.g. I will (take a (rain check))). Given this representation, we present two easy-first dependency parsing systems: one based on a pipeline architecture and another as a joint parser.
Deep Segmentation and Dependencies
This section describes a lexical representation able to handle nested MWEs, extended from Constant and Le Roux (2015) which was limited to shallow MWEs. Such a lexical analysis is particularly relevant to perform deep semantic analysis.
A lexical unit [LU] is a subtree of the lexical segmentation tree composed of either a single token unit or an MWE. In the case of a single token unit, the subtree is limited to a single node. In the case of an MWE, the subtree is rooted by its leftmost LU, from which there are arcs to every other LU of the MWE. For instance, the MWE in spite of, made of three single token units, is a subtree rooted by in. It comprises two arcs: in → spite and in → of. The MWE make big deal is more complex as it is formed of a single token unit make and an MWE big deal. It is represented as a subtree whose root is make, connected to the root of the MWE subtree corresponding to big deal. The subtree associated with big deal is made of two single token units. It is rooted by big, with an arc big → deal. Such structuring allows nested MWEs to be found when the root is not an MWE itself, like for make big deal. It is different for the MWE Los Angeles Lakers, comprising the MWE Los Angeles and the single token unit Lakers. In that case, the subtree has a flat structure, with two arcs from the node Los, structurally equivalent to in spite of, which has no nested MWEs. Therefore, some extra information is needed in order to distinguish these two cases. We use arc labels. Labeling requires maintaining a counter l in order to indicate the embedding level in the leftmost LU of the encompassing MWE. Labels have the form sub_l mwe for l ≥ 0. Let U = U_1...U_n be a LU composed of n LUs. If n = 1, it is a single token unit. Otherwise, subtree(U, l), the lexical subtree for U (the second argument l corresponds to the embedding level), is recursively constructed by adding arcs subtree(U_1, l+1) --sub_l mwe--> subtree(U_i, 0) for every i ≠ 1.
In the case of the shallow representation, all LUs of U are single token units.
Once the LU subtrees (the internal dependencies) are built, it is necessary to create arcs to connect them and form a complete tree: these are what we call external dependencies. LUs are sequentially linked together: each pair of consecutive LUs with roots (w_i, w_j), i < j, gives an arc w_i --lex--> w_j. Figure 1 and Figure 2 respectively display the deep and shallow lexical segmentations of the sentence The Los Angeles Lakers made a big deal out of it.
For readability, we write mwe for sub_0 mwe and submwe for sub_1 mwe.
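The recursive construction just described can be made concrete with a few lines of code. The sketch below is our own illustration: the nested-list encoding of lexical units and the function names are not taken from the paper, but the arcs it prints follow the definition above and match the deep analysis of Figure 1.

```python
def root_token(lu):
    """Leftmost token of a lexical unit (a string, or a nested list of LUs)."""
    return lu if isinstance(lu, str) else root_token(lu[0])

def internal_arcs(lu, level=0, arcs=None):
    """Internal dependencies of an LU: arcs head --sub_l mwe--> dependent,
    following the recursive definition in the text (level plays the role of l)."""
    if arcs is None:
        arcs = []
    if isinstance(lu, str):                  # single token unit: nothing to add
        return arcs
    head = root_token(lu[0])
    # label strings beyond mwe/submwe are our own guess for deeper nestings
    label = {0: "mwe", 1: "submwe"}.get(level, f"sub{level}mwe")
    for other in lu[1:]:
        arcs.append((head, label, root_token(other)))
        internal_arcs(other, 0, arcs)        # non-leftmost LUs restart at level 0
    internal_arcs(lu[0], level + 1, arcs)    # the leftmost LU inherits level + 1
    return arcs

# "Los Angeles Lakers": the nested MWE "Los Angeles" sits in leftmost position.
print(internal_arcs([["Los", "Angeles"], "Lakers"]))
# [('Los', 'mwe', 'Lakers'), ('Los', 'submwe', 'Angeles')]

# "make big deal": the nested MWE "big deal" is not in leftmost position.
print(internal_arcs(["make", ["big", "deal"]]))
# [('make', 'mwe', 'big'), ('big', 'mwe', 'deal')]
```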
Multidimensional Easy-first Parsing
Easy-first parsing
Informally, easy-first proposed in Goldberg and Elhadad (2010) predicts easier dependencies before risky ones. It decides for each token whether it must be attached to the root of an adjacent subtree and how this attachment should be labeled 3 . The order in which these decisions are made is not decided in advance: highest-scoring decisions are made first and constrain the following decisions.
This framework looks appealing in order to test our assumption that segmentation and parsing are mutually informative, while leaving the exact flow of information to be learned by the system itself: we do not postulate any priority between the tasks nor that all attachment decisions must be taken jointly. On the contrary, we expect most decisions to be made independently except for some difficult cases that need both lexical and syntactic knowledge.
We now present two adaptations of this strategy to build both lexical and parse trees from a unique sequence of tokens (it is straightforward to add any number of tree structures). The key component is to use features linking information from the two dimensions.
Pipeline Architecture
In this trivial adaptation, two parsers are run sequentially. The first one builds a structure in one dimension (i.e. for segmentation or syntax). The second one builds a structure in the other dimension, with the result of the first parser available as features.
Joint Architecture
The second adaptation is more substantial and takes the form of a joint parsing algorithm. This adaptation is provided in Algorithm 1. It uses a single classifier to predict lexical and syntactic actions. As in easy-first, each iteration predicts the most certain head attachment action given the currently predicted subtrees, but here it may belong to any dimension. This action can be mapped to an edge in the appropriate dimension via function EDGE. Function score(a,i) computes the dot-product of feature weights and features at position i using surrounding subtrees in both dimensions. Let us note that the algorithm builds projective trees for each dimension, but their union may contain crossing arcs.
Algorithm 1: Joint Easy-first parsing

function JOINT-EASY-FIRST-PARSING(w_0 ... w_n)
    Let A be the set of possible actions
    arcs_s, arcs_l := (∅, ∅)
    h_s, h_l := w_0 ... w_n, w_0 ... w_n
    while |h_l| > 1 ∨ |h_s| > 1 do
        (â, î) := argmax_{a ∈ A, i ∈ [|h_d|]} score(a, i)
        (par, lab, child, dim) := EDGE((h_s, h_l), â, î)
        arcs_dim := arcs_dim ∪ {(par, lab, child)}
        h_dim := h_dim \ {child}
    end while
    return (arcs_l, arcs_s)
end function

function EDGE((h_s, h_l), (dir, lab, dim), i)
    if dir = ← then we have a left edge
        return (h_dim[i], lab, h_dim[i+1], dim)
    else
        return (h_dim[i+1], lab, h_dim[i], dim)
    end if
end function
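A stripped-down Python rendering of the same loop may help clarify the control flow. The data structures, the action encoding, and the score placeholder below are ours (the actual system learns the scoring function and uses rich features), so this is a sketch of the algorithm's skeleton rather than the authors' implementation.

```python
def joint_easy_first(tokens, actions, score):
    """Skeleton of Algorithm 1. `actions` is an iterable of
    (direction, label, dimension) triples; `score(action, i, state)` stands
    in for the learned classifier and is assumed to return a real number."""
    arcs = {"lex": [], "syn": []}                         # one arc set per dimension
    heads = {"lex": list(tokens), "syn": list(tokens)}    # current subtree roots

    # Keep attaching until each dimension is reduced to a single root.
    while len(heads["lex"]) > 1 or len(heads["syn"]) > 1:
        best = None
        for action in actions:
            dim = action[2]
            for i in range(len(heads[dim]) - 1):          # adjacent root pairs
                s = score(action, i, (heads, arcs))
                if best is None or s > best[0]:
                    best = (s, action, i)
        _, (direction, label, dim), i = best

        # Mirrors the EDGE function: "left" makes h[i] the parent of h[i+1].
        head_idx, child_idx = (i, i + 1) if direction == "left" else (i + 1, i)
        arcs[dim].append((heads[dim][head_idx], label, heads[dim][child_idx]))
        del heads[dim][child_idx]                         # child can no longer be attached
    return arcs["lex"], arcs["syn"]

# Toy usage with a trivial scorer that always prefers syntactic attachments.
toy_actions = [("left", "lex", "lex"), ("left", "dep", "syn"), ("right", "dep", "syn")]
toy_score = lambda action, i, state: 1.0 if action[2] == "syn" else 0.5
print(joint_easy_first(["the", "big", "deal"], toy_actions, toy_score))
```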
We can reuse the reasoning from Goldberg and Elhadad (2010) and derive a worst-case time complexity of O(n log n), provided that we restrict feature extraction at each position to a bounded vicinity.

Experiments

Datasets

We used data sets derived from three different reference treebanks: English Web Treebank (Linguistic Data Consortium release LDC2012T13) [EWT], French treebank (Abeillé et al., 2003) [FTB], Sequoia Treebank (Candito and Seddah, 2012) [Sequoia]. These treebanks have MWE annotations available on at least a subpart of them. For EWT, we used the STREUSLE corpus (Schneider et al., 2014b) that contains annotations of all types of MWEs, including discontiguous ones. We used the train/test split from Schneider et al. (2014a). The FTB contains annotations of contiguous MWEs. We generated the dataset from the version described in Candito and Constant (2014) and used the shallow lexical representation, in the official train/dev/test split of the SPMRL shared task (Seddah et al., 2013). The Sequoia treebank contains some limited annotations of MWEs (usually, compounds having an irregular syntax). We manually extended the coverage to all types of MWEs including discontiguous ones. We also included deep annotation of MWEs (in particular, nested ones). We used a 90%/10% train/test split in our experiments. Some statistics about the data sets are provided in Table 1. Tokens were enriched with their predicted part-of-speech (POS) and information from MWE lexicon lookup as in Candito and Constant (2014).
Parser and features
Parser. We implemented our systems by modifying the parser of Y. Goldberg (we started from the version available at the time of writing at https://bitbucket.org/yoavgo/tacl2013dynamicoracles), which is also used as a baseline. We trained all models for 20 iterations with dynamic oracle using the following exploration policy: always choose an oracle transition in the first 2 iterations (k = 2), then choose the model prediction with probability p = 0.9.
Features. One-dimensional features were taken directly from the code supporting the baseline parser. We added information on typographical cues (hyphenation, digits, capitalization, ...) and the existence of substrings in MWE dictionaries in order to help lexical analysis. Following Constant et al. (2012) and Schneider et al. (2014a), we used dictionary lookups to build a first naive segmentation and incorporated it as a set of features. Two-dimensional features were used in both pipeline and joint strategies. We first added syntactic path features to the lexical dimension, so syntax can guide segmentation. Conversely, we also added lexical path features to the syntactic dimension to provide information about lexical connectivity. For instance, two nodes being checked for attachment in the syntactic dimension can be associated with information describing whether one of the corresponding nodes is an ancestor of the other in the lexical dimension (i.e. indicating whether the two syntactic nodes are linked via internal or external paths). We also selected automatically generated features combining information from both dimensions. We chose a simple data-driven heuristic to select combined features. We ran one learning iteration over the FTB training corpus, adding all possible combinations of syntactic and lexical features. We picked the templates of the 10 combined features whose scores had the greatest absolute values. Although this heuristic may not favor the most discriminant features, we found that the chosen features helped accuracy on the development set.
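To illustrate what such a lexical-connectivity feature can look like, here is a small sketch; the dictionary representation of partial trees and the feature strings are our own simplification, not the authors' actual feature templates.

```python
def ancestors(node, head_of):
    """Chain of ancestors of `node` in a (partial) tree given as a
    child -> head dictionary."""
    chain = []
    while node in head_of:
        node = head_of[node]
        chain.append(node)
    return chain

def lexical_connectivity(syn_head, syn_child, lex_head_of):
    """Feature for a candidate syntactic attachment: is one of the two
    nodes an ancestor of the other in the lexical tree (internal path)?"""
    if syn_head in ancestors(syn_child, lex_head_of):
        return "lexpath=child_under_head"
    if syn_child in ancestors(syn_head, lex_head_of):
        return "lexpath=head_under_child"
    return "lexpath=none"

# Toy partial lexical tree for "made (a) big deal": big <- made, deal <- big.
lex_heads = {"big": "made", "deal": "big"}
print(lexical_connectivity("made", "deal", lex_heads))   # lexpath=child_under_head
```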
Results
For each dataset, we carried out four experiments. First we learned and ran independently two distinct baseline easy-first parsers using one-dimensional features: one producing a lexical segmentation, another one predicting a syntactic parse tree. We also trained and ran a joint easy-first system predicting lexical segmentations and syntactic parse trees, using two-dimensional features. We also experimented with the pipeline system for each dimension, which consists in applying the baseline parser on one dimension and using the resulting tree as a source of two-dimensional features in a standard easy-first parser applied on the other dimension. Since pipeline architectures are known to be prone to error propagation, we also ran an experiment where the pipeline second stage is fed with oracle first-stage trees.
Results on the test sets are provided in Table 2, where LAS and UAS are computed with punctuation. Overall, we can see that the lexical information tends to help syntactic prediction, while the reverse is unclear.
Table 2: Results on our three test sets. Statistically significant differences (p-value < 0.05) from the corresponding "distinct" setting are indicated with †. Rows -oracle trees are the same as pipeline but using oracle, instead of predicted, trees.
Discussion
The first striking observation is that the syntactic dimension does not help the predictions in the lexical dimension, contrary to what could be expected. In practice, we can observe that variations and discontinuity of MWEs are not frequent in our data sets. For instance, Schneider et al. (2014a) notice that only 15% of the MWEs in EWT are discontiguous and most of them have gaps of one token. This could explain why syntactic information is not useful for segmentation. On the other hand, the lexical dimension tends to help syntactic predictions. More precisely, while the pipeline and the joint approach reach comparable scores on the FTB and Sequoia, the joint system has disappointing results on EWT. The good scores for Sequoia could be explained by the larger MWE coverage.
In order to get a better intuition on the real impact of each of the three approaches, we broke down the syntax results by dependency labels. Some labels are particularly informative. First of all, the precision on the modifier label mod, which is the most frequent one, is greatly improved using the pipeline approach as compared with the baseline (around 1 point). This can be explained by the fact that many nominal MWEs have the form of a regular noun phrase, to which its internal adjectival or prepositional constituents are attached with the mod label. Recognizing a nominal MWE on the lexical dimension may therefore give a relevant clue on its corresponding syntactic structure. Then, the dep cpd connects components of MWE with irregular syntax that cannot receive standard labels. We can observe that the pipeline (resp. the joint) approach clearly improves the precision (resp. recall) as compared with the baseline (+1.6 point). This means that the combination of a preliminary lexical segmentation and a possibly partial syntactic context helps improving the recognition of syntax-irregular MWEs. Coordination labels (dep.coord and coord) are particularly interesting as the joint system outperforms the other two on them. Coordination is known to be a very complex phenomenon: these scores would tend to show that the lexical and syntactic dimensions mutually help each other.
When comparing this work to state-of-the-art systems on data sets with shallow annotation of MWEs, we can see that we obtain MWE recognition scores comparable to systems of equivalent complexity and/or available information. This means that our novel representation which allows for the annotation of more complex lexical phenomena does not deteriorate scores for shallow annotations.
Conclusions and Future Work
In this paper we presented a novel representation of deep lexical segmentation in the form of trees, forming a dimension distinct from syntax. We experimented with strategies to predict both dimensions in the easy-first dependency parsing framework. We showed empirically that joint and pipeline processing are beneficial for syntactic parsing while hardly impacting deep lexical segmentation.
The presented combination of parsing and segmenting does not enforce any structural constraint (for instance, aligned arcs or subtrees) over the two trees. We plan to address this issue in future work. We will explore less redundant, more compact representations of the two dimensions, since some annotations can be factorized between the two dimensions (e.g. MWEs with irregular syntax) and some can easily be induced from others (e.g. sequential linking between lexical units).
Figure 1: Deep segmentation of Los Angeles Lakers made a big deal out of it represented as a tree.
Figure 2: Shallow segmentation of Los Angeles Lakers made a big deal out of it represented as a tree.
Table 1: Datasets statistics. The first part describes the number of words in training sets with MWE label ratio. "shallow+" refers to a shallow representation with enriched MWE labels indicating the MWE strength (collocation vs. fixed).

              English    French
Corpus        EWT        FTB        Sequoia
# words       55,590     564,798    33,829
# MWE labels  4,649      49,350     6,842
ratio         0.08       0.09       0.20
MWE rep.      shallow+   shallow    deep
…complexity of O(n log n), provided that we restrict feature extraction at each position to a bounded vicinity.
4 Experiments
4.1 Datasets
We used data sets derived from three different reference treebanks: English Web Treebank (Linguistic Data Consortium release LDC2012T13) [EWT], French Treebank (Abeillé et al., 2003) [FTB], and Sequoia Treebank (Candito and Seddah, 2012) [Sequoia]. These treebanks have MWE annotations available on at least a subpart of them. For EWT, we used the STREUSLE corpus (Schneider et al., 2014b) that contains annotations of all types of MWEs, including discontiguous ones. We used the train/test split from
Table 3: Results on FTB development set, broken down by dependency labels. Scores correspond to recall and precision.
This is related to the Prague Dependency Treebank (Hajič et al., 2006), which encodes MWEs in tectogrammatical trees connected to syntactic trees (Bejček and Straňák, 2010).
Labels are an extension to Goldberg and Elhadad (2010).
We used the Unitex platform (www-igm.univ-mlv.fr/~unitex/) for French and the STREUSLE corpus web site (www.ark.cs.cmu.edu/LexSem/) for English.
Acknowledgments

This work has been partly funded by the French Agence Nationale pour la Recherche, through the PARSEME-FR project (ANR-14-CERA-0001) and as part of the Investissements d'Avenir program (ANR-10-LABX-0083).
Anne Abeillé, Lionel Clément, and François Toussenel. 2003. Building a treebank for French. In Anne Abeillé, editor, Treebanks. Kluwer, Dordrecht.
Eduard Bejček and Pavel Straňák. 2010. Annotation of multiword expressions in the Prague Dependency Treebank. Language Resources and Evaluation, 44(1-2).
Marie Candito and Matthieu Constant. 2014. Strategies for contiguous multiword expression analysis and dependency parsing. In ACL 14 - The 52nd Annual Meeting of the Association for Computational Linguistics. ACL.
Marie Candito and Djamé Seddah. 2012. Le corpus Sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical. In TALN 2012 - 19e conférence sur le Traitement Automatique des Langues Naturelles, Grenoble, France.
Matthieu Constant and Joseph Le Roux. 2015. Dependency representations for lexical segmentation. In Proceedings of the International Workshop on Statistical Parsing of Morphologically-Rich Languages (SPMRL 2015).
Matthieu Constant, Anthony Sigogne, and Patrick Watrin. 2012. Discriminative strategies to integrate multiword expression recognition and parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL'12), pages 204-212.
Gülşen Eryigit, Tugay İlbay, and Ozan Arkan Can. 2011. Multiword Expressions in Statistical Dependency Parsing. In Proceedings of the Second Workshop on Statistical Parsing of Morphologically Rich Languages, SPMRL '11, pages 45-55, Stroudsburg, PA, USA. Association for Computational Linguistics.
Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742-750. Association for Computational Linguistics.
Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403-414.
J. Hajič, J. Panevová, E. Hajičová, P. Sgall, P. Pajas, J. Štěpánek, J. Havelka, M. Mikulová, Z. Žabokrtský, and M. Ševčíková Razímová. 2006. Prague Dependency Treebank 2.0. Linguistic Data Consortium.
Joseph Le Roux, Antoine Rozenknop, and Matthieu Constant. 2014. Syntactic parsing and compound recognition via dual decomposition: Application to French. In COLING.
Joakim Nivre and Jens Nilsson. 2004. Multiword units in syntactic parsing. In Proceedings of Methodologies and Evaluation of Multiword Units in Real-World Applications (MEMURA).
Geoffrey Nunberg, Ivan A. Sag, and Thomas Wasow. 1994. Idioms. Language, 70:491-538.
Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002), pages 1-15.
Agata Savary, Manfred Sailer, Yannick Parmentier, Michael Rosner, Victoria Rosén, Adam Przepiórkowski, Cvetana Krstev, Veronika Vincze, Beata Wójtowicz, Gyri Smørdal Losnegaard, Carla Parra Escartín, Jakub Waszczuk, Matthieu Constant, Petya Osenova, and Federico Sangati. 2015. PARSEME - PARSing and Multiword Expressions within a European multilingual network. In 7th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC 2015), Poznań, Poland, November.
Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A. Smith. 2014a. Discriminative lexical semantic segmentation with gaps: running the MWE gamut. Transactions of the Association for Computational Linguistics, 2:193-206.
Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. 2014b. Comprehensive annotation of multiword expressions in a social web corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, pages 455-461, Reykjavík, Iceland, May. ELRA.
Djamé Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho Choi, Richárd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiorkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woliński, Alina Wróblewska, and Eric Villemonte de la Clérgerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the 4th Workshop on Statistical Parsing of Morphologically Rich Languages, Seattle, WA.
Veronika Vincze, János Zsibrita, and István Nagy T. 2013. Dependency parsing for identifying Hungarian light verb constructions. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP 2013), Nagoya, Japan.
Eric Wehrli. 2014. The relevance of collocations for parsing. In Proceedings of the 10th Workshop on Multiword Expressions (MWE), pages 26-32, Gothenburg, Sweden, April. Association for Computational Linguistics. |
6,574,007 | A Phrase-based Statistical Model for SMS Text Normalization | Short Messaging Service (SMS) texts behave quite differently from normal written texts and have some very special phenomena. To translate SMS texts, traditional approaches model such irregularities directly in Machine Translation (MT). However, such approaches suffer from customization problem as tremendous effort is required to adapt the language model of the existing translation system to handle SMS text style. We offer an alternative approach to resolve such irregularities by normalizing SMS texts before MT. In this paper, we view the task of SMS normalization as a translation problem from the SMS language to the English language 1 and we propose to adapt a phrase-based statistical MT model for the task. Evaluation by 5-fold cross validation on a parallel SMS normalized corpus of 5000 sentences shows that our method can achieve 0.80702 in BLEU score against the baseline BLEU score 0.6958. Another experiment of translating SMS texts from English to Chinese on a separate SMS text corpus shows that, using SMS normalization as MT preprocessing can largely boost SMS translation performance from 0.1926 to 0.3770 in BLEU score.MotivationSMS translation is a mobile Machine Translation (MT) application that translates a message from one language to another. Though there exists many commercial MT systems, direct use of such systems fails to work well due to the special phenomena in SMS texts, e.g. the unique relaxed and creative writing style and the frequent use of unconventional and not yet standardized shortforms. Direct modeling of these special phenomena in MT requires tremendous effort. Alternatively, we can normalize SMS texts into 1 This paper only discusses English SMS text normalization. grammatical texts before MT. In this way, the traditional MT is treated as a "black-box" with little or minimal adaptation. One advantage of this pre-translation normalization is that the diversity in different user groups and domains can be modeled separately without accessing and adapting the language model of the MT system for each SMS application. Another advantage is that the normalization module can be easily utilized by other applications, such as SMS to voicemail and SMS-based information query.In this paper, we present a phrase-based statistical model for SMS text normalization. The normalization is visualized as a translation problem where messages in the SMS language are to be translated to normal English using a similar phrase-based statistical MT method(Koehn et al., 2003). We use IBM's BLEU score(Papineni et al., 2002)to measure the performance of SMS text normalization. BLEU score computes the similarity between two sentences using n-gram statistics, which is widely-used in MT evaluation. A set of parallel SMS messages, consisting of 5000 raw (un-normalized) SMS messages and their manually normalized references, is constructed for training and testing. Evaluation by 5fold cross validation on this corpus shows that our method can achieve accuracy of 0.80702 in BLEU score compared to the baseline system of 0.6985. We also study the impact of our SMS text normalization on the task of SMS translation. The experiment of translating SMS texts from English to Chinese on a corpus comprising 402 SMS texts shows that, SMS normalization as a preprocessing step of MT can boost the translation performance from 0.1926 to 0.3770 in BLEU score.The rest of the paper is organized as follows. Section 2 reviews the related work. 
Section 3 summarizes the characteristics of English SMS texts. Section 4 discusses our method and Section 5 reports our experiments. Section 6 concludes the paper. | [] | A Phrase-based Statistical Model for SMS Text Normalization
A Phrase-based Statistical Model for SMS Text Normalization
Aiti Aw, Min Zhang ([email protected]), Juan Xiao, Jian Su ([email protected])
Institute for Infocomm Research, Heng Mui Keng Terrace, 119613 Singapore
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, Sydney, July 2006. Copyright Association for Computational Linguistics.
Abstract
Short Messaging Service (SMS) texts behave quite differently from normal written texts and have some very special phenomena. To translate SMS texts, traditional approaches model such irregularities directly in Machine Translation (MT). However, such approaches suffer from a customization problem, as tremendous effort is required to adapt the language model of the existing translation system to handle SMS text style. We offer an alternative approach to resolve such irregularities by normalizing SMS texts before MT. In this paper, we view the task of SMS normalization as a translation problem from the SMS language to the English language and we propose to adapt a phrase-based statistical MT model for the task. Evaluation by 5-fold cross validation on a parallel SMS normalized corpus of 5000 sentences shows that our method can achieve 0.80702 in BLEU score against the baseline BLEU score of 0.6958. Another experiment of translating SMS texts from English to Chinese on a separate SMS text corpus shows that using SMS normalization as MT preprocessing can largely boost SMS translation performance from 0.1926 to 0.3770 in BLEU score.
Motivation
SMS translation is a mobile Machine Translation (MT) application that translates a message from one language to another. Though there exist many commercial MT systems, direct use of such systems fails to work well due to the special phenomena in SMS texts, e.g. the unique relaxed and creative writing style and the frequent use of unconventional and not yet standardized short-forms. Direct modeling of these special phenomena in MT requires tremendous effort. Alternatively, we can normalize SMS texts into grammatical texts before MT. In this way, the traditional MT is treated as a "black-box" with little or minimal adaptation. One advantage of this pre-translation normalization is that the diversity in different user groups and domains can be modeled separately without accessing and adapting the language model of the MT system for each SMS application. Another advantage is that the normalization module can be easily utilized by other applications, such as SMS to voicemail and SMS-based information query.
In this paper, we present a phrase-based statistical model for SMS text normalization. The normalization is visualized as a translation problem where messages in the SMS language are to be translated to normal English using a similar phrase-based statistical MT method (Koehn et al., 2003). We use IBM's BLEU score (Papineni et al., 2002) to measure the performance of SMS text normalization. BLEU score computes the similarity between two sentences using n-gram statistics, and is widely used in MT evaluation. A set of parallel SMS messages, consisting of 5000 raw (un-normalized) SMS messages and their manually normalized references, is constructed for training and testing. Evaluation by 5-fold cross validation on this corpus shows that our method can achieve accuracy of 0.80702 in BLEU score compared to the baseline system of 0.6958. We also study the impact of our SMS text normalization on the task of SMS translation. The experiment of translating SMS texts from English to Chinese on a corpus comprising 402 SMS texts shows that SMS normalization as a preprocessing step of MT can boost the translation performance from 0.1926 to 0.3770 in BLEU score.
The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 summarizes the characteristics of English SMS texts. Section 4 discusses our method and Section 5 reports our experiments. Section 6 concludes the paper.
Related Work
There is little work reported on SMS normalization and translation. Bangalore et al. (2002) used a consensus translation technique to bootstrap parallel data using off-the-shelf translation systems for training a hierarchical statistical translation model for general domain instant messaging used in Internet chat rooms. Their method deals with the special phenomena of the instant messaging language (rather than the SMS language) in each individual MT system. Clark (2003) proposed to unify the process of tokenization, segmentation and spelling correction for normalization of general noisy text (rather than SMS or instant messaging texts) based on a noisy channel model at the character level. However, results of the normalization are not reported. Aw et al. (2005) gave a brief description of their input pre-processing work for an English-to-Chinese SMS translation system using a word-group model. In addition, in most of the commercial SMS translation applications, an SMS lingo (i.e., SMS short-form) dictionary is provided to replace SMS short-forms with normal English words. Most of the systems do not handle OOV (out-of-vocabulary) items and ambiguous inputs. The following compares SMS text normalization with other similar or related applications.
SMS Normalization versus General Text Normalization
General text normalization deals with Non-Standard Words (NSWs) and has been well-studied in text-to-speech (Sproat et al., 2001), while SMS normalization deals with Non-Words (NWs) or lingoes and has seldom been studied before. NSWs, such as digit sequences, acronyms, mixed case words (WinNT, SunOS), abbreviations and so on, are grammatically correct in linguistics. However, lingoes, such as "b4" (before) and "bf" (boyfriend), which are usually self-created and only accepted by young SMS users, are not yet formalized in linguistics. Therefore, the special phenomena in SMS texts impose a big challenge to SMS normalization.
SMS Normalization versus Spelling Correction Problem
Intuitively, many would regard SMS normalization as a spelling correction problem where the lingoes are erroneous words or non-words to be replaced by English words. Research on spelling correction centers on typographic and cognitive/orthographic errors (Kukich, 1992; Kernighan et al., 1990) and mostly models the edit operations using distance measures (Damerau, 1964; Levenshtein, 1966), specific word set confusions (Golding and Roth, 1999) and pronunciation modeling (Brill and Moore, 2000; Toutanova and Moore, 2002). These models are mostly character-based or string-based without considering the context. In addition, the author might not be aware of the errors in the word introduced during the edit operations, as most errors are due to mistyping of characters near to each other on the keyboard or homophones, such as "poor" or "pour". In SMS, errors are not isolated within words and are usually not surrounded by clean context. Words are altered deliberately to reflect the sender's distinct creation and idiosyncrasies. A character can be deleted on purpose, as in "wat" (what) and "hv" (have). SMS text also consists of short-forms such as "b4" (before) and "bf" (boyfriend). In addition, normalizing SMS text might require the context to be spanned over more than one lexical unit, such as "lemme" (let me), "ur" (you are), etc. Therefore, the models used in spelling correction are inadequate for providing a complete solution for SMS normalization.
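For contrast with the SMS setting, a minimal sketch of the character-level edit distance underlying such spelling-correction models is shown below (a standard Levenshtein-style distance; the variable names are ours, not those of any cited system).

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of character insertions,
    deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion from a
                           cur[j - 1] + 1,              # insertion into a
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# 'wat' is one edit away from 'what', but 'b4' -> 'before' needs several
# edits, which is why pure edit distance models SMS lingoes poorly.
print(edit_distance("wat", "what"), edit_distance("b4", "before"))
```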
SMS Normalization versus Text Paraphrasing Problem
Others may regard SMS normalization as a paraphrasing problem. Broadly speaking, paraphrases capture core aspects of variability in language, by representing equivalencies between different expressions that correspond to the same meaning.
In most of the recent works (Barzilay and McKeown, 2001;Shimohata, 2002), they are acquired (semi-) automatically from large comparable or parallel corpora using lexical and morpho-syntactic information.
Text paraphrasing works on clean texts in which contextual and lexical-syntactic features can be extracted and used to find "approximate conceptual equivalence". In SMS normalization, we are dealing with non-words and "ungrammatical" sentences, with the purpose of normalizing or standardizing these words and forming better sentences. The SMS normalization problem is thus different from text paraphrasing. On the other hand, it bears some similarities with MT as we are trying to "convert" text from one language to another. However, it is a simpler problem as, most of the time, we can find the same word in both the source and target texts, making alignment easier.
Characteristics of English SMS
Our corpus consists of 55,000 messages collected from two sources, a SMS chat room and correspondences between university students. The content is mostly related to football matches, making friends and casual conversations on "how, what and where about". We summarize the text behaviors into two categories as below.
Orthographic Variation
The most significant orthographic variant in SMS texts is the use of non-standard, self-created short-forms. Usually, the sender takes advantage of phonetic spellings, initial letters or number homophones to mimic spoken conversation or shorten words or phrases (hw vs. homework or how, b4 vs. before, cu vs. see you, 2u vs. to you, oic vs. oh I see, etc.) in the attempt to minimize key strokes. In addition, senders create a new form of written representation to express their oral utterances. Emoticons, such as ":(" symbolizing sadness, ":)" symbolizing smiling, and ":()" symbolizing shock, are representations of body language. Verbal effects such as "hehe" for laughter and emphatic discourse particles such as "lor", "lah", "meh" for colloquial English are prevalent in the text collection.
The loss of "alpha-case" information posts another challenge in lexical disambiguation and introduces difficulty in identifying sentence boundaries, proper nouns, and acronyms. With the flexible use of punctuation or not using punctuation at all, translation of SMS messages without prior processing is even more difficult.
Grammar Variation
SMS messages are short, concise and convey much information within the limited space quota (160 letters for English); thus they tend to be implicit and influenced by pragmatic and situational factors. These inadequacies of language expression, such as deletion of articles and subject pronouns, as well as problems in number agreement or tense, make SMS normalization more challenging. Table 1 illustrates some orthographic and grammar variations of SMS texts.
Corpus Statistics
We investigate the corpus to assess the feasibility of replacing the lingoes with normal English words and performing limited adjustment to the text structure. Similarly to Aw et al. (2005), we focus on the three major cases of transformation as shown in the corpus: (1) replacement of OOV words and non-standard SMS lingoes; (2) removal of slang; and (3) insertion of auxiliary or copula verbs and subject pronouns. Table 2 shows the statistics of these transformations based on 700 messages randomly selected, where 621 (88.71%) messages required normalization with a total of 2300 transformations. Substitution accounts for almost 86% of all transformations. Insertion and deletion make up the rest. Table 3 shows the top 10 most common transformations.
SMS Normalization
We view the SMS language as a variant of the English language, with some derivations in vocabulary and grammar. Therefore, we can treat SMS normalization as an MT problem where the SMS language is to be translated to normal English. We thus propose to adapt the statistical machine translation model (Brown et al., 1993; Zens and Ney, 2004) for SMS text normalization. In this section, we discuss the three components of our method: modeling, training and decoding for SMS text normalization.

Basic Word-based Model

The SMS normalization model is based on the source channel model (Shannon, 1948). Assuming that an English sentence e, of length N, is "corrupted" by a noisy channel to produce an SMS message s, of length M, the English sentence e could be recovered through a posterior distribution for the channel target text given the source text, P(s|e), and a prior distribution for the channel source text, P(e).

Assuming that one SMS word is mapped exactly to one English word in the channel model under an alignment A, we need to consider only two types of probabilities: the alignment probabilities and the lexicon mapping probabilities P(s_m|e_{a_m}) (Brown et al., 1993).

If we include the word "null" in the English vocabulary, the above model can fully address the deletion and substitution transformations, but it is inadequate to address the insertion transformation. For example, the lingoes "duno" and "ysnite" have to be normalized using an insertion transformation to become "don't know" and "yesterday night". Moreover, we also want the normalization to have better lexical affinity and linguistic equivalence, so we extend the model to allow many-words-to-many-words alignment, allowing a sequence of SMS words to be normalized to a sequence of contiguous English words. We call this updated model a phrase-based normalization model.

Phrase-based Model

Given an English sentence e and an SMS sentence s, if we assume that e can be decomposed into K phrases with a segmentation T, such that each phrase e_k in e can be corresponded with one phrase s_{a_k} in s, the channel model becomes a sum, over segmentations and phrase alignments, of products of phrase alignment probabilities and phrase mapping probabilities (equation (4)).

We are now able to model the three transformations through the normalization pairs (s_k, e_k). The statistics in our training corpus show that, by selecting an appropriate phrase segmentation, position re-ordering at the phrase level occurs rarely. This is not surprising, since most of the English words or phrases in normal English text are replaced with lingoes in SMS messages without position change, to make SMS text short and concise and to retain the meaning. Thus we need to consider only monotone alignment at the phrase level, i.e., a_k = k, in equation (4). In addition, the word-level reordering within a phrase is learned during training. Now we can further derive equation (4) as follows:

P(s|e) ≈ ∏_{k=1..K} P(s_k | e_k)    (5)

The mapping probability P(s_k|e_k) is estimated via relative frequencies (equation (6)). The alignment process given in equation (8) is different from that of normalization given in equation (7), in that here we have an aligned input sentence pair, s and e, and the alignment process simply finds the alignment that maximizes the joint probability.
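A minimal sketch of the two estimates just described — relative-frequency phrase mapping probabilities and the monotone channel score of equation (5) — is given below; the data structures and the floor value are our own assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def train_mapping(aligned_pairs):
    """aligned_pairs: iterable of (sms_phrase, eng_phrase) tuples taken from
    the phrase-aligned corpus. Returns log P(s_k | e_k) via relative frequency."""
    joint, marginal = Counter(), Counter()
    for s_k, e_k in aligned_pairs:
        joint[(s_k, e_k)] += 1
        marginal[e_k] += 1
    return {pair: math.log(c / marginal[pair[1]]) for pair, c in joint.items()}

def channel_logprob(sms_phrases, eng_phrases, logp, floor=-10.0):
    """Monotone phrase-level channel score: sum_k log P(s_k | e_k)."""
    return sum(logp.get((s, e), floor) for s, e in zip(sms_phrases, eng_phrases))

logp = train_mapping([("2", "to"), ("2", "two"), ("2", "to"), ("w", "with")])
print(channel_logprob(["2", "w"], ["to", "with"], logp))
```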
Training Issues
For the phrase-based model training, the sentence-aligned SMS corpus needs to be aligned first at the phrase level. The maximum likelihood approach, through the EM algorithm and Viterbi search (Dempster et al., 1977), is employed to infer such an alignment. Here, we make a reasonable assumption on the alignment unit: a single SMS word can be mapped to a sequence of contiguous English words, but not vice versa. The EM algorithm for phrase alignment is illustrated in Figure 1 and is formulated by equation (8).
The Expectation-Maximization Algorithm
(1) Bootstrap the initial alignment using orthographic similarities. (2) Expectation: update the joint probabilities P(s_k, e_k).

Since EM may fall into a local optimum, in order to speed up convergence and find a nearly global optimum, a string matching technique is exploited at the initialization step to identify the most probable normalization pairs. The orthographic similarities captured by edit distance and an SMS lingo dictionary which contains the commonly used short-forms are first used to establish phrase mapping boundary candidates. Heuristics are then exploited to match tokens within the pairs of boundary candidates, by trying to combine consecutive tokens within the boundary candidates if the numbers of tokens do not agree.
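The bootstrapping step can be approximated as follows: candidate pairs are proposed when an SMS token is listed in the lingo dictionary or is orthographically similar to an English token. This is only a sketch of the initialization heuristic; the similarity measure, the threshold and the toy dictionary are our assumptions.

```python
from difflib import SequenceMatcher

def bootstrap_pairs(sms_tokens, eng_tokens, lingo_dict, min_sim=0.6):
    """Propose initial (SMS token, English token) normalization-pair candidates.
    lingo_dict maps short-forms to expansions, e.g. {'b4': 'before'}."""
    pairs = set()
    for s in sms_tokens:
        if s in lingo_dict:                        # dictionary-based candidates
            pairs.add((s, lingo_dict[s]))
        for e in eng_tokens:                       # orthographic-similarity candidates
            if SequenceMatcher(None, s, e).ratio() >= min_sim:
                pairs.add((s, e))
    return pairs

print(bootstrap_pairs(["wat", "b4", "tmr"], ["what", "before", "tomorrow"],
                      {"b4": "before", "tmr": "tomorrow"}))
```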
Finally, a filtering process is carried out to manually remove the low-frequency noisy alignment pairs. Table 4 shows some of the extracted normalization pairs. As can be seen from the table, our algorithm discovers ambiguous mappings automatically that are otherwise missing from most of the lingo dictionary.
(s, e)            log P(s|e)
(2, 2)            0
(2, to)           -0.579466
(2, too)          -0.897016
(2, null)         -2.97058
(4, 4)            0
(4, for)          -0.431364
(4, null)         -3.27161
(w, who are)      -0.477121
(w, with)         -0.764065
(w, who)          -1.83885
(dat, that)       -0.726999
(dat, date)       -0.845098
(tmr, tomorrow)   -0.341514

Table 4. Examples of normalization pairs.

Given the phrase-aligned SMS corpus, the lexical mapping model, characterized by P(s_k|e_k), is easily trained using equation (6). Our n-gram LM P(e_n|e_{n-1}) is trained on English Gigaword provided by LDC using the SRILM language modeling toolkit (Stolcke, 2002). Backoff smoothing (Jelinek, 1991) is used to adjust and assign a non-zero probability to the unseen words to address data sparseness.
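To illustrate the language-model component, here is a toy bigram model with a crude interpolation to unigrams. It is only a stand-in for the SRILM-trained Gigaword model described above; the interpolation constant and add-one unigram smoothing are arbitrary assumptions, not Jelinek's backoff scheme.

```python
import math
from collections import Counter

class ToyBigramLM:
    """Tiny bigram LM, interpolated with a smoothed unigram distribution."""
    def __init__(self, sentences, lam=0.7):
        self.lam = lam
        self.uni, self.bi = Counter(), Counter()
        for sent in sentences:
            toks = ["<s>"] + sent + ["</s>"]
            self.uni.update(toks)
            self.bi.update(zip(toks, toks[1:]))
        self.total = sum(self.uni.values())

    def logp(self, prev, word):
        p_uni = (self.uni[word] + 1) / (self.total + len(self.uni) + 1)
        p_bi = self.bi[(prev, word)] / self.uni[prev] if self.uni[prev] else 0.0
        return math.log(self.lam * p_bi + (1 - self.lam) * p_uni)

lm = ToyBigramLM([["i", "will", "see", "you"], ["see", "you", "tomorrow"]])
print(lm.logp("see", "you"), lm.logp("see", "tomorrow"))
```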
Monotone Search
Given an input s, the search, characterized in equation (7), is to find a sentence e that maximizes P(s|e) · P(e) using the normalization model. In this paper, the maximization problem in equation (7) is solved using a monotone search, implemented as a Viterbi search through dynamic programming.
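A compact sketch of such a monotone Viterbi decoder is given below. It assumes each SMS token is replaced by one candidate English phrase, scored by a channel score plus a bigram LM over the emitted words; the candidate-table format and the toy scoring functions are our own simplifications, not the system's actual search.

```python
def normalize(sms_tokens, candidates, channel_logp, lm_logp):
    """Monotone Viterbi search over per-token candidate English phrases.
    candidates[s] -> list of English phrases; channel_logp(s, e) and
    lm_logp(prev_word, word) return log-probabilities."""
    beams = {"<s>": (0.0, [])}                 # last emitted word -> (score, history)
    for s in sms_tokens:
        new_beams = {}
        for prev, (score, hist) in beams.items():
            for e in candidates.get(s, [s]):   # back off to the token itself
                words = e.split()
                sc, last = score + channel_logp(s, e), prev
                for w in words:
                    sc += lm_logp(last, w)
                    last = w
                if last not in new_beams or sc > new_beams[last][0]:
                    new_beams[last] = (sc, hist + words)
        beams = new_beams
    return max(beams.values(), key=lambda x: x[0])[1]

cands = {"2": ["to", "two"], "w": ["with", "who"]}
print(normalize(["w", "u"], cands, lambda s, e: 0.0, lambda p, w: -0.5))
```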
Experiments
The aim of our experiment is to verify the effectiveness of the proposed statistical model for SMS normalization and the impact of SMS normalization on MT.
A set of 5000 parallel SMS messages, which consists of raw (un-normalized) SMS messages and reference messages manually prepared by two project members with inter-normalization agreement checked, was prepared for training and testing. For evaluation, we use IBM's BLEU score (Papineni et al., 2002) to measure the performance of the SMS normalization. BLEU score measures the similarity between two sentences using n-gram statistics with a penalty for too-short sentences, and is already widely used in MT evaluation. The baseline experiment is to moderate the texts using a lingo dictionary that comprises 142 normalization pairs, which is also used in bootstrapping the phrase alignment learning process. Table 5 compares the performance of the different setups of the baseline experiments. We first measure the complexity of the SMS normalization task by directly computing the similarity between the raw SMS text and the normalized English text. The 1st row of Table 5 reports the similarity as 0.5784 in BLEU score, which implies that there are quite a number of English word 3-grams that are common in the raw and normalized messages. The 2nd experiment is carried out using only simple dictionary look-up. Lexical ambiguity is addressed by selecting the highest-frequency normalization candidate, i.e., only a unigram LM is used. The performance of the 2nd experiment is 0.6958 in BLEU score. It suggests that the lingo dictionary plus the unigram LM is very useful for SMS normalization. Finally we carry out the 3rd experiment using dictionary look-up plus a bi-gram LM. Only a slight improvement of 0.0128 (0.7086-0.6958) is obtained. This is largely because the English words in the lingo dictionary are mostly high-frequency and commonly used. Thus the bi-gram does not show much more discriminative ability than the unigram without the help of the phrase-based lexical mapping model.
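The dictionary look-up baseline can be reproduced in a few lines: each token is replaced by its most frequent expansion (the unigram case), with unknown tokens left untouched. The sketch below uses a made-up lingo dictionary fragment, not the 142-pair dictionary used in the experiments.

```python
def baseline_normalize(sms_tokens, lingo_dict):
    """Unigram dictionary look-up baseline: replace each token by its
    highest-frequency expansion; leave unknown tokens untouched."""
    out = []
    for tok in sms_tokens:
        expansions = lingo_dict.get(tok.lower())
        if expansions:
            # expansions: list of (english, relative_frequency); pick the best
            out.append(max(expansions, key=lambda x: x[1])[0])
        else:
            out.append(tok)
    return " ".join(out)

lingo = {"2": [("to", 0.7), ("two", 0.3)],
         "w": [("with", 0.6), ("who", 0.4)],
         "b4": [("before", 1.0)]}
print(baseline_normalize("i go w u b4 2 pm".split(), lingo))
```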
Experimental result analysis reveals that the strength of our model is in its ability to disambiguate mapping as in "2" to "two" or "to" and "w" to "with" or "who". Error analysis shows that the challenge of the model lies in the proper insertion of subject pronoun and auxiliary or copula verb, which serves to give further semantic information about the main verb, however this requires significant context understanding. For example, a message such as "u smart" gives little clues on whether it should be normalized to "Are you smart?" or "You are smart." unless the full conversation is studied.
Using Phrase-based Model
We then conducted the experiment using the proposed method (Bi-gram LM plus a phrase-based lexical mapping model) through a five-fold cross validation on the 5000 parallel SMS messages. Table 6 shows the results. An average score of 0.8070 is obtained. Compared with the baseline performance in Table 5, the improvement is very significant. It suggests that the phrase-based lexical mapping model is very useful and our method is effective for SMS text normalization. Figure 2 is the learning curve. It shows that our algorithm converges when training data is increased to 3000 SMS parallel messages. This suggests that our collected corpus is representative and enough for training our model. Table 7 illustrates some examples of the normalization results.
Effect on English-Chinese MT
An experiment was also conducted to study the effect of normalization on MT using 402 messages randomly selected from the text corpus. We compare three types of SMS message: raw SMS messages, normalized messages using simple dictionary look-up, and normalized messages using our method. The messages are passed to two different English-to-Chinese translation systems, provided by Systran and the Institute for Infocomm Research (I2R) separately, to produce three sets of translation output. The translation quality is measured using 3-gram cumulative BLEU score against two reference messages. 3-gram is used as most of the messages are short, with an average length of seven words. Table 8 shows the details of the BLEU scores. We obtain an average of 0.3770 BLEU score for normalized messages against 0.1926 for raw messages. The significant performance improvement suggests that preprocessing of normalizing SMS text using our method before MT is an effective way to adapt a general MT system to the SMS domain.
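For reference, a bare-bones cumulative 3-gram BLEU against a single reference is sketched below (NLTK or the original mteval script would normally be used; multiple references and smoothing are omitted, so a segment with a zero n-gram match scores 0).

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=3):
    """Cumulative n-gram BLEU of one tokenised candidate vs. one reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(zip(*[candidate[i:] for i in range(n)]))
        ref = Counter(zip(*[reference[i:] for i in range(n)]))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = math.exp(min(0.0, 1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("i will see you tomorrow".split(), "i will see you tomorrow".split()))
```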
Conclusion
In this paper, we study the differences among SMS normalization, general text normalization, spelling checking and text paraphrasing, and investigate the different phenomena of SMS messages. We propose a phrase-based statistical method to normalize SMS messages. The method produces messages that collate well with manually normalized messages, achieving 0.8070 BLEU score against the 0.6958 baseline score. It also significantly improves SMS translation accuracy from 0.1926 to 0.3770 in BLEU score without adjusting the MT model. These experimental results provide us with a good indication of the feasibility of using this method for the normalization task. We plan to extend the model to incorporate a mechanism to handle missing punctuation (which potentially affects MT output and is not being taken care of at the moment), and to make use of pronunciation information to handle OOV caused by the use of phonetic spelling. A bigger data set will also be used to test the robustness of the system, leading to a more accurate alignment and normalization.
The channel model can be rewritten as in equation (3), the basic function of the channel model for the phrase-based SMS normalization model, where we use the maximum approximation for the sum over all segmentations, as done in the previous word-based model; m is the position of an SMS word. Using a bigram language model and assuming the Bayes decision rule, we finally obtain the search criterion of equation (7) for equation (1). In step (2) of the EM algorithm given in Figure 1, only the joint probabilities are involved and updated; we assume the segmentation probability P(T|e) to be constant. Finally, the SMS normalization model consists of two sub-models: a word-based language model (LM) and a phrase-based lexical mapping model.

Figure 1. The Expectation-Maximization algorithm for phrase alignment.
Phenomena                                    Messages
1. Dropping '?' at the end of a question     btw, wat is ur view (By the way, what is your view?)
2. Not using any punctuation at all          Eh speak english mi malay not tt good (Eh, speak English! My Malay is not that good.)
3. Using spelling/punctuation for emphasis   goooooood Sunday morning !!!!!! (Good Sunday morning!)
4. Using phonetic spelling                   dat iz enuf (That is enough)
5. Dropping vowels                           i hv cm to c my luv. (I have come to see my love.)
6. Introducing local flavor                  yar lor where u go juz now (Yes, where did you go just now?)
7. Dropping verbs                            I hv 2 go. Dinner w parents. (I have to go. Have dinner with parents.)

Table 1. Examples of SMS Messages.

Transformation   Percentage (%)
Insertion        8.09
Deletion         5.48
Substitution     86.43

Table 2. Distribution of Insertion, Deletion and Substitution Transformations.
Substitution          Deletion   Insertion
u → you               m          are
2 → to                lah        am
n → and               t          is
r → are               ah         you
ur → your             leh        to
dun → don't           1          do
man → manchester      huh        a
no → number           one        in
intro → introduce     lor        yourself
wat → what            ahh        will

Table 3. Top 10 Most Common Substitutions, Deletions and Insertions.
Takako w r u? → Takako who are you?
Im in ns, lik soccer, clubbin hangin w frenz! Wat bout u mee? → I'm in ns, like soccer, clubbing hanging with friends! What about you?
fancy getting excited w others' boredom → Fancy getting excited with others' boredom
If u ask me b4 he ask me then i'll go out w u all lor. N u still can act so real. → If you ask me before he asked me then I'll go out with you all. And you still can act so real.
Doing nothing, then u not having dinner w us? → Doing nothing, then you do not having dinner with us?
Aiyar sorry lor forgot 2 tell u... Mtg at 2 pm. → Sorry forgot to tell you... Meeting at two pm.
tat's y I said it's bad dat all e gals know u... Wat u doing now? → That's why I said it's bad that all the girls know you... What you doing now?

Table 7. Examples of Normalization Results.

5-fold cross validation   BLEU score (3-gram)
Setup 1                   0.8023
Setup 2                   0.8236
Setup 3                   0.8071
Setup 4                   0.8113
Setup 5                   0.7908
Ave.                      0.8070

Table 6. Normalization results for the 5-fold cross validation test.

4 http://www.systranet.com/systran/net
5 http://nlp.i2r.a-star.edu.sg/techtransfer.html
Figure 2. Learning Curve: BLEU score (y-axis, 0.70-0.82) against the number of SMS training messages (x-axis, 1000-5000).
                I2R      Systran   Ave.
Raw Message     0.2633   0.1219    0.1926
Dict Lookup     0.3485   0.1690    0.2588
Normalization   0.4423   0.3116    0.3770

Table 8. SMS Translation BLEU score with or without SMS normalization.
This paper only discusses English SMS text normalization.
P(s|e) = Σ_T P(s, T|e) = Σ_T P(T|e) · P(s|T, e)
The entries are collected from various websites such as http://www.handphones.info/sms-dictionary/sms-lingo.php, and http://www.funsms.net/sms_dictionary.htm, etc.
A.T. Aw, M. Zhang, Z.Z. Fan, P.K. Yeo and J. Su. 2005. Input Normalization for an English-to-Chinese SMS Translation System. MT Summit-2005.
S. Bangalore, V. Murdock and G. Riccardi. 2002. Bootstrapping Bilingual Data using Consensus Translation for a Multilingual Instant Messaging System. COLING-2002.
R. Barzilay and K. R. McKeown. 2001. Extracting paraphrases from a parallel corpus. ACL-2001.
E. Brill and R. C. Moore. 2000. An Improved Error Model for Noisy Channel Spelling Correction. ACL-2000.
P. F. Brown, S. D. Pietra, V. D. Pietra and R. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2).
A. Clark. 2003. Pre-processing very noisy text. In Proceedings of the Workshop on Shallow Processing of Large Corpora, Lancaster, 2003.
F. J. Damerau. 1964. A technique for computer detection and correction of spelling errors. Communications of the ACM 7, 171-176.
A.P. Dempster, N.M. Laird and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, Vol. 39, 1-38.
A. Golding and D. Roth. 1999. A Winnow-Based Approach to Spelling Correction. Machine Learning 34: 107-130.
F. Jelinek. 1991. Self-organized language modeling for speech recognition. In A. Waibel and K.F. Lee, editors, Readings in Speech Recognition, pages 450-506. Morgan Kaufmann.
M. D. Kernighan, K. Church and W. Gale. 1990. A spelling correction program based on a noisy channel model. COLING-1990.
K. Kukich. 1992. Techniques for automatically correcting words in text. ACM Computing Surveys, 24(4):377-439.
K. A. Papineni, S. Roukos, T. Ward and W. J. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. ACL-2002.
P. Koehn, F.J. Och and D. Marcu. 2003. Statistical Phrase-Based Translation. HLT-NAACL-2003.
C. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal 27(3): 379-423.
M. Shimohata and E. Sumita. 2002. Automatic Paraphrasing Based on Parallel Corpus for Normalization. LREC-2002.
R. Sproat, A. Black, S. Chen, S. Kumar, M. Ostendorf and C. Richards. 2001. Normalization of Non-Standard Words. Computer Speech and Language, 15(3):287-333.
A. Stolcke. 2002. SRILM - An extensible language modeling toolkit. ICSLP-2002.
K. Toutanova and R. C. Moore. 2002. Pronunciation Modeling for Improved Spelling Correction. ACL-2002.
R. Zens and H. Ney. 2004. Improvements in Phrase-Based Statistical MT. HLT-NAACL-2004. |
6,368,353 | Inquiry Semantics: A Functional Semantics of Natural Language Grammar 1 | Programming a computer to operate to a significant degree as an author is a challenging research task. The creation of fluent multiparagraph text is a complex process because knowledge must be expressed in linguistic forms at several levels of organization, including paragraphs, sentences and words, each of which involves its own kinds of complexity. Accommodating this natural complexity is a difficult design problem. To solve it we must separate the various relevant kinds of knowledge into nearly independent collections, factoring the problem.Inquiry semantics is a new factoring of the text generation problem. It is novel in that it provides a distinct semantics for the grammar, independent of world knowledge, discourse knowledge, text plans and the lexicon, but appropriately linked to each. It has been implemented as part of the Nigel text generation grammar of English. This paper characterizes inquiry semantics, shows how it factors text generation, and describes its exemplification in NigeL The resulting description of inquiries for English has three dimensions: the varieties of operations on information, the varieties of information operated upon, and the subject matter of the operations. The definition framework for inquiries involves both traditional and nontraditional linguistic abstractions, spanning the knowledge to be represented and the plans required for presenting it. | [] | Inquiry Semantics: A Functional Semantics of Natural Language Grammar 1
William C Mann
USC/Information Sciences Institute
4676 Admiralty Way, Marina del Rey, California 90292, USA
Inquiry Semantics: A Functional Semantics of Natural Language Grammar 1
Programming a computer to operate to a significant degree as an author is a challenging research task. The creation of fluent multiparagraph text is a complex process because knowledge must be expressed in linguistic forms at several levels of organization, including paragraphs, sentences and words, each of which involves its own kinds of complexity. Accommodating this natural complexity is a difficult design problem. To solve it we must separate the various relevant kinds of knowledge into nearly independent collections, factoring the problem.Inquiry semantics is a new factoring of the text generation problem. It is novel in that it provides a distinct semantics for the grammar, independent of world knowledge, discourse knowledge, text plans and the lexicon, but appropriately linked to each. It has been implemented as part of the Nigel text generation grammar of English. This paper characterizes inquiry semantics, shows how it factors text generation, and describes its exemplification in NigeL The resulting description of inquiries for English has three dimensions: the varieties of operations on information, the varieties of information operated upon, and the subject matter of the operations. The definition framework for inquiries involves both traditional and nontraditional linguistic abstractions, spanning the knowledge to be represented and the plans required for presenting it.
Introduction
Text generation is the generation of language to conform to an a priori intention and plan to communicate.
The problem of text generation is naturally complex, requiring the active coordination of many kinds of knowledge having independent origins and character. A significant part of this complexity is in grammatical knowledge. It is important for the grammar of a text generator to have its own integrity, yet without being operationally autonomous.

1 Previous title: Generating Text: Knowledge a Grammar Demands.
The methods of generating text presented here grew out of a concern to maintain the integrity and definitional independence of particular existing fragments of grammar. These methods employ the grammar in ways which do not make any strong assumptions about the nongrammatical kinds of knowledge in the text generator. They control the use of the grammar in generation.
We first describe the methods, showing how they make grammatical generation possible. Then we show how they factor the problem of text generation and clarify the role of knowledge representations. Finally we characterize inquiry semantics and the notion of meaning.
Grammar and Control
People often anticipate that a text generator will plan the operations of the grammar in full detail and then execute such plans. In fact, such a mode of operation has serious difficulties, and so it is worthwhile to consider other approaches. Even given the definition of a grammar and a particular way of manipulating it to produce text, there is an issue of where the initiative should be exercised in generation. Should the responsibility for conformity of the result to the given intention and plan lie within the grammar manipulator, i.e., be part of its process of employing the grammar, or are the details of grammar use preplanned? It is an issue of control.
2 This role of intention in the use of language is one of the reasons for calling the semantics in this paper a functional semantics. Another is our use of one of the "functional" linguistic traditions.

To see the problem more clearly we can compare controlling the grammar to steering a car.
If we intend to drive to a nearby store, we can imagine planning the trip (in terms of steering motions) in total detail, deciding just where to turn, change lanes, and so forth, with sufficient precision to insure success. This detailed plan could in principle then be used to steer the car to the store. Such methods of imposed control are practical only in very simple cases.
Alternatively, we can make the decisions about steering at the point of need, on demand.
Unanticipated conditions are thus allowed for, and the complexity of the task is reduced. (There is no need to compensate in the plan for tire pressures, for example.) At each significant point along the way, the driver chooses a direction that conforms to the goal of reaching the destination. This is an active conformity approach, in which decisions about direction are made while the trip is in progress.
With imposed control, information about how to satisfy the intention and plan is needed before the process is started. With active conformity, information is needed as the process proceeds.
The design of our generation methods is based on active conformity. The grammar demands the information it needs about the plan as generation proceeds.
What does a purposefully generating grammar need to know? As part of the development of the Penman text-generation program, we have created a large systemic grammar of English [Mann 83]. Penman is designed to create a text plan and then execute it by giving it, one sentential element at a time, to the grammar. The grammar, which is called Nigel, operates on its own initiative, requesting information about the planned text as it is needed.

The central organizing concept in the grammar is choice. The language offers a variety of grammatical options that can be represented as sets of alternatives, and means for producing surface forms from particular combinations of choices made among the alternatives. All syntactic options are expressed in the sets of alternatives. In any one set, choosing one option excludes all of the others. Nigel contains over 200 systems (collections of alternatives in systemic notation), along with provisions for realizing choices as structures, an experimental lexicon used to give the structures surface forms, and extensive provisions for experimental control.

Footnote 3: The grammar is written in an extended systemic notation and draws extensively on precedents in the work of Halliday and others [Berry 75, Berry 77, Halliday & Hasan 76, Halliday 76, Hudson 76, Halliday & Martin 81, de Joia & Stenton 80, Fawcett 80]. We gratefully acknowledge the participation of Michael Halliday and Christian Matthiessen in the work.

Given this orientation toward choice, the problem of conformity to the text plan is simply the problem of making appropriate choices. Each set of alternatives (each "system" in its systemic representation) has an associated chooser or choice expert, a process that embodies a method for choosing appropriately in any particular circumstance.
The choice experts require certain information as they proceed with text generation. Nigel's choice experts request this information by presenting inquiries to the environment (the place outside of the grammar where intentions and plans to communicate are found). For this purpose, Nigel employs a formal inquiry language in which an inquiry is an expression containing an inquiry operator and a sequence of operands. A single interface is provided for all interactions between Nigel and the environment; all interactions at the interface are in the inquiry language. This way of using such an interface is called inquiry semantics.
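To make the interface concrete, the following sketch renders one inquiry exchange in Python. Nigel itself is not implemented this way; the Environment class and the handler registry are hypothetical stand-ins for the inquiry interface described above, and only the operator MultiplicityQ, its responses "unitary"/"multiple", and the concept symbol JLDR are taken from examples given elsewhere in this paper.

```python
# Illustrative sketch only, not Nigel's implementation.

class Environment:
    """Holds the knowledge base and text plan; answers inquiries."""
    def __init__(self, handlers):
        self.handlers = handlers  # maps inquiry operator names to response functions

    def ask(self, operator, *operands):
        # The single interface: every interaction is an operator plus operands.
        return self.handlers[operator](*operands)

def number_chooser(env, concept):
    """Schematic choice expert for a Singular/Plural system."""
    answer = env.ask("MultiplicityQ", concept)  # an information characterization inquiry
    return "Singular" if answer == "unitary" else "Plural"

env = Environment({"MultiplicityQ": lambda c: "unitary" if c == "JLDR" else "multiple"})
print(number_chooser(env, "JLDR"))  # -> Singular
```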
In this framework, we can understand the demands of the grammar by understanding the inquiry operators.
Varieties of Demands
This section characterizes the demands for information that Nigel can make in generating sentences. Since Nigel demands information only by presenting inquiries, we first characterize the things that Nigel can inquire about (the operands of inquiries), then characterize in two different ways the questions that Nigel can ask.
Categories of Operands of Inquiries
Nigel has four related information forms:
1. Concept symbols
2. Presentation specifications
3. Term sets
4. Terms

Concept symbols are names assigned by the environment to particular elements of its knowledge, either in the text plan for the text being formed or in the environment's knowledge base. A concept symbol represents an entity that may be simple or complex, decomposable or not; the symbols themselves are not decomposable. A concept symbol does not have to bear any particular relationship to any kind of linguistic entity.

Presentation specifications are formal descriptions of the information that should be expressed in a particular reference, description, or predication. Through presentation specifications the environment designates the content to be conveyed in each particular constituent (but not how the content is to be expressed).
For nominal groups (NP's). for example, presentation specifications represent the identification of the content to present about the particular object, process, or relation which the nominal group represents. The collection of devices that express nominal group content include head terms (nouns, pronouns, substitute "one"), modifying nominals, adjectives and adjective groups, quantifiers, numerals, determiners, prepositional phrases, restrictive and nonrestrictive relative clauses. Normally the grammar will use some combination of these devices in the nominal group to express all of the content of the presentation specification.
As a minimal example, the grammar's decision on whether a pronoun is adequate as a referring phrase can be made on the basis of the presentation specification, since the specification tells what constitutes adequate reference at the point of referring. (If the presentation specification indicates that nothing beyond gender and number needs to be expressed, a pronoun is used.)
The presentation specification is thus a unifying device for all of the conceptual elements of an intention to refer. It is essential to the generation task because the various syntactic devices effectively compete for the content which the nominal group expresses in referring.
At the clause level, presentation specifications operate comparably, unifying the effects of adverbial, conjunctive, and clausal modifiers. The specifications are constructed units, not frames or delimited regions of knowledge.
Term sets are collections of lexical items created in a special way which insures that they are appropriate, in denotation, connotation, and information content, for their intended use. (The process which creates term sets does not restrict them syntactically; that is done later by the grammar.) The individual terms in a term set need not be so restrictive that they fully express the intent of the unit being constructed, since they are used with modifiers. Term sets are not like sets of synonyms since they do not have any uniformity of semantic content.
Term sets are used as collections of alternatives, from which one term will be picked for the final syntactic unit. The best example is a term set giving alternatives that can serve as the head term of a nominal group.
A Term is a single lexical item selected from a term set. It identifies the particular lexical item to appear in the generated text.
Currently Nigel is deliberately underdeveloped in its treatment of lexical items, having no morphological component at all. Hence terms are simply lexical items which bear lexical features that the grammar can employ for selectivity.
To see how these forms are used, consider the sentence:
The leader is John.
It refers to John twice. In generating this sentence, the same concept symbol, say JLDR, would be used to generate both of the references. However, two different presentation specifications for referring to JLDR would be created. The first might specify that the resulting expression should convey the fact that the individual holds the role of leader. The second could merely specify that the resulting expression should convey the person's name.
Two different term sets would also be created. Initially, each would contain conceptually and denotationally appropriate terms, possibly including "leader," "man," and "person," in one of the term sets, and "John," and "Mr. Jones" in the other. Under guidance from various inquiries, the grammar applies different selectivity to one term set than to the other, so that the terms "leader" and "John" are finally selected.
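As a rough illustration of how these four information forms might be held in data structures, the sketch below encodes the "The leader is John." example; the field names and values are invented for exposition and are not Nigel's internal representation.

```python
# Schematic rendering of the four information forms for "The leader is John."
from dataclasses import dataclass, field

@dataclass
class PresentationSpec:
    concept: str                                  # concept symbol the reference is about
    content: set = field(default_factory=set)     # what must be conveyed, syntax-neutral

concept = "JLDR"                                  # one concept symbol, referred to twice
spec_1 = PresentationSpec(concept, {"holds-role:leader"})   # first reference
spec_2 = PresentationSpec(concept, {"personal-name"})       # second reference

term_set_1 = ["leader", "man", "person"]          # denotationally appropriate candidates
term_set_2 = ["John", "Mr. Jones"]

# Under guidance from inquiries, different selectivity is applied to each set:
term_1 = term_set_1[0]   # -> "leader"
term_2 = term_set_2[0]   # -> "John"
print(term_1, term_2)
```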
How do these operands of inquiries compare with conventional linguistic abstractions?
Concept symbols have many precedents, and terms are familiar. Both presentation specifications and term sets are new.
As we will see, both presentation specifications and term sets are widely and frequently used in the grammar. Their central role in generation suggests that they are worthy of linguistic attention.
Presentation specifications are novel in that they represent the content of particular units without its allocation to constituent units. This permits the investigation of how the allocation works, and in particular how differing ranks compete for representational roles. Competition among the possible constituents of a nominal group for representation of possession seems to be a typical case.
We would like to know, for example how the decision between using the determiner "his," the prepositional phrase "of his," and the clause "which he has" is made. A presentation specification can say in a syntactically neutral way that possession is to be expressed. Using them facilitates study of the alternation.
Nigel uses subtractive operations on presentation specifications to account for the fact that repeated expression of content in a nominal group is marked, but single expression is not.
Thus, it can account for the perception that "his car, which he owns" is marked in a way in which "his car, which he hates" is not.
Term sets are novel in that they represent the alternations and competition among lexical items. The sets of terms which compete as candidates, e.g. for the main verb of a clause or head term of a nominal group, are highly variable and dependent on the subject matter of the communication.
Hence they are not susceptible to static analysis as part of the grammar, and they are not easy to represent in systemic systems.
Consider, for example, the word "attention" at the end of the third paragraph back. Other candidates for use in the same setting would include words such as "research," "curiosity," "work," "perusal," and "funds." These terms (as well as "attention") would all be in the term set for generating that nominal group. However, they are from different lexical fields, fields which are ordinarily not in alternation. Since they are not the basis of a stable alternation, many sorts of static representations of them (including representation in systems in a systemic lexico-grammar) seem inappropriate. The situation is much more complex and dynamic, worthy of linguistic attention.

Notice that in both cases, addition of a new formal construct will facilitate study of how particular expressions are related to closely related alternatives in ways which are not in opposition in a conventional systemic account. Studies of functional alternation have long been a highly valued activity among systemicists.

Notice also that these constructs arise easily, almost inevitably, in studies of text construction, but are not inevitable at all in descriptive studies of text. Given a particular text to study, it is not at all clear what the rejected head term candidates were, nor what the alternate allocations of content to syntactic units might have been. In systemic terms, part of the meaning of a nominal group is derived from the particular choice of the head term, but, working descriptively, the alternation is hard to characterize. Study of text generation (and related work on constructive characterization) thus complements other methodologies in that it makes certain difficult tasks easier.
Abstract Categories of Inquiry Operators
The inquiries of the grammar can be differentiated according to categories of purposes they serve. Five such categories are described below. The first two kinds of inquiries are used for control, and the last three extract symbols from the environment -- either lexical items or symbols that can be included as subject matter in subsequent inquiries. Inquiries of the first two kinds have predetermined closed sets of possible responses; the last three kinds allow an unlimited number of responses.
1. information availability
2. information characterization
3. decomposition
4. linking (identification of related information)
5. mapping
Some inquiries determine whether information of a certain character is available, such as the location or duration of an event.
These inquiries generally precede others used to characterize information.
The operators used for information characterization form the largest collection of operators among the five kinds.
They are used to subcategorize and also to discover relations of inclusion, identity, precedence, adjacency, and attributes of manner, number, completeness, intended emphasis, identifiability to the reader, decomposability, gender, hypotheticality, extensionality, and many other sorts.
When the grammar has determined that some of the available information is decomposable into parts in a syntactically significant way (usually through information availability inquiries), information decomposition inquiries are used to obtain access to the parts. This is the largest category of inquiries for which an unlimited diversity of responses is allowed. These inquiries offer access to actors, affected objects, processes, causers, polarities, locations, time periods, extents, manners, and various kinds of participants or conditioners of processes.
The linking inquiries are a small collection of inquiries which resemble the information decomposition inquiries. They obtain information related in a particular way to known information, but not part of it. For example, given an event whose time must be expressed, there is an inquiry that obtains the identity of the time relative to which the event's time of occurrence should be expressed.
In terms of the four forms of information presented in section 3.1 above, exploration always proceeds from concepts to presentation specifications and term sets, and from term sets to terms, as shown in Figure 1. In a similar way, the mappings from concepts to term sets and from term sets to terms also vary depending on the communication situation. Within the categories, however, each individual inquiry is specialized to a single kind of knowledge.
Categories of Subject Matter

Subject Matter of Inquiries Concerning Prior Knowledge
In addition to inquiring about availability of information, the grammar asks about abstract characteristics of processes, about number and discreteness, and about time and space. Also, there is a substantial collection of inquiries about logical relations such as set membership, interval inclusion, identity of two entities, extensionality, definiteness of existence, hypotheticality, polarity and conditionality.
Subject Matter of Inquiries for Communication
Among the inquiry operators that refer to information created in pursuit of an intention or plan to communicate, there are inquiries about speech acts and about controlling the hearer's attention. The latter are used in controlling thematicity, various kinds of marking, and the foregrounding or backgrounding of information.
Support Processes in the Environment
The organization of inquiry requires that various kinds of processes be available in the environment for responding to inquiries. At a detailed level, there must be a capability for the environment to recognize each inquiry operator and to respond to each one appropriately. In computational terms, for a particular domain of expressive problems, all of the inquiry operators which are called upon to serve that domain must be implemented. (For simple expressive problems this can be far fewer than the total for the grammar.)
At a more comprehensive level, we can identify certain recurrent activities which must underlie the operations of the inquiry operator implementations. These include searching for an appropriate set of lexical items (such as candidate head nouns for a nominal group), creating a presentation specification for expressing a particular idea, and choosing among a set of terms which the grammar has approved as appropriate for a certain use.
At an even more comprehensive level, the grammar relies on the prior activity of processes which plan the text.
Inquiries in Action: An Example
The following list summarizes Nigel's activity in developing a particular nominal group: "her appointment on Wednesday morning with us." The starting point is identification of a need to refer to an object represented by concept APPOINTMENT. At the end of the activity shown, there is a structure containing the word "appointment" as the head term, the word "her" as its determiner, and elements that could be further developed into the phrases "on Wednesday morning" and "with us." The category of each inquiry operator is indicated in <brackets>. The order of presentation is the order actually used in the program. It is somewhat disconnected, since the program often chooses in an arbitrary way between several things which it could do next. An inquiry appears more than once if it is used by more than one choice expert. Using the answers to these inquiries, the grammar builds a structure consisting of four elements in an ordered sequence: "her," "appointment," ONWEDNESDAYMORN, WITHUS, the latter two representing conceptual elements to be further developed in subsequent applications of the grammar.
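A purely hypothetical rendering of such an exploration is sketched below. The operator names are invented placeholders (the actual inquiry list appears in the original report); only the inquiry categories and the resulting four-element structure are taken from the text.

```python
# Hypothetical trace of the exploration for "her appointment on Wednesday
# morning with us"; operator names are schematic, not Nigel's inventory.
trace = [
    ("information availability", "TimeAvailableQ", "APPOINTMENT"),   # is a time known?
    ("decomposition",            "TimeOfQ",        "APPOINTMENT"),   # -> ONWEDNESDAYMORN
    ("linking",                  "AccompanimentQ", "APPOINTMENT"),   # -> WITHUS
    ("mapping",                  "TermSetQ",       "APPOINTMENT"),   # -> ["appointment", ...]
]

# The structure built from the answers, as described above:
structure = ["her", "appointment", "ONWEDNESDAYMORN", "WITHUS"]
print(structure)
```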
Relations between Operators
Some operators are closely related in ways not suggested above. In particular, some pairs of operators are used together in a characteristic way: First an availability operator asks if certain information is available, for example, whether the location of an event is known. If a positive response is given, a decomposition inquiry asks for a symbol to represent the available information, such as the location.
Almost all of the decomposition inquiries are paired with availability inquiries in this way. However, a few are not. For these, the grammar assumes the existence and separability of the information it requests. The following are the exception cases:
1. the identity of the speaker.
2. the identity of the time of speaking, the "now" of tense.
3. given an event to express in an independent clause, the identity of the time of occurrence of the event.
4. given the need to generate a clause, the identity of the process portion (which will be realized in the main verb.)
In addition, none of the mapping operators and none of the linking operators are paired. We see that the decomposition operators have little intellectual content, but the other kinds all contribute significantly.
Demands on the Knowledge Representation
Reviewing the inquiries, we can find several kinds of operations that are particularly difficult to support in explicit knowledge representations such as those currently used in AI or logic.
One operator asks whether the existence of a particular entity is hypothetical. Knowledge gained from this inquiry is useful in controlling contrasts such as the following:
If they run to town, they will be sorry.
If they are running to town, they will be sorry.
Another operator asks about conjectural existence. It controls contrasts such as:
They will run to town.
They might run to town.
In the first case the running to town is treated as definite but occurring in the future.
Another asks whether an action to be expressed is habitual or recurrent rather than a particular instance. Another group of inquiries seeks to determine the manner of performance of an action. Others deal with partial specifications and "question variables" of the sort that are often realized by "wh" terms such as "what," "how," and "whether." Some operators control negation and quantification, which often cause representation problems.
In addition to all of these potential problem sources, associated with inquiries whose responses will be difficult to determine, there are also many difficulties which do not arise from
The Abstract Character of Inquiry Semantics
In this section we compare inquiry semantics to other kinds of semantics, and also identify the nature of meaning in this framework.
Comparative Semantics
The inquiry-based semantics presented here contrasts with other accounts also called "semantics" in many ways, but it does not particularly compete with them. This semantics, as a way of theorizing, is an answer to the question "How can we characterize the circumstances under which it is appropriate to make each particular grammatical choice of a language?" It differs from other semantic approaches in that 1. its scope is confined to grammar, rather than addressing linguistic behavior as a whole;
2. it does not presume particular structures (deep or otherwise) in the environment;
3. it is not particularly limited to issues reducible to questions of truth value;
4. its scope includes nondeclarative, noninterrogative speech actions (including imperative, imprecation, and greeting functions) on a par with declarative and interrogative ones;
5. it includes other functions of language in addition to the representational ones (such as the attention-direction functions);
6. it is defined relative to generation rather than interpretation, but is not thereby "generative".
This semantics is potentially compatible with other sorts, since it makes very few theoretical assumptions about the nature of language and communication. By encompassing every kind of syntactic construction, it is more inclusive than most.
Nothing in inquiry semantics rules out any particular formal apparatus as the notation for the methods by which the environment responds to inquiries. Accounts of particular languages and grammars will give some informal guidance as to which sorts of methods will be perspicuous, and may rule out particular formalisms as response mechanisms for particular grammars. The topic is as yet unexplored.

The Nature of Meaning in Inquiry Semantics

We could assign meanings to any of several kinds of entities in this framework: grammatical features, collections of features, realizations of collections of features (i.e., structures), inquiry responses, or other possibilities. Our selection of a particular kind of entity as the locus of meaning depends on our intended use for that locus. We intend to use this notion of meaning to identify the ways in which minimal structurally-justified distinctives are responsive to their conditions of use. This selection does not preclude other selections for other purposes, and it certainly does not suggest that there are no other entities which are meaningful. We associate meanings with grammatical features, in part because these are the controlling entities in the systemic framework. Given a systemic grammar, the syntactic structures which are produced depend entirely on the grammatical features which are chosen, and the opportunity to choose a grammatical feature also depends entirely on the grammatical features which are chosen, i.e., the entry conditions of the system in which the feature occurs. So it is convenient to associate meaning with features, and to derive meanings for any other entity by the determinate derivational methods which the systemic framework provides.

To state the meaning of a grammatical feature is to state the technical circumstances under which the feature is chosen. We identify these circumstances as the set of possible collections of inquiry responses which are sufficient to lead to the choice of the feature. The definitions of the systems of the grammar and their choice experts are thus sufficient to determine the meaning of every grammatical feature. Ambiguity of a feature arises when there is more than one collection of relevant inquiry responses which leads to the choice of the feature. Differences of meaning reflect differences between collections of inquiry responses. In Nigel, for the features Singular and Plural, one of the collections of inquiry responses which leads to Singular contains a response "unitary" to MultiplicityQ, and a corresponding collection contains "multiple" as a response to MultiplicityQ, which leads to Plural. We can determine by inspection of the entire meanings that Singular and Plural exclude each other, and the determination could be made even if the features were not in direct opposition in the grammar.

Footnote 4: We do not state the method here, since that involves many systemic details, but it is normally a rather straightforward matter for the Nigel grammar. More detail can be found in [Mann 82, Mann & Matthiessen 83a, Mann & Matthiessen 83b].

Footnote 5: The meanings of the features are not sufficient to find the sets of meanings which correspond to particular structures, since that requires the realization mapping of features to structures. However, given the associations of features with realization operations, the structures for which a particular feature (or combination of features) is chosen can be identified, and so in principle the sets of technical circumstances which can yield a particular string can be identified.
Notice that this approach is compatible with approaches to grammar other than traditional systemic grammar, provided that their optionality is reexpressed as alternation of features, with choice experts defined to identify the circumstances under which each option is chosen.
Notice also that it is possible to have meanings in the grammar which are ruled out by the environment, for example, by consistency conditions. A change in the environment's epistemology could lead to changes in how the grammar is employed, without changes in meaning, the grammar being more neutral than its user.
Notice also that the collection of inquiry operators for a language is a claim concerning the semantic range of the grammar of that language, a characterization of what can be expressed syntactically.
Notice finally that, given a grammar and an inquiry semantics of each of two different languages, the question of whether a particular sentence of one language has the same meaning as a particular sentence of the other language is an addressable question, and that it is possible in principle to find cases for which the meanings are the same. One can also investigate the extent to which a particular opposition in one language is an exact translation of an opposition in another.
Conclusions
The inquiry language as a level of abstraction provides a useful factoring of the text generation problem, isolating the grammar-intensive part.
Development of inquiry language has led to the creation of new kinds of abstract elements that can be the operands of inquiries. Of these, presentation specifications and term sets have sufficiently novel scopes to suggest that they may be useful in defining relationships between grammar and language use.
We have identified three dimensions of characterization that yield a convenient abstract structure for understanding inquiry language collectively (by categories of operands, categories of operators and categories of subject matter.) These categorizations clarify the ways in which effective use of a grammar depends on processes and information outside of the grammar, including some ways which are not well controlled in available knowledge representations.
Inquiry semantics contrasts with other theoretical entities also called "semantics" in many ways. It is potentially compatible with some other forms, but tends to be broader than many in including non-representational functions and non-declarative speech actions in its scope.
This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government.
Figure 1: Information flow through mapping inquiries

A small collection of Mapping inquiries participate in this exploration at the points where information forms change. Several create specialized presentation specifications for concepts, and others create term sets and terms. Since operators can request presentation specifications, they can in effect demand that the environment work out what information to include in a new reference to an entity. The environment must then use the knowledge of past mentions, a model of the hearer's attention and of possible confusion candidates, and also the knowledge of denotationally appropriate lexical items; these elements of knowledge are all outside the boundary of the grammar. The mapping from concepts to presentation specifications is thus dependent on the particular circumstances.
[Berry 75] Berry, M., Introduction to Systemic Linguistics: Structures and Systems, B. T. Batsford, Ltd., London, 1975.

[Berry 77] Berry, M., Introduction to Systemic Linguistics: Levels and Links, B. T. Batsford, Ltd., London, 1977.

[de Joia & Stenton 80] de Joia, A., and A. Stenton, Terms in Systemic Linguistics, Batsford Academic and Educational, Ltd., London, 1980.

[Fawcett 80] Fawcett, R., Cognitive Linguistics and Social Interaction, Julius Groos Verlag and Exeter University Press, 1980.

[Halliday 76] Halliday, M. A. K., System and Function in Language, Oxford University Press, London, 1976.

[Halliday & Hasan 76] Halliday, M. A. K., and R. Hasan, Cohesion in English, Longman, London, 1976. English Language Series, Title No. 9.

[Halliday & Martin 81] Halliday, M. A. K., and J. R. Martin (eds.), Readings in Systemic Linguistics, Batsford, London, 1981.

[Hudson 76] Hudson, R. A., Arguments for a Non-Transformational Grammar, University of Chicago Press, Chicago, 1976.

[Mann 82] Mann, W. C., The Anatomy of a Systemic Choice, USC/Information Sciences Institute, Marina del Rey, CA, Technical Report RR-82-104, October 1982. To appear in Discourse Processes.

[Mann 83] Mann, William C., An Overview of the Penman Text Generation System, USC/Information Sciences Institute, Marina del Rey, CA 90291, Technical Report RR-83-114, 1983. To appear in the 1983 AAAI Proceedings.

[Mann & Matthiessen 83a] Mann, W. C., and C. M. I. M. Matthiessen, Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute, Technical Report RR-83-105, February 1983. The papers in this report will also appear in a forthcoming volume of the Advances in Discourse Processes Series, R. Freedle (ed.): Systemic Perspectives on Discourse: Selected Theoretical Papers from the 9th International Systemic Workshop, to be published by Ablex.

[Mann & Matthiessen 83b] Mann, William C., and Christian M. I. M. Matthiessen, An Overview of the Nigel Text Generation Grammar, USC/Information Sciences Institute, Marina del Rey, CA 90291, Technical Report RR-83-113, 1983. |
250,179,884 | Détection des influenceurs dans des médias sociaux par une approche hybride | RESUMEL'influence sociale est un phénomène important dans divers domaines, tels que l'économie et la politique, qui a gagné en résonnance avec la popularité des médias sociaux, notamment les réseaux sociaux et les forums. La majorité des travaux sur ce sujet propose des approches fondées sur des théories en sciences humaines (sociologie, linguistique), et des techniques d'analyse de réseau (mesures de propagation et de centralité) ou de TAL. Dans cet article, nous présentons un modèle d'influence inspiré de travaux en psychologie sociale, sur lequel nous construisons un système combinant un module de TAL pour détecter les messages reflétant les processus d'influence, associé à une analyse par centralité de la transmission de ces messages. Nos expériences sur le forum de débats Change My View montrent que l'approche par hybridation, comparée à la centralité seule, aide à mieux détecter les influenceurs.ABSTRACTInfluencer detection in social media, a hybrid approachSocial influence is an important phenomenon in various fields, such as economics and politics, which has gained resonance with the popularity of social media, especially social networks and forums. Most of the works on this topic propose approaches based on theories in human sciences (sociology, linguistics) and techniques using either network analysis (propagation and centrality measures) or NLP. In this paper, we present a model of influence inspired by works in social psychology, which we implement with an NLP system to detect messages reflecting influence processes, combined with a centrality analysis of the transmission of these messages. Our experiments on the Change my view debate forum show that the hybridization approach, compared to the centrality analysis alone, gives significant improvements to detect influencers. | [
53091341
] | Détection des influenceurs dans des médias sociaux par une approche hybride
Kévin Deturck [email protected]
Damien Nouvel [email protected]
Namrata Patel [email protected]
Rte de Mende
Université Montpellier 3 -MIAp
34090MontpellierFrance (
Frédérique Segond [email protected]
Inria Minatec
17, avenue des Martyrs38000GrenobleFrance
(1) Inalco -Ertim, 2, rue de Lille75007ParisFrance (
Détection des influenceurs dans des médias sociaux par une approche hybride
Actes de la 29e Conférence sur le Traitement Automatique des Langues Naturelles Avignon, France, 27 juin au 1er juillet 2022 Volume 1 : conférence principale, pages 54-63. Cette oeuvre est mise à disposition sous licence Attribution 4.0 International.MOTS-CLES : influenceursréseaux sociauxmédias sociauxTALgraphescentralité KEYWORDS: influencerssocial networkssocial mediaNLPgraphscentrality
RESUMEL'influence sociale est un phénomène important dans divers domaines, tels que l'économie et la politique, qui a gagné en résonnance avec la popularité des médias sociaux, notamment les réseaux sociaux et les forums. La majorité des travaux sur ce sujet propose des approches fondées sur des théories en sciences humaines (sociologie, linguistique), et des techniques d'analyse de réseau (mesures de propagation et de centralité) ou de TAL. Dans cet article, nous présentons un modèle d'influence inspiré de travaux en psychologie sociale, sur lequel nous construisons un système combinant un module de TAL pour détecter les messages reflétant les processus d'influence, associé à une analyse par centralité de la transmission de ces messages. Nos expériences sur le forum de débats Change My View montrent que l'approche par hybridation, comparée à la centralité seule, aide à mieux détecter les influenceurs.ABSTRACTInfluencer detection in social media, a hybrid approachSocial influence is an important phenomenon in various fields, such as economics and politics, which has gained resonance with the popularity of social media, especially social networks and forums. Most of the works on this topic propose approaches based on theories in human sciences (sociology, linguistics) and techniques using either network analysis (propagation and centrality measures) or NLP. In this paper, we present a model of influence inspired by works in social psychology, which we implement with an NLP system to detect messages reflecting influence processes, combined with a centrality analysis of the transmission of these messages. Our experiments on the Change my view debate forum show that the hybridization approach, compared to the centrality analysis alone, gives significant improvements to detect influencers.
Introduction
Influence is a social phenomenon that rests largely on the actions of individuals who affect the opinions, decisions, and behaviors of other individuals. The notion of influencer, which appeared with social networks, refers to influence activity on the Internet, in particular for commercial or political purposes. Institutions and brands alike call on influencers, who act as communication levers.

The problem motivating this work is to design a system that automatically detects influencers from social media data (Deturck, 2021). Our objectives are the following: to model influence from social media data, to identify linguistic traits associated with this influence model in messages and to detect them automatically with NLP, and finally to show that the classical centrality-based approach can be combined with an NLP-based approach to better detect influencers.

We begin with a review of related work in section 2. Section 3 presents the NLP dimension of our approach, an aspect rarely taken into account in most work on the subject. We present the hybridization of this approach with a classical centrality-based approach and discuss the results in section 4, before concluding in section 5.
Related work
The social psychologist Alex Bavelas claimed that an individual's central position in the interactions of a social group induces influence over that group (Bavelas, 1948), which he later verified on small groups of individuals (Bavelas, 1950). The sociologist L. C. Freeman developed centrality measures for social graphs, using the notions of betweenness (Freeman, 1977) and closeness (Freeman, 1978). In the 1990s, new centrality measures appeared to quantify the importance of web pages, the most famous being PageRank (Page et al., 1998). All of these measures are used directly to detect influencers in online social networks (Wibisono & Ruldeviyani, 2021) or are adapted to this type of environment (Singh, 2022). Other approaches focus on the content exchanged between individuals, characterizing influencers' messages in order to detect them with NLP, whether by describing a narrative strategy used by influencers (Feng et al., 2021) or by relying on behavioral traits of influencers translated into discursive traits (Rosenthal & McKeown, 2017). Some works have integrated information drawn from the semantics of messages into the centrality measure, such as similarity (Katsimpras et al., 2015), originality (Song et al., 2007), or level of expertise (Li et al., 2013). We draw on this work while adding a unified model of influence that integrates structural analysis with a much more prominent and in-depth NLP dimension, through a characterization of influence phenomena in social media messages.
Influence in messages
Pursuing the goal of a pragmatic modeling of influence, we chose to study its linguistic manifestations in social media messages (Nouvel et al., 2019). To design such a model, our empirical approach starts from discussions involving influencers and analyzes both their messages and those of the other participants. Our objective is to identify discourse regularities of influencers and of the influenced.
Modeling individual influence
To model individual influence (at the scale of an individual rather than a group), we manually analyzed discussion threads from the Change My View forum, in English, where the creator of a thread presents their point of view and reasoning on a topic of their choice so that other participants can make them change their mind. If a participant succeeds, the original author must quote the key message, explain their change of view, and reward the author of that message with a "delta" symbol, validated by the forum moderators if the explanatory message complies with the forum's rules.

Our observations on Change My View led us to model individual influence as a three-step process: stimulus, stimulation, and decision. The stimulus and stimulation components correspond to a theoretical framework in social psychology, which views an individual's social environment as carrying stimuli that can induce the stimulation of their psychological state (Turner & Oakes, 1986) and thus lead them to make a decision (Ajzen, 1996).

By manually analyzing Change My View threads containing changes of view, and therefore actions of individual influence, we identified regular enunciative traits corresponding to each of the three components of the model. The enunciative traits of stimulation are not detailed here because they are not the subject of experiments in this article. There are three types of stimuli: the claim, denoting a statement presented as factual; pedagogy, a type of statement through which a speaker lectures or advises their audience; and argumentation, a statement used by a speaker to win an audience over to their opinion by motivating it with arguments.
Automatic detection of individual influence
To develop and evaluate an NLP system that detects messages containing our enunciative traits of influence, we built a reference corpus through an annotation campaign, which consisted in locating, in messages, the text segments corresponding to each of the traits. Although the annotation campaign covered all of the traits, we did not have time to run experiments on stimulation. Also, the experiments on detecting change of view were described in an earlier publication (Deturck, 2018), and for these we did not use the data from the annotation campaign but the forum's "delta" system. We therefore focus the presentation here on the detection of the stimulus traits.

Of the two corpora used for this campaign, one is extracted from the Change My View forum, with a filtering of the discussions ensuring a minimum level of activity from the original author and the other participants (Tan et al., 2016). We present below the configurations that produced the best results for detecting each stimulus trait.
Ø Claim: random forests + GloVe; precision 0.66, recall 0.69, F-measure 0.67
Ø Pedagogy: logistic regression + BERT + GloVe + TF-IDF with morphosyntax and syntactic dependencies; precision 0.29, recall 0.76, F-measure 0.42
Ø Argumentation: multilayer perceptron + BERT + TF-IDF over 2-3 characters and 1-2 tokens; precision 0.52, recall 0.62, F-measure 0.57
We exclude pedagogy from the hybridization because of its low precision. Each of these configurations will be used independently to detect its trait.
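As a minimal sketch of this kind of per-trait binary classification, the following Python snippet assembles a scikit-learn pipeline. The toy messages, the TF-IDF character n-gram features, and the hyperparameters are illustrative and are not the exact best configurations reported above (e.g., the best claim configuration uses GloVe document vectors rather than TF-IDF).

```python
# Sketch only: a per-trait binary classifier (is a "claim" present or not?).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

messages = ["Salman hold talks with Putin", "haha I love this song"]
labels = [1, 0]  # 1 = message contains a claim (invented toy labels)

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)),  # character 2-3-grams
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(messages, labels)
print(clf.predict(["Putin met Salman yesterday"]))
```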
A hybrid approach to influencer detection
We now describe how we combined the classical approach of analyzing the structure of exchanges through centrality with the analysis of messages in order to detect influence and influencers. Our hypothesis is that hybridizing these two types of information will allow better detection.
Modeling influence through hybridization
The principle of hybridization is to build graphs that integrate the semantic information drawn from the messages corresponding to our three components of individual influence: stimulus, stimulation, and decision. The graph used in the hybridization is built by successively adding three components, presented in order below. For a comparative evaluation, in addition to building the graph corresponding to our hybrid model, we also built a baseline graph that represents the transmission of messages, whatever they are, between individuals: there is a directed edge from a node representing an individual Orig to a node representing an individual Dest if Orig sent a message to Dest, or if Dest replied to a message from Orig. We applied two outgoing centrality measures to these graphs: out-degree (Shaw, 1954), based on the number of outgoing edges, and the HITS hub measure (Kleinberg, 1999), which weights outgoing edges more heavily when their target nodes have many incoming links and thus form hubs in the graph.
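A minimal sketch of the two centrality measures on a toy stimulus graph is given below, using networkx. The participants, edges, and weights are invented; only the weight scheme (1 for a change of view, 0.5 for another reaction, 0.25 for no reaction) and the out-degree and HITS hub scores correspond to the measures described in this paper.

```python
# Sketch only: out-degree and HITS hub scores on a toy directed stimulus graph.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("alice", "bob",   1.0),   # bob replied with a change of view
    ("alice", "carol", 0.5),   # carol replied without changing her view
    ("bob",   "carol", 0.25),  # no reaction to bob's stimulus
])

out_degree = dict(g.out_degree(weight="weight"))  # weighted out-degree per participant
hubs, authorities = nx.hits(g)                    # hub scores reward outgoing links to well-pointed-to nodes
print(out_degree, hubs)
```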
We experimented with three configurations of the stimulus graph and of the baseline graph: (1) no weighting in either graph; (2) a higher weight on edges whose corresponding messages generated replies (0.5 versus 0.25); and (3) the same as (2) plus a maximum weight (1) on stimulus edges whose corresponding messages generated a change of view. We also compared measuring participant centrality per discussion, on local graphs, each built from a single discussion of the dataset, with measuring participant centrality on a global graph built from all the discussions of the dataset.

For the evaluation, we ranked the participants of each discussion according to their centrality scores in the different configurations presented above, then applied the Mean Average Precision (MAP) measure to compare these rankings with a binary categorization of the participants, considered influencers if they received a "delta" reward validated by the Change My View forum moderators. The score, between 0 and 1, is higher when influencers appear near the top of the evaluated ranking.
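The following sketch illustrates the MAP computation on invented rankings and "delta" labels; it is not the authors' evaluation code.

```python
# Sketch only: MAP over per-discussion rankings against binary influencer labels.
def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for i, participant in enumerate(ranked, start=1):
        if participant in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

rankings = {                                   # participants ranked by centrality, best first
    "thread1": ["u3", "u1", "u7"],
    "thread2": ["u2", "u5", "u4", "u6"],
}
influencers = {"thread1": {"u1"}, "thread2": {"u2", "u6"}}  # received a validated delta

map_score = sum(average_precision(rankings[t], influencers[t]) for t in rankings) / len(rankings)
print(round(map_score, 3))  # -> 0.625 on this toy data
```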
Results
The hybridization approach globally outperforms the baseline (by up to 10 points of MAP; see Table 1). The HITS measure only notably outperforms the simpler out-degree measure on local graphs, with the change-of-view distinction (7 points more), which seems to indicate the importance of this trait for detecting influencers.
Conclusion and perspectives
We proposed a model of influence based on messages from an argumentation forum, carried out a linguistic characterization of this model through original enunciative traits, and ran an annotation campaign for these traits, which allowed us to develop and evaluate NLP-based systems showing that they can be detected automatically. Our hybridization experiments showed that an NLP-based approach can bring a substantial gain for detecting influencers compared with the more classical centrality-only approach.

It would be interesting to use other data sources to test the robustness of our system, to add the automatic detection of the stimulation traits, and to improve the detection of the traits already integrated. We could also refine the system's output with, for example, a ranking of influencers and stimuli according to the extent of their impact, as well as an analysis of the particular relations that may exist between stimuli and stimulations.
Ø Example of a Claim: "Salman hold talks with Putin"; this is the description of an alleged event which, true or false, is indeed a statement presented as factual.
Ø Example of Pedagogy: "Turn it off so they can stay in the darkness of their misguidance"; this is a piece of advice addressed directly to its recipient, with an explanation, to guide their behavior and thinking.
Ø Example of Argumentation: "Since technology is always consolidating, it's only logical that it'll continue to do so."; the author states their idea on a topic ("it'll continue to do so"), which they justify with an argument ("Since technology is always consolidating"), while stressing the logical character of their argumentation ("it's only logical").

The enunciative trait of decision, in Change My View, is the expression of a change of view, the end goal of an influence action on this forum. It corresponds to any modification, even partial, of an individual's intellectual position on a topic.

Ø Example of a Change of view: "This definitely makes me rethink my point on this"; the author says they now think differently after reading a message, which indicates an impact on their intellectual position.

An influencer detection system based on our model of individual influence must be able to automatically spot the enunciative traits we have just presented. In the next section, we present the implementation and evaluation of the modules dedicated to the automatic detection of the stimulus traits and of the decision trait.
From this pre-existing corpus, we kept only the discussions containing at least one change of view by the initial participant (the one who presents their point of view). The other corpus consists of tweets in English from individuals identified as pro-Islamic State (1): starting from the hypothesis that pro-Islamic State individuals act as influencers, the objective was to see whether we would find our stimulus traits in their tweets. Since this corpus does not include the reactions to the tweets, we did not use it for annotating the stimulation and decision traits, as these concern reactions to stimuli.

For the automatic detection of stimuli, framed as a binary classification problem over messages for each trait, according to whether the trait is present or not, we used all the data from the annotation campaign with a 70%-30% split for development and evaluation respectively.

With the Scikit-learn library (2), we experimented with four classical algorithms: an SVM algorithm (the SVC implementation), the random forest algorithm, one based on logistic regression, and a multilayer perceptron. We combined these algorithms with different message descriptors based, on the one hand, on a TF-IDF representation of the messages, targeting the specificity of stimulus statements through the characters, tokens, morphosyntactic categories, and syntactic dependencies they contain, and, on the other hand, on BERT (3) and GloVe (4) word embeddings, using respectively the PyTorch (5) and Transformers (Wolf et al., 2020) libraries with the BERT model, and the Spacy library (6), to move from a lexical representation to a message-level representation. All of these descriptors are aggregated.

The annotators were NLP students whose native language was not English. To frame the annotation work, we wrote a guide (Deturck, 2021) which was improved iteratively, in collaboration with the annotators, through discussions and reconciliations (unification of the disagreements between annotators) after each annotation session. We organized four sessions on the stimulus traits; one session covered only the Claim trait because at that point we had written the guide for that trait only. There were 27 annotators in total; those of the first two sessions, seven pairs and then five pairs and one trio, were completely different, while those of the last two sessions, two pairs, were four annotators who had already worked during the second session.

At each session, the groups received corpora of identical size and of a single text genre (tweets or forum posts) to simplify the annotation. For the Claim trait, 1,126 messages were annotated, 45% of which contained an instance of a claim; for the pedagogy and argumentation traits, 716 messages were annotated, 14% of which contained an instance of pedagogy and 7% an instance of argumentation. Over sessions 2 to 4, which cover all the stimulus traits, the inter-annotator agreement, measured with the GammaCat metric (Mathet, 2017), which compares the categorization of segments judged similar by their position in the texts according to an associated algorithm, has an average value of 0.70, progressing from 0.53 in session 2 to 0.88 in session 4.

We optimized the data produced by the message descriptors using the StandardScaler and SelectKBest algorithms. We also optimized the configuration of the algorithms using the GridSearchCV algorithm with cross-validation on the training data, over a few generic parameters, the others being left at their default values: k among {5, 10, 20, 30} for SelectKBest, C among {0.00001, 1, 1000} for SVM and logistic regression, n_estimators among {10, 100, 500} for random forests, penalty among {l1, l2} and solver among {liblinear, saga} for logistic regression, max_iter among {1000, 5000, 10000} for the multilayer perceptron, as well as the kernel and gamma parameters for SVM, criterion and max_features for random forests, and activation and solver for the multilayer perceptron, among all the discrete values offered by the library.

Footnotes:
1. https://www.kaggle.com/fifthtribe/how-isis-uses-twitter
2. https://scikit-learn.org/stable/
3. https://huggingface.co/bert-base-uncased
4. spacy_en_vectors_web_lg-2.3.0
5. https://pypi.org/project/torch/
6. https://spacy.io
1. The stimulus graph: the nodes of the graph represent individuals, while each edge of the directed graph represents the transmission of at least one message containing a stimulus from one participant to another. We consider that a stimulus message passed from an individual Orig to an individual Dest in the following three cases: Orig sent a message containing a stimulus to Dest; Dest replied to a message from Orig containing a stimulus (even if it was not explicitly addressed to them); Dest replied to Orig with a message containing a change of view.

2. Weighting individual influence in the stimulus graph: to quantify, in the stimulus graph, the individual influence between the individuals represented, we assign to each edge of the graph the average weight of all the stimulus messages that passed along that edge. The weight of a stimulus message is designed to reflect the probability that the message influences a target individual, which we associate with a characterization of the possible reaction this message triggered in the target individual, contained in any message replying to a stimulus message. The weight of a stimulus message can take three values: 0.25 in the absence of a reaction, 1 (the maximum value) if the replying message includes the expression of a change of view, and 0.5 for any other reaction.

3. The centrality of the individuals represented in the individual-influence graph: the centrality must indicate the extent to which each individual propagates influence, so the centrality measure must focus on the outgoing links of the node under analysis.

4.2 Experimental framework

For the development and evaluation of our hybrid system, we used 50 discussion threads taken from the Change My View corpus partly used for the annotation campaign, with the filtering criteria on minimum participation and change of view. These 50 threads contain 4,664 messages and 1,435 participants, 5% of whom are influencers. The data used to train the classifiers is distinct from that used for the hybridization: it corresponds to the set of messages annotated during the campaign.
TABLE 1: Results of the hybridization experiments (MAP)

                              MAP with hybridization                MAP with the baseline
Graph type                    Global graph      Local graphs        Global graph      Local graphs
Centrality measure            Degree   Hits     Degree   Hits       Degree   Hits     Degree   Hits
No weighting                  0.26     0.24     0.32     0.33       0.205    0.204    0.22     0.25
Varied reactions              0.27     0.25     0.349    0.348      0.21     0.20     0.30     0.22
Change of view distinguished  0.31     0.28     0.48     0.55       (only with hybridization)
5,702,756 | The TALP&I2R SMT Systems for IWSLT 2008 | This paper gives a description of the statistical machine translation (SMT) systems developed at the TALP Research Center of the UPC (Universitat Politècnica de Catalunya) for our participation in the IWSLT'08 evaluation campaign. We present N gram-based (TALPtuples) and phrase-based (TALPphrases) SMT systems. The paper explains the 2008 systems' architecture and outlines the translation schemes we have used, mainly focusing on the new techniques intended to improve speech-to-speech translation quality. The novelties we have introduced are: an improved reordering method, a linear combination of translation and reordering models, and a new technique dealing with punctuation mark insertion for a phrase-based SMT system. This year we focus on the Arabic-English, Chinese-Spanish and pivot Chinese-(English)-Spanish translation tasks. | [
1586274,
11706155,
1559412,
7701908,
3681367,
10228412,
5219389
] | The TALP&I2R SMT Systems for IWSLT 2008
Maxim Khalilov
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
Marta R Costa-Jussà
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
Carlos A. Henríquez Q.
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
José A R Fonollosa
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
Adolfo Hernández H.
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
José B Mariño
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
Rafael E Banchs
TALP Research Center Universitat Politècnica de Catalunya
Barcelona
Chen Boxing
Department of Human Language Technology Institute for Infocomm Research
Singapore
Min Zhang
Department of Human Language Technology Institute for Infocomm Research
Singapore
Aiti Aw
Department of Human Language Technology Institute for Infocomm Research
Singapore
Haizhou Li
Department of Human Language Technology Institute for Infocomm Research
Singapore
The TALP&I2R SMT Systems for IWSLT 2008
This paper gives a description of the statistical machine translation (SMT) systems developed at the TALP Research Center of the UPC (Universitat Politècnica de Catalunya) for our participation in the IWSLT'08 evaluation campaign. We present N gram-based (TALPtuples) and phrase-based (TALPphrases) SMT systems. The paper explains the 2008 systems' architecture and outlines the translation schemes we have used, mainly focusing on the new techniques intended to improve speech-to-speech translation quality. The novelties we have introduced are: an improved reordering method, a linear combination of translation and reordering models, and a new technique dealing with punctuation mark insertion for a phrase-based SMT system. This year we focus on the Arabic-English, Chinese-Spanish and pivot Chinese-(English)-Spanish translation tasks.
Introduction
TALP-UPC N gram-based Machine Translation (MT) has proved to be a competitive alternative to state-of-the-art systems in previous evaluation campaigns, as shown in [1,2]. One of the most significant distinctions of N gram-based translation from phrase-based systems lies in the different representation of bilingual units. This leads to a strong requirement for a reordering strategy, implemented with a probabilistic distortion model able to cope with middle- and long-distance dependencies.
Our ongoing efforts are mainly dedicated to finding the best way to reorder the source side of the bilingual corpus, aiming to decrease the divergences in word order between the source and target languages and, consequently, to reduce the size of the bilingual units that the N gram-based translation system operates with. This is especially important when the translation is performed between pairs of languages with non-monotonic word order, like Arabic and English, or Chinese and Spanish.
Another promising way to improve the quality of MT output is to involve additional out-of-domain parallel information in the bilingual modelling. Inspired by the results presented in [3], we interpolate a principal translation model (TM) with a secondary one, adjusting the weight coefficients according to the corresponding monolingual language models. To the best of our knowledge, no attempts have so far been made to linearly combine TMs. Unfortunately, we did not have time to include the results of the TM interpolation technique in the evaluation submission, but we present the post-evaluation results in this paper.
Apart from the classical Arabic-English translation, this year we have participated in a new comparative task: direct Chinese-Spanish translation versus pivot Chinese-(English)-Spanish translation.
Ngram-based Machine Translation system
Here we briefly describe the baseline N gram-based translation system that coincides with the MT system used in the IWSLT'07 campaign, as well as specific novel techniques implemented for the IWSLT'08 evaluation.
Our translation system implements a log-linear model in which a foreign language sentence $f_1^J = f_1, f_2, \ldots, f_J$ is translated into another language $e_1^I = e_1, e_2, \ldots, e_I$ by searching for the translation hypothesis $\hat{e}_1^I$ maximizing a log-linear combination of several feature models [4]:

$$\hat{e}_1^I = \arg\max_{e_1^I} \sum_{m=1}^{M} \lambda_m h_m(e_1^I, f_1^J)$$

where the feature functions $h_m$ refer to the system models and the set of $\lambda_m$ refers to the weights corresponding to these models.
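A minimal sketch of this decision rule (purely illustrative feature functions and weights, not the MARIE decoder) could look as follows:

```python
# Illustrative sketch: pick the hypothesis maximizing a log-linear combination of
# feature models, as in the equation above. Feature functions here are toy examples.
def loglinear_score(hypothesis, source, feature_fns, weights):
    # sum_m lambda_m * h_m(e, f), with h_m returning log-domain feature scores
    return sum(w * h(hypothesis, source) for h, w in zip(feature_fns, weights))

def decode(candidates, source, feature_fns, weights):
    return max(candidates, key=lambda e: loglinear_score(e, source, feature_fns, weights))

# toy features: a length-difference "translation model" and a word bonus
tm = lambda e, f: -0.1 * abs(len(e.split()) - len(f.split()))
word_bonus = lambda e, f: 0.05 * len(e.split())
print(decode(["the house is red", "house red"], "la casa es roja",
             [tm, word_bonus], [1.0, 0.5]))
```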
The N gram-based approach regards translation as a stochastic process maximizing the joint probability p(f, e), leading to a decomposition based on bilingual n-grams, so-called tuples, that are extracted from a word-to-word alignment (performed with the GIZA++ tool, http://code.google.com/p/giza-pp/, and generated by the grow-diag-final method [5]).
Given a certain word-aligned parallel corpus, tuples are extracted according to the following constraints [6]:
• a monotonic segmentation of each bilingual sentence pair is produced
• no word in a tuple is aligned to words outside of it
• no smaller tuples can be extracted without violating the previous constraints
As mentioned above, when dealing with pairs of languages with non-monotonic word order, a reordering strategy is required to extract more reusable (less sparse) units. The method that we used in this evaluation is detailed below. Figure 1 shows an example of monotonic tuple extraction (Spanish-English).
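A simplified sketch of the extraction constraints listed above is shown below; it is not the exact TALP implementation (in particular, unaligned words are handled more carefully in the real system), and the alignment format is assumed to be a set of (source index, target index) pairs.

```python
# Simplified sketch of monotonic tuple extraction from a word alignment.
# Unaligned target words form NULL-source units and trailing unaligned source
# words are attached to the last unit; the real system handles these cases differently.
def extract_tuples(src, trg, links):
    """src, trg: lists of words; links: set of (src_idx, trg_idx) alignment pairs."""
    tuples = []
    s_start = t_start = s_end = t_end = 0
    while t_end < len(trg):
        t_end += 1                       # tentatively extend the unit by one target word
        changed = True
        while changed:                   # close the unit: no inside word may link outside
            changed = False
            for j, i in links:
                if t_start <= i < t_end and j + 1 > s_end:
                    s_end, changed = j + 1, True
                if s_start <= j < s_end and i + 1 > t_end:
                    t_end, changed = i + 1, True
        tuples.append((tuple(src[s_start:s_end]), tuple(trg[t_start:t_end])))
        s_start, t_start = s_end, t_end
    if s_end < len(src) and tuples:      # attach trailing unaligned source words
        s, t = tuples[-1]
        tuples[-1] = (s + tuple(src[s_end:]), t)
    return tuples

# toy example: "la casa roja" / "the red house" with a crossing alignment
print(extract_tuples(["la", "casa", "roja"], ["the", "red", "house"],
                     {(0, 0), (1, 2), (2, 1)}))
```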
Translation model
The core part of the system following the N gram-based approach is the TM, which is based on tuples extracted from a word-to-word alignment. In contrast to phrase-based models, our TM is estimated as a standard n-gram model of a bilingual language expressed in tuples. In this way, it approximates the joint probability between source and target languages, capturing bilingual context, as described by the following equation:
$$p(S,T) = \prod_{k=1}^{K} p\big((s,t)_k \mid (s,t)_{k-N+1}, \ldots, (s,t)_{k-1}\big) \qquad (1)$$

where $s$ refers to source, $t$ to target, and $(s,t)_k$ to the $k$-th tuple of a given bilingual sentence pair segmented into $K$ tuples.
The bilingual TM actually constitutes an n-gram-based language model (LM) of tuples, which approximates the joint probability between the languages under consideration and can be seen here as a LM, where the language is composed by tuples.
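To make the idea concrete, a toy sketch of such a tuple language model is shown below (a bigram model with add-one smoothing; the actual system uses standard LM toolkits and higher orders):

```python
# Illustrative sketch: treat each tuple as a single token and estimate an n-gram
# "bilingual language model" over tuple sequences (here a smoothed bigram model).
from collections import Counter

def train_tuple_bigram(tuple_corpus):
    unigrams, bigrams = Counter(), Counter()
    for seq in tuple_corpus:                      # seq: list of (source, target) tuples
        seq = [("<s>", "<s>")] + list(seq)
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    vocab = len(unigrams)
    def prob(prev, cur):                          # P(cur | prev) with add-one smoothing
        return (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
    return prob

prob = train_tuple_bigram([[("la", "the"), ("casa roja", "red house")]])
print(prob(("la", "the"), ("casa roja", "red house")))
```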
Feature functions
Apart from the TM, TALP-UPC translation system implements a log-linear combination of six additional feature models:
• a target LM (a model of target-side words);
• a Part-of-Speech (POS) target LM (a model of target-side tags);
• a word bonus model (is used to compensate the system's preference for short output sentences);
• a source-to-target lexicon model and a target-tosource lexicon model (the models using word-to-word IBM Model 1 probabilities to estimate the lexical weights for each tuple in the translation table);
• a POS source LM (a model of source-side tags, supporting reordering process);
MARIE decoder
As decoder, we use MARIE [7], a beam-search decoder developed at the TALP Research Center that takes the previous models into account. For efficient pruning of the search space, threshold pruning, histogram pruning and hypothesis recombination are used. MARIE admits a weighted reordering graph (a distortion of the source-side word order), generated by the statistical machine reordering algorithm described in Section 2.5.
Feature weights optimization
Given the development set and references, the log-linear combination of weights was adjusted using a simplex optimization method (with the highest BLEU score as the optimization criterion) and n-best re-ranking, as described in http://www.statmt.org/jhuws/. This strategy allows for a faster and more efficient adjustment of the model weights by means of a double-loop optimization, which reduces the number of translations that have to be carried out.
Statistical Machine Reordering
Statistical Machine Reordering (SMR) stems from the idea of using the powerful techniques developed for SMT to translate the source language (S) into a reordered source language (S'), which more closely matches the order of the target language. To infer more reorderings, it makes use of word classes. To correctly integrate the SMT and SMR systems, both are concatenated by means of a word graph which offers weighted reordering hypotheses to the SMT system.
The details are described in [8] and [9].
Translation models interpolation
During the post-evaluation period we have implemented a TM interpolation strategy following the ideas proposed in [3], where the authors present a promising technique of target LMs linear interpolation. These findings open the way to involve additional monolingual information into the translation process, and also gives a motivation to interpolate the translation and reordering tables in a linear way.
Due to a small amount of available in-domain data (IWSLT training material), we have used an out-of-domain 130K-line subset from the Arabic News, English Translation of Arabic Treebank and Ummah LDC parallel corpora (VI-OLIN) [10] to increase the final translation and reordering tables. Both corpus statistics can be found in table 1.
Instead of a time-consuming iterative TM reconstruction using the highest BLEU score as the maximization criterion, we adjust the weights as a function of the lowest perplexity estimated with the corresponding interpolated combination of the target-side LMs, and generalize the optimization results to the interpolated translation and reordering models.
The word-to-word alignment was obtained from the joint database (IWSLT + VIOLIN). Then, we separately computed the translation and reordering tables corresponding to the IWSLT and VIOLIN parts of the joint alignment. The final tables, as well as the final target LM were obtained using linear interpolation. The weight coefficients (IWSLT weight = 0.95, VIOLIN weight = 0.05) were selected using a minimum perplexity criterion estimated on the corresponding interpolated combination of the target-side LMs.
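The sketch below illustrates the weighted merge of two translation tables under these weights; the table format is hypothetical, and the real tables also carry reordering scores and multiple feature columns.

```python
# Illustrative sketch: linear interpolation of an in-domain and an out-of-domain
# translation table with fixed weights (0.95 / 0.05, as reported above).
def interpolate_tables(table_in, table_out, w_in=0.95, w_out=0.05):
    """table_in / table_out: dict mapping (source_phrase, target_phrase) -> probability."""
    merged = {}
    for pair in set(table_in) | set(table_out):
        merged[pair] = w_in * table_in.get(pair, 0.0) + w_out * table_out.get(pair, 0.0)
    return merged

iwslt = {("kitab", "book"): 0.7, ("kitab", "the book"): 0.3}     # toy in-domain entries
violin = {("kitab", "book"): 0.5, ("kitab", "writing"): 0.5}     # toy out-of-domain entries
print(interpolate_tables(iwslt, violin)[("kitab", "book")])       # 0.95*0.7 + 0.05*0.5
```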
Phrase-based Machine Translation
In this section we present the phrase-based MT system that was used in the evaluation. This system is based on the well-known MOSES toolkit, which is nowadays considered a state-of-the-art SMT system [11]. The training and weight tuning procedures are explained in detail in the above-mentioned publication, as well as on the MOSES web page: http://www.statmt.org/moses/.
Punctuation restoration
We decided to embed punctuation restoration in the main translation step. For this purpose we preprocessed the training corpus as follows:
1. Source sentences: we added a <PUNC> tag at the beginning of each sentence, we replaced final punctuation marks (periods and question marks) with another <PUNC> tag, and we removed any other punctuation marks.
2. Target sentences: we repeated the final punctuation mark at the beginning of each sentence.
The resulting preprocessed training corpus is used to train a standard SMT system (w i stands for the i-th word).
SRC: w_1 w_2 w_3 . → <PUNC> w_1 w_2 w_3 <PUNC>
TRG: w_1 w_2 w_3 . → . w_1 w_2 w_3 .
During the actual translation of unpunctuated test sentences we add the <PUNC> tag at the beginning and at the end of each sentence. The trained TM along with the target LM and the other features serves to restore the corresponding final/initial punctuation mark translating each <PUNC> tag.
The rest of punctuation marks can also be restored as any other words included in the target side of translation units.
Note that the preprocessing of the target data follows the IWSLT 2008 suggestions 3 , but no additional target LM is needed in this case. After translation, the same suggested postprocessing scheme is applied: the last punctuation mark is replaced with the first one and the first punctuation mark is then removed.
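A token-level sketch of this preprocessing scheme is given below (simplified: only sentence-final periods and question marks are handled, and the target-side default is an assumption):

```python
# Illustrative sketch of the <PUNC> preprocessing described above.
def preprocess_source(sentence):
    tokens = sentence.split()
    if tokens and tokens[-1] in {".", "?"}:
        tokens = tokens[:-1] + ["<PUNC>"]
    tokens = [t for t in tokens if t not in {",", ";", ":", "!"}]   # drop other marks
    return ["<PUNC>"] + tokens

def preprocess_target(sentence):
    tokens = sentence.split()
    # repeat the final punctuation mark at the beginning (default "." is an assumption)
    final = tokens[-1] if tokens and tokens[-1] in {".", "?"} else "."
    return [final] + tokens

print(preprocess_source("w1 w2 w3 ."))   # ['<PUNC>', 'w1', 'w2', 'w3', '<PUNC>']
print(preprocess_target("w1 w2 w3 ."))   # ['.', 'w1', 'w2', 'w3', '.']
```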
Experiments
Arabic to English translation
The first task we participated in was the Basic Traveling Expression Corpus (BTEC) Arabic-to-English translation task. The model weights were tuned with the 2006 development corpus (Dev6), containing 489 sentences and 6 reference translations, and the 2002 development set (500 sentences and 16 reference translations) was used as an internal test set, on which we based our decisions about system performance.
Arabic data preprocessing
We used a similar approach to that shown in [12], namely the MADA+TOKAN system for disambiguation and tokenization. For disambiguation only diacritic unigram statistics were employed. For tokenization we used the D3 scheme with -TAGBIES option. The scheme splits the following set of enclitics: w+, f+, b+, k+, l+, Al+ and pronominal enclitics. The -TAGBIES option produces Bies POS tags on all taggable tokens.
Secondary submission
Our secondary submission was the TALPtuples system, configured to use the bilingual TM of order 4, 4-gram target-side LM and 4-gram POS target-side LM. It includes SMR as described in Section 2.5 with 100 statistical classes.
For this system configuration we used a strategy for restoring punctuation and case information as proposed on the IWSLT'08 web page, using standard SRI LM [13] tools: disambig to restore case information and hidden-ngram to insert missing punctuation marks.
Post-evaluation experiments and official evaluation results
After the systems submission we performed experiments interpolating translation and reordering tables using the weights that cause the minimal perplexity value for the interpolated target-side LM, as described in Section 2.6. The final tables were passed to the primary (MOSES-based) system. For comparison, we also estimated a standard TM from the union of the IWSLT and VIOLIN corpora.
The official submission and post-evaluation results for the ASR and CRR Arabic-English translation tasks can be found in table 2. Evaluation conditions were case-sensitive and with punctuation marks considered.
A simple concatenation of the in-domain and out-of-domain corpora ("Union") leads to slightly worse results for the CRR track, and shows almost the same performance for the ASR track as the MOSES-based system that uses the IWSLT parallel corpus alone ("Supplied 1").
The system based on the weighted and merged TM ("Interpolation") outperforms the BTEC-only system by 1.8 BLEU points and 1.2 METEOR points for the CRR track, and by 2.1 BLEU points and about 1 METEOR point for the ASR track, measured on the official evaluation test set.
"Supplied 2" line stands for the results obtained with the TALPtuples system as described in sub-section 4.1.3.
Chinese-(English)-Spanish pivot translation
Our participation in this task is the result of a joint contribution between I2R (Institute for Infocomm Research) and UPC. We followed two different strategies for the primary and secondary runs. In both cases the I2R team built a Chinese-to-English SMT system and the UPC team was responsible for an English-to-Spanish SMT system.
Both Machine Translation Systems were based on MOSES open source package [11]. IBM word reordering constraints [14] were applied during decoding to reduce the computational complexity. The other models and feature functions employed by MOSES decoder were:
• TM(s), direct and inverse phrase/word based TM.
• Distortion model, which assigns a cost linear to the reordering distance, while the cost is based on the number of source words which are skipped when translating a new source phrase.
• Lexicalized word reordering model [15].
• Word and phrase penalties, which count the number of words and phrases in the target string.
• Target-side LM.
The TM and reordering model were trained using the standard MOSES tools. Weights of feature functions were tuned by using the optimization tools from the MOSES package. The search operation was accomplished by MOSES decoder.
The experiments with the Chinese to English MT were carried out on the BTEC Chinese-English data [16] augmented with the HIT-corpus 4, Olympic-corpus 5 and PKU-corpus 6 [17] from Chinese LDC. 20K BTEC sentence pairs were supplied for the IWSLT 2008 evaluation campaign. The HIT corpus contains 132K sentence pairs in total and is known as a multi-source Chinese-English parallel corpus; the Olympic corpus has 54K bilingual sentences, mainly from the sport and travelling domains; while the PKU corpus has about 200K parallel phrases and is considered a domain-balanced corpus. Besides, the English part of the Tanaka corpus 7 was used as complementary training material for the target-side LM. The I2R research group performed word segmentation for the Chinese part using the ICTCLAS tools 8 developed in the ICT. Table 3 reports the basic statistics of the principal and additional corpora that were used to build the Chinese-to-English SMT system. Regarding English-to-Spanish translation, no extra corpora were used.
Chinese-English independent results
The union of the BTEC corpus and the additional bilingual corpora yielded a gain of 0.23 BLEU points on the internal test set. This impact can be seen in Table 4. The Chinese-English SMT system returns sentences in true case with tokenized punctuation, ready to be input to the English-Spanish SMT.

Table 4: Results for Chinese-English translation.
| | IWSLT'08 | Additional data |
| BLEU | 0.3628 | 0.5916 |
| NIST | 7.2417 | 9.4015 |
| METEOR | 0.5913 | 0.7148 |
English-Spanish independent results
As mentioned before, no additional corpus was used for the English-to-Spanish system. The input was considered to be in true case, tokenized and with punctuation marks. Contractions like "we'll" and "you're" were split as "we 'll" and "you 're", and negations like "don't", "wouldn't" or "can't" were split as "do n't", "would n't" and "ca n't".
The output of this system was produced in accordance with the official evaluation specification, with no postprocessing needed. Table 5 shows the results of the English-Spanish system trained with the BTEC corpus.

Table 5: Results for English-Spanish translation.
| | IWSLT'08 |
| BLEU | 0.5586 |
| NIST | 9.2855 |
| METEOR | 0.6994 |
Primary submission
Our primary approach to the pivot task was a system cascade.
Using the 50-best list of translation hypotheses generated by the decoder of the Chinese-to-English system, a 4-best list was produced for each of the first-list entries, representing in total a 200-best list of possible Spanish translations for each Chinese sentence. From that 200-best list, in which repetitions are allowed, the single best translation was selected using a Minimum Bayes Risk (MBR) strategy as described in [18]. We used the MOSES implementation of the MBR algorithm. This strategy of 200-best list rescoring performed better than a single-best selection for both systems, gaining 2.5 BLEU points on the development set.
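The sketch below illustrates the MBR selection idea on an N-best list; a simple token-overlap F1 stands in for the sentence-level gain function, whereas the actual experiments used the Moses implementation.

```python
# Illustrative sketch of Minimum Bayes Risk selection over an N-best list:
# pick the hypothesis with the highest expected similarity to the other candidates.
from collections import Counter

def overlap_f1(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    common = sum((h & r).values())
    if common == 0:
        return 0.0
    p, rec = common / sum(h.values()), common / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_select(nbest):
    """nbest: list of (hypothesis, posterior_weight) pairs; repetitions are allowed."""
    def expected_gain(candidate):
        return sum(w * overlap_f1(candidate, other) for other, w in nbest)
    return max((hyp for hyp, _ in nbest), key=expected_gain)

nbest = [("la casa roja", 0.5), ("la casa roja", 0.3), ("casa rojo", 0.2)]
print(mbr_select(nbest))
```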
Secondary submission
As an alternative to the system cascade, for the secondary submission we combined the phrase translation probabilities of the two language pairs (Chinese-English and English-Spanish) following the strategy proposed in [19] to obtain the translation probabilities for each Chinese-Spanish phrase. The final phrase probabilities are calculated as follows:
$$\phi(f_i \mid e_i) = \sum_{p_i} \phi(f_i \mid p_i)\,\phi(p_i \mid e_i) \qquad (2)$$
where $\phi(f_i \mid e_i)$ corresponds to the translation probability of the Chinese phrase $f_i$ given the Spanish phrase $e_i$, $\phi(f_i \mid p_i)$ stands for the translation probability of the Chinese phrase $f_i$ given the English phrase $p_i$, and $\phi(p_i \mid e_i)$ stands for the translation probability of the English phrase $p_i$ given the Spanish phrase $e_i$. It is important to mention that the English and Spanish phrases are lowercased in this system, and the case information is restored in the postprocessing step, following the strategy proposed for the IWSLT'08 evaluation. These two scores are supported by a Spanish LM, word and phrase penalty features and a distortion model, which complete the final Chinese-(English)-Spanish system. Inspired by the idea proposed in [19], which was to extend the small amount of parallel training material available for a given language pair, we tried to use these findings to skip the English LM used during the Chinese-English translation in the cascade system. The reason for skipping the LM is that an English reordering should not be needed to obtain a Spanish translation from a Chinese input. Table 6 shows the official results obtained with both strategies. As can be seen, the cascade system outperforms the secondary system, even though the secondary submission did not use an extra LM in the pivot step of the translation process. On the other hand, the lexicalized word reordering is lost with the artificial phrase pairs, and we had problems with the final size of our TM table. The computation shown in equation 2 produced many wrong phrase pairs, and most of the phrases received very low probabilities (due to the multiplication of factors).
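A small sketch of the phrase-table combination in equation (2) is given below; the table format is hypothetical, and pruning, additional feature scores and reordering models are omitted.

```python
# Illustrative sketch: combine Chinese-English and English-Spanish phrase tables
# into Chinese-Spanish probabilities by summing over shared English pivot phrases.
from collections import defaultdict

def pivot_phrase_table(zh_en, en_es):
    """zh_en: dict (zh_phrase, en_phrase) -> P(zh|en); en_es: dict (en_phrase, es_phrase) -> P(en|es)."""
    by_en = defaultdict(list)                 # index the Chinese-English table by its English side
    for (zh, en), p in zh_en.items():
        by_en[en].append((zh, p))
    zh_es = defaultdict(float)
    for (en, es), q in en_es.items():
        for zh, p in by_en.get(en, []):
            zh_es[(zh, es)] += p * q          # phi(f|e) = sum_p phi(f|p) * phi(p|e)
    return dict(zh_es)

zh_en = {("hong fangzi", "red house"): 0.6}
en_es = {("red house", "casa roja"): 0.8}
print(pivot_phrase_table(zh_en, en_es))       # {('hong fangzi', 'casa roja'): 0.48}
```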
Chinese-(English)-Spanish pivot results
We hope that an improved pruning algorithm applied to the resulting phrase table could help to achieve a robust TM which finally would perform better than the cascade approach.
Chinese to Spanish translation
For the direct Chinese-Spanish task we planned to build an N gram-based SMT system (TALPTuple), using the SMR algorithm described in Section 2.5, and a phrase-based SMT system (MOSES-based), as described in Section 4.2.
For this task we only used the BTEC'08 corpus, which contains about 20,000 sentences for training and 506 sentences with 16 Spanish references for tuning the system. The basic statistics of this corpus can be seen in Table 7.
Data preprocessing
Apart from tokenization into words and the separation of punctuation marks, the Chinese corpus was not preprocessed before translation.
Note that the TM, as well as the LM and the reordering model, was trained with punctuation marks, while the official test set did not contain this information; it was therefore preprocessed with the hidden-ngram tool to restore it.
The Spanish part of the corpus was lowercased and tokenized using the Freeling toolkit [20], an open-source tool for language analysis. It split the enclitics from the Spanish verbs (dámelo → da +me +lo) and also generated the POS tags that were later used to estimate a target-side POS LM and in postprocessing.
Data postprocessing
Once the decoding process had finished, the output of the system was still lowercased, with the enclitics split off and the POS tags generated.
Afterwards, a two-step postprocess was performed: first, the original morphological verb forms were restored using the enclitic and POS tag information; next, the case information was restored using the disambig tool from SRILM, following the instructions from the IWSLT'08 web page. This postprocess was not run during the tuning step, where all the Spanish references were also tokenized, split into enclitics and lowercased.
Chinese to Spanish translation results
The official evaluation results for both systems can be seen in table 8.
As mentioned before, the TALPTuple system was nominated as the primary submission and the MOSES-based system as the secondary one.
Conclusions
In this year's evaluation we participated in three translation tasks, collaborating with I2R on the pivot Chinese-(English)-Spanish translation task. This paper outlines the architecture of the submitted translation systems and summarizes the preliminary official results.
The main conclusion that can be drawn from our participation in the Arabic-English shared task is that the N gram-based system is comparable with state-of-the-art phrase-based SMT in terms of automatically evaluated accuracy for both the ASR and CRR tasks: for the CRR track the MOSES system outperforms the tuple-based one by 3 BLEU points while losing in NIST score, whereas for the ASR run the difference in BLEU and METEOR results is negligible and the N gram-based translation scores slightly higher in terms of the NIST metric. For the Chinese-(English)-Spanish pivot task the system cascade architecture demonstrates better results than the alternative (phrase probability combination); however, there is still room for improvement in phrase table pruning. Although the direct Chinese-Spanish phrase-based system performed better than the TALPtuple system on the internal test, we submitted the latter as the primary system in order to contrast it with the many other MOSES-based strategies presented in the evaluation.
Future work will apply the promising TM interpolation strategy to the N gram-based SMT system.
Acknowledgments
This work has been funded by the Spanish Government under grant TEC2006-13964-C03 (AVIVAVOZ project).
Figure 1: Example of tuple extraction.
Table 2: Official and post-evaluation results for Arabic-English translation.

| Track | System | BLEU | METEOR | (BLEU+METEOR)/2 | NIST |
| CRR | Union (Post-evaluation) | 0.5223 | 0.6809 | 0.6016 | 8.5253 |
| CRR | Supplied 1 (Primary submission) | 0.5263 | 0.6848 | 0.6055 | 8.5940 |
| CRR | Interpolation (Post-evaluation) | 0.5446 | 0.6974 | 0.6210 | 8.8772 |
| CRR | Supplied 2 (Secondary submission) | 0.4976 | 0.6807 | 0.5892 | 8.7421 |
| ASR | Union (Post-evaluation) | 0.4379 | 0.6262 | 0.5320 | 7.2878 |
| ASR | Supplied 1 (Primary submission) | 0.4352 | 0.6288 | 0.5320 | 7.2808 |
| ASR | Interpolation (Post-evaluation) | 0.4562 | 0.6385 | 0.5473 | 7.6113 |
| ASR | Supplied 2 (Secondary submission) | 0.4300 | 0.6292 | 0.5296 | 7.5862 |
Table 3: Corpus used during the Chinese-English training.

| | IWSLT'08 | | | All additional data | |
| | Chinese | English | Spanish | Chinese | English |
| Sentences | 19,972 | 19,972 | 19,972 | 379,065 | 379,065 |
| Words | 164K | 182K | 147K | 4,834K | 5,036K |
| Vocabulary | 8,506 | 8,301 | 16,953 | 57,055 | 75,156 |
Table 6: Official results for the Chinese-(English)-Spanish translation.

Table 7: Corpus used for the Chinese-Spanish training.
| | Chinese | Spanish |
| Sentences | 19,972 | 19,972 |
| Words | 171K | 147K |
| Average sentence length | 8.59 | 7.39 |
| Vocabulary | 8,428 | 16,953 |

Table 8: Official results for Chinese-Spanish translation.
| Submission | BLEU | METEOR | (BLEU+METEOR)/2 | NIST |
| Primary ASR | 0.2433 | 0.2715 | 0.2574 | 5.2547 |
| Primary CRR | 0.2677 | 0.2901 | 0.2789 | 5.6833 |
| Secondary ASR | 0.2684 | 0.2792 | 0.2783 | 4.9303 |
| Secondary CRR | 0.2911 | 0.3007 | 0.2959 | 5.3240 |
3 http://www.slc.atr.jp/IWSLT2008/
4 http://mitlab.hit.edu.cn/index.php/resources
5 http://www.chineseldc.org/EN/index.htm 2004-863-008
6 http://www.chineseldc.org/EN/index.htm CLDC-LAC-2003-006
7 http://www.csse.monash.edu.au/~jwb/tanakacorpus.html
8 http://www.nlp.org.cn/project/project.php?proj id=6
[1] M. Paul, "Overview of the IWSLT 2006 Evaluation Campaign," in Proc. of the Int. Workshop on Spoken Language Translation (IWSLT'06), Kyoto, Japan, 2006, pp. 2-15.
[2] C. S. Fordyce, "Overview of the IWSLT 2007 evaluation campaign," in Proc. of the Int. Workshop on Spoken Language Translation (IWSLT'07), Trento, Italy, 2007, pp. 1-12.
[3] H. Schwenk and Y. Estève, "Data selection and smoothing in an open-source system for the 2008 NIST machine translation evaluation," in Proceedings of Interspeech'08, Brisbane, Australia, 2008, to appear.
[4] J. B. Mariño, R. E. Banchs, J. M. Crego, A. de Gispert, P. Lambert, J. R. Fonollosa, M. R. Costa-jussà, and M. Khalilov, "UPC's bilingual n-gram translation system," in Proc. of the TC-STAR Workshop on Speech-to-Speech Translation, Barcelona, Spain, June 2006, pp. 43-48.
[5] F. Och and H. Ney, "A systematic comparison of various statistical alignment models," Computational Linguistics, vol. 29(1), 2003.
[6] J. M. Crego, J. B. Mariño, and A. de Gispert, "Finite-state-based and phrase-based statistical machine translation," in Proc. of the 8th Int. Conf. on Spoken Language Processing (ICSLP'04), Jeju, Korea, October 2004, pp. 37-40.
[7] J. M. Crego, J. B. Mariño, and A. de Gispert, "A ngram-based statistical machine translation decoder," in Proc. of Interspeech'05, Lisbon, Portugal, September 2005, pp. 3193-96.
[8] M. R. Costa-jussà and J. R. Fonollosa, "Statistical machine reordering," in Empirical Methods in Natural Language Processing (EMNLP'06), Sydney, Australia, July 2006, pp. 70-76.
[9] M. R. Costa-jussà and J. R. Fonollosa, "Analysis of statistical and morphological classes to generate weighted reordering hypotheses on a statistical machine translation system," in Proceedings of the Second Workshop on Statistical Machine Translation, Prague, Czech Republic, June 2007, pp. 171-176.
[10] N. Habash, "Syntactic preprocessing for statistical machine translation," in Proceedings of the Machine Translation Summit (MT-Summit), Copenhagen, Denmark, September 2007.
[11] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst, "Moses: Open source toolkit for statistical machine translation," in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL'07), Prague, Czech Republic, June 2007, pp. 177-180.
[12] N. Habash and F. Sadat, "Arabic preprocessing schemes for statistical machine translation," in Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL'06), New York, USA, June 2006.
[13] A. Stolcke, "SRILM: an extensible language modeling toolkit," in Proc. of the Int. Conf. on Spoken Language Processing, Denver, CO, September 2002, pp. 901-904.
[14] A. L. Berger, P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, A. S. Kehler, and R. L. Mercer, "Language translation apparatus and method using context-based translation models," Patent 5 510 981, 1996.
[15] P. Koehn, A. Axelrod, A. B. Mayne, C. Callison-Burch, M. Osborne, and D. Talbot, "Edinburgh system description for the 2005 IWSLT speech translation evaluation," in Proceedings of the Int. Workshop on Spoken Language Translation (IWSLT'05), Pittsburgh, USA, October 2005.
[16] T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world," in Proceedings of LREC-2002: Third International Conference on Language Resources and Evaluation, Las Palmas, Spain, May 2002, pp. 147-152.
[17] H. Zhang, H. Yu, D. Xiong, and Q. Liu, "HHMM-based Chinese lexical analyzer ICTCLAS," in Proc. of the 2nd SIGHAN Workshop on Chinese Language Processing, Sapporo, Japan, July 2003, pp. 184-187.
[18] S. Kumar and W. Byrne, "Minimum Bayes-risk decoding for statistical machine translation," in Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL'04), Boston, USA, May 2004, pp. 169-176.
[19] H. Wu and H. Wang, "Pivot language approach for phrase-based statistical machine translation," in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07), Prague, Czech Republic, June 2007, pp. 856-863.
[20] X. Carreras, I. Chao, L. Padró, and M. Padró, "Freeling: An open-source suite of language analyzers," in Proceedings of LREC-2004: Fourth International Conference on Language Resources and Evaluation, Lisbon, Portugal, May 2004.
248,780,578 | SSN_MLRG3 @LT-EDI-ACL2022-Depression Detection System from Social Media Text using Transformer Models | Depression is a common mental illness that involves sadness and lack of interest in all day-to-day activities. The task is to classify social media text showing signs of depression into three labels, namely "not depressed", "moderately depressed", and "severely depressed". We have built a system using the deep learning library Transformers, which provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. The multi-class classification model used in our system is based on the ALBERT model (Lan et al., 2019). In the shared task at ACL 2022, our team SSN_MLRG3 obtained a macro F1 score of 0.473. | [
44137768,
19686267,
36985598
] | SSN_MLRG3 @LT-EDI-ACL2022-Depression Detection System from Social Media Text using Transformer Models
Sarika Esackimuthu
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
Shruthi H
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
Rajalakshmi Sivanaiah [email protected]
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
Angel Deborah S
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
Sakaya Milton
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
Mirnalinee T TR
Department of Computer Science and Engineering Sri
Sivasubramaniya Nadar College of Engineering Chennai -603110
Tamil NaduIndia
SSN_MLRG3 @LT-EDI-ACL2022-Depression Detection System from Social Media Text using Transformer Models
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 196-199, May 27, 2022
Depression is a common mental illness that involves sadness and lack of interest in all day-to-day activities. The task is to classify social media text showing signs of depression into three labels, namely "not depressed", "moderately depressed", and "severely depressed". We have built a system using the deep learning library Transformers, which provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. The multi-class classification model used in our system is based on the ALBERT model (Lan et al., 2019). In the shared task at ACL 2022, our team SSN_MLRG3 obtained a macro F1 score of 0.473.
Introduction
Social media has developed into an important place for users to communicate with their friends and relatives and to share opinions, photos, and videos reflecting their feelings and sentiments. This creates an opportunity to analyze social media data for users' feelings and sentiments, in order to investigate their moods and attitudes when they communicate through social media apps. Depression is a common issue among today's youngsters, and suicide due to depression is growing day by day. People often communicate their moods through tweets or messages, but the people around them fail to understand the underlying truth behind the words. Katikalapudi et al. (2012) conducted a depression survey among 216 undergraduate students using real-time Internet usage data. Feuston and Piper (2018) analyzed Instagram posts, pictures and captions and concluded that mental health and illness are interrelated through the application of the coded gaze.
Task 4 of the Second Workshop on Language Technology for Equality, Diversity and Inclusion (LT-EDI-2022) (Sampath et al., 2022) was conducted to detect signs of depression from social media text in English. We tried to classify each message as "not depressed", "moderately depressed", or "severely depressed". The training set provided by the organizers contains 8,891 social media messages. The given dataset is used to train our model.
Related Works
In the last five years, the use of social media has increased drastically, and so has the amount of available data. Hence, numerous studies on emotion analysis and depression analysis have been carried out in recent times. Most of them revolve around machine learning and deep learning techniques. Liu and Lapata (2019) showcased how Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) models can be used for text summarization. They proposed a general framework for extractive and abstractive models. This helped us to understand the BERT encoder-decoder architecture. O'dea et al. (2015) carried out work on detecting suicidality on Twitter using Support Vector Machine (SVM) and Logistic Regression with cross-validation methods. The SVM-TF-IDF filter algorithm showed the best results, with a combined dataset accuracy of 76%. The study stated that more searches on suicide-related terms can improve the accuracy of the model.
Tripathi et al. (2019) built an emotion recognition system using speech features and transcriptions. Different Deep Neural Network (DNN) architectures were used, among which Text-MFCC (mel frequency cepstral coefficients) gave an accuracy of 76.1%. Shah et al. (2020) used deep learning based models for analyzing the depression state. They tried different combinations of metadata features and word embedding techniques with Bidirectional Long Short Term Memory (BiLSTM). Among the different features, Word2VecEmbed+Meta features performed well with an F1 score of 81%. We have worked on contextual emotion and sentiment analysis with various machine learning and Gaussian process models in (Angel Deborah et al., 2019), (Angel Deborah et al., 2021), (Rajalakshmi et al., 2018), (Rajendram et al., 2017b), (S et al., 2022) and (Rajendram et al., 2017a), which form the base for dealing with emotions and kindle our interest in depression detection.
Methodology and Data
The task is to discover the mood of the user from social media posts, and it is always difficult to extract emotions from text; a post can contain different combinations of emotions. The architecture diagram for the depression classification is shown in Figure 1. The training dataset is preprocessed to remove unwanted information and is given to the ALBERT model to learn the features. Test data is given to the trained model to classify the text into the three states of depression.
Acquiring Datasets
The dataset given by the organizers (Sampath et al., 2022) contains social media posts in the English language. All the dataset files are in tsv format. The dataset is designed for multi-class classification: each post is annotated with one of three labels, namely moderate, severe and not depression. The distribution of the dataset is shown in Table 1.
Table 1: Distribution of the dataset by label (Train / Dev).
Data Preprocessing
Data preprocessing is vital for the success of a deep learning solution. The given dataset has unwanted characters, which is a classic signature of any collection of social media posts. In order to bring the posts into clean textual form, we performed normalization. The dataset is cleaned and processed using functions from the NLTK toolkit. During preprocessing, we removed stopwords, URLs, special characters, symbols, annotated emojis, and emoticons. We expanded contractions and lemmatized the text. Accented characters and extra whitespace are normalized, elongated words are shortened, and uppercase letters are converted to lowercase.
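A preprocessing sketch along these lines is shown below; it is not the authors' script, and it assumes NLTK with the stopwords and WordNet resources downloaded.

```python
# Illustrative preprocessing sketch: lowercasing, URL/symbol removal, elongation
# reduction, stopword removal and lemmatization, as described above.
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

STOP = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(post):
    post = post.lower()
    post = re.sub(r"https?://\S+", " ", post)      # drop URLs
    post = re.sub(r"[^a-z\s]", " ", post)           # drop special characters and symbols
    post = re.sub(r"(.)\1{2,}", r"\1\1", post)      # shorten elongated words (soooo -> soo)
    tokens = [LEMMATIZER.lemmatize(t) for t in post.split() if t not in STOP]
    return " ".join(tokens)

print(preprocess("I feel soooo tired... nothing matters anymore http://example.com"))
```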
Model Description
We classified the social media posts with the help of the transformer model described below.
ALBERT base v1 -Transformer Model
The A Lite BERT (ALBERT) architecture has significantly fewer parameters than the traditional BERT architecture. ALBERT incorporates two parameter-reduction techniques, factorised embedding parameterisation and cross-layer parameter sharing, in order to deal with the obstacles in scaling pre-trained models in NLP. The first technique is factorised embedding parameterisation: the large vocabulary embedding matrix is decomposed into two small matrices, and the size of the hidden layers is separated from the size of the vocabulary embedding. This separation makes it simpler to grow the hidden size without significantly increasing the parameter size of the vocabulary embeddings. The second technique is cross-layer parameter sharing, which prevents the number of parameters from growing with the depth of the network. ALBERT configurations have fewer parameters than BERT-large but achieve significantly better performance. The ALBERT model used here has 12 encoder segments and a hidden state size and embedding size of 768. We trained the model for 3 epochs, with a train batch size of 8 and a learning rate of 4e-5.
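A minimal fine-tuning sketch consistent with this reported configuration is shown below (albert-base-v1, 3 labels, 3 epochs, batch size 8, learning rate 4e-5); the training texts, labels and output directory are placeholders, and this is not the authors' training script.

```python
# Minimal sketch: fine-tuning albert-base-v1 for 3-way depression classification
# with the Hugging Face Transformers Trainer. Data shown here is placeholder only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

tokenizer = AutoTokenizer.from_pretrained("albert-base-v1")
model = AutoModelForSequenceClassification.from_pretrained("albert-base-v1", num_labels=3)

class PostDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_texts = ["i feel fine today", "nothing matters anymore"]   # placeholder examples
train_labels = [0, 1]                                            # 0 = not, 1 = moderate, 2 = severe
args = TrainingArguments(output_dir="albert-depression", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=4e-5)
Trainer(model=model, args=args, train_dataset=PostDataset(train_texts, train_labels)).train()
```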
The evaluation metrics on the development dataset using ALBERT are shown in Table 2.

Random Forest
Random forest classifiers fall under ensemble-based learning methods. A random forest algorithm consists of various decision trees and establishes the outcome based on the predictions of those decision trees. Random forest reduces overfitting of the dataset and increases precision. The evaluation metrics on the development dataset using Random Forest are shown in Table 3.
Result
We have used accuracy, macro F1-score, macro recall, macro precision, weighted F1-score, weighted recall and weighted precision as evaluation metrics. The performance is shown in Table 4.
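For reference, the sketch below shows how these metrics can be computed with scikit-learn; the label vectors are placeholders.

```python
# Illustrative sketch of the reported metrics on predicted labels.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 1, 0]          # placeholder gold labels
y_pred = [0, 1, 1, 1, 0]          # placeholder predictions
acc = accuracy_score(y_true, y_pred)
macro_p, macro_r, macro_f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
weighted_p, weighted_r, weighted_f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(acc, macro_f1, weighted_f1)
```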
We obtained 20th rank with an accuracy of 57%, while the top-ranked team obtained an accuracy of 66%. Due to resource constraints we trained our model with fewer epochs. The accuracy of the system may improve with hyperparameter optimisation.
Error Analysis
The confusion matrix for the results obtained with the ALBERT model is shown in figure 2.
Conclusion
We have built an ALBERT-base model for the task of detecting signs of depression from social media posts. All the data is preprocessed with NLTK, which we think is an important factor in building a good model. The emotion of a social media post depends on the individual's perception and cannot be judged by simple conventional models; this is one of the reasons for the reduced accuracy. Understanding one's feelings and mood is too delicate a task for models to handle accurately. Imbalanced data distribution among the output class labels can be another reason for the lower accuracy: the training data has a high number of moderately depressed posts, followed by not depressed and severely depressed.
We intend to investigate further by using different transformer models and methods to augment the data.
Figure 1: Architecture of the proposed system.
Figure 2: Confusion matrix for results with ALBERT.
Table 2: Evaluation metrics of ALBERT base.
| Parameters | Score |
| Accuracy | 0.56 |
| Macro F1-score | 0.38 |
| Macro Recall | 0.38 |
| Macro Precision | 0.38 |
| Weighted F1-score | 0.56 |
| Weighted Recall | 0.56 |
| Weighted Precision | 0.56 |

Table 3: Evaluation metrics of Random Forest.
| Parameters | Score |
| Accuracy | 0.50 |
| Macro F1-score | 0.32 |
| Macro Recall | 0.34 |
| Macro Precision | 0.33 |
| Weighted F1-score | 0.47 |
| Weighted Recall | 0.50 |
| Weighted Precision | 0.49 |
Table 4: Result for ALBERT base.
| Parameters | Score |
| Accuracy | 0.573 |
| Macro F1-score | 0.473 |
| Macro Recall | 0.516 |
| Macro Precision | 0.458 |
| Weighted F1-score | 0.585 |
| Weighted Recall | 0.573 |
| Weighted Precision | 0.605 |
S Angel Deborah, TT Mirnalinee, and S Milton Rajendram. 2021. Emotion analysis on text using multiple kernel gaussian... Neural Processing Letters, 53(2):1187-1203.
S Angel Deborah, S Rajalakshmi, S Milton Rajendram, and TT Mirnalinee. 2019. Contextual emotion detection in text using ensemble learning. In Emerging Trends in Computing and Expert Technology. COMET 2019. Lecture Notes on Data Engineering and Communications Technologies, vol 35, pages 1179-1186. Springer, Cham.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Jessica L Feuston and Anne Marie Piper. 2018. Beyond the coded gaze: Analyzing expression of mental health and illness on Instagram. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-21.
Raghavendra Katikalapudi, Sriram Chellappan, Frances Montgomery, Donald Wunsch, and Karl Lutzen. 2012. Associating internet usage with depressive behavior among college students. IEEE Technology and Society Magazine, 31(4):73-80.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345.
Bridianne O'Dea, Stephen Wan, Philip J Batterham, Alison L Calear, Cecile Paris, and Helen Christensen. 2015. Detecting suicidality on Twitter. Internet Interventions, 2(2):183-188.
S Rajalakshmi, S Milton Rajendram, TT Mirnalinee, et al. 2018. SSN MLRG1 at SemEval-2018 Task 1: Emotion and sentiment intensity detection using rule based feature selection. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 324-328.
S Milton Rajendram, TT Mirnalinee, et al. 2017a. SSN_MLRG1 at SemEval-2017 Task 4: Sentiment analysis in Twitter using multi-kernel Gaussian process classifier. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 709-712.
S Milton Rajendram, TT Mirnalinee, et al. 2017b. SSN_MLRG1 at SemEval-2017 Task 5: Fine-grained sentiment analysis using multiple kernel Gaussian process regression model. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 823-826.
Angel Deborah S, S Milton Rajendram, Mirnalinee TT, and Rajalakshmi S. 2022. Contextual emotion detection on text using Gaussian process and tree based classifiers. Intelligent Data Analysis, 26(1):119-132.
Kayalvizhi Sampath, Thenmozhi Durairaj, Bharathi Raja Chakravarthi, and Jerin Mahibha C. 2022. Findings of the shared task on Detecting Signs of Depression from Social Media. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.
Faisal Muhammad Shah, Farzad Ahmed, Sajib Kumar Saha Joy, Sifat Ahmed, Samir Sadek, Rimon Shil, and Md Hasanul Kabir. 2020. Early depression detection from social network using deep learning techniques. In 2020 IEEE Region 10 Symposium (TENSYMP), pages 823-826. IEEE.
Suraj Tripathi, Abhay Kumar, Abhiram Ramesh, Chirag Singh, and Promod Yenigalla. 2019. Deep learning based emotion recognition system using speech features and transcriptions. arXiv preprint arXiv:1906.05681. |
14,142,074 | Contexts, Patterns, Interrelations -New Ways of Presenting Multi-word Expressions | This contribution presents the newest version of our 'Wortverbindungsfelder' (fields of multi-word expressions), an experimental lexicographic resource that focusses on aspects of MWEs that are rarely addressed in traditional descriptions: Contexts, patterns and interrelations. The MWE fields use data from a very large corpus of written German (over 6 billion word forms) and are created in a strictly corpus-based way. In addition to traditional lexicographic descriptions, they include quantitative corpus data which is structured in new ways in order to show the usage specifics. This way of looking at MWEs gives insight in the structure of language and is especially interesting for foreign language learners. | [] | Contexts, Patterns, Interrelations -New Ways of Presenting Multi-word Expressions
Association for Computational Linguistics, April 2014

Kathrin Steyer [email protected]
Annelen Brunner
Institute for the German Language
R 5, 6-13, D-68161 Mannheim, Germany

Contexts, Patterns, Interrelations - New Ways of Presenting Multi-word Expressions
Proceedings of the 10th Workshop on Multiword Expressions (MWE 2014), Gothenburg, Sweden, April 2014
This contribution presents the newest version of our 'Wortverbindungsfelder' (fields of multi-word expressions), an experimental lexicographic resource that focusses on aspects of MWEs that are rarely addressed in traditional descriptions: Contexts, patterns and interrelations. The MWE fields use data from a very large corpus of written German (over 6 billion word forms) and are created in a strictly corpus-based way. In addition to traditional lexicographic descriptions, they include quantitative corpus data which is structured in new ways in order to show the usage specifics. This way of looking at MWEs gives insight in the structure of language and is especially interesting for foreign language learners.
Our concept of MWEs
We study MWEs from a linguistic perspective and are mainly interested in two questions: What can we learn about the nature of MWEs and their status in language by studying large corpora? And how can we present MWEs in novel lexicographic ways that reflect our findings? The MWE field presented in this contribution is a prototype that reflects our current ideas regarding these questions. It can be explored online free of charge at http://wvonline.ids-mannheim.de/wvfelder-v3/index.html.
Our approach is based on the concept 'Usuelle Wortverbindungen' (UWV, Steyer 2000; Steyer 2004; Steyer 2013), which defines MWEs as conventionalized patterns of language use that manifest themselves in recurrent syntagmatic structures. This includes not only idioms and idiosyncratic structures, but all multi-word units which have acquired a distinct function in communication. Our focus is on real-life usage, pragmatics and context. We work bottom-up in detecting and describing MWE units in a strongly corpus-driven way (Sinclair 1991; Tognini-Bonelli 2001; Hanks 2013), taking iterative steps to arrive at conclusions about language use. Methodologically, our approach bears some similarities to Stefanowitsch/Gries' 'collostructions' (Stefanowitsch/Gries 2003), though we are less interested in syntactic and grammatical structures - as is common in construction grammar approaches - but see MWEs primarily as parts of the lexicon and feel closer to phraseology.
The basis of our research is DeReKo (Deutsches Referenzkorpus, Institut für Deutsche Sprache 2012), the largest collection of written German available today which has over six billion word tokens and is located at the Institute for the German Language (IDS). In the current stage of our work, which is mainly explorative, we use DeReKo as it is. This means our text basis is dominated by newspaper texts from the last 10-15 years. Though this is surely not a 'balanced' corpus, we argue that it still reflects much of contemporary written language use, as newspaper texts are a medium that is widely disseminated.
Though the interpretation and main analysis are done manually, automatic methods form an important basis of our work. We use a sophisticated method of collocation analysis developed at the IDS (Belica 1995) to get indications of which word combinations constitute MWEs and to explore contexts in which an MWE is commonly used. In addition to that, we use a pattern matching tool developed in our project to explore and structure corpus evidence and gain further insight into the behavior and variations of MWE candidates.
Our special interest lies in the fact that MWEs are not as fixed as is often assumed, but often behave as patterns and show multiple interrelations. Therefore, we also describe MWE patterns -a more abstract form of MWEs which are only partially fixed. An example for a fixed MWE is Pi mal Daumen (pi times thumb -'approximately'), a multi-word expression that is always used in exactly this form. MWE patterns on the other hand consist of fixed lexical components as well as slots that can be filled in different ways. In spite of this variability, the whole pattern has a holistic meaning and function. An example is the expression wie NOUN in jemandes Ohren klingen (to sound like NOUN in someone's ears -'to be perceived in a certain way' (specified by NOUN)). The NOUN slot can be filled with different words in order to specify the general meaning of the pattern. In section 2.3 we will go into further detail about how a slot in an MWE pattern can be filled.
The MWE field presented in this contribution centers around the word Grund (reason/basis/foundation) combined with several prepositions. It is the newest of several versions of MWE fields which have been described elsewhere (cf. Brunner/Steyer 2009; Brunner/Steyer 2010) and are available at our website http://wvonline.ids-mannheim.de as well. This newest version focusses more on hierarchies of MWEs and MWE patterns and incorporates additional resources like collocation analyses in its descriptive texts. In the following, we will highlight some features of the MWE field which illustrate our focus on interrelations, contexts and patterns.

MWE field Grund

Figure 1 shows a part of the MWE field, centered on the word Grund and the preposition aus. Each node is linked to a lexicographic description. Figure 2 presents a screenshot of one of those articles. In addition to narrative descriptions and manually selected usage examples from our corpus, the articles also include components that are derived from quantitative corpus data. Specifically, these are collocation analyses as well as filler tables for MWE patterns. The function of these components will be explained in more detail in sections 2.2 and 2.3.

Interrelations
In Figure 1, you can observe the relations between MWEs (thick border) and MWE patterns (regular border). The nodes with the dashed border represent repeating surface structures which themselves have no common holistic meaning but show the lexical interconnectedness between the MWEs and MWE patterns.
All nodes enclosed in the square field contain the elements Grund and aus. The nodes on the far right are extensions which do not belong to the core of the MWE field as it was defined, but are connected lexically and functionally to MWEs that do. We decided to include those 'external nodes' to give a glimpse of how the building blocks of language connect even beyond the artificial borders that were necessary when defining the MWE field. In this example the core field contains the MWEs aus welchem Grund auch immer and aus welchen Gründen auch immer ('for whatever reason/s'). However, the lexical components auch immer are part of more general patterns as well. The word form Grund can be substituted by different nouns in the MWE pattern aus welch- SUB-G auch immer (e.g. Motiv (motive), Richtung (direction)). In the MWE pattern PRON auch immer the place is taken by an interrogative pronoun (e.g. was (what), wo (where), wer (who), warum (why)). One of those pronoun fillers, wie (how), is much more frequent than the others, which justifies the definition of a separate MWE wie auch immer, which can be translated as 'howsoever' or 'to whatever extent' (see section 2.3 for more details).
The basic structure of the MWE field thus highlights the different degrees of abstraction of MWEs and the functional use of lexical clusters like auch immer. The lexicographic descriptions linked to the nodes explain the interrelations and the differences in usage and meaning.
Contexts
Another important aspect of our approach to MWEs is that we pay close attention to the contexts in which they are commonly used. A good tool to explore this empirically is collocation analysis. In addition to narrative descriptions and manually selected corpus examples we therefore include the results of collocation analysis in our articles.
One interesting aspect is the difference between MWEs and their single-lexeme quasi-synonyms.
For example the meaning of the MWE im Grunde is very close to the lexeme eigentlich (actually). Figures 3 and 4 show the highest ranking results of a collocation analysis that focusses on a window of five words in front of the units eigentlich and im Grunde respectively and calculates the log likelihood ratio. 1 When comparing the results for these two units you can see that there are some contexts that are strongly preferred by eigentlich but are not highly ranked for im Grunde. Notable are the combination schade eigentlich (sad actually) as well as combinations with interrogative adverbs like wie (how), was (what), warum (why). The MWE im Grunde, on the other hand, has strong collocation partners that are capitalized conjunctions like aber (but) or denn (because). This indicates a clear tendency to appear near the beginning of a sentence in contexts where an argument is made, which is not prominent for eigentlich. So even if a quasi-synonymous single lexeme exists, the MWE shows differences in usage which become apparent when studying large quantities of data.
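The collocation statistics themselves come from the IDS analysis method cited above (Belica 1995; Perkuhn/Belica 2004). Purely as an illustration of the general idea, a generic log-likelihood ratio score for one candidate collocate can be computed from a 2x2 contingency table as sketched below; the counts are invented and do not come from DeReKo.

```python
# Generic log-likelihood ratio (G2) for a candidate collocation from a
# 2x2 contingency table; counts below are invented, for illustration only.
from math import log

def llr(k11, k12, k21, k22):
    """k11: pair frequency; k12: node without collocate; k21: collocate
    without node; k22: neither. Returns Dunning's G2 statistic."""
    total = k11 + k12 + k21 + k22
    def term(obs, exp):
        return obs * log(obs / exp) if obs > 0 else 0.0
    e11 = (k11 + k12) * (k11 + k21) / total
    e12 = (k11 + k12) * (k12 + k22) / total
    e21 = (k21 + k22) * (k11 + k21) / total
    e22 = (k21 + k22) * (k12 + k22) / total
    return 2 * (term(k11, e11) + term(k12, e12) + term(k21, e21) + term(k22, e22))

# e.g. scoring "aber" as a left collocate of "im Grunde" (invented counts)
print(round(llr(1200, 8800, 45000, 5945000), 1))
```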
Patterns
As mentioned before, MWE patterns are of special interest to us. When exploring MWEs, we use a pattern matching tool that allows us to search large quantities of keyword in context lines (KWICs) for combinations of fixed strings and slots. The lexical fillers of these slots can also be counted and presented in the form of frequency tables. This allows us to explore which kinds of variations are possible and typical for an MWE. The filler tables can show quite different 'profiles' for a slot. In the following, we will give some examples.
For the MWE aus welchen Gründen auch immer (for whatever reasons) we checked whether the element Gründen can be modified by searching for the pattern aus welchen #* Gründen auch immer (#* stands for a slot that can be filled by any number of words). Table 1 shows the absolute and relative frequencies that were calculated from KWIC lines of our corpus. In the vast majority of cases, the slot is empty, which means that the MWE is used exactly in the form cited above: aus welchen Gründen auch immer. It is thus very stable, though not completely inflexible, as there is also evidence of adjectives that are used to further specify the reasons in question, e.g. persönlichen Gründen (personal reasons).
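Our pattern matching tool is an in-house development; a stripped-down illustration of the underlying idea (fixed strings plus a wildcard slot applied to KWIC-style lines, with fillers counted into a frequency table) could look like the following sketch, where the example lines are invented.

```python
# Stripped-down illustration of slot/filler search over KWIC-style lines
# (the real IDS tool is in-house; these example lines are invented).
import re
from collections import Counter

# '#*' = slot filled by any number of words, as in the pattern notation above.
pattern = re.compile(r"\baus welchen\s+(?P<slot>(?:\S+\s+)*?)Gründen auch immer\b")

kwic_lines = [
    "er blieb der Sitzung aus welchen Gründen auch immer fern",
    "sie hat aus welchen persönlichen Gründen auch immer abgesagt",
    "aus welchen Gründen auch immer wurde der Termin verschoben",
]

fillers = Counter()
for line in kwic_lines:
    m = pattern.search(line)
    if m:
        fillers[m.group("slot").strip() or "<empty>"] += 1

total = sum(fillers.values())
for filler, freq in fillers.most_common():
    print(f"{filler}\t{freq}\t{100 * freq / total:.2f}")  # filler, freq, rel freq
```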
A different example of filler behavior can be observed when studying the pattern # auch immer (# marks a slot that has to be filled with exactly one word). Table 2 shows that this slot is predominantly filled with interrogative pronouns such as wie (how), was (what), wer (who), wem (whom) etc., with wie by far the most frequent. This led us to define the MWE hierarchies as shown in figure 1 and explained in section 2.1. A different filler profile (Table 3) can be observed for the pattern aus # Gründen (for # reasons). This is a true MWE pattern, as it has a specific communicative function tied to the plural form of Grund: reasons are mentioned, but left intentionally vague. Table 3 shows that there is a large number of adjectives that can fill the gap. In contrast to the example # auch immer above, none of these is so dominant and striking that a separate MWE needs to be considered. However, the fillers can be grouped into functional groups, like the type of the reasons (e.g. politisch (political), persönlich (personal), finanziell (financial)), the validity of the reasons (e.g. nachvollziehbar (understandable), gut (good), triftig (valid)) or the relevance of the reasons (e.g. wichtig (important), zwingend (imperative)).
You can see that filler tables are very useful for different purposes: To confirm the fixedness of an MWE and explore occasional variations, to conceptualize lexical units in order to build up hierarchies, and to further describe and understand the behavior of MWE patterns. Not only do we work with such patterns and filler tables when building the MWE field, we also include them in our descriptions -another way to give a user access to original corpus data structured in an informative way.
Additionally, we provide access to the KWIC lines that were used to calculate the filler tables. Table 4 shows some of the lines that match the pattern aus # Gründen. These lines are structured in fields according to the search pattern and the different columns can be sorted. In this way, you can explore the use of specific MWE structures yourself.
Figure 1: Part of the MWE field centered around Grund and preposition aus.
Figure 2: MWE article Aus welchen Gründen auch immer from the MWE field Grund. The article parts are 'Frequency in the Corpus', 'General description', 'Context Analysis', 'Contrast Analysis' and 'Superordinated Nodes'. The part 'Context Analysis' contains links to a filler table and to the corresponding KWIC lines.
Figure 3: Highest ranking results of the collocation analysis for eigentlich (scope: 5 words in front).
Figure 4: Highest ranking results of the collocation analysis for im Grunde (scope: 5 words in front).
1 For details on the collocation analysis used here see Perkuhn/Belica 2004. The settings were: Korpus: Wgesamt (all corpora of archive W, including new acquisitions); Archiv-Release: Deutsches Referenzkorpus (DeReKo-2013-II); Analyse-Kontext: 5th word to the left up to 0th word to the right; Granularität (granularity): coarse; Zuverlässigkeit (reliability): analytic; Clusterzuordnung (cluster assignment): multiple; restricted to one sentence: yes; lemmatisation: no; function words: allowed; autofocus: off
Table 2: Fillers of the pattern # auch immer.
Table 4: KWIC lines of the pattern aus # Gründen.

Filler              Freq   Rel Freq
gesundheitlichen    7355   10.03
beruflichen         6311    8.60
finanziellen        4708    6.42
persönlichen        2660    3.63
organisatorischen   2585    3.52
politischen         2499    3.41
wirtschaftlichen    2180    2.97
privaten            1941    2.65
welchen             1849    2.52
verschiedenen       1779    2.43
diesen              1494    2.04
anderen             1381    1.88
technischen         1260    1.72
zwei                1237    1.69
familiären          1219    1.66
...                  ...     ...

Table 3: Fillers of the pattern aus # Gründen.
Conclusion

We believe that our MWE fields offer a different way of looking at MWEs that is very useful for understanding the structure of language. As they are strictly based on data from a large corpus of modern language, our findings also reflect real, contemporary language use. This is especially useful for foreign language learners who struggle to navigate the complexities of fixedness and variability in the German language. In continuing our MWE research, we strive to refine our strategies for description and visualization and also plan to add contrastive studies in the future.
Belica, Cyril: Statistische Kollokationsanalyse und Clustering. Korpusanalytische Analysemethode, 1995. URL: http://www1.ids-mannheim.de/kl/projekte/methoden/ur.html, visited on 28.01.2014.
Brunner, Annelen/Steyer, Kathrin: A Model for Corpus-Driven Exploration and Presentation of Multi-Word Expressions, in: Levická, Jana/Garabík, Radovan, editors: NLP, Corpus Linguistics, Corpus Based Grammar Research (= Proceedings of SLOVKO 2009, held 25-27.11.2009 in Smolenice, Slovakia), 2009, pp. 54-64.
Brunner, Annelen/Steyer, Kathrin: Wortverbindungsfelder: Fields of Multi-Word Expressions, in: Granger, Silviane/Paquot, Magali, editors: eLexicography in the 21st century: New challenges, new applications. Proceedings of the eLex 2009. Louvain-la-Neuve: Presses universitaires de Louvain, 2010, Cahiers du CENTAL, pp. 23-31.
Hanks, Patrick: Lexical Analysis: norms and exploitations, Cambridge [u.a.]: MIT Press, 2013.
Institut für Deutsche Sprache: Deutsches Referenzkorpus/Archiv der Korpora geschriebener Gegenwartssprache (DeReKo 2012-II), Webseite, 2012. URL: http://www.ids-mannheim.de/DeReKo, visited on 28.01.2014.
Perkuhn, Rainer/Belica, Cyril: Eine kurze Einführung in die Kookkurrenzanalyse und syntagmatische Muster. Institut für Deutsche Sprache, Mannheim, 2004. URL: http://www1.ids-mannheim.de/kl/misc/tutorial.html, visited on 28.01.2014.
Sinclair, John: Corpus, Concordance, Collocation, Oxford: Oxford University Press, 1991.
Stefanowitsch, Anatol/Gries, Stephan Th.: Collostructions: Investigating the interaction of words and constructions, in: International Journal of Corpus Linguistics, 8 2003, Nr. 2, pp. 209-243.
Steyer, Kathrin: Usuelle Wortverbindungen des Deutschen. Linguistisches Konzept und lexikografische Möglichkeiten, in: Deutsche Sprache, 28 2000, Nr. 2, pp. 101-125.
Steyer, Kathrin: Kookkurrenz. Korpusmethodik, linguistisches Modell, lexikographische Perspektiven, in: Steyer, Kathrin, editor: Wortverbindungen - mehr oder weniger fest, Berlin/New York: de Gruyter, 2004, Jahrbuch des Instituts für Deutsche Sprache, pp. 87-116.
Steyer, Kathrin: Usuelle Wortverbindungen. Zentrale Muster des Sprachgebrauchs aus korpusanalytischer Sicht, Tübingen: Narr, 2013.
Tognini-Bonelli, Elena: Corpus Linguistics at Work, Amsterdam/Philadelphia: J. Benjamins, 2001. |
252,819,027 | Mitigating the Diminishing Effect of Elastic Weight Consolidation | Elastic weight consolidation (EWC,Kirkpatrick et al. 2017) is a promising approach to addressing catastrophic forgetting in sequential training. We find that the effect of EWC can diminish when fine-tuning large-scale pretrained language models on different datasets. We present two simple objective functions to mitigate this problem by rescaling the components of EWC. Experiments on natural language inference and fact-checking tasks indicate that our methods require much smaller values for the trade-off parameters to achieve results comparable to EWC. 1 | [
4711425,
208117506,
15085443,
52967399,
3432876,
202888986,
221761373,
232233599
] | Mitigating the Diminishing Effect of Elastic Weight Consolidation
October 12-17, 2022
Canasai Kruengkrai [email protected]
National Institute of Informatics
Japan
Junichi Yamagishi [email protected]
National Institute of Informatics
Japan
Mitigating the Diminishing Effect of Elastic Weight Consolidation
Proceedings of the 29th International Conference on Computational Linguistics, October 12-17, 2022, page 4568
Elastic weight consolidation (EWC,Kirkpatrick et al. 2017) is a promising approach to addressing catastrophic forgetting in sequential training. We find that the effect of EWC can diminish when fine-tuning large-scale pretrained language models on different datasets. We present two simple objective functions to mitigate this problem by rescaling the components of EWC. Experiments on natural language inference and fact-checking tasks indicate that our methods require much smaller values for the trade-off parameters to achieve results comparable to EWC. 1
Introduction
New training data may arrive after we have spent considerable time training our model on the data at hand. A simple method for exploiting both new and old training data is to mix them and retrain the model from scratch. However, this mix-and-retrain method is neither always practical nor economical, especially in academic environments where computational resources are limited.
Sequential training is a potential alternative approach but faces a difficult challenge called catastrophic forgetting in which the performance on old data drastically drops when we train a model on new data. There exists a line of work that has addressed this challenge (Rusu et al., 2016;Li and Hoiem, 2018;Kirkpatrick et al., 2017;Mallya et al., 2018;He and Jaeger, 2018;Zhang et al., 2020). In this paper, we are particularly interested in elastic weight consolidation (EWC, Kirkpatrick et al. 2017), which has been shown to be helpful for domain adaptation (Saunders et al., 2019;Thompson et al., 2019).
EWC adds a regularization term to the objective function to ensure that the model works well on both new and old data. We empirically find that EWC requires unexpectedly large values for the trade-off parameter (λ) between the regularizer and the loss to be effective when applied to pre-trained language models. Figure 1 shows such a phenomenon, in which EWC has no effect in preventing catastrophic forgetting of the prior dataset (MNLI) with λ in the range of [10^0, 10^4]. We have to scale λ up to [10^5, 10^7], which is an unusual range of hyperparameters. To the best of our knowledge, this phenomenon has not been reported in the literature.
We propose two simple objective functions for mitigating the diminishing effect of EWC. Our objective functions rely on rescaling the components of EWC. Specifically, the first objective function involves taking the square root of the regularization term, while the second one involves using the absolute value of the gradient instead of the squared gradient. Both of our objective functions can reduce the values of the trade-off parameter λ by three to seven orders of magnitude while producing results similar to those of the original EWC.
Background
Problem formulation
We consider a supervised learning problem in which the task is to map an input x ∈ X to a label y ∈ Y. We need to train a model h_θ: X → Y with parameters θ ∈ R^d. Given a dataset D = {(x_i, y_i)}_{i=1}^M, we typically estimate θ on the basis of empirical risk minimization (ERM, Vapnik 1992):

$$J_{\mathrm{ERM}}(\theta) = \frac{1}{M} \sum_{(x,y)\in D} L(h_\theta(x), y), \quad (1)$$

where L is the negative log likelihood loss:

$$L(h_\theta(x), y) = -\sum_{\hat{y}\in\mathcal{Y}} \mathbb{1}\{\hat{y} = y\}\, \log p_\theta(\hat{y} \mid x).$$
Our base model h_θ is a neural network containing a multilayer perceptron (MLP) on top of a pretrained language model (e.g., BERT). Thus, we define p_θ(ŷ|x) = softmax(h_θ(x)), where h_θ(x) = MLP(BERT(x)). The model parameters θ include those in the MLP and BERT.
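A minimal PyTorch sketch of this base model is given below; the two-layer MLP head over the [CLS] vector follows the description in Section 4.2, while the hidden size and checkpoint name are illustrative assumptions.

```python
# Minimal sketch of the base classifier h_theta: an MLP over the [CLS]
# vector of a pre-trained BERT encoder (sizes are illustrative).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertClassifier(nn.Module):
    def __init__(self, num_labels=3, hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, num_labels)
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # first token = [CLS]
        return self.mlp(cls)                # logits; softmax is applied inside the loss

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier()
batch = tok("The claim.", "The evidence sentence.", return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))  # negative log likelihood
```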
Elastic weight consolidation
Elastic weight consolidation (EWC, Kirkpatrick et al. 2017) is based on a Bayesian framework that seeks to approximate the posterior distribution of θ conditional on two datasets. Let D and D_0 denote the current and prior datasets, respectively. We express the posterior distribution as:

$$p(\theta \mid D, D_0) = \frac{p(\theta, D, D_0)}{p(D, D_0)} = \frac{p(D \mid \theta, D_0)\, p(\theta, D_0)}{p(D, D_0)} = \frac{p(D \mid \theta)\, p(\theta \mid D_0)\, p(D_0)}{p(D)\, p(D_0)} \propto p(D \mid \theta)\, p(\theta \mid D_0), \quad (2)$$

where we assume that D and D_0 are conditionally independent in the third step and ignore the constant in the last step. Taking the log on both sides of Eq. (2), we have:

$$\log p(\theta \mid D, D_0) = \log p(D \mid \theta) + \log p(\theta \mid D_0). \quad (3)$$

The first term on the right-hand side corresponds to the log likelihood of D, which can be computed using Eq. (1). The second term is intractable but can be approximated using a second-order Taylor expansion of the KL-divergence around the parameters of the previously trained model, θ^0:

$$\log p(\theta \mid D_0) \approx \frac{1}{2}\, \Delta\theta^{\top} H\, \Delta\theta, \quad (4)$$

where Δθ = θ − θ^0 and H is the expected negative Hessian of the posterior distribution (Pascanu and Bengio, 2014). Computing H is impractical. Kirkpatrick et al. (2017) proposed approximating H using the diagonal of the Fisher information matrix. Let diag(f) be the diagonal matrix with diagonal f. We estimate f as the average of the squared gradient across some N subsamples S_0:

$$\mathbf{f} = \frac{1}{N} \sum_{(x,y)\in S_0} \big(\nabla_{\theta^0} L(h_\theta(x), y)\big)^2. \quad (5)$$

Replacing H with diag(f), we can simplify Eq. (4) as:

$$\log p(\theta \mid D_0) \approx \frac{1}{2} \sum_{j=1}^{d} f_j\, (\theta_j - \theta^0_j)^2. \quad (6)$$

Applying Eqs. (1) and (6) to Eq. (3), we obtain the EWC objective:

$$J_{\mathrm{EWC}}(\theta) = J_{\mathrm{ERM}}(\theta) + \frac{\lambda}{2} \sum_{j=1}^{d} f_j\, (\theta_j - \theta^0_j)^2, \quad (7)$$
where λ is the trade-off parameter.
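In code, the diagonal Fisher estimate of Eq. (5) and the penalty of Eq. (7) amount to something like the following PyTorch sketch (simplified: per-example gradients, no optimiser loop, and the model is assumed to take a single input tensor).

```python
# Sketch of the EWC components: diagonal Fisher estimate (Eq. 5) over a
# subsample of the prior data, and the quadratic penalty of Eq. (7).
import torch

def fisher_diagonal(model, loss_fn, subsample):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in subsample:                       # N examples from the prior dataset
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(subsample) for n, f in fisher.items()}

def ewc_penalty(model, prior_params, fisher):
    # sum_j f_j * (theta_j - theta0_j)^2
    return sum(
        (fisher[n] * (p - prior_params[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

# total objective: J_ERM + (lam / 2) * ewc_penalty(model, prior_params, fisher)
```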
Proposed method
As shown in Figure 1, EWC requires extremely large values of λ to be effective. We analyze the components of EWC and find that this problem arises from the Fisher approximation in Eq. (5).
The diagonal element f_j corresponds to the j-th element of the squared gradient with respect to θ^0. Since the training of the prior model had already converged, the values of the gradient are typically small. When we square such a small decimal and combine it with the squared difference between the current and prior parameters, the final value can be vanishingly small.2 We find that this issue is affected neither by the datasets nor by the pre-trained language models. In Appendix A, we further investigate this issue on another pre-trained language model. We propose scaling up the Fisher approximation by taking the square root to resolve the issue above. We define the square root of EWC (REWC) as:
$$J_{\mathrm{REWC}}(\theta) = J_{\mathrm{ERM}}(\theta) + \lambda \sqrt{A + \epsilon}, \quad (8)$$

where $A = \sum_{j=1}^{d} f_j (\theta_j - \theta^0_j)^2$ and ε is a small value (e.g., 10^-8) that prevents an undefined derivative of the square root at 0.
Another solution is to use the absolute value of the gradient instead of the squared gradient. We define:
$$\mathbf{g} = \frac{1}{N} \sum_{(x,y)\in S_0} \big|\nabla_{\theta^0} L(h_\theta(x), y)\big|. \quad (9)$$
Note that diag(g) is positive semi-definite (like diag(f)) because all of its eigenvalues are greater than or equal to 0. Replacing the squared difference with the absolute difference yields our absolute EWC (AEWC):

$$J_{\mathrm{AEWC}}(\theta) = J_{\mathrm{ERM}}(\theta) + \lambda \sum_{j=1}^{d} g_j\, |\theta_j - \theta^0_j|. \quad (10)$$

Figure 2 shows the results of REWC and AEWC based on the same setting as in Figure 1.
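The two proposed variants only change how the penalty above is scaled; continuing the earlier sketch:

```python
# Sketch of the proposed rescalings: REWC (Eq. 8) takes the square root of the
# EWC penalty; AEWC (Eq. 10) uses average absolute gradients (Eq. 9) and
# absolute parameter differences.
import torch

def rewc_penalty(model, prior_params, fisher, eps=1e-8):
    a = sum((fisher[n] * (p - prior_params[n]) ** 2).sum()
            for n, p in model.named_parameters())
    return torch.sqrt(a + eps)

def aewc_penalty(model, prior_params, abs_grad):
    # abs_grad[n] holds the average |gradient| per parameter (Eq. 9)
    return sum((abs_grad[n] * (p - prior_params[n]).abs()).sum()
               for n, p in model.named_parameters())

# total objectives: J_ERM + lam * rewc_penalty(...)  or  J_ERM + lam * aewc_penalty(...)
```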
Experiments
Datasets
We evaluated the objective functions described in §2 and §3 on natural language inference and fact-checking tasks. We used six datasets pre-processed by Schuster et al. (2021) as follows:
MNLI (Williams et al., 2018) is a multi-genre natural language inference dataset. The task is to determine the inference relation between two sentences. Schuster et al. (2021) converted the original labels {"entailment", "contradiction","neutral"} into {"supported", "refuted", "not enough info"}.
FEVER (Thorne et al., 2018) (Fact Extraction and VERification) verifies whether a claim is supported or refuted by an evidence sentence, or decides whether there is insufficient information to make a decision.
VITC (Schuster et al., 2021) introduces the notion of contrastive evidence to FEVER. Given a claim, two evidence sentences that are nearly identical but with different labels are created. Thus, the task becomes more challenging than that of FEVER. The dataset contains both real and synthetic examples. We used only the real ones in our experiments.
ADVERSARIAL (Thorne et al., 2019) is derived from the FEVER 2.0 shared task, containing adversarially created claims that aim to induce erroneous predictions to the FEVER-trained models.
SYMMETRIC (Schuster et al., 2019) is another dataset that challenges the FEVER-trained models. It contains synthetically created claim-evidence pairs designed to break models that often make predictions using claims only without taking evidence sentences into account.
TRIGGERS (Atanasova et al., 2020) contains adversarial claims generated by using GPT-2 (Radford et al., 2019) given the original claims and triggers, which are words that cause the model to flip its prediction.
We selected λ that yields the highest average accuracy on the development (dev) sets. To avoid a bias towards more populated datasets (e.g., VITC), we created our balanced dev sets by randomly selecting 9,000 examples from each of the original dev sets. Since the dev and test sets of MNLI are identical, we split 9,000 examples from the training set to form the dev set and used the test set for the final evaluation.
Training details
We implemented our base model described in §2.1 using Hugging Face's Transformers library (Wolf et al., 2020). Specifically, the model consists of a two-layer MLP and BERT-base. Let x be the input sequence (i.e., a pair of sentences in our datasets). BERT-base encodes x into a sequence of hidden state vectors. Following common practice, we used the first hidden state vector of the special classification token (i.e., [CLS]) to represent x and fed it to the MLP followed by a softmax function.
For all experiments, we used Adafactor optimizer (Shazeer and Stern, 2018) with a gradient clipping of 1.0. Our effective batch size is 256. 3 For standard training, we randomly initialized the model parameters with N (0, 0.02) 4 , except for those of BERT-base. We trained each model for three epochs with a learning rate of 2e-5.
For sequential training, we randomly selected 1% of examples from D_0 to represent S_0 in Eq. (5). We also varied the subsample size from 0.1% to 10% but did not observe significant changes in performance. We initialized the current model parameters using the prior ones (i.e., θ^0 → θ). Determining a learning rate can be challenging. We used a method analogous to the learning rate decay technique (Ng, 2017). Let α_0 be the initial learning rate and r be the number of prior training runs. We computed the learning rate α for the current training run as:

$$\alpha = \frac{1}{1 + (\mathrm{decay\_rate} \times r)}\, \alpha_0. \quad (11)$$
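In code, this schedule is a single expression; a small sketch (the printed value matches the worked example that follows):

```python
# Learning-rate schedule of Eq. (11): decay the initial rate by the number
# of prior training runs r.
def decayed_lr(alpha0, r, decay_rate=1e-2):
    return alpha0 / (1.0 + decay_rate * r)

print(decayed_lr(2e-5, r=1))  # ~1.98e-5 for the MNLI -> FEVER case below
```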
For example, consider the case of further training the MNLI-trained model on the FEVER dataset, where α_0 = 2e-5 and r = 1. We set decay_rate to 1e-2 for all sequential training experiments. Using Eq. (11), the learning rate α for the current run decreases to 1.98e-5. We conducted all the experiments on NVIDIA Tesla A100 GPUs.

Results

Table 2 shows the results of various settings on the test sets. For sequential training, conducting experiments on all combinations takes time and considerable resources. Thus, we chose only a representative order for the datasets in accordance with their publication times. Since the MNLI and FEVER datasets were published at the same time, we decided to start with MNLI due to its generality. We considered the mix-and-retrain method (∪) with ERM as the topline setting. Unsurprisingly, this method yields the best performance on the prior datasets. The sequential training method (⇒) with ERM (i.e., vanilla fine-tuning) encounters severe catastrophic forgetting on the prior datasets. Our REWC and AEWC effectively reduce the values of λ. AEWC requires the lowest λ among the three objective functions. The performances of all the methods seem comparable on average, but each yields a different trade-off in accuracy between the prior and current datasets. Regarding the training time, AEWC is faster than REWC/EWC (though not significantly) because its computation is simpler.

Discussion
We can interpret the EWC family as a weighted sum of the squared (or absolute) differences between the current and prior parameters. The gradient component helps suggest which parameters are important. To examine the benefit of gradient information, we conducted ablation studies on EWC and AEWC in the MNLI ⇒ FEVER experiment. We omitted f_j and g_j from Eqs. (7) and (10), respectively. The remaining regularization terms resemble the squared ℓ2-norm and the ℓ1-norm that take the prior parameters into account.
As seen in Figure 3, without the gradient component, both methods need lower λ to affect the accuracy of the prior dataset (MNLI). However, improvements on the prior dataset are marginal (∼1%) before reaching the optimal average accuracy compared to the original EWC and AEWC (∼4%). Table 3 shows the ablation results on the test sets, indicating that omitting the gradient component yields lower accuracies on the prior dataset. These results confirm that the gradient component is indeed helpful.
Conclusion
Without realizing the diminishing effect of EWC, we may fine-tune a pre-trained language model with a conventional range of hyperparameters and find no effect in combating catastrophic forgetting. We identified a possible cause of this issue and suggested two alternative objective functions, REWC and AEWC, that yield results comparable to the original EWC. Exploring more efficient ways for choosing an optimal λ is part of our future work.
Figure 1: Accuracy vs. trade-off parameter λ. We sequentially fine-tune BERT (Bidirectional Encoder Representations from Transformers, Devlin et al. 2019) on MNLI (Williams et al., 2018) and FEVER (Thorne et al., 2018) and evaluate performance on the balanced dev sets. EWC starts to increase the accuracy of the prior dataset (MNLI) when increasing λ to 10^5 and yields the highest average accuracy at 10^7.

Figure 2: Accuracy vs. trade-off parameter λ of our REWC and AEWC. Both methods begin to affect accuracy with much lower λ (i.e., 10^2 and 10^-1 for REWC and AEWC, respectively) while maintaining average accuracies similar to EWC.

Figure 3: Accuracy vs. trade-off parameter λ of EWC and AEWC without f_j and g_j, respectively.
Table 1 shows our dataset statistics.

Dataset        |Train|    |Dev|    |Test|
MNLI           383,702    9,000     9,832
FEVER          178,059    9,000    11,710
VITC           248,953    9,000    34,481
ADVERSARIAL    -          -           766
SYMMETRIC      -          -           712
TRIGGERS       -          -           186

Table 1: Dataset statistics in our experiments. Bottom three datasets contain only test sets adversarially created for testing robustness of fact-checking models.
Training set            Obj.   λ       MNLI      FEVER     VITC      ADVER.    SYM.      TRIG.
MNLI                    ERM    -       83.9±0.1  67.7±0.7  47.8±0.7  51.0±0.8  74.8±0.3  68.3±1.4
FEVER                   ERM    -       58.8±0.2  87.4±0.1  59.7±0.1  51.4±0.7  75.3±0.2  65.4±1.4
VITC                    ERM    -       62.5±1.0  65.1±0.5  78.2±1.2  28.9±0.5  65.8±1.2  69.1±2.8
MNLI ∪ FEVER            ERM    -       83.9±0.2  87.8±0.1  61.0±0.3  53.8±0.2  82.6±0.4  73.8±0.4
MNLI ⇒ FEVER            ERM    -       74.9±0.2  88.2±0.2  62.7±0.1  55.0±0.3  82.6±0.2  71.2±0.4
                        EWC    10^7    79.3±0.2  86.3±0.1  61.0±0.4  53.7±0.4  80.3±0.6  67.7±1.4
                        REWC   10^3    78.7±0.2  86.8±0.1  61.5±0.3  53.6±0.5  81.1±0.6  69.2±0.6
                        AEWC   10^0    78.7±0.2  87.2±0.1  61.9±0.3  53.9±0.4  81.3±0.4  70.5±0.4
FEVER ∪ VITC            ERM    -       69.0±0.4  87.5±0.1  83.3±0.3  51.0±0.2  79.0±0.7  71.5±0.8
FEVER ⇒ VITC            ERM    -       66.2±0.4  75.8±0.4  84.4±0.1  39.6±0.8  71.3±0.8  70.9±0.9
                        EWC    10^6    65.8±0.3  78.2±0.2  83.6±0.1  40.5±1.4  71.3±0.5  70.0±0.6
                        REWC   10^2    66.2±0.3  76.7±0.2  84.2±0.1  39.7±1.5  71.4±0.3  70.5±0.6
                        AEWC   10^-1   66.3±0.4  76.3±0.2  84.3±0.1  39.5±1.4  71.4±0.4  70.6±0.6
MNLI ∪ VITC             ERM    -       84.0±0.1  76.8±0.2  84.3±0.1  43.8±0.6  75.5±0.6  74.6±1.9
MNLI ⇒ VITC             ERM    -       76.0±0.2  72.4±0.2  85.5±0.2  40.2±0.8  73.0±0.3  71.7±1.0
                        EWC    10^5    76.5±0.3  72.7±0.4  85.3±0.1  41.0±1.0  73.3±0.5  72.4±1.8
                        REWC   10^2    76.7±0.3  72.9±0.3  85.1±0.1  41.1±0.9  73.5±0.3  72.8±1.9
                        AEWC   10^-1   76.4±0.2  72.7±0.4  85.3±0.1  40.7±1.1  73.3±0.4  72.8±1.9
MNLI ∪ FEVER ∪ VITC     ERM    -       83.8±0.2  88.1±0.1  84.6±0.1  53.5±0.6  82.6±0.4  73.2±1.0
MNLI ⇒ FEVER ⇒ VITC     ERM    -       75.1±0.3  79.1±0.3  85.7±0.0  44.4±0.5  75.4±0.7  74.9±0.7
                        EWC    10^6    77.5±0.3  79.1±0.2  84.0±0.2  44.2±0.4  75.1±0.5  73.3±1.6
                        REWC   10^2    76.4±0.4  77.7±0.2  85.2±0.1  42.9±0.4  74.4±0.5  73.5±0.9
                        AEWC   10^0    78.6±0.2  82.8±0.1  80.0±1.0  46.8±0.6  76.9±0.5  74.1±1.5
Table 2: Symbol ∪ denotes mixing training sets, while arrow ⇒ denotes using training sets sequentially. Gray color highlights the effect of catastrophic forgetting on the prior dataset. Blue color emphasizes the performance on the current dataset. Green color indicates the topline performance of the mix-and-retrain method. We ran each experiment five times using different random seeds and report mean and standard deviation.

Obj.      λ       MNLI      FEVER     VITC
EWC       10^7    79.3±0.2  86.3±0.1  61.0±0.4
w/o f_j   10^-2   75.9±0.1  88.0±0.1  62.4±0.2
AEWC      10^0    78.7±0.2  87.2±0.1  61.9±0.3
w/o g_j   10^-5   76.1±0.2  87.6±0.1  62.1±0.2

Table 3: Ablation studies on EWC and AEWC for MNLI ⇒ FEVER. "w/o f_j (or g_j)" denotes omitting the gradient component from the regularization term.
Our code is available at https://github.com/nii-yamagishilab/ewc.
For example, in the MNLI ⇒ FEVER experiment, we find that 85.9% of non-zero f_j are less than 1e-10.
We used gradient accumulation with 8 batches of 32. 4 This is the default setting in Transformers.
Acknowledgments

This work is supported by JST CREST Grants (JPMJCR18A6 and JPMJCR20D3) and MEXT KAKENHI Grants (21H04906), Japan.

A Additional results

We verified the diminishing effect of EWC on another pre-trained language model, A Lite BERT (ALBERT, Lan et al. 2020). Figure 4 shows the results of sequential training: MNLI ⇒ FEVER. We can still see the diminishing effect of the original EWC, while our REWC and AEWC reduce the value of λ by three and six orders of magnitude and produce similar results.

Figure 4 (caption fragment): ALBERT (Lan et al., 2020); EWC, REWC, and AEWC achieve the highest average accuracies with λ = 10^6, 10^3, and 10^0, respectively.
Pepa Atanasova, Dustin Wright, and Isabelle Augenstein. 2020. Generating label cohesive and well-formed adversarial claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3168-3177, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Xu He and Herbert Jaeger. 2018. Overcoming catastrophic interference using conceptor-aided backpropagation. In 6th International Conference on Learning Representations (ICLR).
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.
Zhizhong Li and Derek Hoiem. 2018. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935-2947.
Arun Mallya, Dillon Davis, and Svetlana Lazebnik. 2018. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European Conference on Computer Vision (ECCV).
Andrew Ng. 2017. Learning rate decay. https://www.coursera.org/lecture/deep-neural-network/learning-rate-decay-hjgIA.
Razvan Pascanu and Yoshua Bengio. 2014. Revisiting natural gradient for deep networks. In 2nd International Conference on Learning Representations (ICLR).
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. CoRR, abs/1606.04671.
Danielle Saunders, Felix Stahlberg, Adrià de Gispert, and Bill Byrne. 2019. Domain adaptive inference for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 222-228, Florence, Italy. Association for Computational Linguistics.
Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! Robust fact verification with contrastive evidence. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624-643, Online. Association for Computational Linguistics.
Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3419-3425, Hong Kong, China. Association for Computational Linguistics.
Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4596-4604. PMLR.
Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062-2068, Minneapolis, Minnesota. Association for Computational Linguistics.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The FEVER2.0 shared task. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 1-6, Hong Kong, China. Association for Computational Linguistics.
Vladimir Vapnik. 1992. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems, volume 4. Morgan-Kaufmann.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Jeffrey O. Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. 2020. Side-tuning: A baseline for network adaptation via additive side networks. In Computer Vision - ECCV 2020, pages 698-714, Cham. Springer International Publishing.
1,730,144 | Annotating omission in statement pairs | In this piece of industrial application, we focus on the identification of omission in statement pairs for an online news platform. We compare three annotation schemes, namely two crowdsourcing schemes and an expert annotation. The simplest of the two crowdsourcing approaches yields a better annotation quality than the more complex one. We use a dedicated classifier to assess whether the annotators' behaviour can be explained by straightforward linguistic features. However, for our task, we argue that expert and not crowdsourcing-based annotation is the best compromise between cost and quality. | [
996545,
709,
5730661,
18377006,
1957433,
10435668,
2801015,
10977241,
2772094
] | Annotating omission in statement pairs
Héctor Martínez Alonso
Inria (ALMAnaCH), 2 rue Simone Iff, 75012 Paris, France

Amaury Delamaire
École des Mines de Saint-Étienne, 158 cours Fauriel, 42000 Saint-Étienne, France
Storyzy (Trooclick), 130 rue de Lourmel, 75015 Paris, France

Benoît Sagot
Inria (ALMAnaCH), 2 rue Simone Iff, 75012 Paris, France

Annotating omission in statement pairs
Proceedings of the 11th Linguistic Annotation Workshop, Valencia, Spain, April 3, 2017. Association for Computational Linguistics.
In this piece of industrial application, we focus on the identification of omission in statement pairs for an online news platform. We compare three annotation schemes, namely two crowdsourcing schemes and an expert annotation. The simplest of the two crowdsourcing approaches yields a better annotation quality than the more complex one. We use a dedicated classifier to assess whether the annotators' behaviour can be explained by straightforward linguistic features. However, for our task, we argue that expert and not crowdsourcing-based annotation is the best compromise between cost and quality.
Introduction
In a user survey, the news aggregator Storyzy (http://storyzy.com) found that the two main obstacles to user satisfaction when accessing the site's content were redundancy of news items and missing information. Indeed, in the journalistic genre that is characteristic of online news, editors make frequent use of citations as prominent information; yet these citations are not always quoted in full. The reasons for leaving information out are often motivated by the political leaning of the news platform.

Existing approaches to the detection of political bias rely on bag-of-words models (Zhitomirsky-Geffet et al., 2016) that examine the words present in the writings. Our goal is to go beyond such approaches, which focus on what is said, by instead focusing on what is omitted. This method requires a pair of statements: an original one, and a shortened version with some deleted words or spans. The task is then to determine whether the information left out in the second statement conveys substantial additional information. If so, the pair presents an omission; cf. Table 1.
Omission detection in sentence pairs constitutes a new task, which is different from the recognition of textual entailment (cf. Dagan et al., 2006), because in our case we are certain that the longer text entails the shorter one. What we want to estimate is whether the information not present in the shorter statement is relevant. To tackle this question, we used a supervised classification framework, for which we require a dataset of manually annotated sentence pairs. We conducted an annotation task on a sample of the corpus used by the news platform (Section 3). In this corpus, reference statements extracted from news articles are used as long 'reference' statements, whereas their short 'target' counterparts were selected by string and date matching.

We then examined which features help identify cases of omission (Section 4). In addition to straightforward measures of word overlap (the Dice coefficient), we determined that there is a good deal of lexical information that indicates whether there is an omission. This work is, to the best of our knowledge, the first empirical study on omission identification in statement pairs. 2
Related work
To the best of our knowledge, no work has been published on omission detection as such. However, our work is related to a variety of questions of interest that pertain to both linguistics and NLP.
Segment deletion is one of the most immediate forms of paraphrase, cf. Vila et al. (2014) for a survey. Another phenomenon that also involves segment deletion, although in a very different setting, is ellipsis. In the case of an ellipsis, the deleted segment can be reconstructed given a discourse antecedent in the same document, be it observed or idealized (Asher et al., 2001; Merchant, 2016). In the case of omission, a reference and a target version of a statement are involved, the deleted segment in one version having an antecedent in the other version of the statement, in another document, as a result of editorial choices.

Our task is similar to the problem of omission detection in translations, but the bilingual setting allows for word-alignment-based approaches (Melamed, 1996; Russell, 1999), which we cannot use in our setup. Omission detection is also related to hedge detection, which can be achieved using specific lexical triggers such as vagueness markers (Szarvas et al., 2012; Vincze, 2013).
Annotation Task
The goal of the annotation task is to provide each reference-target pair with a label: Omission, if the target statement leaves out substantial information, or Same if there is no information loss.
Corpus We obtained our examples from a corpus of English web newswire. The corpus is made up of aligned reference-target statement pairs; cf. Table 1 for examples. These statements were aligned automatically by means of word overlap metrics, as well as a series of heuristics, such as comparing the alleged speaker and date of the statement given the article content, and a series of text normalization steps. We selected 500 pairs for annotation. Instead of selecting 500 random pairs, we selected a contiguous section from a random starting point. We did so in order to obtain a more natural proportion of reference-to-target statements, given that reference statements can be associated with more than one target. 3

Annotation setup Our first manual annotation strategy relies on the Amazon Mechanical Turk (AMT) crowdsourcing platform. We refer to AMT annotators as turkers. For each statement pair, we presented the turkers with a display like the one in Figure 1.

We used two different annotation schemes, namely OM_p, where the option to mark an omission is "Text B leaves out some substantial information", and OM_e, where it is "Text B leaves out something substantial, such as time, place, cause, people involved or important event information."

3 The full distribution of the corpus documentation shall provide more details on the extraction process.
The OM_p scheme aims to represent a naive user's intuition of the relevance of a difference between statements, akin to the intuition of the users mentioned in Section 1, whereas OM_e aims at capturing our intuition that relevant omissions relate to missing key news elements describable in terms of the 5-W questions (Parton et al., 2009; Das et al., 2012). We ran the AMT task twice, once for each scheme. For each scheme, we assigned 5 turkers per instance, and we required that the annotators be Categorization Masters according to the AMT scoring. We paid $0.05 per instance.
Moreover, in order to choose between OM p and OM e , two experts (two of the authors of this article) annotated the same 100 examples from the corpus, yielding the OE annotation set. Annotation results The first column in Table 2 shows the agreement of the annotation tasks in terms of Krippendorff's α coefficient. A score of e.g. 0.52 is not a very high value, but is well within what can be expected on crowdsourced semantic annotations. Note, however, the chance correction that the calculation of α applies to a skewed binary distribution is very aggressive (Passonneau and Carpenter, 2014). The conservativeness of the chance-corrected coefficient can be assessed if we compare the raw agreement between experts (0.86) with the α of 0.67. OM e causes agreement to descend slightly, and damages the agreement of Same, while Omission remains largely constant. Moreover, disagreement is not evenly distributed across annotated instances, i.e. some instances show perfect agreement, while other instances have maximal disagreement.
We also measured the median annotation time per instance for all three methods; OM_e is almost twice as slow as OM_p (42s vs. 22s), while the expert annotation time in OE is 16s. The large time difference between OM_p and OM_e indicates that changing the annotation guidelines does have an effect on annotation behavior, and that the agreement variation is not purely a result of the expectable annotation noise in crowdsourcing. The fourth and fifth columns in Table 2 show the label distribution after adjudication. While the distribution of Omission-Same labels is very similar after applying simple majority voting, we observe that the distribution of the agreement does change. In OM_p, approx. 80% of the Same-label instances are assigned with high agreement (at least four out of five votes), whereas only a third of the Same instances in OM_e have such high agreement. Both experts have a similar perception of omission, albeit with a different threshold: in the 14 cases where they disagree, one of the annotators shows a systematic preference for the Omission label.
We also use MACE to evaluate the stability of the annotations. Using an unsupervised expectation-maximization model, MACE assigns confidence values to annotators, which are used to estimate the resulting annotations (Hovy et al., 2013). While we do not use the label assignments from MACE for the classification experiments in Section 4, we use them to measure how much the proportion of omission changes with regard to simple majority voting. The more complex OM_e scheme shows, parallel to its lower agreement, a much higher fluctuation, both in relative and absolute terms, with regard to OM_p, which also indicates that the former scheme provides annotations that are more subject to individual variation. While this difference is arguably a result of genuine linguistic reflection, it also indicates that the data obtained by this method is less reliable as such.
To sum up, while the label distribution is similar across schemes, the Same class drops in overall agreement, but the Omission class does not.
In spite of the variation suggested by their α coefficients, the two AMT-annotated datasets are very similar. They are 85% identical after label assignment by majority voting. However, the cosine similarity between the example-wise proportions of omission labels is 0.92. This difference is a consequence of the uncertainty in low-agreement examples. The similarity with OE is 0.89 for OM_p and 0.86 for OM_e; OM_p is more similar to the expert judgment. This might be related to the fact that the OM_e instructions prime turkers to favor named entities, leading them to pay less attention to other types of substantial information such as modality markers. We shall come back to the more general role of lexical clues in Section 4.

Given that it is more internally consistent and matches better with OE, we use the OM_p dataset for the rest of the work described in this article.
Classification experiments
Once the manually annotated corpus is built, we can assess the learnability of the Omission-Same decision problem, which constitutes a binary classification task. We aimed at measuring whether the annotators' behavior can be explained by simple proxy linguistic properties like word overlap or length of the statements and/or lexical properties.
Features: For a reference statement r, a target statement t, and a set M of the words that only appear in r, we generate the following feature sets:

1. Dice (F_a): Dice coefficient between r and t.
2. Length (F_b): The length of r, the length of t, and their difference.
3. BoW (F_c): A bag of words (BoW) of M.
4. DWR (F_d): A dense word representation of M, built as the average word vector of all words in M. We use the representations from GloVe (Pennington et al., 2014).
5. Stop proportion (F_e): The proportion of stop words and punctuation in M.
6. Entities (F_f): The number of entities in M predicted by the 4-class Stanford Named Entity Recognizer (Finkel et al., 2005).

Table 3 shows the classification results. We use all exhaustive combinations of these feature sets to train a discriminative classifier, namely a logistic regression classifier, to obtain the best feature combination. We consider a feature combination to be the best when it outperforms the others in both accuracy and F1 for the Omission label. We compare all systems against the most frequent label (MFL) baseline. We evaluate each feature combination twice, namely using five-fold cross-validation (CV-5 OM_p), and in a split scenario where we test on the 100 examples of OE after training on the remaining 400 examples from OM_p (Test OE). The three best systems (i.e., not significantly different from each other when tested on OM_p) are shown in the lower section of the table. We test for significance using Student's two-tailed test and p < 0.05.
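As an illustration of how the simpler feature sets (F_a, F_b, and F_c) can be combined with a logistic regression classifier, the sketch below uses scikit-learn and plain whitespace tokenisation; the training pairs and labels are invented, and the real experiments use the annotated corpus and the full feature definitions described above.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def omission_features(reference, target):
    ref_tokens = reference.lower().split()
    tgt_tokens = target.lower().split()
    ref_set, tgt_set = set(ref_tokens), set(tgt_tokens)
    missing = ref_set - tgt_set                  # M: words only in the reference
    dice = 2 * len(ref_set & tgt_set) / (len(ref_set) + len(tgt_set))
    feats = {
        'dice': dice,                            # F_a
        'len_ref': len(ref_tokens),              # F_b
        'len_tgt': len(tgt_tokens),
        'len_diff': len(ref_tokens) - len(tgt_tokens),
    }
    for word in missing:                         # F_c: bag of words over M
        feats['missing=' + word] = 1
    return feats

# Toy training data (invented): (reference, target, label) triples.
pairs = [
    ("the minister said on friday that taxes will rise",
     "the minister said that taxes will rise", "Same"),
    ("she was arrested in paris after the protest",
     "she was arrested", "Omission"),
]
X = [omission_features(r, t) for r, t, _ in pairs]
y = [label for _, _, label in pairs]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict([omission_features("he resigned because of the scandal", "he resigned")]))
```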
As expected, the overlap (F_a) and length metrics (F_b) are the most competitive standalone features. However, we want to measure how much of the labeling of omission is determined by which words are left out, and not just by how many.

The system trained on BoW outperforms the system trained on DWR. However, BoW features contain a proxy for statement length, i.e., if n words are different between reference and target, then n features will fire, thus approximating the size of M. A distributional semantic model such as GloVe is, however, made up of non-sparse, real-valued vectors, and does not contain such a proxy for word density. If we examine the contribution of using F_d as a feature model, we see that, while it falls short of its BoW counterpart, it beats the baseline by a margin of 5-10 points. In other words, regardless of the size of M, there is lexical information that explains the choice of considering an omission.
Conclusion
We have presented an application-oriented effort to detect omissions between statement pairs. We have assessed two different AMT annotation schemes and compared them with expert annotations. The extended crowdsourcing scheme is defined closer to the expert intuition, but has lower agreement, so we use the plain scheme instead. Moreover, if we examine the time needed for annotation, our conclusion is that it is in fact detrimental to use crowdsourcing for this annotation task with respect to expert annotation. Chiefly, we also show that simple linguistic clues allow a classifier to reach satisfying classification results (0.86-0.88 F1), which are better than those obtained when relying solely on the straightforward features of length difference and word overlap. Further work includes analyzing whether the changes in the omission examples also involve changes of uncertainty class (Szarvas et al., 2012) or bias type (Recasens et al., 2013), as well as expanding the notion of omission to the detection of the loss of detail in paraphrases. Moreover, we want to explore how to identify the most omission-prone news types, in a style similar to the characterization of unreliable users in Wei et al. (2013).
Figure 1: Annotation scheme for OM_p.
Table 1: Examples of annotated instances. The 'Instance' column contains the full reference statement, with the elements not present in the target statement marked in italics. The last three columns display the proportion of Omission labels provided by the three annotation setups.

Example 1 (OM_p: 0, OM_e: 1, OE: 1): Interior Minister Chaudhry Nisar Ali Khan on Friday said no Pakistani can remain silent over the atrocities being committed against the people of the occupied Kashmir by the Indian forces.

Example 2 (OM_p: .8, OM_e: .2, OE: 0): I don't feel guilty. I cannot tell you how humiliated I feel. "I feel robbed emotionally. But we're coming from the east (eastern Europe), we're too close to Russia .."

Example 3 (OM_p: .6, OM_e: .4, OE: .5): The tusks resemble the prehistoric sabre-tooth tiger, but of course, they are not related. It could make wildlife watching in Sabah more interesting. The rare elephant's reversed tusks might create some problems when it comes to jostling with other elephants. The tusks resemble the prehistoric sabre-tooth tiger, but of course, they are not related.
Dataset     α     t    % Om.   Vote   MACE
Full OM_p   0.52  22   61.72   .65    .63
Full OM_e   0.49  41   63.48   .69    .61
100 OM_p    0.52  22   62.42   .64    .62
100 OM_e    0.54  42   60.00   .61    .58
100 OE      0.67  16   70.87   -      .62

Table 2: Dataset, Krippendorff's α, median annotation time (s), raw proportion of Omission, and label distribution using voting and MACE.
Table 3: Accuracy and F1 for the Omission label for all feature groups, plus for the best feature combination in both evaluation methods. Systems significantly under baseline are marked in grey.
2 We make all data and annotations freely available at github.com/hectormartinez/verdidata.
References

Nicholas Asher, Daniel Hardt, and Joan Busquets. 2001. Discourse parallelism, ellipsis, and ambiguity. Journal of Semantics, 18(1):1-25.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pages 177-190. Springer.

Amitava Das, Sivaji Bandyaopadhyay, and Björn Gambäck. 2012. The 5W structure for sentiment summarization-visualization-tracking. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 540-555. Springer.

Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363-370. Association for Computational Linguistics.

Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130.

I. Dan Melamed. 1996. Automatic detection of omissions in translations. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING '96, pages 764-769, Copenhagen, Denmark.

Jason Merchant. 2016. Ellipsis: A survey of analytical approaches. Manuscript for Jeroen van Craenenbroeck and Tanja Temmerman (eds.), Handbook of Ellipsis, Oxford University Press: Oxford, United Kingdom. http://home.uchicago.edu/merchant/pubs/ellipsis.revised.pdf

Kristen Parton, Kathleen R. McKeown, Bob Coyne, Mona T. Diab, Ralph Grishman, Dilek Hakkani-Tür, Mary Harper, Heng Ji, Wei Yun Ma, Adam Meyers, et al. 2009. Who, what, when, where, why? Comparing multiple approaches to the cross-lingual 5W task. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 423-431.

Rebecca J. Passonneau and Bob Carpenter. 2014. The benefits of a model of annotation. Transactions of the Association for Computational Linguistics, 2:311-326.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.

Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for analyzing and detecting biased language. In Proceedings of the 51st Annual Meeting of the ACL, pages 1650-1659, Sofia, Bulgaria.

Graham Russell. 1999. Errors of omission in translation. In Proceedings of the 8th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI 99), pages 128-138, University College, Chester, England.

György Szarvas, Veronika Vincze, Richárd Farkas, György Móra, and Iryna Gurevych. 2012. Cross-genre and cross-domain detection of semantic uncertainty. Computational Linguistics, 38(2):335-367.

Marta Vila, M. Antònia Martí, Horacio Rodríguez, et al. 2014. Is this a paraphrase? What kind? Paraphrase boundaries and typology. Volume 4, page 205. Scientific Research Publishing.

Veronika Vincze. 2013. Weasels, hedges and peacocks: Discourse-level uncertainty in Wikipedia articles. In Proceedings of the International Joint Conference on Natural Language Processing, pages 383-391, Nagoya, Japan.

Zhongyu Wei, Junwen Chen, Wei Gao, Binyang Li, Lanjun Zhou, Yulan He, and Kam-Fai Wong. 2013. An empirical study on uncertainty identification in social media context. In Proceedings of the 51st Annual Meeting of the ACL, pages 58-62, Sofia, Bulgaria.

Maayan Zhitomirsky-Geffet, Esther David, Moshe Koppel, Hodaya Uzan, and G. E. Gorman. 2016. Utilizing overtly political texts for fully automatic evaluation of political leaning of online news websites. Online Information Review, 40(3). |
554,055 | [] | sennmantics GmbH Thalwil
Avignon, April. France, Switzerland.
Introduction
Until quite recently, most critical edition 1 projects produced printed books, even though the production of these volumes has been supported by computers since the 1960s, e.g., for concordancing, collation, and statistical analyses, as well as for bibliography management, text editing, and typesetting (see, e.g., Froger (1970)).
Modern edition projects increasingly aim to produce digital editions that offer linking, dynamic display of alternative readings, or the integration of related images (in particular facsimiles of original documents), audio, or video. However, the new target medium does not just offer new possibilities, but it also demands sometimes fundamental changes in the editorial process.
One affected area is indexing. In printed books, the manually constructed back-of-the-book index is the only way for readers to access the contents in a non-linear fashion. A good index is not merely a list of words occurring in the text, but it specifies concepts and introduces synonyms and, through cross-references, related terms. The possibility to perform full-text searches on digital texts therefore does not render manually constructed indices obsolete, but complements them (see Savoy (2005) for an evaluation in a comparable scenario). For editions of historical texts, a manually constructed index is indispensable, as spelling variation, meaning shifts, and multilingualism make full-text retrieval difficult for both laypersons and experts.
In book form, collective editions of shorter texts, such as letters, treaties, or charters, form one monolithic entity. The electronic medium allows for direct linking and repurposing of individual parts (or content objects) of a collection in new contexts, so the individual edited text is much more independent than it was in a printed volume. This has direct implications for the construction of indices: Traditionally, an index for a book is compiled when it is completed; thus, when selecting keywords, the indexer does not consider individual texts in isolation, but rather within the specific context set by the book. An indexer may thus choose one particular term for describing a concept over another one because it occurs verbatim in the majority of texts; or an indexer may choose to leave out certain possible index terms because they are self-evident in the context of the book, e.g., the index to an edition of letters is unlikely to contain the index term letter.
In a digital edition, in contrast, index terms should be rather thought of as metadata assigned to individual content objects to enable retrieval and reuse in different contexts. For example, if an edition of a letter is included in a thematic collection containing various types of documents, it should have the metadata information letter, as this may be a distinguishing feature in this collection. It also means that a collection may contain items annotated by different editors, in contrast to backof-the-book indices, which are typically created by a single indexer.
In order to ensure interoperability of index terms, a controlled vocabulary should be used. We define a controlled vocabulary in accordance with ANSI/NISO Z39.19-2005 (ANSI/NISO, 2005) as a set of canonical terms that are managed by an authority according to certain rules; for multiple terms referring to the same concept, a preferred term (i.e., descriptor) is defined, and a term representing various concepts is made unambiguous. A controlled vocabulary may have defined types of relationships between terms, such as in a taxonomy (hierarchy), thesaurus (hierarchy, equivalence, association), or ontology (specific types of relationships like "is produced by").
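To make the terminology concrete, the sketch below models a vocabulary entry with a preferred term (descriptor), non-preferred synonyms, and broader/related terms, together with a lookup that maps any term to its descriptor. The entries and relations shown are invented examples, not part of the vocabulary under construction.

```python
from dataclasses import dataclass, field

@dataclass
class VocabularyEntry:
    descriptor: str                               # preferred term
    synonyms: set = field(default_factory=set)    # non-preferred terms
    broader: set = field(default_factory=set)     # hierarchical relation
    related: set = field(default_factory=set)     # associative relation

# Invented example entries.
vocabulary = {
    'Erbrecht': VocabularyEntry('Erbrecht', synonyms={'Erbfolge'}, broader={'Recht'}),
    'Bäcker': VocabularyEntry('Bäcker', broader={'Handwerker'}),
}

def lookup(term):
    """Map a preferred or non-preferred term to its descriptor."""
    for entry in vocabulary.values():
        if term == entry.descriptor or term in entry.synonyms:
            return entry.descriptor
    return None

print(lookup('Erbfolge'))  # -> 'Erbrecht'
```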
Construction of controlled vocabularies is a time-consuming and labor-intensive process. Since it requires deep semantic understanding, it cannot be fully automated. However, we noted in our experiments that some stages of building a controlled vocabulary (see Shearer (2004) for a nine-step procedure to build a thesaurus) can be partially automated. In particular, we propose to harvest the information contained in subject indices from earlier or related works. This paper describes ongoing work along these lines towards a controlled vocabulary for the Collection of Swiss Law Sources, a large-scale critical edition of historical texts. The vocabulary is intended to support editors in finding meaningful and agreed-upon descriptors and to facilitate retrieval of documents by both experts and laypersons. We expect that for our purposes a post-coordinate vocabulary 2 will be most useful, but the exact type and structure of the vocabulary will be defined at a later stage.
The main contributions of this paper are (1) to raise awareness for existing manually created information resources, which are potentially valuable for many tasks related to the processing of historical texts, and (2) to describe exploratory work towards using one type of resource, namely indices, for creating a controlled vocabulary.
The paper is structured as follows: Section 2 discusses related work; Section 3 gives an overview of the Collection and its subject indices; Section 4 describes the extraction of index terms and their conflation using base form reduction; Section 5 describes experiments with decompounding; in Section 6 we compare the extracted terms with the headwords of the HRG; Section 7 summarizes our findings and outlines future work.
Related Work
Vocabularies are inherently domain-specific. For our domain of historical legal texts, there is currently no controlled vocabulary that could be used as a basis. Despite some similarities, modern legal vocabularies such as Jurivoc 3 or the GLIN Subject Term Index 4 are not readily applicable to medieval and early modern jurisdictions (e.g., they lack concepts such as feudal tenure or witchcraft). The Vocabulaire international de la diplomatique (Milagros Cárcel Ortí, 1997) is an attempt at a vocabulary for describing types of historical documents, but it is not fine-grained enough and does not consider historical regional differences.
There are various approaches for automatically generating back-of-the-book indices and thus potential descriptors (e.g., Csomai and Mihalcea (2008)), but these are intended for book-length texts in a single language; in the case of historical editions, however, the documents differ widely in length, language, and age. Romanello et al. (2009) have parsed OCR-processed indices scriptorum and extracted information to support the creation of a collection of fragmentary texts. Even though this is a completely different task, the approach is somewhat related to ours, in that it aims to utilize the valuable information contained in manually created indices.
The Collection of Swiss Law Sources
The Collection of Swiss Law Sources is an edition of historical legal texts created on Swiss territory from the early Middle Ages up to 1798. The Collection includes acts, decrees, and ordinances, but also indentures, administrative documents, court transcripts, and other types of documents. Since 1894, the Law Sources Foundation has edited and published more than 60,000 pages of source material and commentary in over 100 volumes.
The primary users of the Collection are historians, but it is also an important source for the Swiss-German Dictionary, which documents the German language in Switzerland from the late Middle Ages to the 21st century. See Gschwend (2008) for a more detailed description of the Collection.
The primary sources are manuscripts in various regional historical forms of German, French, Italian, Rhaeto-Romanic, and Latin, which are transcribed, annotated, and commented by the editors. The critical apparatuses are in modern German, French, or Italian. Each volume contains an index of persons and places and a subject index. At the time of this writing, the Collection covers 17 of the 26 Swiss cantons to different extents.
The Collection is an ongoing project; future additions to the Collection will be created as digital editions. Instead of compiling a book, each source considered for addition to the Collection will be stored in a TEI-encoded XML document; virtual volumes, e.g., on a certain topic, place, or period, can then be created by selecting a subset of these documents. To make this possible, each document needs to contain the necessary metadata. Some of the metadata has traditionally been associated with each source text: A modern-language summary, the date, and the place of creation. In addition, each document will need to be assigned a set of descriptors.
The basis for the work described in this paper is the 22 latest volumes of the Collection, for which digital typesetting data is available; this subset is referred to as DS21 (Höfler and Piotrowski, 2011). We have converted the typesetting files of the indices into an XML format that makes the logical structure of the indices explicit, i.e., headwords, glosses, spelling variants, page and line references, etc. The conversion process is described in detail by Piotrowski (2010). DS21 contains volumes from ten cantons representing most linguistic and geographic regions of Switzerland and spans 1078 years. We therefore believe DS21 to be a good sample of the types of documents contained in the Collection, and we expect high-frequency index terms to be good candidates for inclusion in the controlled vocabulary. The subject indices of the DS21 volumes contain a total of 70,531 entries (plus 43,264 entries in the indices of persons and places). In the work described below we have focused on the German-language volumes; the volumes in French and Italian will be considered at a later stage. The subject indices of the German-language volumes comprise a total of 47,469 entries. Figure 1 shows an excerpt of a subject index as it appears in print; Figure 2 shows two of the entries in the XML format we used as basis for the experiments described here. Since the subject indices also serve as glossaries, a particular feature is that they contain both historical and modern headwords; words in italics are modern terms, all other are historical words.
Extracting and Conflating Index Terms
Due to the high variability of the historical index terms, we decided to first concentrate on the modern index terms. Since different historians have worked on the subject indices, our first question was whether the extracted terms would overlap at all, and, if they do, to what extent and in which areas. In total, 6370 subject index word forms were extracted using a Perl script from the 16 German-language volumes. In a first step towards merging the extracted keywords, we manually removed irrelevant terms from the list of unique keywords (e.g., historical terms mistagged as modern terms), resulting in 5138 terms. We normalized the remaining entries by removing punctuation and grammatical information given with some entries. About 85% of the unique terms occur only once. Thus, the vast majority of terms are associated with a specific volume.

Of the 15% of keywords that occur more than once, the most frequent one is Erbrecht 'inheritance law' with 10 appearances. Although specific legal terms like Erbrecht are, as would be expected, relatively frequent, a similar number of keywords is linked to people's social, religious, and professional roles (reflected in terms like vagrant, baptist, pope, baker, tanner, etc.), together with terminology related to trades (for example livestock trade, animal market, sawmill). This indicates that a controlled vocabulary for the Collection should not only take into account legal terminology but also cover roles and trades, which could potentially be handled by a separate controlled vocabulary facet (for a list of potential law subject facets see also Broughton (2010, p. 38)).
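The bookkeeping behind these counts can be pictured as follows: normalise each extracted modern index term and count how often it occurs across volumes. The normalisation rules and the toy volume data below are our own simplifications, not the project's actual Perl scripts.

```python
import re
from collections import Counter

def normalise(term):
    term = re.sub(r'\(.*?\)', '', term)     # drop grammatical info in parentheses
    term = re.sub(r'[^\w\s-]', '', term)    # drop punctuation
    return term.strip().lower()

# volume -> modern index terms extracted from its subject index (toy data)
volumes = {
    'Volume A': ['Erbrecht', 'Bäcker', 'Kapelle'],
    'Volume B': ['Erbrecht', 'Gerber', 'Kapellen'],
}

term_counts = Counter(
    normalise(t) for terms in volumes.values() for t in terms
)
singletons = [t for t, c in term_counts.items() if c == 1]
print(f"{len(singletons)}/{len(term_counts)} terms occur only once")
```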
We were surprised by the small intersection between the volumes' subject indices. Looking for ways to further conflate the terms, we noted a number of mismatches due to morphological variation (such as singular and plural forms), even though subject indices are not as inflectionally rich as normal German text.
Since many index terms are highly domain-specific or specific to Swiss German (e.g., compounds of the term Anke 'butter' like Ankenballen or Ankenhaus), we did not use a rule-based morphological analyzer (such as GERTWOL, Stripy Zebra, or Morphisto; for an overview see Mahlow and Piotrowski (2009)), but the Baseforms tool from the ASV Toolbox (Biemann et al., 2008), which is based on pretree classifiers. The Baseforms tool does not perform morphological analysis, but is more akin to a stemmer, so that its output is not necessarily linguistically correct; however, since we are primarily interested in term conflation, this is not a major problem. When the output of the system was empty or malformed, we used the original term to ensure maximum overlap. We manually reviewed and, where necessary, corrected the base forms, also to get a better understanding of the kind of potential conflations. This cut down the list of keywords from 5138 to 4881 terms, i.e., 490 terms were morphological variants that could be conflated to 233 "concepts."
The majority of term conflations concern variation in number (Kapelle 'chapel' and Kapellen 'chapels'), derivations (Heirat 'marriage' and heiraten 'to marry'), and variant compound forms (Lehenherr and Lehensherr 'liege').
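Schematically, the conflation step groups surface terms under a (manually corrected) base form. The mapping below stands in for the output of the Baseforms tool and uses only the variant pairs mentioned above; it is purely illustrative.

```python
from collections import defaultdict

# Surface term -> corrected base form (stand-in for the Baseforms tool output).
base_forms = {
    'Kapelle': 'Kapelle',
    'Kapellen': 'Kapelle',
    'Heirat': 'Heirat',
    'heiraten': 'Heirat',
    'Lehenherr': 'Lehenherr',
    'Lehensherr': 'Lehenherr',
}

concepts = defaultdict(set)
for surface, base in base_forms.items():
    concepts[base].add(surface)

conflated = {base: variants for base, variants in concepts.items() if len(variants) > 1}
print(conflated)
# e.g. {'Kapelle': {'Kapelle', 'Kapellen'}, 'Heirat': {'Heirat', 'heiraten'}, ...}
```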
Experiments with Compounds
German is well-known for its tendency to form compound nouns to express complex concepts. For vocabulary construction, compounds are interesting because related terms often share constituent parts. Our idea was therefore to use decompounding to identify potential related terms. The relationships between these terms are usually weaker than between equivalent terms (like plural and singular variants), but will still be valuable in building a controlled vocabulary. For the following experiments we used the decompounding as produced by the ASV Baseforms tool with manual corrections.
In a first experiment, we extracted groups of compound-word terms that share the same first element. This gives us, for example, Bau 'construction', Bauarbeiter 'construction worker', and Bauherr 'constructor'. The terms found in this way could, for example, be used to build a map on the topic "construction" as shown in Figure 3. In total, we found 2555 matches by first compound elements. Note that partial matching without compound splitting would lead to unwanted hits like Bauer 'farmer' and Baumgarten 'tree garden'.
In a second experiment, we identified terms sharing the same last compound element. Overall this resulted in 2477 matches. Due to the structure of German compounds, terms sharing the final compound element are usually more closely related than those sharing the first element. Examples along the lines of Bau 'construction' are Hausbau 'house construction' and Kirchenbau 'church construction'; see Figure 4. Although not all of the matches will be equally relevant (for example Erbfall 'case of succession' and Wasserfall 'waterfall' are not semantically related), matches tend to point to terms on the same hierarchical level, meaning that the base form consisting of one element only (if it exists) acts as the broader term (Bau) of the compound matches which are the narrower terms (Hausbau and Kirchenbau).
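Both grouping experiments reduce to indexing the split compounds by their first or last constituent. The sketch below assumes the decompounding has already been done (the splits are given by hand) and uses the Bau examples from the text.

```python
from collections import defaultdict

# term -> compound constituents (as produced by decompounding, manually corrected)
splits = {
    'Bau': ['Bau'],
    'Bauarbeiter': ['Bau', 'Arbeiter'],
    'Bauherr': ['Bau', 'Herr'],
    'Hausbau': ['Haus', 'Bau'],
    'Kirchenbau': ['Kirchen', 'Bau'],
}

by_first, by_last = defaultdict(list), defaultdict(list)
for term, parts in splits.items():
    if len(parts) > 1:
        by_first[parts[0]].append(term)   # shared modifier: loosely related terms
        by_last[parts[-1]].append(term)   # shared head: candidate narrower terms

print(by_first['Bau'])   # ['Bauarbeiter', 'Bauherr']
print(by_last['Bau'])    # ['Hausbau', 'Kirchenbau']
```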
At the moment our approach does not take into account homonyms and polysemes 5 such as Gericht 'court' vs. Gericht 'dish' or Kirche 'church as a building' vs. Kirche 'church as an institution'. Such semantic unknowns would need to be analyzed in the context of the text passages that the back-of-the-book subject indices refer to. Such a semantic review will be conducted at a later stage when the terms are prepared to be grouped in a controlled vocabulary.
Comparison to HRG Headwords
As noted in Section 4, the majority of index terms occur only once, i.e., in a single volume. In order to answer the question of how many of our terms are just locally useful and how many may be of more general utility, we compared our list to the list of headwords of the Handwörterbuch zur deutschen Rechtsgeschichte (HRG) (Cordes et al., 2008-), the standard reference work on German history of law. The rationale is that the intersection of both lists contains those index terms that are highly likely to be useful as descriptors in a controlled vocabulary.
The comparison of the 3395 headwords taken from the online version of the HRG 6 (excluding entries for persons) with the 4881 stemmed index terms of our list yielded an intersection of 447 matches, i.e., 9% of our index terms also appear as headwords in the HRG. A closer inspection shows that the rather small intersection of terms is due to the broader scope of the Collection of Swiss Law Sources and the fact that the HRG focuses on German rather than Swiss history. The former is illustrated by the fact that the second most frequent term in our list of index terms after Erbrecht is Bäcker 'baker', which does not appear in the list of HRG keywords. While professional roles related to legal duties like Notar 'notary' or Landvogt 'bailiff', as well as religious roles like Papst 'pope' or Kleriker 'clergyman', are also HRG headwords, terminology related to crafts and trades, like Gerber 'tanner' or Schuhmacher 'shoemaker', is rare.

5 In the linguistic sense; ANSI/NISO (2005) defines homonyms and polysemes differently and would refer to homographs in this context without distinguishing whether one or more lexemes are involved.
6 http://www.hrgdigital.de/
However, from a legal perspective, the terms in the intersection between the Collection and the HRG are indeed highly relevant. We also noted that high-frequency index terms from the Collection are in fact more likely to appear in the list of HRG headwords than low-frequency terms. As expected, Erbrecht 'inheritance law', the most frequent term in our list of index terms also occurs in the list of HRG headwords. A third of the terms appearing three times or more (306 terms) are also covered by the HRG (102 headwords), in contrast to an overlap of less than 7% for the terms occurring only once in the indices of the Collection. The index terms that occur more than once in our indices (i.e., 18% of our 4881 base form terms) account for over 46% of the terms in the intersection with the HRG headwords.
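The comparison itself is a set intersection plus a frequency-stratified breakdown, along the lines of the sketch below; the term counts and headword set are invented stand-ins for the DS21 index terms and the HRG headword list.

```python
def overlap_report(index_term_counts, hrg_headwords):
    """index_term_counts: dict mapping term -> frequency across volume indices;
    hrg_headwords: set of HRG headwords."""
    shared = set(index_term_counts) & hrg_headwords
    frequent = {t for t, c in index_term_counts.items() if c >= 3}
    return {
        'overlap': len(shared),
        'overlap_pct': 100 * len(shared) / len(index_term_counts),
        'frequent_overlap_pct': 100 * len(frequent & hrg_headwords) / max(len(frequent), 1),
    }

# Invented toy data.
index_terms = {'Erbrecht': 10, 'Bäcker': 9, 'Notar': 4, 'Ankenhaus': 1}
hrg = {'Erbrecht', 'Notar', 'Landvogt'}
print(overlap_report(index_terms, hrg))
```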
Conclusion and Future Work
In this paper, we have described ongoing work on the extraction of index terms from back-of-the-book subject indices in order to build a controlled vocabulary for the Collection of Swiss Law Sources. We have used base form reduction for term conflation and decompounding for discovering potential hierarchical relations.
We have found that index terms that are also HRG headwords are likely to be highly relevant; the terms in the intersection between our index terms and the HRG headwords will therefore be reviewed by the editors of the Collection to verify whether they are a good foundation for a controlled vocabulary.
At this point, we have only examined index terms in modern language. However, the majority (85%) of modern word forms appears only once; this means that the bulk of the concepts contained in the indices must be represented by historical-language index terms. For the construction of a controlled vocabulary it is thus necessary to also consider these terms.
While there are only 6370 modern word forms (5160 unique terms) in the subject indices, we have extracted 41,099 historical word forms (28,860 unique terms). The reduction of about 30% for historical versus about 20% for modern terms indicates that historical index terms are more evenly spread across the analyzed volumes.
The percentage of historical index terms occurring only once is only slightly lower than for modern terms (80% vs. 85%); however, the historical terms exhibit a high degree of spelling variation. We therefore expect that many terms are spelling variants that can be conflated. We are currently working on methods for clustering different historical spellings of related terms.
Figure 1: Excerpt of a subject index as it appears in print (entries from weinzehnt to weltlich, with historical spelling variants, glosses, and page/line references).
Figure 2: XML version (automatically created from typesetting data) of the first two entries from Figure 1.
Figure 3: Map of terms based on Bau 'construction' with matching first compound elements.
Figure 4: Map of terms based on Bau 'construction' with matching last compound elements.
1 In a narrow sense, a critical edition is a scholarly edition that tries to recover the most authentic version of a historical text from extant sources. We use the term loosely to include other types of scholarly editions, in particular diplomatic editions.
2 See ANSI/NISO (2005) for a definition of post-coordination.
3 http://bger.ch/jurisdiction-jurivoc-home
4 http://glin.gov/
Acknowledgements

We would like to thank Pascale Sutter for fruitful discussions and for her historical expertise.
References

ANSI/NISO. 2005. Z39.19-2005. Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies.

Chris Biemann, Uwe Quasthoff, Gerhard Heyer, and Florian Holz. 2008. ASV Toolbox: a modular collection of language exploration tools. In Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odjik, Stelios Piperidis, and Daniel Tapias, editors, Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), pages 1760-1767, Paris. European Language Resources Association (ELRA).

Vanda Broughton. 2010. The use and construction of thesauri for legal documentation. Legal Information Management, 10(01):35-42.

Albrecht Cordes, Heiner Lück, Dieter Werkmüller, and Ruth Schmidt-Wiegand, editors. 2008-. Handwörterbuch zur deutschen Rechtsgeschichte. Erich Schmidt, Berlin, Germany, 2nd edition.

Andras Csomai and Rada Mihalcea. 2008. Linguistically motivated features for enhanced back-of-the-book indexing. In Proceedings of ACL-08: HLT, pages 932-940, Morristown, NJ. ACL.

Jacques Froger. 1970. La critique des textes et l'ordinateur. Vigiliae Christianae, 24(3):210-217.

Lukas Gschwend. 2008. Rechtshistorische Grundlagenforschung: Die Sammlung Schweizerischer Rechtsquellen. Schweizerische Zeitschrift für Geschichte, 58(1):4-19.

Stefan Höfler and Michael Piotrowski. 2011. Building corpora for the philological study of Swiss legal texts. Journal for Language Technology and Computational Linguistics, 26(2):77-88.

Cerstin Mahlow and Michael Piotrowski. 2009. A target-driven evaluation of morphological components for German. In Simon Clematide, Manfred Klenner, and Martin Volk, editors, Searching Answers - Festschrift in Honour of Michael Hess on the Occasion of his 60th Birthday, pages 85-99. MV-Verlag, Münster, Germany.

Maria Milagros Cárcel Ortí, editor. 1997. Vocabulaire international de la diplomatique. Universitat de València, Valencia, Spain, second edition.

Michael Piotrowski. 2010. Document conversion for cultural heritage texts: FrameMaker to HTML revisited. In Apostolos Antonacopoulos, Michael Gormish, and Rolf Ingold, editors, DocEng 2010: Proceedings of the 10th ACM Symposium on Document Engineering, pages 223-226, New York, NY. ACM.

Rechtsquellenstiftung, editor. 2007. Rechtsquellen der Stadt und Herrschaft Rapperswil, volume SSRQ SG II/2/1 (Die Rechtsquellen der Stadt und Herrschaft Rapperswil) of Sammlung Schweizerischer Rechtsquellen. Schwabe, Basel, Switzerland. Prepared by Pascale Sutter.

Matteo Romanello, Monica Berti, Alison Babeu, and Gregory Crane. 2009. When printed hypertexts go digital: information extraction from the parsing of indices. In Proceedings of the 20th ACM Conference on Hypertext and Hypermedia (HT '09), pages 357-358, New York, NY. ACM.

Jacques Savoy. 2005. Bibliographic database access using free-text and controlled vocabulary: an evaluation. Information Processing & Management, 41(4):873-890.

James R. Shearer. 2004. A practical exercise in building a thesaurus. Cataloging & Classification Quarterly, 37(3-4):35-56. |
||
253,116,532 | Eeny, meeny, miny, moe. How to choose data for morphological inflection | Data scarcity is a widespread problem in numerous natural language processing (NLP) tasks for low-resource languages. Within morphology, the labour-intensive work of tagging/glossing data is a serious bottleneck for both NLP and language documentation. Active learning (AL) aims to reduce the cost of data annotation by selecting data that is most informative for improving the model. In this paper, we explore four sampling strategies for the task of morphological inflection using a Transformer model: a pair of oracle experiments where data is chosen based on whether the model already can or cannot inflect the test forms correctly, as well as strategies based on high/low model confidence, entropy, as well as random selection. We investigate the robustness of each strategy across 30 typologically diverse languages. We also perform a more in-depth case study of Natügu. Our results show a clear benefit to selecting data based on model confidence and entropy. Unsurprisingly, the oracle experiment, where only incorrectly handled forms are chosen for further training, which is presented as a proxy for linguist/language consultant feedback, shows the most improvement. This is followed closely by choosing low-confidence and high-entropy predictions. We also show that despite the conventional wisdom of larger data sets yielding better accuracy, introducing more instances of high-confidence or low-entropy forms, or forms that the model can already inflect correctly, can reduce model performance. | [
219966792,
8841327,
207986455,
3278107,
224724415
] | Eeny, meeny, miny, moe. How to choose data for morphological inflection
Saliha Muradoglu [email protected]
Mans Hulden [email protected]
The Australian National University (ANU); University of Colorado
ARC Centre of Excellence for the Dynamics of Language (CoEDL)
Eeny, meeny, miny, moe. How to choose data for morphological inflection
Data scarcity is a widespread problem in numerous natural language processing (NLP) tasks for low-resource languages. Within morphology, the labour-intensive work of tagging/glossing data is a serious bottleneck for both NLP and language documentation. Active learning (AL) aims to reduce the cost of data annotation by selecting data that is most informative for improving the model. In this paper, we explore four sampling strategies for the task of morphological inflection using a Transformer model: a pair of oracle experiments where data is chosen based on whether the model already can or cannot inflect the test forms correctly, as well as strategies based on high/low model confidence, entropy, as well as random selection. We investigate the robustness of each strategy across 30 typologically diverse languages. We also perform a more in-depth case study of Natügu. Our results show a clear benefit to selecting data based on model confidence and entropy. Unsurprisingly, the oracle experiment, where only incorrectly handled forms are chosen for further training, which is presented as a proxy for linguist/language consultant feedback, shows the most improvement. This is followed closely by choosing low-confidence and high-entropy predictions. We also show that despite the conventional wisdom of larger data sets yielding better accuracy, introducing more instances of high-confidence or low-entropy forms, or forms that the model can already inflect correctly, can reduce model performance.
Introduction
The need for linguistically annotated data sets is a drive that unites many fields within linguistics. Computational linguists often use labelled data sets for developing NLP systems. Theoretical linguists may utilise corpora for constructing statistical argumentation to support hypotheses about language or phenomena. Documentary linguists create interlinear glossed texts (IGTs) to preserve linguistic and cultural examples, which typically aids in generating a grammatical description. With the renewed focus on low-resource languages and diversity in NLP and the urgency propelled by language extinction, there is widespread interest in addressing this bottleneck.
One method for reducing annotation costs is active learning (AL). AL is an iterative process to optimise model performance by choosing the most critical examples to label. It has been successfully employed for various NLP tasks, including deep pre-trained models (BERT) (Ein-Dor et al., 2020), semantic role labelling (Myers and Palmer, 2021), named entity recognition (Shen et al., 2017), word sense disambiguation (Zhu and Hovy, 2007), sentiment classification (Dong et al., 2018) and machine translation (Zeng et al., 2019; Zhang et al., 2018). The iterative nature of AL aligns nicely with the language documentation process. It can be tied into the workflow of a field linguist who consults with a language informant or visits a field site in a periodic manner. Prior to a field trip, a linguist typically prepares material/questions (such as elicitations or picture tasks 1 ) for language consultants, which may focus on elements of the language they are working to describe or on material creation (e.g., pedagogical). We propose AL as a method which can provide a supplementary line of insight into the data collection process, particularly for communities that wish to develop and engage with language technology and/or resource building.
Previous work by Palmer (2009) details the efficiency gains from AL in the context of language documentation for the task of morpheme labelling. With deep learning models leading performance for the task of morphological analysis (Pimentel et al., 2021; Vylomova et al., 2020; McCarthy et al., 2019), AL in the context of neural methods is needed. This paper addresses the following question: How can we identify the type of data needed to improve model performance? To answer this, we explore the use of AL for the task of morphological inflection using a Transformer model. We run AL simulation experiments with four different sampling strategies: (1) correctness oracle, (2) model confidence, (3) entropy and (4) random selection. These strategies are tested across 30 typologically diverse languages and a 10-cycle iterative experiment using Natügu as a case study.
Figure 1: The accuracy for each trained model, starting from the baseline (cycle 1). Each cycle, 250 instances are re-sampled via the seven sampling methods: correct/incorrect, high/low model confidence, high/low entropy and random (coded with colour). The reported error bars are calculated across 3 separate runs. See Table 1 in Appendix for more detail. After cycle 2, the same sampling strategy is applied to that stream of experiment, e.g. for the lowest log-likelihood strategy, from cycle 2 to 10 the same strategy is used.
Data
We use data from the UniMorph Project (McCarthy et al., 2020), Interlinear Glossed Texts (IGT) from Moeller et al. (2020) and SIGMORPHON (Vylomova et al., 2020; Pimentel et al., 2021). In addition to the data availability, we consider typological diversity when selecting languages to include. Broadly, we attempt to include types of languages that exhibit varying degrees of complexity for inflection. We also consider morphological characteristics coded in WALS: prefixing vs. suffixing (Dryer, 2013), inflectional synthesis of the verb (Bickel and Nichols, 2013b) and exponence (Bickel and Nichols, 2013a). An additional consideration is the paradigm size for the morphological system modelled.
We note data source type to account for the variation in standard across Wikipedia, IGT field data, glossed examples from grammars and data generated from computational grammars. We train the model as if we were addressing an 'inflection' task (Vylomova et al., 2020). The data is in the form of triplets: lexeme, morphosyntactic tags and the desired output inflected form (e.g. ⟨walk, V;PST, walked⟩) 2 . Each model is trained with the fairseq Transformer (Ott et al., 2019) and our hyperparameters follow Liu and Hulden (2020).
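The exact data preparation pipeline is not spelled out in the paper; as a rough illustration, the sketch below (in Python) shows one common way of turning such ⟨lexeme, tags, form⟩ triplets into character-level source/target files for a fairseq-style sequence-to-sequence Transformer. The space-separated character-plus-tag encoding and the file naming are assumptions, not the authors' code.

# Sketch (not the authors' code): turning UniMorph-style triplets into
# character-level source/target files for a seq2seq Transformer.
# The space-separated character + tag encoding below is one common
# convention for this task, assumed here for illustration.

def triplet_to_pair(lemma, tags, form):
    # Source: lemma characters followed by morphosyntactic tags,
    # e.g. "w a l k V PST"; target: characters of the inflected form.
    src = " ".join(list(lemma) + tags.split(";"))
    tgt = " ".join(form)
    return src, tgt

def write_pair_files(triplets, prefix):
    with open(f"{prefix}.src", "w", encoding="utf-8") as fs, \
         open(f"{prefix}.tgt", "w", encoding="utf-8") as ft:
        for lemma, tags, form in triplets:
            src, tgt = triplet_to_pair(lemma, tags, form)
            fs.write(src + "\n")
            ft.write(tgt + "\n")

# Example: write_pair_files([("walk", "V;PST", "walked")], "train")
# produces train.src = "w a l k V PST" and train.tgt = "w a l k e d".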
A baseline model is trained, after which more examples are resampled from the baseline test file using the methods detailed below. The initial baseline model is trained with 3,500 instances, 1,000 test and 500 for development. We resample 250 instances.
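The resampling procedure can be summarised as a loop over training cycles. The following schematic is not from the paper: train_model, score_pool and select_250 are hypothetical stand-ins for the fairseq training run, beam-search scoring of the remaining pool, and one of the sampling strategies described in the next section.

# Schematic active-learning loop: train a baseline, then repeatedly move
# 250 pool items into the training set according to a sampling strategy
# and retrain on the enlarged set.

def active_learning(train, pool, cycles, select_250):
    model = train_model(train)                 # baseline (cycle 1)
    for _ in range(cycles - 1):
        scored = score_pool(model, pool)       # predictions + log-likelihoods
        chosen = select_250(scored)            # e.g. lowest-confidence items
        train.extend(chosen)
        pool = [x for x in pool if x not in chosen]
        model = train_model(train)             # retrain from the larger set
    return model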
Sampling strategies
Oracle The oracle experiments serve as a proxy for linguist/language expert feedback. 250 examples are sampled based on whether the predicted form is correct/incorrect. The initial filter is supplemented with the following criteria: (1) if there are fewer than 250 incorrect forms, the remaining slots are filled in accordance with examples that exhibit the smallest difference between the first and second output form's log-likelihood, (2) in the case of more than 250 incorrect forms, the incorrect instances are ranked based on the maximum Levenshtein distance between the predicted and target forms. The same selection criteria are applicable for the counterpart correct experiment, with reversed limits (e.g. in the case of less than 250 correct forms, the instances with the largest difference between the first and second log-likelihood are considered).
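As an illustration only, a minimal sketch of this oracle selection might look as follows; the item fields (pred, gold, beam_logliks) are assumed names, and the Levenshtein routine is a generic implementation rather than the authors' code.

# Oracle ("incorrect") selection sketch: prefer wrongly inflected forms,
# ranked by Levenshtein distance between prediction and target; if fewer
# than 250 are wrong, fill the rest with the items whose top-two beam
# hypotheses have the smallest log-likelihood margin.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def oracle_incorrect_sample(items, k=250):
    wrong = [x for x in items if x["pred"] != x["gold"]]
    right = [x for x in items if x["pred"] == x["gold"]]
    # Hardest errors first: largest edit distance between prediction and target.
    wrong.sort(key=lambda x: -levenshtein(x["pred"], x["gold"]))
    chosen = wrong[:k]
    if len(chosen) < k:
        # Fill with the most uncertain correct items: smallest margin between
        # the first and second beam hypotheses' log-likelihoods.
        right.sort(key=lambda x: x["beam_logliks"][0] - x["beam_logliks"][1])
        chosen += right[: k - len(chosen)]
    return chosen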
Model Confidence
The instances introduced to the training data are sampled based on the model confidence for each form. In this particular strategy, we only record the log-likelihood for the highest-ranked prediction in the beam.
We further examine the correlation between the log-likelihood (continuous variable) and accuracy (dichotomous variable) of the best prediction generated by the model by calculating the Point-Biserial Correlation Coefficient (PBCC). Across the 30 languages we study, the average PBCC is 0.388. Like all correlation coefficients, the PBCC measures the strength of the correlation, and the reported value ranges from -1 to +1, where -1 indicates an inverse association, +1 indicates a positive association, and 0 indicates no association at all.
2 Data and code available at https://github.com/smuradoglu/ALmorphinfl
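A small sketch of how such a correlation can be computed (using SciPy's point-biserial routine) is given below; the item fields are the same assumed names as above, and the code is illustrative rather than the paper's implementation.

# Measuring how well model confidence tracks correctness with the
# point-biserial correlation. 'correct' is the dichotomous variable
# (1 if the top prediction matches the gold form), 'loglik' the
# continuous log-likelihood of that prediction.

from scipy.stats import pointbiserialr

def confidence_correctness_pbcc(items):
    correct = [int(x["pred"] == x["gold"]) for x in items]
    loglik = [x["beam_logliks"][0] for x in items]
    r, p = pointbiserialr(correct, loglik)
    return r, p   # the paper reports, e.g., r of about 0.605 for Natügu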
Entropy Here we expand upon the previous strategy, model confidence. We consider the distribution of the ranked output predictions for a particular input and approximate its entropy $-\sum_i p_i \log(p_i)$ by only considering predictions where $p_i \geq 0.05$, i.e. we calculate $-\sum_i p_i \log(p_i)$ for all $p_i \geq 0.05$. The model-generated log-likelihoods are converted to probabilities and renormalised across the outputs generated by beam search:
$$p_i = \frac{\tilde{p}_i}{\sum_{j=1}^{b} \tilde{p}_j},$$
where $\tilde{p}_i$ is the (unnormalised) probability of the $i$-th prediction and $b$ is the number of predictions we retrieve from the beam search.
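For concreteness, a minimal sketch of this truncated-entropy computation might look as follows (illustrative only; beam_logliks is an assumed name for the ranked beam log-likelihoods).

# Entropy strategy sketch: convert beam log-likelihoods to probabilities,
# renormalise over the beam, and sum -p*log(p) only over hypotheses whose
# renormalised probability is at least 0.05.

import math

def truncated_beam_entropy(beam_logliks, threshold=0.05):
    probs = [math.exp(ll) for ll in beam_logliks]
    total = sum(probs)
    renorm = [p / total for p in probs]
    return -sum(p * math.log(p) for p in renorm if p >= threshold)

# Highest-entropy sampling then picks the 250 pool items with the largest
# truncated_beam_entropy value.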
Random We contrast the previous methods for re-sampling with random data selection. To establish whether the change in accuracy is statistically significant, we report the average across three independent runs and the standard deviation across the measured accuracy.
Results and Discussion
To simulate a documentation process, we have chosen Natügu as a case study. The inflection data is from Moeller et al. (2020) and is derived from IGTs, a form that is commonly utilised by field linguists. Our choice of language is further motivated by the morphological complexity exhibited by Natügu. By all accounts Natügu showcases complex morphology (Wurm, 1976; Naess and Boerger, 2008), particularly on the verb. Historically, this observed complexity led to the language family being classified as Papuan instead of Austronesian. Additionally, we observe a positive correlation between prediction correctness and model confidence (0.605). In fact, 4 out of the top 8 correlations (as shown in Figure 2) are languages with IGTs as a data source. For these reasons, we have chosen to examine iterative sampling over 10 cycles. Figure 1 summarises our results for Natügu. The re-sampling process is iterated over 10 cycles. The first cycle is the baseline/seed run and consists of a 600 instance training set. To account for the impact of random factors affecting the initial training data selection, we have conducted 3 independent seed runs, differing solely on the initial training set. The average accuracy and corresponding standard deviation is reported with the error bars. 3 The small starting size is motivated by the parallels with language documentation efforts, which are typically a low-resource setting. In each cycle, 250 forms are sampled via the corresponding sampling strategy. By the last cycle the training data consists of 2,850 instances.
Aside from the 3rd and 10th cycle, the lowest log-likelihood sampling consistently provides the greatest improvement. For these two cycles, sampling based on incorrect forms outperforms selection based on low confidence. In general, the top 3 selection methods are ranked as follows: low log-likelihood, incorrect and highest entropy forms. We note the possible interplay between paradigm size (907 unique tag combinations) and training set size (1,100 by cycle 3); unseen morphosyntactic categories will be most informative and presumably beneficial to model performance.
3 Individual values can be found in Table 1 of the Appendix.
Given the strong correlation between prediction accuracy and model confidence for Natügu, we expect similarity in trajectory across cycle number and accuracy for the oracle and model-confidence based sampling strategies. Figure 1 verifies these forecasts; we see that the sampling based on prediction correctness (in light blue) and the sampling based on the highest log-likelihood (in light green) almost look identical. The same is observable for low log-likelihood (in red) and sampling based on incorrect prediction (green).
The lowest log-likelihood sampling method can be seen as an approximation for the highest entropy selection method, and by extension, the highest log-likelihood as an approximation for the lowest entropy selection. Our results for iterative AL for Natügu show that choosing by approximation is a higher-risk endeavour. The choice either works really well or not at all. When we contrast low entropy and high model confidence as a selection strategy, we can see that low entropy limits the impact of high model confidence, since it accounts for a distribution rather than the single-value approximation. We observe similar behaviour between the high entropy and low confidence selection strategies. Random sampling shows gradual improvement.
Work by Yuan et al. (2020) highlights the issues with uncertainty sampling for deep learning models, noting that neural networks are poorly calibrated (Guo et al., 2017) and that the correlation between high confidence and correctness is not well established. We explore this correlation for our models in Figure 2. We observe a similar uncertainty, with an overall slight positive correlation across the 30 languages examined. Despite this, our results show that data selection based on low model confidence yields significant improvement of model accuracy. The work presented here is intended as a preliminary baseline; we leave it to future work to consider calibration methods such as temperature scaling.
Interestingly, despite an increase in training data size, introducing new data that the model can already inflect correctly, or low-entropy or high-confidence forms, actually reduces model performance, despite the widely-held notion that more data is better. Another recent study by Samir and Silfverberg (2022) reports similar behaviour, where data hallucination reduces prediction accuracy for words that exhibited reduplication. We extend the same sampling strategies to 30 different languages for one round of re-training. The results are summarised in Figure 3. Within the 30 languages we made sure to include languages with large inflection table sizes (ranging from 12 to 700+), different scripts (Latin, Cyrillic, Arabic, Hangul, Ge'ez and Gujarati) and morphological typology (agglutinating, fusional, polysynthetic). We code for the source of the data, and see no particular deviation from the overall observed behaviour. The reported error bars for random sampling correspond to the standard deviation across three independent runs of random sampling.
It is clear that in general, the sampling strategies can be ordered for prediction accuracy improvement in the following manner: incorrect, lowest log-likelihood, highest entropy, random, highest log-likelihood, lowest entropy and finally correct form sampling. While a handful of languages deviate from this pattern (e.g. Swahili or Dido), 4 it holds true for a majority of the languages considered.
Conclusion
In this paper we examine four different sampling strategies within an AL framework for modelling morphological inflection using a Transformer model. We consider correct/incorrect prediction, model confidence, entropy and random selection as sampling strategies. Our results clearly show that AL can significantly improve learning rates for morphological inflection. Unsurprisingly, adding oracle-indicated incorrect forms for training yields the greatest model improvement. In the absence of a language expert, model confidence can be used to prioritise data annotation. This holds true across 30 different languages. We also show that larger datasets do not always yield better results; the diversity of the training set matters.
Future research should extend the analysis to incorporate language-specific factors-such as model performance for each morphosyntactic slot within the morphological paradigm.
Limitations
The primary limitation of this study is that the results are not evaluated in a real life documentation scenario. While we have tried to address this gap by noting the source of data, and have enlisted IGT data to serve as a proxy, we acknowledge that fieldwork data is often inconsistent, noisy and requires much more data cleaning. The data used for these experiments is, for the most part, already structured as a paradigm.
In addition, the simple metric of accuracy can be crude and is often prone to some degree of fluctuation. To minimise these effects we have considered the change in accuracy across sampling cycles instead. Lastly, we have tried to collate a diverse set of languages to consider. However, this is largely limited by the availability of data. It is likely that several morphophonological phenomena are not included within the data sets used here.
Table 1: Model accuracies for iterative sampling for Natügu, across the lowest and highest log-likelihoods and random sampling strategies. S1, S2, S3 correspond to seed 1, seed 2 and seed 3 respectively. Avg and std indicate the average value across the three seed runs and the standard deviation. Data used to generate Figure 1.
Table 2: Model accuracies for iterative sampling for Natügu, across incorrect, correct, highest and lowest entropy sampling strategies. S1, S2, S3 correspond to seed 1, seed 2 and seed 3 respectively. Avg and std indicate the average value across the three seed runs and the standard deviation. Data used to generate Figure 1.
Table 3: Model accuracies for each sampling strategy, across 30 different languages. Data used to generate Figure 3.
Figure 2: The calculated Point-Biserial Correlation Coefficient (PBCC) between correct prediction and the model log-likelihood, across 30 different languages. The source of the data is also noted with colour.
Figure 3: The change in accuracy (from the established baseline) is reported with each sampling strategy, across 30 different languages (coded with colour). The source of data is also noted with tick shapes.
References
Balthasar Bickel and Johanna Nichols. 2013a. Exponence of selected inflectional formatives. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Balthasar Bickel and Johanna Nichols. 2013b. Inflectional synthesis of the verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confidence modeling for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 743-753, Melbourne, Australia. Association for Computational Linguistics.
Matthew S. Dryer. 2013. Prefixing vs. suffixing in inflectional morphology. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7949-7962, Online. Association for Computational Linguistics.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR.
Ling Liu and Mans Hulden. 2020. Leveraging principal parts for morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 153-161, Online. Association for Computational Linguistics.
Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, Timofey Arkhangelskiy, Nataly Krizhanovsky, Andrew Krizhanovsky, Elena Klyachko, Alexey Sorokin, John Mansfield, Valts Ernštreits, Yuval Pinter, Cassandra L. Jacobs, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2020. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3922-3931, Marseille, France. European Language Resources Association.
Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sabrina J. Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden. 2019. The SIGMORPHON 2019 shared task: Morphological analysis in context and cross-lingual transfer for inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244, Florence, Italy. Association for Computational Linguistics.
Sarah Moeller, Ling Liu, Changbing Yang, Katharina Kann, and Mans Hulden. 2020. IGT2P: From interlinear glossed texts to paradigms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5251-5262, Online. Association for Computational Linguistics.
Skatje Myers and Martha Palmer. 2021. Tuning deep active learning for semantic role labeling. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 212-221, Groningen, The Netherlands (online). Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
Alexis Mary Palmer. 2009. Semi-automated annotation and active learning for language documentation. Ph.D. thesis.
Table 1 (fragment): cycle 1, training size 600, accuracy 0.618.
Table 2: Correct and model log-likelihood correlation based on baseline for each language. The reported value is a Point-Biserial Correlation Coefficient (PBCC) with the respective p-value. Data used to generate Figure 2.

Language             Iso-code   PBCC     p-value
Adyghe               ady        0.031    3.29E-01
Amharic              amh        0.643    5.74E-118
Arabic               ara        0.394    1.53E-38
Arapaho              arp        0.607    1.08E-101
Aymara               aym        0.394    1.64E-38
Asháninka            cni        0.799    1.10E-222
Palantla Chinantec   cpa        0.355    4.10E-31
Cree                 cre        0.069    2.91E-02
Dido                 ddo        0.550    4.24E-80
German               deu        0.363    1.60E-32
Basque               eus        0.707    3.67E-152
Evenki               evn        0.532    2.94E-74
Persian              fas        0.242    7.68E-15
Finnish              fin        0.135    1.84E-05
Irish                gle        0.172    4.24E-08
Gujarati             guj        0.168    8.40E-08
Haida                hai        0.240    1.47E-14
Indonesian           ind        0.364    1.05E-32
Halh Mongolian       khk        0.370    8.44E-34
Korean               kor        0.431    1.78E-46
Manipuri             mni        0.709    2.34E-153
Navaho               nav        0.396    7.66E-39
Natügu               ntu        0.605    1.13E-100
Quechua              que        0.838    3.44E-265
Russian              rus        0.282    9.57E-20
Seneca               see        0.306    3.75E-23
Spanish              spa        -0.069   3.02E-02
Swahili              swc        0.324    6.16E-26
Turkish              tur        0.129    4.17E-05
Zulu                 zul        0.540    9.23E-77
Resample by (Table 3 data):

Language  Source      # Tables  Baseline  Correct  Incorrect  Lowest log(p_i)  Highest log(p_i)  Lowest Entropy  Highest Entropy  Random1  Random2  Random3  Random avg ± std
ady       Wiki        430   0.986  0.984  0.998  0.990  0.988  0.985  0.992  0.989  0.990  0.990  0.990 ± 0.001
amh       c. grammar  285   0.983  0.977  0.989  0.993  0.975  0.965  0.987  0.980  0.977  0.977  0.978 ± 0.002
ara       c. grammar  83    0.800  0.755  0.924  0.920  0.730  0.754  0.838  0.818  0.805  0.824  0.816 ± 0.010
aym       grammar     55    0.933  0.924  0.991  0.988  0.923  0.926  0.959  0.939  0.941  0.938  0.939 ± 0.002
cpa       grammar     490   0.843  0.798  0.895  0.899  0.822  0.822  0.866  0.820  0.857  0.832  0.836 ± 0.019
cre       grammar     22    0.113  0.029  0.130  0.125  0.096  0.066  0.133  0.110  0.116  0.115  0.114 ± 0.003
deu       wiki        450   0.937  0.911  0.979  0.945  0.942  0.928  0.948  0.935  0.939  0.920  0.931 ± 0.010
eus       wiki        12    0.755  0.738  0.914  0.905  0.660  0.612  0.897  0.813  0.754  0.784  0.784 ± 0.030
fas       wiki        39    0.178  0.044  0.222  0.201  0.143  0.064  0.222  0.186  0.182  0.178  0.182 ± 0.004
fin       wiki        97    0.587  0.489  0.717  0.601  0.539  0.528  0.676  0.593  0.584  0.590  0.589 ± 0.005
guj       wiki        280   0.620  0.511  0.743  0.603  0.579  0.544  0.660  0.601  0.587  0.594  0.594 ± 0.007
ind       wiki        750   0.551  0.445  0.634  0.590  0.469  0.543  0.590  0.556  0.530  0.549  0.545 ± 0.013
khk       c. grammar  720   0.936  0.930  0.980  0.967  0.920  0.932  0.952  0.944  0.934  0.918  0.932 ± 0.013
kor       wiki        60    0.597  0.504  0.696  0.710  0.519  0.529  0.629  0.595  0.597  0.606  0.599 ± 0.006
rus       wiki        320   0.884  0.861  0.959  0.901  0.917  0.878  0.915  0.857  0.889  0.881  0.876 ± 0.017
see       grammar     135   0.895  0.872  0.951  0.943  0.878  0.873  0.919  0.902  0.884  0.898  0.895 ± 0.009
spa       wiki        75    0.880  0.847  0.966  0.853  0.918  0.861  0.901  0.884  0.874  0.884  0.881 ± 0.006
swc       wiki        53    0.931  0.916  0.961  0.973  0.903  0.957  0.925  0.939  0.941  0.937  0.939 ± 0.002
tur       wiki        35    0.464  0.328  0.575  0.491  0.402  0.384  0.556  0.456  0.462  0.452  0.457 ± 0.005
zul       wiki        62    0.881  0.861  0.918  0.957  0.844  0.875  0.861  0.876  0.868  0.856  0.867 ± 0.010
arp       IGT         470   0.290  0.238  0.352  0.354  0.265  0.165  0.315  0.326  0.296  0.349  0.324 ± 0.027
que       wiki        25    0.982  0.985  0.988  0.990  0.972  0.973  0.959  0.969  0.994  0.982  0.982 ± 0.013
gle       wiki        350   0.387  0.228  0.472  0.427  0.371  0.297  0.444  0.372  0.375  0.385  0.377 ± 0.007
ddo       IGT         400   0.793  0.770  0.925  0.904  0.756  0.762  0.858  0.804  0.799  0.806  0.803 ± 0.004
nav       wiki        280   0.860  0.826  0.943  0.941  0.854  0.852  0.926  0.874  0.862  0.877  0.871 ± 0.008
mni       IGT         525   0.752  0.730  0.932  0.908  0.729  0.737  0.877  0.784  0.784  0.780  0.783 ± 0.002
evn       grammar     2250  0.460  0.368  0.551  0.559  0.374  0.447  0.554  0.470  0.473  0.479  0.474 ± 0.005
cni       grammar     105   0.992  0.991  0.999  0.993  0.996  0.997  0.993  0.993  0.996  0.995  0.995 ± 0.002
hai       wiki        31    0.715  0.656  0.874  0.731  0.731  0.708  0.773  0.728  0.727  0.717  0.724 ± 0.006
ntu       IGT         560   0.800  0.762  0.947  0.917  0.772  0.766  0.897  0.811  0.792  0.806  0.803 ± 0.010
1 Or indeed any materials such as those compiled by the Max Planck Institute for Psycholinguistics at http://fieldmanuals.mpi.nl/
4 See Table 3 in Appendix for more detail.
A Appendix
Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics.
Ekaterina Vylomova, Jennifer White, Elizabeth Salesky, Sabrina J. Mielke, Shijie Wu, Edoardo Maria Ponti, Rowan Hall Maudslay, Ran Zmigrod, Josef Valvoda, Svetlana Toldova, Francis Tyers, Elena Klyachko, Ilya Yegorov, Natalia Krizhanovsky, Paula Czarnowska, Irene Nikkarinen, Andrew Krizhanovsky, Tiago Pimentel, Lucas Torroba Hennigen, Christo Kirov, Garrett Nicolai, Adina Williams, Antonios Anastasopoulos, Hilaria Cruz, Eleanor Chodroff, Ryan Cotterell, Miikka Silfverberg, and Mans Hulden. 2020. SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 1-39, Online. Association for Computational Linguistics.
Stephen A. Wurm. 1976. The Reef Islands-Santa Cruz family. In Stephen A. Wurm, editor, New Guinea Area Languages and Language Study Vol 2: Austronesian Languages, volume 39 of Pacific Linguistics: Series C, pages 637-674. Research School of Pacific and Asian Studies, Australian National University, Canberra.
Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. 2020. Cold-start active learning through self-supervised language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7935-7948, Online. Association for Computational Linguistics.
Xiangkai Zeng, Sarthak Garg, Rajen Chatterjee, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Empirical evaluation of active learning techniques for neural MT. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 84-93, Hong Kong, China. Association for Computational Linguistics.
Pei Zhang, Xueying Xu, and Deyi Xiong. 2018. Active learning for neural machine translation. In 2018 International Conference on Asian Language Processing (IALP), pages 153-158.
Jingbo Zhu and Eduard Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 783-790, Prague, Czech Republic. Association for Computational Linguistics.
Åshild Naess and Brenda H. Boerger. 2008. Reefs-Santa Cruz as Oceanic: Evidence from the verb complex. Oceanic Linguistics, 47(1):185-212. |
5,113,468 | Integrated Information Management: An Interactive, Extensible Architecture for Information Retrieval | [
5216592,
28532290,
18627189
] | Integrated Information Management: An Interactive, Extensible Architecture for Information Retrieval
Eric Nyberg
Language Technologies Institute Carnegie Mellon University Pittsburgh
Language Technologies Institute Carnegie Mellon University Pittsburgh
15213, 15213PA, PA
Hal Daume
Language Technologies Institute Carnegie Mellon University Pittsburgh
Language Technologies Institute Carnegie Mellon University Pittsburgh
15213, 15213PA, PA
Integrated Information Management: An Interactive, Extensible Architecture for Information Retrieval
INTRODUCTION
Most current IR research is focused on specific technologies, such as filtering, classification, entity extraction, question answering, etc. There is relatively little research on merging multiple technologies into sophisticated applications, due in part to the high cost of integrating independently-developed text processing modules.
In this paper, we present the Integrated Information Management (IIM) architecture for component-based development of IR applications 1 . The IIM architecture is general enough to model different types of IR tasks, beyond indexing and retrieval. Rather than providing a single framework or toolkit, our goal is to create a higherlevel framework which is used to build a variety of different class libraries or toolkits for different problems. Another goal is to promote the educational use of IR software, from an "exploratory programming" perspective. For this reason, it is also important to provide a graphical interface for effective task visualization and realtime control.
Prior architecture-related work has focused on toolkits or class libraries for specific types of IR or NLP problems. Examples include the SMART system for indexing and retrieval [17], the FIRE [18] and InfoGrid [15] class models for information retrieval applications, and the ATTICS [11] system for text categorization and machine learning. Some prior work has also focused on the user interface, notably FireWorks [9] and SketchTrieve [9] 2 . Other systems such as GATE [4] and Corelli [20] have centered on specific approaches to NLP applications.
The Tipster II architecture working group summarized the requirements for an ideal IR architecture [6], which include:
Standardization. Specify a standard set of functions and interfaces for information services.
Rapid Deployment. Speed up the initial development of new applications.
Maintainability. Use standardized modules to support plug-and-play updates.
Flexibility. Enhance performance by allowing novel combinations of existing components.
Evaluation. Isolate and test specific modules side-by-side in the same application.
2 For further discussion on how these systems compare with the present work, see Section 7.
One of the visions of the Tipster II team was a "marketplace of modules", supporting mix-and-match of components developed at different locations. The goals of rapid deployment and flexibility require an excellent user interface, with support for drag-and-drop task modeling, real-time task visualization and control, and uniform component instrumentation for cross-evaluation. The modules themselves should be small, downloadable files which run on a variety of hardware and software platforms. This vision is in fact a specialized form of component-based software engineering (CBSE) [14], where the re-use environment includes libraries of reusable IR components, and the integration process includes realtime configuration, control, and tuning. Section 2 summarizes the architectural design of IIM. Section 3 provides more detail regarding the system's current implementation in Java. In Section 5 we describe three different task libraries that have been constructed using IIM's generic modules. Current instrumentation, measurement, and results are presented in Section 6. We conclude in Section 7 with some relevant comparisons of IIM to related prior work.
ARCHITECTURAL DESIGN
IIM uses a flow-based (pipe and filter [16]) processing model. Information processing steps are represented as nodes in a graph. Each edge in the graph represents a flow connection between a parent node and a child node; the documents produced by the parent node are passed to each child node. In IIM, the flow graph is referred to as a node chain. A sample node chain is shown in Figure 1. The IIM class model includes six basic node types, which can be used to model a variety of IR problems:
1. Source. Generates a document stream (from a static collection, web search, etc.) and passes documents one at a time to its child node(s).
2. Filter. Passes only documents which match the filter to its child node(s).
3. Annotator. Adds additional information to the document regarding a particular region in the document body.
4. Sink. Creates and passes either a single document or a collection to its child node(s), after pooling the input documents it receives.
5. Transformer. Creates and passes on a single new document, presumably the result of processing its input document.
6. Renderer. Produces output for documents received (to disk, to screen, etc.).
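To make the division of labour concrete, the sketch below gives a rough Python analogue of these six roles; the actual system defines them as Java interfaces, so the method names and bodies here are illustrative assumptions rather than the IIM API.

# Illustrative Python analogue of the six IIM node roles.
from abc import ABC, abstractmethod

class Node(ABC):
    pass

class Source(Node):
    @abstractmethod
    def documents(self):          # yield documents one at a time
        ...

class Filter(Node):
    @abstractmethod
    def accepts(self, doc):       # pass only matching documents downstream
        ...

class Annotator(Node):
    @abstractmethod
    def annotate(self, doc):      # add region annotations to the document
        ...

class Sink(Node):
    @abstractmethod
    def collect(self, doc):       # pool documents, emit one result at the end
        ...

class Transformer(Node):
    @abstractmethod
    def transform(self, doc):     # produce a single new document
        ...

class Renderer(Node):
    @abstractmethod
    def render(self, doc):        # write output to disk, screen, etc.
        ...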
The IIM class model is embedded in a Model-View-Controller architecture [5], which allows the system to be run with or without the graphical interface. Pre-stored node chains can be executed directly from the shell, or as a background process, completely bypassing all user interaction when optimal performance is required. The Controller subsystem and interface event dispatching subsystem must run as separate threads to support dynamic update of parameters in a running system. The View (user interface) should support: a) plug-and-play creation of new node chains; b) support for saving, loading and importing new node chains; c) dynamic visualization of a task's status; and d) direct manipulation of a node's parameters at any time.
In addition to the nodes themselves, IIM supports two other important abstractions for IR task flows:
Macro Nodes. Certain sequences of nodes are useful in more than one application, so it is convenient to store them together as a single reusable unit, or macro node. IIM allows the user to export a portion of a node chain as a macro node to be loaded into the Node Library and inserted into a new chain as a single node. The user may specify which of the properties of the original nodes are visible in the exported macro node (see Figure 3).
Controllers. Some IR tasks require iteration through multiple runs; the system's behavior on each successive trial is modified based on feedback from a previous run. For example, a system might wish to ask for more documents or perform query expansion if the original query returns an insufficient number of relevant documents. IIM includes a Controller interface, which specifies methods for sending feedback from one node to another. The user can implement a variety of controllers, depending on the needs of the particular application.
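As a hypothetical illustration of this feedback idea (not the IIM Controller API), a retrieval chain might be re-run with an expanded query whenever too few documents come back; run_chain and expand_query are placeholder names.

# Sketch of iterative control: feedback from a downstream node triggers
# another run with a modified query.
def controlled_retrieval(query, run_chain, expand_query, min_docs=10, max_iters=3):
    docs = []
    for _ in range(max_iters):
        docs = run_chain(query)
        if len(docs) >= min_docs:
            return docs
        query = expand_query(query)   # feedback sent back to the source end
    return docs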
JAVA IMPLEMENTATION
In the IIM Java implementation, nodes are specified by the abstract interface Node and its six abstract subinterfaces: Source, Filter, Annotator, Transformer, Sink and Renderer (see Figure 2). Any user-defined Java class which implements one of the Node subinterfaces can be loaded into IIM and used in a node chain. The visualization of a node is represented by a separate Java class, Box, which handles all of the details related to drawing the node and various visual cues in the node chain display.
The graphical user interface ( Figure 1) is implemented as a set of Java Swing components:
Node Chain Display. The canvas to the right displays the current node chain, as described in the previous section. While the node chain is running, IIM provides two types of visual feedback regarding task progress. To indicate the percentage of overall run-time that the node is active, the border color of each node varies from bright green (low) to bright red (high). To indicate the amount of output per node per unit of time spent (throughput), the system indicates bytes per second as a text label under each node. A rectangular meter at the right of each node provides a graphic visualization of relative throughput; the node with the highest throughput will have a solid red meter, while other nodes will have a meter level which shows their throughput as a percentage of maximum throughput.
Node Library. The tree view to the upper left displays the library of nodes currently available on the user's machine for building and extending node chains. New nodes or node directories can be downloaded from the web and added while the system is running. The component loader examines each loaded class using Java's reflection capabilities, and places it in the appropriate place(s) in the component tree according to which of the Node subinterfaces it implements.
Node Property Editor. The Property Editor (table view) to the lower left in Figure 1 displays the properties of a selected node, which the user can update by clicking on it and entering a new value.
Node Chain Editor. IIM supports dynamic, interactive manipulation of node chains. The left side of the toolbar at the top of the IIM Window contains a set of chain editing but-tons. These allow the user to create, modify and tune new node chains built from pre-existing components.
Transport Bar. IIM uses a tape transport metaphor to model the operation of the node chain on a given data source. The "Play", "Pause" and "Rewind" buttons in the toolbar (right side) allow the user to pause the system in mid-task to adjust component parameters, or to start a task over after the node chain has been modified.
The run-time Controller subsystem is implemented as a Java class called ChainRunner, which can be invoked with or without a graphical interface component. ChainRunner is implemented as a Thread object separate from the Java Swing event dispatching thread, so that user actions can be processed concurrently with the ongoing operation of a node chain on a particular task.
IIM COMPONENTS
The current IIM system includes a variety of nodes which implement the different IIM component interfaces. These nodes are described in this section.
Source Nodes
EditableSource. Prompts the user to interactively enter sample documents (used primarily for testing, or entering queries).
WebSource. Generic support for access to web search engines (e.g., Google). Includes multithreading support for simultaneous retrieval of multiple result documents.
NativeBATSource. Generic support for access to document collections stored on local disk. Implemented in C, with a Java wrapper that utilized the Java Native Interface (JNI).
Filter Nodes
SizeFilter. Only passes documents which are above a userdefined size threshold.
RegexpFilter. Only passes documents which match a userdefined regular expression; incorporates the GNU regexp package.
Annotator Nodes
NameAnnotator. Locates named entities (currently, person names) in the body of the document, and adds appropriate annotations to the document.
IVEAnnotator. For each named entity (person) annotation, checks a networked database for supplemental information about that individual. An interface to a database of information about individuals, publications, and organizations, created as part of the Information Validation and Evaluation project at CMU [12]. Implemented using Java Database Connectivity (JDBC).
BrillAnnotator. Accepts a user-defined annotation (e.g., PAS-SAGE) and adds a new annotation created by calling the Brill Tagger [1] on the associated text. Implemented via a TCP/IP socket protocol which accesses a remote instance of the tagger running as a network service.
ChartAnnotator. Accepts a user-defined annotation, and adds new annotations based on the results of bottom-up chart parsing with a user-defined grammar. The user can select which linguistic categories (e.g., NP VP, etc.) are to be annotated.
RegexpAnnotator. Annotates passages which match a userdefined regular expression.
Transformer Nodes
BrillTransformer. Similar to the BrillAnnotator (see above), but operates directly on the document body (does not create separate annotations).
Inquery. Accepts a query (represented as an input document) and retrieves a set of documents from the Inquery search engine [2]. Accesses an Inquery server running as a networked service, using TCP/IP sockets.
WordNet. Accepts a document, and annotates each word with a hypernym retrieved from WordNet [19]. Accesses a Word-Net server running as a networked service, using TCP/IP sockets.
Sink Nodes
Ranker. Collects documents and sorts them according to a user-defined comparator. The current implementation supports sorting by document size or by annotation count.
CooccuranceSink. Builds a matrix of named entity associations within a given text window; uses NAME annotations created by the NameAnnotator (see above). The output of this node is a special subclass of Document, called Matrix-Document, which stores the association matrix created from the document collection.
QAnswer. Collects a variety of annotations from documents relevant to a particular query (e.g., "What is Jupiter?"), and uses them to synthesize an answer.
Renderer Nodes
StreamRenderer. Outputs any documents it receives to a user-specified file stream (or to standard output, by default).
DocumentViewer. Pops up a document display window, which allows the user to browse documents as they are accepted by this node.
MatrixRenderer.
A two-dimensional visualization of the association matrix created by the CoocurrenceSink (see above). Accepts instances of MatrixDocument.
IIM APPLICATIONS
The initial set of component nodes has been used as the basis for three experimental applications:
Filtering and Annotation. An interactive node chain that allows the user to annotate and collect documents matching any regular expression; the resulting collection can then be viewed interactively (with highlighted annotations) in a popup viewer window. Named Entity Association. A node chain which performs named-entity annotation using a phi-square measure [3], producing a MatrixDocument object (a user-defined Document subclass, which represents the association matrix). Note that the addition of a specialized Document subclass does not require recompilation of IIM (although the user must take care that specialized document objects are properly handled by user-defined nodes).
Question Answering. A node chain which answers "What is" questions by querying the web for relevant documents, finding relevant passages [8,10], and synthesizing answers from the results of various regular expression matches 3 .
PERFORMANCE
In order to support accurate side-by-side evaluation of different modules, IIM implements two kinds of instrumentation for runtime performance data:
Per-Node Run Time. The ChainRunner and Box classes automatically maintain run-time statistics for every node in a chain (including user-defined nodes). These statistics are printed at the end of every run.
Node-Specific Statistics. For user-defined nodes, it may be useful to report task-specific statistics (e.g., for an Annotator, the total number of annotations, the average annotation size, etc.). IIM provides a class called Options, which contains a set of optional interfaces that can be implemented to customize a node's behavior. Any node that wishes to report task-specific statistical data can implement the ReportsStatistics interface, which is called by the ChainRunner when the chain finishes.
An example of the statistical data produced by the system is shown in Figure 4. The system is careful to keep track of time spent "inside" the nodes, as well as the overall clock time taken for the task. This allows the user to determine how much overhead is added by the IIM system itself.
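The following sketch illustrates the general idea of such per-node instrumentation in Python; it is not the ChainRunner implementation, and the names are invented for illustration.

# Per-node run-time instrumentation: wrap each node call with a timer so
# that time spent "inside" the nodes can be compared with wall-clock time.
import time
from collections import defaultdict

class ChainTimer:
    def __init__(self):
        self.per_node = defaultdict(float)

    def timed(self, name, fn, *args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            self.per_node[name] += time.perf_counter() - start

    def report(self, total_wall_clock):
        inside = sum(self.per_node.values())
        for name, secs in sorted(self.per_node.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {secs:.2f}s ({100 * secs / total_wall_clock:.1f}% of run)")
        print(f"framework overhead: {total_wall_clock - inside:.2f}s")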
The throughput speed of the prototype system is acceptably fast, averaging better than 50M of text per minute on a sample filtering task (530M of web documents), running on a typical Pentium III PC with 128M RAM. IIM requires about 10M of memory (including the Java run-time environment) for the core system and user interface, with additional memory requirements depending on the size of the document stream and the sophistication of the node chain 4 . Although the core system is implemented in Java, we have also implemented nodes in C++, using appropriate wrapper classes and the Java Native Interface (JNI). This technique allows us to implement critical, resource-intensive nodes using native code, without sacrificing the benefits of the Java-based core system.
DISCUSSION
The preliminary results of the IIM prototype are promising. IIM's drag-and-drop component library makes it possible to build and tune a new application in a matter of minutes, greatly reducing the amount of effort required to integrate and reuse existing modules.
£ We are currently expanding this application to include part of speech tagging and syntactic parsing, both of which are straightforwardly modeled as examples of the Annotator interface. ¤ Node chains which create a high volume of annotations per document use more memory, as do node chains which create new collections, transform documents, etc. In the future, we hope this high degree of flexibility will encourage greater experimentation and the creation of new aggregate systems from novel combinations of components, leading to a true "marketplace of modules".
Building extensible architectures as "class library plus application framework" is not a new idea, and has been discussed before with respect to information retrieval systems [7,18,9]. One might claim that any new IR architecture should adopt a similar design pattern, given the proven benefits of separating the modules from the application framework (flexibility, extensibility, high degree of reuse, easy integration, etc.). To some extent, IIM consolidates, refines and/or reimplements ideas previously published in the literature. Specifically, the following characteristics of the IIM architecture can be directly compared with prior work:
The IIM classes Renderer, Document, MultiDocument, and annotations on Document can be considered alternative implementations of the InfoGrid classes Visualizer, Document, DocumentSet and DocumentPart [15]. However, in IIM annotations are "lightweight", meaning that they do not require the instantiation of a separate user object, but can be modeled as simple String instances in Java when a high degree of annotation requires optimal space efficiency.
The use of color to indicate status of a node is also used in the SketchTrieve system [18].
IIM's visualization of the document flow as a "node chain" can be compared to the "wire and dock" approach used in other IR interfaces [9,4,13].
The use of a Property Editor to customize component behavior is an alternative approach to the IrDialogs provided by the FireWorks toolkit [9] for display and update of a component's state.
Nevertheless, IIM is at once simpler and more general than systems such as InfoGrid [15] and FIRE [18]. One could claim that IIM supports a higher degree of informality [9] than FIRE, since it enforces no type-checking on node connectivity. Since all tasks are modeled abstractly as document flows, nodes need only implement one of the Node sub-interfaces, and each node chain must begin with a Source. Another point of comparison is the task-specific detail present in the FIRE class hierarchy. In IIM, task-specific objects are left up to the developer (for example, representing particulars of access control on information sources, or details of indexing and retrieval, such as Index, Query, etc.).
Hendry and Harper [9] have used the degree of user control as a dimension of comparison for IR architectures. At one extreme are systems which allow dynamic view and access to the run-time state of components, while at the other lie systems which hide implementation detail and perform some functions automatically, for improved performance. In their comparison of SketchTrieve and InfoGrid, Hendry and Harper note that "a software architecture should provide abstractions for implementing both these". In IIM, the use of macro nodes can hide component details from the end user, especially when the component's parameter values have been tuned in advance for optimal performance.
ONGOING RESEARCH
While the initial results reported here show promise, we are still evaluating the usability of IIM in terms of trainability (how fast does a novice learn the system), reusability (how easily a novice can build new applications from existing node libraries) and ease of integration (effort required to integrate external components and systems). The current version of IIM lacks the explicit document management component found in systems like GATE [4] and Corelli [20]; we are in the process of adding this functionality for the official release of IIM.
The IIM system (source code, class documentation, and node libraries) will be made available via the web as one of our final project milestones later in 2001. Anyone interested in using the system or participating in ongoing research and development is invited to visit the IIM web site and join the IIM mailing list.
1 This work is supported by National Science Foundation (KDI) grant number 9873009.
Figure 1: IIM User Interface
Figure 2: Node Interface and Subtypes.
Figure 3: Exporting A Macro Node.
Figure 4: Statistics for a Node Chain.
ACKNOWLEDGEMENTS
The authors would like to thank Jamie Callan for his guidance on the architecture design, and Krzysztof Czuba for providing networked instances of the Brill Tagger, Inquery, and WordNet.
Brill, Eric (1992). "A simple rule-based part of speech tagger", Proceedings of the Third Conference on Applied Natural Language Processing.
Callan, J. P., W. B. Croft, and S. M. Harding (1992). "The INQUERY Retrieval System", Proceedings of the 3rd International Conference on Database and Expert Systems.
Conrad, J., and M. H. Utt (1994). "A System for Discovering Relationships by Feature Extraction from Text Databases", SIGIR '94.
Gaizauskas, R., Cunningham, H., Wilks, Y., Rodgers, P. and Humphreys, K. (1996). "GATE - an environment to support research and development in natural language engineering", Proceedings of the 8th IEEE International Conference on Tools with Artificial Intelligence (ICTAI96), Toulouse, France, pp. 58-66.
Gamma, E., Helm, R., Johnson, R. and Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley.
Grishman, R. (1996). "Building an Architecture: A CAWG Saga", in Advances in Text Processing: Tipster Program Phase II, sponsored by DARPA ITC.
Harper, D.J. and A.D.M. Walker (1992). "ECLAIR: An extensible Class Library for Information Retrieval", Computer Journal, 35(3):256-267.
Hearst, M. (1992). "Automatic acquisition of hyponyms from large text corpora", COLING '92.
Hendry, D. G., and Harper, D. J. (1996). "An architecture for implementing extensible information-seeking environments", SIGIR '96.
Joho, H. and M. Sanderson (2000). "Retrieving descriptive phrases from large amounts of free text", CIKM 2000.
Lewis, D., D. Stern and A. Singhal (1999). "ATTICS: A Software Platform for Online Text Classification", SIGIR '99.
Mitamura, T. (2001). "Language Resources for Determining Authority", unpublished manuscript.
Neuendorffer, T. (2000). "Analyst's Workbench: A CAD-like GUI for Textual Search and Filter Creation", HCII Seminar Series, Carnegie Mellon University, November 29.
Pressman, R. (2000). Software Engineering: A Practitioner's Approach, 5th edition, McGraw-Hill.
Rao, R., S.K. Card, H.D. Jellinek, J.D. MacKinlay and G. Robertson (1992). "The Information Grid: A Framework for Information Retrieval and Retrieval-Centred Applications", UIST '92.
Shaw, M. and D. Garlan (1996). Software Architecture: Perspectives on an Emerging Discipline, Prentice-Hall.
Salton, G. (1971). The SMART Retrieval System - Experiments in Automatic Document Processing, Prentice-Hall.
Sonnenberger, G. and H. Frei (1995). "Design of a reusable IR framework", SIGIR '95.
Fellbaum, C. (ed) (1998). WordNet: An electronic lexical database. Cambridge, MA: MIT Press.
Zajac, R. (1997). "An Open Distributed Architecture for Reuse and Integration of Heterogenous NLP Components", In Proceedings of the 5th conference on Applied Natural Language Processing (ANLP-97).
|
156,874 | Generating Recommendation Dialogs by Extracting Information from User Reviews | Recommendation dialog systems help users navigate e-commerce listings by asking questions about users' preferences toward relevant domain attributes.We present a framework for generating and ranking fine-grained, highly relevant questions from user-generated reviews. We demonstrate our approach on a new dataset just released by Yelp, and release a new sentiment lexicon with 1329 adjectives for the restaurant domain. | [
1965764,
12101738,
8162001,
2845337,
2096410
] | Generating Recommendation Dialogs by Extracting Information from User Reviews
Kevin Reschke, Adam Vogel, and Dan Jurafsky
Stanford University, Stanford, CA, USA
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August 4-9, 2013. Association for Computational Linguistics.
Recommendation dialog systems help users navigate e-commerce listings by asking questions about users' preferences toward relevant domain attributes.We present a framework for generating and ranking fine-grained, highly relevant questions from user-generated reviews. We demonstrate our approach on a new dataset just released by Yelp, and release a new sentiment lexicon with 1329 adjectives for the restaurant domain.
Introduction
Recommendation dialog systems have been developed for a number of tasks ranging from product search to restaurant recommendation (Chai et al., 2002;Thompson et al., 2004;Bridge et al., 2005;Young et al., 2010). These systems learn user requirements through spoken or text-based dialog, asking questions about particular attributes to filter the space of relevant documents.
Traditionally, these systems draw questions from a small, fixed set of attributes, such as cuisine or price in the restaurant domain. However, these systems overlook an important element in users' interactions with online product listings: usergenerated reviews. Huang et al. (2012) show that information extracted from user reviews greatly improves user experience in visual search interfaces. In this paper, we present a dialog-based interface that takes advantage of review texts. We demonstrate our system on a new challenge corpus of 11,537 businesses and 229,907 user reviews released by the popular review website Yelp 1 , focusing on the dataset's 4724 restaurants and bars (164,106 reviews).
This paper makes two main contributions. First, we describe and qualitatively evaluate a framework for generating new, highly-relevant questions from user review texts. The framework makes use of techniques from topic modeling and sentiment-based aspect extraction to identify fine-grained attributes for each business. These attributes form the basis of a new set of questions that the system can ask the user.
1 https://www.yelp.com/dataset_challenge/
Second, we use a method based on information gain for dynamically ranking candidate questions during dialog production. This allows our system to select the most informative question at each dialog step. An evaluation based on simulated dialogs shows that both the ranking method and the automatically generated questions improve recall.
2 Generating Questions from Reviews
Subcategory Questions
Yelp provides each business with category labels for top-level cuisine types like Japanese, Coffee & Tea, and Vegetarian. Many of these top-level categories have natural subcategories (e.g., ramen vs. sushi). By identifying these subcategories, we enable questions which probe one step deeper than the top-level category label.
To identify these subcategories, we run Latent Dirichlet Allocation (LDA) (Blei et al., 2003) on the reviews of each set of businesses in the twenty most common top-level categories, using 10 topics and concatenating all of a business's reviews into one document. 2 Several researchers have used sentence-level documents to model topics in reviews, but these tend to generate topics about fine-grained aspects of the sort we discuss in Section 2.2 (Jo and Oh, 2011; Brody and Elhadad, 2010). We then manually labeled the topics, discarding junk topics and merging similar topics. Table 1 displays sample extracted subcategories.
Using these topic models, we assign a business to a subcategory based on the topic with highest probability in that business's topic distribution. Finally, we use these subcategory topics to generate questions for our recommender dialog system. Each top-level category corresponds to a single question whose potential answers are the set of subcategories: e.g., "What type of Japanese cuisine do you want?"
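As an illustration of this subcategory step, the sketch below runs LDA over one concatenated pseudo-document per business and assigns each business to its most probable topic. It uses gensim rather than the Stanford Topic Modeling Toolbox cited in the footnote, and the toy reviews and whitespace tokenization are assumptions made for illustration.

```python
# Sketch: induce subcategory topics for one top-level category by running LDA
# over one concatenated "document" per business; gensim stands in for the
# Stanford Topic Modeling Toolbox used by the authors.
from gensim import corpora
from gensim.models import LdaModel

def business_subcategories(reviews_by_business, num_topics=10):
    """reviews_by_business: dict mapping business id -> list of review strings."""
    # One pseudo-document per business: all of its reviews concatenated
    docs = {biz: " ".join(revs).lower().split() for biz, revs in reviews_by_business.items()}
    dictionary = corpora.Dictionary(docs.values())
    bows = {biz: dictionary.doc2bow(tokens) for biz, tokens in docs.items()}

    lda = LdaModel(corpus=list(bows.values()), id2word=dictionary,
                   num_topics=num_topics, passes=10, random_state=0)

    # Assign each business to its highest-probability topic (= subcategory)
    assignments = {}
    for biz, bow in bows.items():
        topic_dist = lda.get_document_topics(bow, minimum_probability=0.0)
        assignments[biz] = max(topic_dist, key=lambda pair: pair[1])[0]
    return lda, assignments

# Toy usage for a single top-level category (e.g. "Japanese"):
if __name__ == "__main__":
    toy = {"biz1": ["great ramen and rich broth", "the noodles were perfect"],
           "biz2": ["amazing sushi rolls", "fresh sashimi and good sake"]}
    lda, assignments = business_subcategories(toy, num_topics=2)
    print(assignments)
```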
Questions from Fine-Grained Aspects
Our second source for questions is based on aspect extraction in sentiment summarization (Blair-Goldensohn et al., 2008;Brody and Elhadad, 2010). We define an aspect as any noun-phrase which is targeted by a sentiment predicate. For example, from the sentence "The place had great atmosphere, but the service was slow." we extract two aspects: +atmosphere and -service. Our aspect extraction system has two steps. First we develop a domain specific sentiment lexicon. Second, we apply syntactic patterns to identify NPs targeted by these sentiment predicates.
Sentiment Lexicon
Coordination Graph We generate a list of domain-specific sentiment adjectives using graph propagation. We begin with a seed set combining PARADIGM+ (Jo and Oh, 2011) with 'strongly subjective' adjectives from the OpinionFinder lexicon (Wilson et al., 2005), yielding 1342 seeds. Like Brody and Elhadad (2010), we then construct a coordination graph that links adjectives modifying the same noun, but to increase precision we require that the adjectives also be conjoined by and (Hatzivassiloglou and McKeown, 1997). This reduces problems like propagating positive sentiment to orange in good orange chicken. We marked adjectives that follow too or lie in the scope of negation with special prefixes and treated them as distinct lexical entries.
Sentiment Propagation Negative and positive seeds are assigned values of 0 and 1 respectively. All other adjectives begin at 0.5. Then a standard propagation update is computed iteratively (see Eq. 3 of Brody and Elhadad (2010)).
In Brody and Elhadad's implementation of this propagation method, seed sentiment values are fixed, and the update step is repeated until the non-seed values converge. We found that three modifications significantly improved precision. First, we omit candidate nodes that don't link to at least two positive or two negative seeds. This eliminated spurious propagation caused by one-off parsing errors. Second, we run the propagation algorithm for fewer iterations (two iterations for negative terms and one for positive terms). We found that additional iterations led to significant error propagation when neutral (italian) or ambiguous (thick) terms were assigned sentiment. 3 Third, we update both non-seed and seed adjectives. This allows us to learn, for example, that the negative seed decadent is positive in the restaurant domain. Table 2 shows a sample of sentiment adjectives derived by this graph propagation method.
Evaluative Verbs In addition to this adjective lexicon, we take 56 evaluative verbs such as love and hate from admire-class VerbNet predicates (Kipper-Schuler, 2005).
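A rough sketch of this cautious propagation is given below; the graph representation, the single shared iteration count, and the neighbor-averaging update are simplifications of the procedure described above (the paper runs two iterations for negative terms and one for positive terms, following Eq. 3 of Brody and Elhadad (2010)).

```python
# Sketch of cautious label propagation: seeds start at 0/1, other adjectives
# at 0.5, updates run for very few iterations, and candidates must be linked
# to at least two positive or two negative seeds.
def propagate(graph, pos_seeds, neg_seeds, iterations=2):
    """graph: dict adjective -> set of adjectives it was conjoined with ("and")."""
    scores = {adj: 0.5 for adj in graph}
    scores.update({a: 1.0 for a in pos_seeds if a in graph})
    scores.update({a: 0.0 for a in neg_seeds if a in graph})

    def eligible(adj):
        nbrs = graph[adj]
        return (len(nbrs & pos_seeds) >= 2) or (len(nbrs & neg_seeds) >= 2)

    for _ in range(iterations):
        new_scores = dict(scores)
        for adj, nbrs in graph.items():
            if not nbrs:
                continue
            if adj not in pos_seeds and adj not in neg_seeds and not eligible(adj):
                continue  # cautious filter against one-off parsing errors
            # Both seed and non-seed nodes are updated (third modification above)
            new_scores[adj] = sum(scores[n] for n in nbrs) / len(nbrs)
        scores = new_scores
    return scores
```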
Extraction Patterns
To identify noun-phrases which are targeted by predicates in our sentiment lexicon, we develop hand-crafted extraction patterns defined over syntactic dependency parses (Blair-Goldensohn et al., 2008;Somasundaran and Wiebe, 2009) generated by the Stanford parser (Klein and Manning, 2003). Table 3 shows a sample of the aspects generated by these methods.
Adj + NP It is common practice to extract any NP modified by a sentiment adjective. However, this simple extraction rule suffers from precision problems. First, reviews often contain sentiment toward irrelevant, non-business targets (Wayne is the target of excellent job in (1)). Second, hypothetical contexts lead to spurious extractions. In (2), the extraction +service is clearly wrong-in fact, the opposite sentiment is being expressed.
(1) Wayne did an excellent job addressing our needs and giving us our options. (2) Nice and airy atmosphere, but service could be more attentive at times.
We address these problems by filtering out sentences in hypothetical contexts cued by if, should, could, or a question mark, and by adopting the following, more conservative extraction rules:
i) [BIZ + have + adj. + NP] Sentiment adjective modifies NP, main verb is have, subject is business name, it, they, place, or absent. (E.g., This place has some really great yogurt and toppings).
ii) [NP + be + adj.] Sentiment adjective linked to NP by be-e.g., Our pizza was much too jalapeno-y.
"Good For" + NP Next, we extract aspects using the pattern BIZ + positive adj. + for + NP, as in It's perfect for a date night. Examples of extracted aspects include +lunch, +large groups, +drinks, and +quick lunch.
Verb + NP Finally, we extract NPs that appear as direct object to one of our evaluative verbs (e.g., We loved the fried chicken).
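To make the pattern-based extraction concrete, here is a sketch of one of the rules ([NP + be + adj.]) together with the hypothetical-context filter, written over spaCy dependency parses instead of the Stanford parser; the tiny sentiment lexicon and the specific dependency labels used are illustrative assumptions, not the authors' exact patterns.

```python
# Sketch of one extraction rule over dependency parses: "NP + be + adj."
# (e.g. "The service was slow." -> -service). spaCy stands in for the
# Stanford parser; the tiny lexicon is only for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")
SENTIMENT = {"great": "+", "slow": "-", "jalapeno-y": "-", "attentive": "+"}
HYPOTHETICAL_CUES = {"if", "should", "could"}

def extract_np_be_adj(text):
    aspects = []
    for sent in nlp(text).sents:
        # Filter hypothetical contexts cued by if/should/could or a question mark
        if sent.text.strip().endswith("?") or any(t.lower_ in HYPOTHETICAL_CUES for t in sent):
            continue
        for tok in sent:
            # sentiment adjective used predicatively over a nominal subject via "be"
            if tok.lower_ in SENTIMENT and tok.dep_ == "acomp" and tok.head.lemma_ == "be":
                subjects = [c for c in tok.head.children if c.dep_ == "nsubj"]
                for subj in subjects:
                    aspects.append(SENTIMENT[tok.lower_] + subj.lemma_)
    return aspects

print(extract_np_be_adj("The service was slow. Service could be more attentive at times."))
# expected: ['-service']  (the hypothetical second sentence is filtered out)
```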
Aspects as Questions
We generate questions from these extracted aspects using simple templates. For example, the aspect +burritos yields the question: Do you want a place with good burritos?
Question Selection for Dialog
To utilize the questions generated from reviews in recommendation dialogs, we first formalize the dialog optimization task and then offer a solution.
Problem Statement
We consider a version of the Information Retrieval Dialog task introduced by Kopeček (1999). Businesses b ∈ B have associated attributes, coming from a set Att. These attributes are a combination of Yelp categories and our automatically extracted aspects described in Section 2. Attributes att ∈ Att take values in a finite domain dom(att). We denote the subset of businesses with an attribute att taking value val ∈ dom(att) as B|att=val. Attributes are functions from businesses to subsets of values: att : B → P(dom(att)). We model a user information need I as a set of attribute/value pairs: I = {(att_1, val_1), . . . , (att_|I|, val_|I|)}. Given a set of businesses and attributes, a recommendation agent π selects an attribute to ask the user about, then uses the answer value to narrow the set of businesses to those with the desired attribute value, and selects another query. Algorithm 1 presents this process more formally. The recommendation agent can use both the set of businesses B and the history of question and answers H from the user to select the next query. Thus, formally a recommendation agent is a function π : B × H → Att. The dialog ends after a fixed number of queries K.

Table 3: Sample of the most frequent positive aspects extracted from review texts (categories: Chinese, Mexican, Japanese, American (New)).
+beef +egg roll +sour soup +orange chicken +salsa bar +burritos +fish tacos +guacamole +noodles +crab puff +egg drop soup +enchiladas +hot sauce +carne asade +breakfast burritos +dim sum +fried rice +honey chicken +horchata +green salsa +tortillas +quesadillas
+rolls +sushi rolls +wasabi +sushi bar +salmon +environment +drink menu +bar area +cocktails +brunch +chicken katsu +crunch +green tea +sake selection +hummus +mac and cheese +outdoor patio +seating area +oysters +drink menu +sushi selection +quality +lighting +brews +sangria +cheese plates
Information Gain Agent
The information gain recommendation agent chooses questions to ask the user by selecting question attributes that maximize the entropy of the resulting document set, in a manner similar to decision tree learning (Mitchell, 1997). Formally, we define a function infogain : Att × P(B) → R:
Experimental Setup
We follow the standard approach of using the attributes of an individual business as a simulation of a user's preferences (Chung, 2004;Young et al., 2010). For every business b ∈ B we form an information need composed of all of b's attributes:
I_b = {(att, att(b)) | att ∈ Att, att(b) ≠ ∅}
To evaluate a recommendation agent, we use the recall metric, which measures how well an information need is satisfied. For each information need I, let B_I be the set of businesses that satisfy the questions of an agent. We define the recall of the set of businesses with respect to the information need as

recall(B_I, I) = (1 / (|B_I| |I|)) Σ_{b ∈ B_I} Σ_{(att,val) ∈ I} 1[val ∈ att(b)]

We average recall across all information needs, yielding average recall.
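Read concretely, this recall metric can be computed as in the following sketch; representing each business as a dict of attribute-name to value-set is an assumption made for illustration.

```python
# Sketch of the recall metric: businesses are dicts mapping attribute name ->
# set of values, and an information need is a list of (att, val) pairs.
def recall(satisfying_businesses, information_need):
    if not satisfying_businesses or not information_need:
        return 0.0
    hits = sum(1 for b in satisfying_businesses
                 for att, val in information_need
                 if val in b.get(att, set()))
    return hits / (len(satisfying_businesses) * len(information_need))

def average_recall(results):
    """results: list of (satisfying_businesses, information_need) pairs."""
    return sum(recall(bs, need) for bs, need in results) / len(results)
```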
We compare against a random agent baseline that selects attributes att ∈ Att uniformly at random at each time step. Other recommendation dialog systems such as Young et al. (2010) select questions from a small fixed hierarchy, which is not applicable to our large set of attributes. Figure 1 shows the average recall for the random agent versus the information gain agent with varying sets of attributes. 'Top-level' repeatedly queries the user's top-level category preferences, 'Subtopic' additionally uses our topic modeling subcategories, and 'All' uses these plus the aspects extracted from reviews. We see that for sufficiently long dialogs, 'All' outperforms the other systems. The 'Subtopic' and 'Top-level' systems plateau after a few dialog steps once they've asked all useful questions. For instance, most businesses only have one or two top-level categories, so after the system has identified the top-level category that the user is interested in, it has no more good questions to ask. Note that the information gain agent starts dialogs with the top-level and appropriate subcategory questions, so it is only for longer dialogs that the fine-grained aspects boost performance.
Results
Below we show a few sample output dialogs from our 'All' information gain agent.
Conclusion
We presented a system for extracting large sets of attributes from user reviews and selecting relevant attributes to ask questions about. Using topic models to discover subtypes of businesses, a domain-specific sentiment lexicon, and a number of new techniques for increasing precision in sentiment aspect extraction yields attributes that give a rich representation of the restaurant domain. We have made this 1329-term sentiment lexicon for the restaurant domain available as a useful resource to the community. Our information gain recommendation agent gives a principled way to dynamically combine these diverse attributes to ask relevant questions in a coherent dialog. Our approach thus offers a new way to integrate the advantages of the curated hand-built attributes used in statistical slot and filler dialog systems, and the distributionally induced, highly relevant categories built by sentiment aspect extraction systems.
Algorithm 1: Procedure for evaluating a recommendation agent
Input: Information need I; set of businesses B; set of attributes Att; recommendation agent π; dialog length K
Output: Dialog history H; recommended businesses B
  Initialize dialog history H = ∅
  for step = 0; step < K; step++ do
    Select an attribute: att = π(B, H)
    Query user for the answer: val = I(att)
    Restrict the set of businesses: B = B|att=val
    Append answer: H = H ∪ {(att, val)}
  end
  Return (H, B)

infogain(att, B) = − Σ_{vals ∈ P(dom(att))} (|B_{att=vals}| / |B|) log (|B_{att=vals}| / |B|)

The agent then selects questions att ∈ Att that maximize the information gain with respect to the set of businesses satisfying the dialog history H:

π(B, H) = arg max_{att ∈ Att} infogain(att, B|_H)

4 Evaluation
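For concreteness, here is a small Python sketch of the information gain agent and the dialog loop of Algorithm 1. The dict-of-sets business representation and the handling of unasked attributes are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import defaultdict

# Sketch of the information gain agent and the dialog loop of Algorithm 1.
# Each business is a dict mapping attribute name -> set of values; the
# simulated user's information need is a dict of the target business's attributes.
def infogain(att, businesses):
    # group businesses by their (possibly empty) value set for this attribute
    groups = defaultdict(int)
    for b in businesses:
        groups[frozenset(b.get(att, set()))] += 1
    n = len(businesses)
    return -sum((c / n) * math.log(c / n) for c in groups.values())

def info_gain_agent(businesses, history):
    asked = {att for att, _ in history}
    candidates = {att for b in businesses for att in b} - asked
    return max(candidates, key=lambda att: infogain(att, businesses))

def run_dialog(information_need, businesses, agent, k):
    history = []
    for _ in range(k):
        att = agent(businesses, history)
        val = information_need.get(att)          # simulated user answer
        businesses = [b for b in businesses if val in b.get(att, set())]
        history.append((att, val))
    return history, businesses
```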
Figure 1: Average recall for each agent.
Table 1: A sample of subcategory topics with hand-labels and top words.
Table 2: Sample of Learned Sentiment Adjectives derived by this graph propagation method. The final lexicon has 1329 adjectives 4 , including 853 terms not in the original seed set. The lexicon is available for download. 5
Q: What kind of place do you want? A: American (New)
Q: What kind of American (New) do you want: bar, bistro, standard, burgers, brew pub, or brunch? A: bistro
Q: Do you want a place with a good patio? A: Yes

Q: What kind of place do you want? A: Chinese
Q: What kind of Chinese place do you want: buffet, dim sum, noodles, pan Asian, Panda Express, sit down, or veggie? A: sit down
Q: Do you want a place with a good lunch special? A: Yes

Q: What kind of place do you want? A: Mexican
Q: What kind of Mexican place do you want: dinner, taqueria, margarita bar, or tortas? A: Margarita bar
Q: Do you want a place with a good patio? A: Yes
2 We use the Topic Modeling Toolkit implementation: http://nlp.stanford.edu/software/tmt
3 Our results are consistent with the recent finding of Whitney and Sarkar (2012) that cautious systems are better when bootstrapping from seeds.
4 We manually removed 26 spurious terms which were caused by parsing errors or propagation to a neutral term.
5 http://nlp.stanford.edu/projects/yelp.shtml
Acknowledgments
Thanks to the anonymous reviewers and the Stanford NLP group for helpful suggestions. The authors also gratefully acknowledge the support of the Nuance Foundation, the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-13-2-0040, ONR grants N00014-10-1-0109 and N00014-13-1-0287 and ARO grant W911NF-07-1-0216, and the Center for Advanced Study in the Behavioral Sciences.
Sasha Blair-Goldensohn, Kerry Hannan, Ryan McDonald, Tyler Neylon, George A. Reis, and Jeff Reynar. 2008. Building a sentiment summarizer for local service reviews. In WWW Workshop on NLP in the Information Explosion Era.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022.
Derek Bridge, Mehmet H. Göker, Lorraine McGinty, and Barry Smyth. 2005. Case-based recommender systems. Knowledge Engineering Review, 20(3):315-320.
Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of HLT-NAACL 2010, pages 804-812.
Joyce Chai, Veronika Horvath, Nicolas Nicolov, Margo Stys, A. Kambhatla, Wlodek Zadrozny, and Prem Melville. 2002. Natural language assistant - a dialog system for online product recommendation. AI Magazine, 23:63-75.
Grace Chung. 2004. Developing a flexible spoken dialog system using simulation. In Proceedings of ACL 2004, pages 63-70.
Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of EACL 1997, pages 174-181.
Jeff Huang, Oren Etzioni, Luke Zettlemoyer, Kevin Clark, and Christian Lee. 2012. Revminer: An extractive interface for navigating reviews on a smartphone. In Proceedings of UIST 2012.
Yohan Jo and Alice H. Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, pages 815-824.
Karin Kipper-Schuler. 2005. Verbnet: A broad-coverage, comprehensive verb lexicon.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL 2003, pages 423-430.
I. Kopeček. 1999. Modeling of the information retrieval dialogue systems. In Proceedings of the Workshop on Text, Speech and Dialogue - TSD 99, Lecture Notes in Artificial Intelligence 1692, pages 302-307. Springer-Verlag.
Tom M. Mitchell. 1997. Machine Learning. McGraw-Hill, New York.
Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of ACL 2009, pages 226-234.
Cynthia A. Thompson, Mehmet H. Goeker, and Pat Langley. 2004. A personalized system for conversational recommendations. Journal of Artificial Intelligence Research (JAIR), 21:393-428.
Max Whitney and Anoop Sarkar. 2012. Bootstrapping via graph propagation. In Proceedings of ACL 2012, pages 620-628, Jeju Island, Korea.
Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. OpinionFinder: A system for subjectivity analysis. In Proceedings of HLT/EMNLP 2005 on Interactive Demonstrations, pages 34-35.
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech and Language, 24(2):150-174, April. |
9,787,601 | Graph-based Event Coreference Resolution | In this paper, we address the problem of event coreference resolution as specified in the Automatic Content Extraction (ACE | [
1578341,
11239061
] | Graph-based Event Coreference Resolution
Zheng Chen and Heng Ji
Queens College and The Graduate Center, The City University of New York
In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, Suntec, Singapore, August 2009. ACL and AFNLP.
In this paper, we address the problem of event coreference resolution as specified in the Automatic Content Extraction (ACE) 1 program. In contrast to entity coreference resolution, event coreference resolution has not received great attention from researchers. In this paper, we first demonstrate the diverse scenarios of event coreference by an example. We then model event coreference resolution as a spectral graph clustering problem and evaluate the clustering algorithm on ground truth event mentions using ECM F-Measure. We obtain the ECM-F scores of 0.8363 and 0.8312 respectively by using two methods for computing coreference matrices.
1 Introduction
Typically, an ACE Event Detection and Recognition (VDR) system consists of two steps: first, it detects all mentions of events with certain specified types occurring in the raw text (event mention detection) and second, it unifies the event mentions into equivalence classes so that all the mentions in a given class refer to an event (event coreference resolution). ACE defines the following terminologies related with VDR:
Event: a specific occurrence involving participants. An ACE event has six attributes (type, subtype, modality, polarity, genericity and tense), zero or more event arguments, and a cluster of event mentions.
Event trigger: the word that most clearly expresses an event's occurrence.
Event argument: an entity, or a temporal expression or a value that has a certain role (e.g., Time-Within, Place) in an event.
Event mention: a sentence (or a text span extent) that mentions an event, including a distinguished trigger and involving arguments.
In contrast to entity coreference, the scenarios in event coreference are more complicated, mainly because entity coreference is word (or phrase)-level coreference whereas event coreference is sentence-level coreference and therefore the coreferring event mentions may have more flexible linguistic structures than entity mentions. We provide an example to demonstrate this diversity. Table 1 shows the source text of a news story. As an example, we only tag the event mentions which have the event type and subtype of (Conflict:Attack). In each event mention, the trigger is surrounded by curly brackets, and arguments are underlined. Table 2 shows the tabular representation of those event mentions. Table 3 shows that the five event mentions in event EV1 corefer with each other. We summarize EV1 as follows: a bomb (E4-1) exploded in the restroom (E2-1) of a café (E1-1 or E1-2) during Tuesday morning's rush hour (combination of T1-1, T2-1 and T3-1). EV2 is a different attack event because the target (E6-1) in EV2 differs from the one (E1-3) in EV1. EV3 tells that the bombing attacks have occurred generically (thus the event attribute "genericity" is "General" whereas it is "Specific" in EV1 and EV2).
1 http://www.nist.gov/speech/tests/ace/
Event Coreference Resolution as Spectral Graph Clustering
We view the event coreference space as an undirected weighted graph in which the nodes represent all the event mentions in a document and the edge weights indicate the coreference confidence between two event mentions. In real implementation, we initially construct different graphs for separate event types, 2 such that, in each graph, all the event mentions have the same event type. Similar to (Nicolae and Nicolae, 2006), we formally define a framework for event coreference resolution. We then model event coreference resolution as a spectral graph clustering problem that optimizes the normalized-cut criterion (Shi and Malik, 2000). Such optimization can be achieved by computing the second generalized eigenvector, thus the name "spectral". In this paper, we do not try to propose a new spectral clustering algorithm or improve the existing algorithm. Instead, we focus on how to compute the coreference matrix (equivalently, the affinity matrix in Shi and Malik's algorithm) because a better estimation of the coreference matrix can reduce the burden on the clustering algorithm.
2 We view the 33 ACE event subtypes as event types.
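As a rough picture of how a coreference (affinity) matrix is consumed by a normalized-cut style clusterer, the sketch below uses scikit-learn's SpectralClustering over a toy matrix; the fixed number of clusters is an assumption made for illustration, whereas the paper's method instead applies recursive two-way cuts with a tuned threshold.

```python
# Sketch: cluster the event mentions of one event type given a symmetric
# coreference (affinity) matrix. sklearn's SpectralClustering stands in for
# the recursive normalized-cut procedure of Shi and Malik used in the paper.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_event_mentions(coref_matrix, n_clusters):
    affinity = np.asarray(coref_matrix)
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               assign_labels="discretize", random_state=0)
    # mentions sharing a label are placed in the same event cluster
    return model.fit_predict(affinity)

# Toy affinity over four mentions of the same event type:
toy = [[1.0, 0.9, 0.1, 0.2],
       [0.9, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.8],
       [0.2, 0.1, 0.8, 1.0]]
print(cluster_event_mentions(toy, n_clusters=2))
```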
Coreference Matrix
Method 1: Computing a Coreference Formula
Obviously, the trigger pair and the argument sets owned by two event mentions carry much information about whether one event mention corefers with the other. Based on a corpus, we compute the statistics about event mention pairs (with the same event type) listed in Table 4. Let t_i be the trigger in em_i, stem(t_i) be the stem of the trigger in em_i, sim(t_i, t_j) be the semantic similarity between the two triggers in em_i and em_j as computed in (Seco et al., 2004), and A_i be the argument (ID and ROLE) set in em_i. We further define four conjunction operators on argument pairs: one for pairs whose ID 3 and ROLE match, one for pairs whose ID matches but ROLE does not match, one for pairs whose ROLE matches but ID does not match, and one for pairs whose ID and ROLE do not match. We then propose a formula over these quantities to measure the coreference value between em_i and em_j. The strength of this formula is that it allows to give credit to different cases of trigger matching and argument pair matching between two event mentions.

Table 4. Statistics of event mention pairs
T11: in those coreferring event mention pairs, how many pairs use exactly the same triggers
T12: in those non-coreferring event mention pairs, how many pairs use exactly the same triggers
T21: in those coreferring event mention pairs, how many pairs do not have the same triggers, but have the same stems of triggers
T22: non-coreferring version of T21
T31: in those coreferring event mention pairs, how many pairs do not have the same triggers nor the same stems, but the semantic similarity between two triggers is higher than 0 in WordNet
T32: non-coreferring version of T31
T41: in those non-coreferring event mention pairs, how many pairs are not in T11 or T21 or T31
T42: non-coreferring version that is not T12 or T22 or T32
A11: in those coreferring event mention pairs, how many argument pairs whose ID and ROLE match
A12: non-coreferring version of A11
A21: in those coreferring event mention pairs, how many argument pairs whose ID matches but ROLE does not match
A22: non-coreferring version of A21
A31: in those coreferring event mention pairs, how many argument pairs whose ROLE matches but ID does not match
A32: non-coreferring version of A31
A41: in those non-coreferring event mention pairs, how many argument pairs whose ID and ROLE do not match
A42: non-coreferring version of A41
Method 2: Applying a Maximum Entropy Model
We train a maximum entropy model to produce the confidence values for coref(em_i, em_j). Each confidence value tells the probability that there exists coreference between event mention em_i and em_j:

p(c | em_i, em_j) = exp( Σ_k λ_k f_k(c, em_i, em_j) ) / Z(em_i, em_j)

where f_k(c, em_i, em_j) is a feature and λ_k is its weight; Z(em_i, em_j) is the normalizing factor. The feature sets applied in the model are listed in Table 5 by categories.
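A hedged sketch of such a mention-pair maximum entropy model, implemented as logistic regression over a handful of the Table 5 features; the mention representation and feature extraction here are illustrative, not the authors' feature code.

```python
# Sketch of a mention-pair maximum entropy model (logistic regression) whose
# predicted probability fills one cell of the coreference matrix. Only a few
# illustrative Table 5 features are shown; em1/em2 are simple dicts here.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def pair_features(em1, em2):
    return {
        "trigger_pair": em1["trigger"] + "_" + em2["trigger"],
        "exact_match": em1["trigger"] == em2["trigger"],
        "stem_match": em1["stem"] == em2["stem"],
        "sentence_dist": min(abs(em1["sent_id"] - em2["sent_id"]), 5),
        "overlap_num": len(set(em1["args"]) & set(em2["args"])),
    }

def train_pair_model(training_pairs):
    """training_pairs: list of ((em1, em2), label) with label 1 = coreferent."""
    X = [pair_features(a, b) for (a, b), _ in training_pairs]
    y = [label for _, label in training_pairs]
    model = make_pipeline(DictVectorizer(sparse=True), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    return model

def coref_confidence(model, em1, em2):
    # probability of the coreferent class (label 1) -> one coreference matrix entry
    return model.predict_proba([pair_features(em1, em2)])[0, 1]
```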
Experiments and Results
Data and Evaluation Metrics
We developed and tested the spectral clustering algorithm for event coreference resolution using the ACE 2005 English corpus, which contains 560 documents. We used the ground truth event mentions and evaluated our algorithm based on ECM F-Measure (Luo, 2005). We reserved 60 documents for testing purposes and used the remaining 500 documents for training/development purposes and for computing the statistics discussed above. We applied 10-fold cross-validation in the experiment comparing the two methods for computing the coreference matrix.
Statistics of Event Mention Pairs
The results of the statistics discussed in Section 3.1 are presented in Table 6.

Table 6. Results of statistics in 500 documents
T11=1042, T12=1297, T21=240, T22=840, T31=257, T32=2637, T41=784, T42=5628
A11=888, A12=1485, A21=31, A22=146, A31=542, A32=6849, A41=323, A42=3000

From Table 6, we observe that if two event mentions use the same trigger or if they have arguments whose ID and ROLE match, it is more probable for them to corefer with each other than in other cases.
Comparison of the Two Methods for Computing the Coreference Matrix
Figure 1 shows the ECM-F scores for both methods by varying the cut threshold in the clustering algorithm. Both methods obtain the highest ECM-F score at threshold 0.85, and method 1 performs slightly better than method 2 (0.8449 vs. 0.8418, significant at 85% confidence level, p<=0.1447). We obtained the ECM-F scores of 0.8363 and 0.8312 on the test set for method 1 and method 2 respectively. We also obtained two baseline ECM-F scores: one is 0.535 if we consider all the event mentions with the same event type as a cluster, the other is 0.7635 if we consider each event mention as a cluster.
Related Work
Earlier work on event coreference (e.g. Humphreys et al., 1997;Bagga and Baldwin, 1999) in MUC was limited to several scenarios, e.g., terrorist attacks, management succession, resignation. The ACE program takes a further step towards processing more fine-grained events. To the best of our knowledge, this paper is the first effort to apply graph-based algorithm to the problem of event coreference resolution. Nicolae and Nicolae (2006) proposed a similar graph-based framework for entity coreference resolution. However, in our task, the event mention has much richer structure than the entity mention, thus, it is possible for us to harness the useful information from both the triggers and the attached arguments in the event mentions.
Conclusions and Future Work
In this paper, we addressed the problem of event coreference resolution in a graph-based framework, and presented two methods for computing the coreference matrix. A practical event coreference resolver also depends on a high-performance event extractor. We will further study the impact of system-generated event mentions on the performance of our coreference resolver.
Figure 1. ECM-F scores for both methods
Table 1. Source text and event mentions
EM1: An {explosion} in a cafe at one of the capital's busiest intersections killed one woman and injured another Tuesday, police said.
EM2: Police were investigating the cause of the {explosion} in the restroom of the multistory Crocodile Cafe in the commercial district of Kizilay during the morning rush hour.
EM3: The {blast} shattered walls and windows in the building.
EM4: Ankara police chief Ercument Yilmaz visited the site of the morning blast but refused to say if a bomb had caused the {explosion}.
EM5: The {explosion} comes a month after [EM6: a bomb {exploded} at a McDonald's restaurant in Istanbul], causing damage but no injuries.
EM7: Radical leftist, Kurdish and Islamic groups are active in the country and have carried out {bombings} in the past.
Table 2. Tabular representation of event mentions

Table 3. Event coreference results
Event | Included event mentions
EV1 | {EM1, EM2, EM3, EM4, EM5}
EV2 | {EM6}
EV3 | {EM7}
Let coref be the function that computes the coreference confidence between two event mentions em_i and em_j. Let EM = {em_i : 1 ≤ i ≤ n} be the event mentions in the document and EV = {ev_j : 1 ≤ j ≤ m} be the events. Let chain : EM → EV be the function mapping from an event mention to an event, and let coref : EM × EM → [0,1]. Let ET = {et_k : 1 ≤ k ≤ p} be the event types. Thus for each event type et_k, we have a graph G_k(V_k, E_k), where V_k = {em | chain(em).type = et_k} and E_k = {(em_i, em_j, coref(em_i, em_j)) : em_i, em_j ∈ V_k}.
Table 5. EM (Event Mention)-pair features for the maximum entropy model (EM1: the first event mention, EM2: the second event mention)

Lexicon features:
type_subtype: pair of event type and subtype in EM1
trigger_pair: trigger pair of EM1 and EM2
pos_pair: part-of-speech pair of triggers of EM1 and EM2
nominal: 1 if the trigger of EM2 is nominal
exact_match: 1 if the spellings of triggers in EM1 and EM2 exactly match
stem_match: 1 if the stems of triggers in EM1 and EM2 match
trigger_sim: quantized semantic similarity score (0-5) using WordNet resource

Distance features:
token_dist: how many tokens between triggers of EM1 and EM2 (quantized)
sentence_dist: how many sentences EM1 and EM2 are apart (quantized)
event_dist: how many event mentions in between EM1 and EM2 (quantized)

Argument features:
overlap_num, overlap_roles: overlap number of arguments and their roles (role and id exactly match) between EM1 and EM2
prior_num, prior_roles: the number and the roles of arguments that only appear in EM1
act_num, act_roles: the number and the roles of arguments that only appear in EM2
coref_num: the number of arguments that corefer with each other but have different roles between EM1 and EM2
We view two argument IDs "E1-1" and "E1-2" as a match if they mention the same entity which is "E1"
Acknowledgments
This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. HR0011-06-C-0023 via 27-001022, and the CUNY Research Enhancement Program and GRTI Program.
A. Bagga and B. Baldwin. 1999. Cross-document event coreference: Annotations, experiments, and observations. In Proc. ACL-99 Workshop on Coreference and Its Applications.
C. Nicolae and G. Nicolae. 2006. Bestcut: A graph algorithm for coreference resolution. In EMNLP, pages 275-283, Sydney, Australia, July.
J. Shi and J. Malik. 1997. Normalized cuts and image segmentation. In Proc. of IEEE Conf. on Comp. Vision and Pattern Recognition, Puerto Rico.
K. Humphreys, R. Gaizauskas, and S. Azzam. 1997. Event coreference for information extraction. In Proceedings of the ACL Workshop on Operational Factors in Practical Robust Anaphora Resolution for Unrestricted Texts.
N. Seco, T. Veale, and J. Hayes. 2004. An intrinsic information content metric for semantic similarity in WordNet. In Proc. of ECAI-04, pp. 1089-1090.
X. Luo. 2005. On coreference resolution performance metrics. In Proc. of HLT-EMNLP. |
35,207,135 | Information Extraction from Biomedical Texts: Learning Models with Limited Supervision | Among the application domains of information extraction, the biomedical domain is one of the most important ones. This is due to the large amount of biomedical text sources including the vast scientific literature and collections of patient reports written in natural language. These sources contain a wealth of crucial knowledge that needs to be mined. Typical mining tasks regard entity recognition, entity-relation extraction, and event and event participant recognition. Recently we witness an interest in the recognition of spatial relationships between entities and of temporal relationships between events. One of the most important problems in information extraction regards dealing with a limited amount of examples that are manually annotated by experts and that can be used for training the extraction models. | [] | Information Extraction from Biomedical Texts: Learning Models with Limited Supervision
Marie-Francine Moens
KU Leuven, Leuven, Belgium
In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis (Louhi), Lisbon, Portugal, 17 September 2015. Association for Computational Linguistics.
Among the application domains of information extraction, the biomedical domain is one of the most important ones. This is due to the large amount of biomedical text sources including the vast scientific literature and collections of patient reports written in natural language. These sources contain a wealth of crucial knowledge that needs to be mined. Typical mining tasks regard entity recognition, entity-relation extraction, and event and event participant recognition. Recently we witness an interest in the recognition of spatial relationships between entities and of temporal relationships between events. One of the most important problems in information extraction regards dealing with a limited amount of examples that are manually annotated by experts and that can be used for training the extraction models.
In this talk we discuss how we can leverage knowledge contained in unlabelled texts and ontological knowledge about known relationships between the output labels used for the extractions. The former aspect especially focuses on how to automatically create novel training examples from the unlabelled data, the latter on how to integrate the relationships in models for structured machine learning during training and testing of the extraction models in the most efficient way. We show promising results and point to directions of future research. |
128,358,604 | GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection | In this paper we present GumDrop, Georgetown University's entry at the DISRPT 2019 Shared Task on automatic discourse unit segmentation and connective detection. Our approach relies on model stacking, creating a heterogeneous ensemble of classifiers, which feed into a metalearner for each final task. The system encompasses three trainable component stacks: one for sentence splitting, one for discourse unit segmentation and one for connective detection. The flexibility of each ensemble allows the system to generalize well to datasets of different sizes and with varying levels of homogeneity. | [
5187426,
20110212,
18483718,
207556454,
1452940,
13374927,
14277905,
6174034,
15049973,
1957433,
252796,
6743006
] | GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection
23 Apr 2019
Yue Yu, Yilun Zhu, Yang Liu, Yan Liu, Siyao Peng, Mackenzie Gong, and Amir Zeldes
Computer Science, Linguistics, Analytics, and CCT, Georgetown University
GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection
23 Apr 2019
In this paper we present GumDrop, Georgetown University's entry at the DISRPT 2019 Shared Task on automatic discourse unit segmentation and connective detection. Our approach relies on model stacking, creating a heterogeneous ensemble of classifiers, which feed into a metalearner for each final task. The system encompasses three trainable component stacks: one for sentence splitting, one for discourse unit segmentation and one for connective detection. The flexibility of each ensemble allows the system to generalize well to datasets of different sizes and with varying levels of homogeneity.
Introduction
Although discourse unit segmentation and connective detection are crucial for higher level shallow and deep discourse parsing tasks, recent years have seen more progress in work on the latter tasks than on predicting underlying segments, such as Elementary Discourse Units (EDUs). As the most recent overview on parsing in the framework of Rhetorical Structure Theory (RST, Mann and Thompson 1988) points out (Morey et al., 2017, 1322), "all the parsers in our sample except [two] predict binary trees over manually segmented EDUs". Recent discourse parsing papers (e.g. Li et al. 2016, Braud et al. 2017a) have focused on complex discourse unit span accuracy above the level of EDUs, attachment accuracy, and relation classification accuracy. This is due in part to the difficulty in comparing systems when the underlying segmentation is not identical (see Marcu et al. 1999), but also because of a relatively stable SOA accuracy of EDU segmentation as evaluated on the largest RST corpus, the English RST Discourse Treebank (RST-DT, Carlson et al. 2003), which already exceeded 90% accuracy in 2010 (Hernault et al., 2010).
However, as recent work (Braud et al., 2017b) has shown, performance on smaller or less homogeneous corpora than RST-DT, and especially in the absence of gold syntax trees (which are realistically unavailable at test time for practical applications), hovers around the mid 80s, making it problematic for full discourse parsing in practice. This is more critical for languages and domains in which relatively small datasets are available, making the application of generic neural models less promising.
The DISRPT 2019 Shared Task aims to identify spans associated with discourse relations in data from three formalisms: RST (Mann and Thompson, 1988), SDRT (Asher, 1993) and PDTB (Prasad et al., 2014). The targeted task varies across frameworks: Since RST and SDRT segment texts into spans covering the entire document, the corresponding task is to predict the starting point of new discourse units. In the PDTB framework, the basic locus identifying explicit discourse relations is the spans of discourse connectives which need to be identified among other words. In total, 15 corpora (10 from RST data, 3 from PDTB-style data, and 2 from SDRT) in 10 languages (Basque, Chinese, Dutch, English, French, German, Portuguese, Russian, Spanish, and Turkish) are used as the input data for the task. The heterogeneity of the frameworks, languages and even the size of the training datasets all render the shared task challenging: training datasets range from the smallest Chinese RST corpus of 8,960 tokens to the largest English PDTB dataset of 1,061,222 tokens, and all datasets have some different guidelines. In this paper, we therefore focus on creating an architecture that is not tailored only to resources like RST-DT, but takes into account the crucial importance of high accuracy sentence splitting for real-world data, generalizing well to different guidelines and datasets.
Our system, called GumDrop, relies on model stacking (Wolpert, 1992), which has been successfully applied to a number of complex NLP problems (e.g. Clark and Manning 2015, Friedrichs et al. 2017). The system uses a range of different rule-based and machine learning approaches whose predictions are all fed to a 'metalearner' or blender classifier, thus benefiting from both neural models where appropriate, and strong rule-based baselines coupled with simpler classifiers for smaller datasets. A further motivation for our model stacking approach is curricular: the system was developed as a graduate seminar project in the course LING-765 (Computational Discourse Modeling), and separating work into many sub-modules allowed each contributor to work on a separate sub-project, all of which are combined in the complete system as an ensemble. The system was built by six graduate students and the instructor, with each student focusing on one module (notwithstanding occasional collaborations) in two phases: work on a high-accuracy ensemble sentence splitter for the automatic parsing scenario (see Section 3.2), followed by the development of a discourse unit segmenter or connective detection module (Sections 3.3 and 3.4).
Previous Work
Following early work on rule-based segmenters (e.g. Marcu 2000, Thanh et al. 2004), Soricut and Marcu (2003) used a simple probabilistic model conditioning on lexicalized constituent trees, by using the highest node above each word that has a right-hand sibling, as well as its children. Like our approach, this and subsequent work below perform EDU segmentation as a token-wise binary classification task (boundary/no-boundary). In a more complex model, Sporleder and Lapata (2005) used a two-level stacked boosting classifier on syntactic chunks, POS tags, token and sentence lengths, and token positions within clauses, all of which are similar to or subsumed by some of our features below. They additionally used the list of English connectives from Knott (1996) to identify connective tokens. Hernault et al. (2010) used an SVM model with features corresponding to token and POS trigrams at and preceding a potential segmentation point, as well as features encoding the lexical head of each token's parent phrase in a phrase structure syntax tree and the same features for the sibling node on the right. More recently, Braud et al. (2017b) used a bi-LSTM-CRF sequence labeling approach on dependency parses, with words, POS tags, dependency relations and the same features for each word's parent and grand-parent tokens, as well as the direction of attachment (left or right), achieving F-scores of .89 on segmenting RST-DT with parser-predicted syntax, and scores in the 80s, near or above previous SOA results, for a number of other corpora and languages.
By contrast, comparatively little work has approached discourse connective detection as a separate task, as it is usually employed as an intermediate step for predicting discourse relations. Pitler and Nenkova (2009) used a Max Entropy classifier using a set of syntactic features extracted from the gold standard Penn Treebank (Marcus et al., 1993) parses of PDTB (Prasad et al., 2008) articles, such as the highest node which dominates exactly and only the words in the connective, the category of the immediate parent of that phrase, and the syntactic category of the sibling immediately to the left/right of the same phrase. Patterson and Kehler (2013) presented a logistic regression model trained on eight relation types extracted from PDTB, with features in three categories: Relation-level features such as the connective signaling the relation, attribution status of the relation, and its relevance to financial information; Argument-level features, capturing the size or complexity of each of its two arguments; and Discourse-level features, which incorporate the dependencies between the relation in question and its neighboring relations in the text.
Polepalli Ramesh et al. (2012) used SVM and CRF for identifying discourse connectives in biomedical texts. The Biomedical Discourse Relation Bank (Prasad et al., 2011) and PDTB were used for in-domain classifiers and novel domain adaptation respectively. Features included POS tags, the dependency label of tokens' immediate parents in a parse tree, and the POS of the left neighbor; domain-specific semantic features included several biomedical gene/species taggers, in addition to NER features predicted by ABNER (A Biomedical Named Entity Recognition).
GumDrop
Our system is organized around three ensembles which implement model stacking.
1. A trainable sentencer ensemble which feeds an off-the-shelf dependency parser
2. A discourse unit segmenter ensemble, operating on either gold or predicted sentences
3. A connective detector ensemble, also using gold or predicted sentences

Each module consists of several distinct sub-modules, as shown in Figure 1. Predicted labels and probabilities from sub-modules, along with features for every token position, are fed to a blender classifier, which outputs the final prediction for each token. By learning which modules perform better on which dataset, in which scenario (gold or predicted syntax) and in what linguistic environments, the ensemble remains robust at both tasks in both settings.
Since the sub-modules and the ensembles are trained on the same training data, a crucial consideration is to avoid over-reliance on modules, which may occur if the metalearner learns about module reliability from data that the sub-modules have already seen. To counter this, we use 5fold multitraining: each base module is trained five times, each time predicting labels for a disjoint held-out subset of the training data. These predictions are saved and fed to the ensemble as training data, thereby simulating the trained submodules' behavior when exposed to unseen data. At test time, live predictions are gathered from the sub-modules, whose reliability has been assessed via the prior unseen multitraining data. Table 1 gives an overview of the features we extract from the shared task data, and the modules using those features for sentence splitting and EDU segmentation/connective detection. Features derived from syntax trees are not available for sentence splitting, though automatic POS tagging using the TreeTagger (Schmid, 1994) was used as a feature for this task, due to its speed and good accuracy in the absence of sentence splits.
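The multitraining step amounts to collecting out-of-fold predictions from each base module, so the metalearner is always trained on predictions over data the module has not seen. A sketch with scikit-learn's cross_val_predict is shown below; treating every sub-module as an sklearn-style estimator is a simplifying assumption, since the real system also includes rule-based and neural modules.

```python
# Sketch of 5-fold multitraining: each base module's predictions on held-out
# folds of the training data become input features for the ensemble (blender),
# so the metalearner never sees predictions on data a module was trained on.
import numpy as np
from sklearn.model_selection import cross_val_predict

def multitrain(base_modules, X_train, y_train, n_folds=5):
    """base_modules: dict name -> unfitted sklearn-style estimator."""
    stacked_features = {}
    for name, module in base_modules.items():
        # Out-of-fold probability of a boundary at each token position
        oof = cross_val_predict(module, X_train, y_train, cv=n_folds,
                                method="predict_proba")[:, 1]
        stacked_features[name] = oof
        module.fit(X_train, y_train)   # refit on all data for test-time use
    return np.column_stack([stacked_features[n] for n in sorted(stacked_features)])
```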
Features
Most modules represent underlying words somehow, usually in a 3 or 5-gram window centered around a possible split point. An exception is the LR module, which uses only the first/last (f/l in Table 1) characters to prevent sparseness, but which also uses #char types features, which give the count of digits, consonant, vowel and other characters per word. Modules with 'top 200/100' use only the n most frequent items in the data, and otherwise treat each word as its POS category. Neural modules (DNN, RNN) use 300 dimensional FastText (Bojanowski et al., 2017) word embeddings, and in the case of the RNN, character embeddings are also used. For Chinese in the LR module, we use the first/last byte in each word instead of actual characters.
The feature genre gives the genre, based on a substring extracted from document names, in corpora with multiple genres. The features quot/paren indicate, for each token, whether it is between quotation marks or parentheses, allowing modules to notice direct speech or uncompleted parentheses which often should not be split. The feature sent% gives the quantile position of the current sentence in the document as a number between 0-1. This can be important for datasets in which position in the document interacts with segmentation behavior, such as abstracts in early portions of the academic genres in the Russian corpus, which often leave sentences unsegmented. The features deprel, headdist and depbracket are not available for sentence splitting, as they require dependency parses: they give the dependency relation, distance to the governing head token (negative/positive for left/right parents), and a BIEO (Begin/Inside/End/Out) encoded representation of the smallest relevant phrase boundaries covering each token for specific phrase types, headed by clausal functions such as 'advcl', 'xcomp' or 'acl' (see Figure 2). For the RNN, headdist is binned into 0, next-left/right, close-left/right (within 3 tokens) and far-left/right. The children feature set is unique to the Subtree module and is discussed below. (The full feature-to-module matrix is given in Table 1.)
Sentence Splitting
DNN Sentencer A simple Deep Neural Network classifier, using 300 dimensional word embeddings in a Multilayer Perceptron for tokens in a 5-9-gram window. Optimization on dev data determines the optimal window size for each dataset. Flexible window sizes enable the DNN model to remember the surrounding tokens in both small and large datasets. Starting and ending symbols ('<s>' and '</s>') for each document guarantee the model can always predict the correct label when a new document starts.
Logistic Regression Sentencer The Logistic Regression (LR) Sentencer uses sklearn's (Pedregosa et al., 2011) LogisticRegressionCV implementation to predict sentence boundaries given a variety of character-level information. The beginning/ending characters (first/last letter), auto-generated POS tags and character/frequency count representations (number of consonants/vowels/digits/other, token length, token frequency) are applied to a sliding 5-gram window (categorical features are converted into 1-hot features). One advantage of the LR model is its reliability for smaller datasets where character-level features prevent sparseness (including the top 200 feature decreases performance).
Wiki-Based Sentencer The Wiki-Based Sentencer relies on the frequencies and ratios of paragraph-initial tokens extracted from Wikipedia articles obtained from Wikipedia database dumps for all languages. 1 The rationale is that even though we have no gold sentence splits for Wikipedia, if a token occurs paragraph-initial, then it must be sentence-initial. For each Wiki paragraph, we extract the first "sentence" based on text up to the first sentence final character (./?/!), and then the first word is obtained based on automatic tokenization. Though this approach is coarse, we are able to get a good approximation of frequently initial words thanks to the large data. The frequencies and ratios of tokens being sentence initial are recorded, and thresholds of frequency>10 and ratio > 0.5 are set to collect the most relevant tokens. The main purpose of this module is to capture potential sentence split points such as headings, which are not followed by periods (e.g. Introduction in English).
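A rough sketch of the statistics this module relies on is given below; it is an illustrative approximation (whitespace tokenization, simple sentence-final splitting) rather than the exact extraction pipeline used for the Wikipedia dumps.

```python
# Illustrative sketch of the Wiki-based sentencer statistics: count how often
# each token occurs paragraph-initially and keep tokens above the frequency
# and ratio thresholds mentioned above.
import re
from collections import Counter

def initial_token_stats(paragraphs, min_freq=10, min_ratio=0.5):
    initial_counts, total_counts = Counter(), Counter()
    for par in paragraphs:
        # First "sentence" up to the first sentence-final character, then its first word
        first_sent = re.split(r"[.?!]", par, maxsplit=1)[0]
        tokens = first_sent.split()
        if not tokens:
            continue
        initial_counts[tokens[0]] += 1
        for tok in par.split():
            total_counts[tok] += 1
    return {
        tok: initial_counts[tok] / max(total_counts[tok], initial_counts[tok])
        for tok in initial_counts
        if initial_counts[tok] > min_freq
        and initial_counts[tok] / max(total_counts[tok], initial_counts[tok]) > min_ratio
    }

# Tokens such as "Introduction" survive the thresholds if they are frequent and
# almost always paragraph-initial in the dump, flagging heading-like split points.
```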
UDPipe + NLTK Additionally, we used UDPipe and NLTK's freely available models as predictors for the ensemble. For Simplified Chinese, we retrained UDPipe using data from the Chinese Treebank, not overlapping CDTB's shared task data.

Figure 2: Dependency features from a sentence fragment for a window surrounding 'given' in SubtreeSegmenter.
EnsembleSentencer As a metalearner receiving input from the base-modules, we used tree-based algorithms selected via optimization on dev data, either RandomForest, ExtraTrees, GradientBoosting (using sklearn's implementation), or XGBoost (Chen and Guestrin, 2016). In addition to the submodules' probability estimates, the metalearner was given access to token features in a trigram window, including word identity (for the top 100 items), POS tags, and orthographic case.
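The sketch below illustrates this stacking step; the feature layout and estimator settings are illustrative assumptions, not the tuned per-dataset configuration.

```python
# Hedged sketch of the ensemble metalearner: base-module probabilities and
# encoded token features are stacked column-wise and fed to a tree ensemble.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def fit_metalearner(submodule_probs, token_features, gold_labels):
    """submodule_probs: (n_tokens, n_modules) out-of-fold probabilities;
    token_features: (n_tokens, n_feats) encoded word/POS/case features."""
    X = np.hstack([submodule_probs, token_features])
    meta = ExtraTreesClassifier(n_estimators=300, random_state=0)
    meta.fit(X, gold_labels)
    return meta

def predict_boundaries(meta, submodule_probs, token_features):
    X = np.hstack([submodule_probs, token_features])
    return meta.predict(X)  # 1 = sentence boundary, 0 = no boundary
```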
Discourse Unit Segmentation
The feature space for segmentation is much larger than for sentence splitting, due to the availability of syntactic features (cf. Table 1). Additionally, as the usefulness of features varies across datasets (for example, some languages use only the UPOS column, or UPOS is trivially predictable from XPOS), we performed automatic variable filtering per dataset for both the Subtree and the Ensemble module below. We removed all categorical variables with a Theil's U value of implication above .98 (meaning some feature A is predictable based on some feature B), and for numerical variables, based on Pearson's r>0.95.

SubtreeSegmenter This module focuses on dependency subgraphs, looking at a trigram around the potential split point. In addition to word, orthographic case, POS, and deprel features from Table 1, the module uses a children feature set, extracting information for the node token, neighbors, parent and grandparent, including:
• their labels and depth (rank) in the tree
• labels of closest/farthest L/R children
• left/right span length and clause BIOE
• whether L/R neighbors share their parent
The features are illustrated in Figure 2. If we consider a split at the node word 'given', we collect features for two tokens in each direction, the parent ('ignore') and grandparent ('allowed'). The left span of children of 'given' is 1 token long, and the right 2 tokens long. We additionally collect for each of these tokens whether they have the same parent as their neighbor to the right/left (e.g. 'ants' has the same parent as 'as'), as well as the nearest and farthest dependency label on descendants to each side of the node (here, mark for both closest and farthest left child of 'given', and det (closest) and obj (farthest) on the right). The BIOE bracket feature is a flattened 'chunk' feature indicating clauses opening and closing (B-ADVCL, etc.). These features give a good approximation of the window's syntactic context, since even if the split point is nested deeper than a relevant clausal function, discrepancies in neighbors' dependency features, and distances implied by left/right spans along with dependency functions, allow the reconstruction of pertinent subtree environments for EDU segmentation. The feature count varied between 86-119 (for rus.rst.rrt and eng.sdrt.stac respectively), due to automatic feature selection.
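A simplified sketch of a few of these children features is shown below; it uses direct children only (the actual module also considers descendants on each side), and the toy fragment, indices and feature names are illustrative.

```python
# Hedged sketch: given token head indices and deprels, compute left/right child
# span lengths and the closest/farthest child deprel on each side of a node.
def children_features(node, heads, deprels):
    left = [i for i, h in enumerate(heads) if h == node and i < node]
    right = [i for i, h in enumerate(heads) if h == node and i > node]
    return {
        "left_span_len": (node - min(left)) if left else 0,
        "right_span_len": (max(right) - node) if right else 0,
        "closest_left_child": deprels[max(left)] if left else "_",
        "farthest_left_child": deprels[min(left)] if left else "_",
        "closest_right_child": deprels[min(right)] if right else "_",
        "farthest_right_child": deprels[max(right)] if right else "_",
    }

# Toy fragment "as ants are given a task" (indices 0-5); 'given' (index 3) is the node
heads   = [3, 3, 3, -1, 5, 3]
deprels = ["mark", "nsubj", "aux", "root", "det", "obj"]
print(children_features(3, heads, deprels))
```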
BOWCounter Rather than predicting exact split points, the BOWCounter attempts to predict the number of segments in each sentence, using a Ridge regressor with regularization optimized via cross-validation. The module uses the top 200 most frequent words as well as POS tags in a bag of words model and predicts a float which is fed directly to the ensemble. This allows the module to express confidence, rather than an integer prediction. We note that this module is also capable of correctly predicting 0 segmentation points in a sentence (most frequent in the Russian data).
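A minimal sketch of this idea, using plain word counts only (the actual module also uses POS tags and dataset-specific tuning), could look as follows.

```python
# Illustrative sketch of the BOWCounter: predict the number of EDUs per sentence
# with a Ridge regressor over bag-of-words counts; the raw float prediction is
# passed to the ensemble as a confidence-like feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline

train_sents = ["Although it rained , we left early .",
               "The paper was accepted .",
               "He said that he would come if possible ."]
train_n_edus = [2, 1, 3]  # toy gold segment counts

model = make_pipeline(
    CountVectorizer(max_features=200, token_pattern=r"\S+"),  # top frequent items
    RidgeCV(alphas=[0.1, 1.0, 10.0]),                         # regularization via CV
)
model.fit(train_sents, train_n_edus)
print(model.predict(["We stayed because it rained ."]))  # float, fed to the ensemble
```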
RNNSegmenter To benefit from the predictive power of neural sequence models and word embeddings with good coverage for OOV items, we used NCRF++ (Yang and Zhang, 2018), a bi-LSTM/CNN-CRF sequence labeling framework. Features included GloVe word embeddings for English (Pennington et al., 2014) and FastText embeddings (Bojanowski et al., 2017) for other languages, trainable character embeddings, as well as the features in Table 1, such as POS tags, dependency labels, binned distance to parent, genre, and BIEO dependency brackets, all encoded as dense embeddings. We optimized models for each dataset, including using CNN or LSTM encoding for character and word embeddings.
Ensemble Segmenter For the metalearner we used XGBoost, which showed high accuracy across dataset sizes. The ensemble was trained on serialized multitraining data, produced by training base-learners on 80% of the data and predicting labels for each 20% of the training data separately. At test time, the metalearner then receives live predictions from the sub-modules, whose reliability has been assessed using the multitraining data. In addition to base module predictions, the metalearner is given access to the most frequent lexemes, POS tags, dependency labels, genre, sentence length, and dependency brackets, in a trigram window.
Connective Detection
Frequency-based Connective Detector This module outputs the ratios at which sequences of lexical items have been seen as connectives in training data, establishing an intelligent 'lookup' strategy for the connective detection task. Since connectives can be either a single B-CONN or a B-CONN followed by several I-CONNs, we recover counts for each attested connective token sequence up to 5 tokens. For test data, the module reports the longest possible connective sequence containing a token and the ratio at which it is known to be a connective, as well as the training frequency of each item. Rather than select a cutoff ratio for positive prediction, we allow the ensemble to use the ratio and frequency dynamically as features.
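The following is a simplified sketch of these lookup statistics; it treats any span of connective-labeled tokens as an attested connective, rather than using the exact B-Conn/I-Conn span structure of the annotations.

```python
# Illustrative sketch: for every token sequence (up to 5 tokens) attested as a
# connective in training, store how often it was a connective vs. how often it
# occurred at all; ratio and frequency are later used dynamically as features.
from collections import Counter

def connective_ratios(training_tokens, training_is_conn, max_len=5):
    conn_counts, total_counts = Counter(), Counter()
    n = len(training_tokens)
    for i in range(n):
        for span in range(1, max_len + 1):
            if i + span > n:
                break
            seq = tuple(training_tokens[i:i + span])
            total_counts[seq] += 1
            if all(training_is_conn[i:i + span]):
                conn_counts[seq] += 1
    return {seq: (conn_counts[seq] / total_counts[seq], total_counts[seq])
            for seq in conn_counts}

toks = ["he", "left", "as", "soon", "as", "possible", "as", "expected"]
conn = [False, False, True, True, True, False, False, False]
stats = connective_ratios(toks, conn)
print(stats[("as", "soon", "as")])  # (1.0, 1): ratio and frequency for the ensemble
```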
RNN Connective Detector
This module is architecturally identical to the RNN EDU segmenter, but since connective labels are non-binary and may form spans, it classifies sequences of tokens with predicted connective types (i.e. B-CONN, I-CONN or not a connective). Rather than predicted labels, the system reports probabilities with which each label is suspected to apply to tokens, based on the top 5 optimal paths as ranked by the CRF layer of NCRF++'s output.
Ensemble Connective Detector
The connective ensemble is analogous to the segmenter ensemble, and relies on a Random Forest classifier fed the predicted labels and probabilities from base connective detectors, as well as the same features fed to the segmenter ensemble above.
Results
Sentence Splitting Although not part of the shared task, we report results for our EnsembleSentencer and LR module (best sub-module on average) next to a punctuation-based baseline (split on '.', '!', '?' and Chinese equivalents) and NLTK's (Bird et al., 2009) sentence tokenizer (except for Chinese, which is not supported). Since most sentence boundaries are also EDU boundaries, this task is critical, and Table 2 shows the gains brought by using the ensemble. GumDrop's performance is generally much higher than both baselines, except for the Portuguese corpus, in which both the system and the baseline make exactly 2 precision errors and one recall error, leading to an almost perfect tied score of 0.988. Somewhat surprisingly, NLTK performs worse on average than the conservative strategy of using sentence-final punctuation. The LR module is usually slightly worse than the ensemble, but occasionally wins by a small margin.

Discourse Unit Segmentation

Table 3 gives scores for both the predicted and gold syntax scenarios. In order to illustrate the quality of the submodules, we also include scores for Subtree (the best non-neural model) and the RNN (best neural model), next to the ensemble. The baseline is provided by assuming EDUs overlap exactly with sentence boundaries. Overall the results compare favorably with previous work and exceed the previously reported state of the art for the benchmark RST-DT dataset, in both gold and predicted syntax (to the best of our knowledge, 93.7 and 89.5 respectively). At the same time, the ensemble offers good performance across dataset sizes and genres: scores are high on all English datasets, covering a range of genres, including gold STAC (chat data), as well as on some of the smaller datasets, such as Dutch, French and German (only 17K, 22K and 26K training tokens each). Performance is worse on the SCTB corpora and Russian, which may be due to low-quality parses in the gold scenario, and some inconsistencies, especially in the Russian data, where academic abstracts and bibliographies were sometimes segmented and sometimes not. Comparing the ensemble to the RNN or Subtree modules individually shows that although they each offer rather strong performance, the ensemble outperforms them for all datasets, except German, where Subtree outperforms it by a small margin, and STAC, where the RNN is slightly better, both showing just half a point of improvement.

For automatically parsed data, the table clearly shows that eng.sdrt.stac, eng.rst.gum and zho.rst.sctb are the most problematic, in the first case since chat turns must be segmented automatically into sentences. This indicates that a trustworthy sentencer is crucial for discourse unit segmentation and thus very useful for this shared task. Here the EnsembleSentencer brings results up considerably from the punctuation-based baseline. The ensemble achieves top performance for most datasets and on average, but the RNN performs better on French, Subtree on Portuguese, and both are tied for Spanish RST-STB.
Connective Detection Results for connective detection are shown in Table 4. As a baseline, we consider assigning each word in the test data a connective label if and only if it is attested exclusively as a connective in the training set (case-sensitive). As the results show, the baseline has low recall but high precision, correlated with the size of the corpus (as exhaustivity of exclusive connective words increases with corpus size).
The frequency-based connective detector gives a reasonable result with a rather simple strategy, using a threshold of 0.5 as the connective detection ratio. More importantly, it is useful as input for the ensemble that outperforms the sequence labeling RNN by itself on every dataset. We suspect at least two factors are responsible for this improvement: firstly, the imbalanced nature of connective annotations (the vast majority of words are not connectives) means that the RNN achieves over 99% classification accuracy, and may have difficulty generalizing to rare but reliable connectives. Secondly, the RNN may overfit spurious features in the training data, to which the frequency detector is not susceptible. Coupled with the resistance of tree ensembles to overfitting and imbalanced problems, the ensemble is able to give a better solution to the task.
Error Analysis
EDU Segmenter
In both gold and predicted syntax scenarios, the RST corpora in Russian, Spanish and Chinese (rus.rst.rrt, spa.rst.sctb and zho.rst.sctb) achieve the lowest F-scores on this task (cf. Table 3). Leaving the sentencer performance aside, this error analysis for EDU segmentation will mainly focus on the gold syntax scenario of these three corpora.

Subordinating Conjunctions (SCONJ) GumDrop sometimes fails when there is an ambiguity between adpositions and subordinating conjunctions. Words that can function as both cause problems for segmentation, since subordinate clauses are discourse units but adpositional phrases are not in most datasets. Ambiguous tokens include to, by, after, before in English, en ('in'), de ('of'), con ('with'), por ('by') in Spanish, as well as zai ('at') in Chinese. Classifying the boundary of subordinate clauses is another problem. The depbracket feature can identify the beginning of a subordinate clause when the main clause precedes it. However, when they are in reverse order as in Figure 3, GumDrop fails to identify the beginning of the second discourse unit, possibly due to the absence of a second B-feature at jiaoshi.

Coordinating Conjunctions (CCONJ) Only particular types of coordinated structure consist of two discourse units in different corpora, e.g. VP coordination, or each coordinate predicate having its own subject, etc. For example, in eng.rst.gum, two coordinated verb phrases ([John is athletic but hates hiking]) are annotated as one discourse unit, whereas [John is athletic] [but he hates hiking] is divided into two units, since both coordinates have their own subjects. Additionally, if one coordinate VP has a dependent adverbial clause, multiple units are annotated. However, even with dependency features included in GumDrop, precision and recall errors happen with different coordinating conjunctions. These include and, or in English, y ('and'), o ('or') in Spanish, and i ('and'), a ('but'), ili ('or') in Russian.

Enumerations and Listings In rus.rst.rrt, the special combination of a number, a backslash and a period, e.g. 1\. , 2\. etc., is used for enumeration. However, their dependency labels vary: root, flat, nmod etc. Due to the instability of the labels, these tokens may result in recall errors, suggesting possible improvements via parser postprocessing. Similar errors also occur with 1, 2 in Spanish and variants of hyphens/dashes in Russian.
Connective Detection
Co-occurring Connective Spans Unlike EDU segmentation, where only splits are marked, connectives are spans that consist of a mandatory B-Conn and possible I-Conn labels. However, in Chinese, it is possible for a connective to consist of discontinuous spans. In (1), both zai 'at' and the localizer zhong 'in' are connectives and are required to co-occur in the context. However, the system fails to capture the relationship between them.
(1) zai cunmin zizhi zhong ...
    P:at villager autonomy LC:in
    B-Conn               B-Conn
    'Under the autonomy of villagers...'

Syntactic Inversions Syntactic inversion as a connective is also problematic, since no content words are involved. For instance, though the system is able to identify B-Conn in both (2) and (3), it is hard to determine whether content words, such as the verbs (fueling and yinrenzhumu), belong to the connective span or not. The model can potentially be improved by handling these using dependency features.
(2) Further fueling the belief that ...
B-Conn I-Conn
(3) ... geng yinrenzhumude de shi ...
        more striking      DE COP
        B-Conn             I-Conn
    'the more striking thing is that ...'
Conclusion and Future Work
A main lesson learned from the present work has been that while RNNs perform well on large and consistent datasets, such as RST-DT, they are not as robust when dealing with smaller datasets. This was especially apparent in the predicted syntax scenario, where decision tree ensembles outperformed the RNN on multiple datasets. At the same time, the model stacking approach offers the advantage of not having to choose between neural and tree-based models, by letting a metalearner learn who to believe and when.
Although we hope these results on the shared task dataset represent progress on discourse unit segmentation and connective detection, we would also like to point out that high accuracy (95% or better) is still out of reach, and especially so for languages with fewer resources and in the realistic 'no gold syntax' scenario. Additionally, the architecture used in this paper trades improvements in accuracy for a higher level of complexity, including complex training regimes due to multitraining and a variety of supporting libraries. In future work, we plan to integrate a simplified version of the system into tools that are easier to distribute. In particular, we aim to integrate automatic segmentation facilities into rstWeb (Zeldes, 2016), an open source RST editor interface, so that end users can more easily benefit from system predictions.
Table 1: Features for sentence splitting and EDU segmentation modules.
Table 2: GumDrop sentence splitting performance.
Table 3: Subtree, RNN and full GumDrop discourse unit segmentation performance.
Figure 3: Example of a main clause preceded by a subordinate clause in zho.rst.sctb that causes a Recall Error (RErr) on the second instance of BeginSeg.

Table 4: Connective detection performance.

Gold syntax       Baseline            Freq                RNN                 GumDrop
corpus            P     R     F       P     R     F       P     R     F       P     R     F
eng.pdtb.pdtb     .964  .022  .044    .836  .578  .683    .859  .871  .865    .879  .888  .884
tur.pdtb.tdb      .333  .001  .002    .786  .355  .489    .759  .820  .788    .766  .816  .790
zho.pdtb.cdtb     .851  .259  .397    .715  .618  .663    .726  .628  .674    .813  .702  .754
mean              .716  .094  .148    .779  .517  .612    .781  .773  .776    .819  .802  .809

Pred syntax       Baseline            Freq                RNN                 GumDrop
corpus            P     R     F       P     R     F       P     R     F       P     R     F
eng.pdtb.pdtb     .964  .022  .044    .836  .578  .683    .811  .798  .805    .846  .828  .837
tur.pdtb.tdb      .333  .001  .002    .786  .355  .489    .761  .821  .790    .768  .817  .792
zho.pdtb.cdtb     .851  .259  .397    .715  .618  .663    .705  .590  .642    .806  .673  .734
mean              .716  .094  .148    .779  .517  .612    .759  .736  .746    .806  .773  .788
Figure 1: System architecture. The raw text from corpora without gold syntax is first split into sentences by the ensemble sentencer. Sentences are then parsed using UDPipe. Corpora with predicted or gold syntax can then be utilized for discourse unit segmentation and connective detection.
Traditional Chinese characters were converted into simplified Chinese to be consistent with shared task data.
Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer, Dordrecht.
Steven Bird, Edward Loper, and Ewan Klein. 2009. Natural Language Processing with Python. O'Reilly, Sebastopol, CA.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135-146.
Chloé Braud, Maximin Coavoux, and Anders Søgaard. 2017a. Cross-lingual RST discourse parsing. In Proceedings of EACL 2017, pages 292-304, Valencia, Spain.
Chloé Braud, Ophélie Lacroix, and Anders Søgaard. 2017b. Does syntax help discourse segmentation? Not so much. In Proceedings of EMNLP 2017, pages 2432-2442, Copenhagen.
Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Current and New Directions in Discourse and Dialogue, Text, Speech and Language Technology 22, pages 85-112. Kluwer, Dordrecht.
Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In KDD '16 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794, San Francisco, CA.
Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015), pages 1405-1415, Beijing.
Jasper Friedrichs, Debanjan Mahata, and Shubham Gupta. 2017. InfyNLP at SMM4H task 2: Stacked ensemble of shallow convolutional neural networks for identifying personal medication intake from Twitter. In Proceedings of SMM4H@AMIA 2017, Washington, DC.
Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue and Discourse, 1(3):1-33.
Alistair Knott. 1996. A Data-Driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, University of Edinburgh.
Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural networks. In Proceedings of EMNLP 2016, pages 362-371, Austin, TX.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press, Cambridge, MA.
Daniel Marcu, Estibaliz Amorrortu, and Magdalena Romera. 1999. Experiments in constructing a corpus of discourse trees. In Proceedings of the ACL Workshop Towards Standards and Tools for Discourse Tagging, pages 48-57, College Park, MD.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Mathieu Morey, Philippe Muller, and Nicholas Asher. 2017. How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT. In Proceedings of EMNLP 2017, pages 1319-1324, Copenhagen, Denmark.
Gary Patterson and Andrew Kehler. 2013. Predicting the presence of discourse connectives. In Proceedings of EMNLP 2013, pages 914-923.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, and Vincent Dubourg. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532-1543, Doha, Qatar.
Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 13-16, Suntec, Singapore.
Balaji Polepalli Ramesh, Rashmi Prasad, Tim Miller, Brian Harrington, and Hong Yu. 2012. Automatic discourse connective detection in biomedical text. Journal of the American Medical Informatics Association, 19(5):800-808.
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008), pages 2961-2968, Marrakesh, Morocco.
Rashmi Prasad, Susan McRoy, Nadya Frid, Aravind Joshi, and Hong Yu. 2011. The Biomedical Discourse Relation Bank. BMC Bioinformatics, 12(1):188.
Rashmi Prasad, Bonnie Webber, and Aravind Joshi. 2014. Reflections on the Penn Discourse Treebank, comparable corpora, and complementary annotation. Computational Linguistics, 40(4):921-950.
Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the Conference on New Methods in Language Processing, pages 44-49, Manchester, UK.
Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of HLT-NAACL 2003, pages 149-156, Edmonton.
Caroline Sporleder and Mirella Lapata. 2005. Discourse chunking and its application to sentence compression. In Proceedings of EMNLP 2005, pages 257-264, Vancouver.
Huong Le Thanh, Geetha Abeysinghe, and Christian Huyck. 2004. Generating discourse structures for written text. In Proceedings of COLING 2004, pages 329-335, Geneva, Switzerland.
David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5(2):241-259.
Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 74-79, Melbourne.
Amir Zeldes. 2016. rstWeb - a browser-based annotation interface for Rhetorical Structure Theory and discourse relations. In Proceedings of NAACL-HLT 2016 System Demonstrations, pages 1-5, San Diego, CA.
226,262,427 | Biomedical Event Extraction as Sequence Labeling | We introduce Biomedical Event Extraction as Sequence Labeling (BEESL), a joint endto-end neural information extraction model. BEESL recasts the task as sequence labeling, taking advantage of a multi-label aware encoding strategy and jointly modeling the intermediate tasks via multi-task learning. BEESL is fast, accurate, end-to-end, and unlike current methods does not require any external knowledge base or preprocessing tools. BEESL outperforms the current best system (Li et al., 2019) on the Genia 2011 benchmark by 1.57% absolute F1 score reaching 60.22% F1, establishing a new state of the art for the task. Importantly, we also provide first results on biomedical event extraction without gold entity information. Empirical results show that BEESL's speed and accuracy makes it a viable approach for large-scale real-world scenarios. 1 | [
10743051,
8247565,
5071894,
51878680,
174800674,
67788603,
102351547,
2941631,
3626819,
17651150,
14158419,
67855320,
67855842,
52967399,
17905517,
3994096,
528369,
53047545,
29584126
] | Biomedical Event Extraction as Sequence Labeling
Association for Computational Linguistics, November 16-20, 2020.
Alan Ramponi [email protected]
Department of Information Engineering and Computer Science
University of Trento
Italy
Centre for Computational and Systems Biology (COSBI)
Microsoft Research -University of Trento
Italy
♦ ♣ Rob Van Der Goot
Department of Computer Science
IT University of Copenhagen
Denmark
Rosario Lombardo [email protected]
Centre for Computational and Systems Biology (COSBI)
Microsoft Research -University of Trento
Italy
Barbara Plank
Department of Computer Science
IT University of Copenhagen
Denmark
Biomedical Event Extraction as Sequence Labeling
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, November 16-20, 2020.
We introduce Biomedical Event Extraction as Sequence Labeling (BEESL), a joint end-to-end neural information extraction model. BEESL recasts the task as sequence labeling, taking advantage of a multi-label aware encoding strategy and jointly modeling the intermediate tasks via multi-task learning. BEESL is fast, accurate, end-to-end, and unlike current methods does not require any external knowledge base or preprocessing tools. BEESL outperforms the current best system (Li et al., 2019) on the Genia 2011 benchmark by 1.57% absolute F1 score, reaching 60.22% F1 and establishing a new state of the art for the task. Importantly, we also provide first results on biomedical event extraction without gold entity information. Empirical results show that BEESL's speed and accuracy make it a viable approach for large-scale real-world scenarios. 1
Introduction
Biomedical event extraction provides invaluable means for assisting domain experts in the curation of knowledge bases and biomolecular pathways (Ananiadou et al., 2010). While the task has received significant attention in research over the last decade, it remains challenging. Progress has been rather stagnating (see Figure 1).
1 The source code is available at https://github.com/cosbi-research/beesl.

Figure 1: Performance of biomedical event extraction on the BioNLP Genia 2011 test set over time.

Events are typically highly complex and nested structures, which require deep contextual knowledge to resolve. This is particularly the case for biomedical NLP (Kim et al., 2011), where biomolecular events can be nested (Miwa et al., 2014) and long-distance arguments are frequent (Li et al., 2019). Figure 2 shows an example with four events. Each event consists of an event mention (trigger) and one or more arguments. For instance, there is a +REGULATION event triggered by the
span "induced", with a PROTEIN entity (i.e., "IL-12") as CAUSE and a nested +REGULATION event (i.e., "activation") as THEME. Many state-of-theart biomedical event extraction systems still work as a pipeline and extract event triggers and their arguments independently (Björne and Salakoski, 2018;Li et al., 2019). They typically employ dependency parsing as features in a CNN model ensemble (Björne and Salakoski, 2018) or in Tree-LSTMs with knowledge bases (Li et al., 2019). We propose a new approach for biomedical event extraction by casting it as a sequence labeling task (BEESL). Our approach is conceptually simple: we convert the event structures into a representation suitable for sequence labeling, and leverage a multi-label aware decoder with BERT (Devlin et al., 2019) in a multi-task sequence labeling model. This reduces the problem to predicting a structured output for an input sequence to wordlevel tagging decisions. Compared to previous alternatives (cf. Section 7) which cast event extraction as syntactic or semantic tree-or graph-parsing task, this leads to a faster, joint model which also mitigates error propagation of locally-optimized classifier pipelines (Björne and Salakoski, 2018 Contributions To the best of our knowledge, we are the first to cast biomedical event extraction as sequence labeling. We demonstrate that BEESL is an attractive and efficient solution to extract biomedical events. We evaluate it on the BioNLP Genia 2011 benchmark, obtaining a new state of the art (cf. Figure 1), while gaining on efficiency. We additionally provide empirical results of the impact of alternative multi-task encodings, and to the best of our knowledge, the first results of biomedical event extraction without assuming gold entities.
Encoding Event Structures
This section introduces the event structures and how we encode them for sequence labeling.
Event structures
Events are structured representations which comprise multiple information units (Figure 2, top). An event is anchored to a trigger, a text span which indicates the presence of an event (Figure 2, rounded boxes). Each event has one or more arguments, namely entities or other events ( Figure 2, end of arrows), which are assigned a role in the event (Figure 2, labels on arrows). For example, an EXPRES-SION event is indicated in Figure 2 at "production" involving the PROTEIN "IL-10" as its argument. Nested structures are possible and frequent. For instance, the +REGULATION event centered on "activation" is both argument of the "induced"-anchored event as well as the "promote"-anchored event.
Sequence labeling encoding
Given [x 1 , ..., x n ] a sequence of n tokens, we encode event structures as token-level labels [y 1 , ..., y n ], to reduce the task to a sequence labeling problem. Adopting dependency parsing terminology, we encode the label y i for each token x i as a tuple d, r, h , where d is the dependent and refers to the token and its mention type (either trigger, entity, or nothing), r is the relation and used to refer to its role, and head (h) denotes the event the token refers to (Figure 2, bottom). In more detail, to discriminate event heads with the same type in text, we encode the heads h as relative head mention position. 2 For instance, h = +REG +1 means the head is the first +REGULATION on the right of d in the relative surface order, whereas h = +REG −2 means it is the second +REGULATION on the left. In Figure 2 the label for "production" is EXPRESSION, THEME, +REG −1 , denoting the token is an EXPRESSION trigger, THEME of the first +REGULATION event on the left. As opposed to dependency parsing, tokens may have zero or multiple roots, and thus multiple heads and relations. This poses additional challenges. For instance, the "activation"-anchored event (Figure 2) is both THEME and CAUSE of "induced"and "promote"-anchored event heads, respectively. As a result, both r and h are multi-label, and the label for "activation" is encoded as +REGULATION, [THEME, CAUSE], [+REG −1 , +REG +1 ] , where the order of r and h items is preserved.
Event Extraction as Sequence Labeling
Formally, we aim to learn a function f : X → Y that assigns each token x i a structured label y i , i.e., d, r, h . A straightforward solution is to predict the label y i as an atomic entity (i.e., single label) in a single-task model. For BEESL, we instead propose to use multi-task learning (MTL) which allows to learn interdependencies while cutting down the label space, paired with multi-label prediction. An overview of BEESL is shown in Figure 3. We use BERT (Devlin et al., 2019) as encoder, pretrained on biomedical texts (Section 4). We mask entity spans for better generalization (Alt et al., 2019). The first WordPiece (Schuster and Nakajima, 2012) of each token x i is used for prediction, where the contextual hidden representation e i of the token x i is encoded with layer-wise attention over the BERT layers, similarly to Kondratyuk and Straka, 2019). As decoders, we use standard softmax with a cross entropy loss unless otherwise specified, and introduce a multilabel decoder (Section 3.2) (Figure 3, upper right).
We empirically evaluate both single-task and multi-task setups, including several MTL encoding alternatives, discussing their limitations and benefits. In the following, we first introduce the multi-task setups, and then multi-label decoding.
Multi-task strategies
We denote the label spaces for each component of the labels as d i ∈ D, r i ∈ R, and h i ∈ H. Further, we use L to refer to the maximum label space size.
Single-task A single-task (ST) setup is used as a baseline. It predicts a single label y i = d, r, h for each input token x i . The label space is up to L = |D| × |R| × |H|.
Multi-task The label y i for each token x i is decomposed into parts (hereafter, sub-labels), each treated as a prediction task. The decomposition of the label space allows each sub-label space to be framed as a different task with its own private decoder, mitigating the output space sparsity . Depending on the decomposition of the label y i = d, r, h , we have four multi-task learning options (pairs of tasks, or each subpart as a task, respectively) with the following properties:
1. d , r, h : up to L = |D| + |R| × |H|; 2. d, r , h : up to L = |D| × |R| + |H|; 3. d, h , r : up to L = |D| × |H| + |R|; 4. d , r , h : up to L = |D| + |R| + |H|.
Option 4 encodes each subpart as its own task. While this leads to the smallest label space, it decouples the problem into 3 separate tasks. Options 1-3 are pair-wise task setups. We hypothesize that BEESL benefits from disentangling mention detection from head labeling (option 1).
As illustrated in Figure 3, BEESL uses the predicted sub-labels to form the complete label tuplê y i = d ,r,ĥ . In case r and h belong to different sub-label spaces (as is possible in options 2-4), we require that both predictionsr andĥ are present (non-empty) to ensure well-formedness. This is a downside of these alternative options 2-4, as we will see empirically (Section 5).
During training, the MTL loss is computed as
L = t λ t L t ,
where L t is the loss for each task t, given by the respective decoder (see also Section 3.2), with λ t a task-specific weighting parameter. In our experiments we kept λ = 1.0 for all, since preliminary experiments showed weighting sub-tasks differently was not beneficial. In the single-task setup, the loss reduces to L = L t .
Multi-label decoder
The multi-label decoder is designed to handle multiple labels per token, thus being suitable for predicting relations and heads. Given a task with l j ∈ L labels, it models P (l j |e i ) for each label l j . Differently from the single-label decoder, each label is predicted with a sigmoid, where all contribute equally to the loss. Given the probabilities P (l j |e i ) for the l j ∈ L labels and a threshold τ , the token x i is assigned all the labels l j with probability P (l j |e i ) ≥ τ . If no P (l j |e i ) ≥ τ is found, we take the highest scoring label l j (which may also be empty) as a fallback. 3 We employ a binary crossentropy loss, averaged across all batches.
Experimental Setup
We evaluate BEESL on the Genia 2011 benchmark (Kim et al., 2011), which comprises both abstracts and full-texts. The corpus consists of annotations for PROTEIN entities and 9 fine-grained event types. The Genia event extraction tasks expect both texts and entities as input, and complete events need to be predicted. Statistics on the dataset are shown in Table 1. Event types can be categorized into simple, binding and complex events, related to the number and types of arguments. Simple events require a THEME only, binding events require one or more THEME arguments, while complex events take both THEME and CAUSE arguments, where both can in turn be other events, resulting in nested structures. Björne and Salakoski (2011) estimated that 37.2% of the events in the data are nested. We refer the reader to Appendix A.1 for formal event definitions.
BEESL is based on MaChAmp (van der Goot et al., 2020), a toolkit for multi-task learning and fine-tuning of BERT-like models. We extend MaChAmp to also handle multi-label sequence labeling. We experiment with BEESL in single-and different multi-task setups.
After sequence labeling, token-level labels are converted into the official BioNLP-ST standoff format for evaluation (Kim et al., 2011). We simply split the event arguments based on their formal definition, producing complete structures (e.g., an EXPRESSION event with k THEME arguments is split into k EXPRESSION events, with one THEME each). Similarly to previous work, we focus on sentence-level events. We used BioBERT-Base 1.1 as our BERT model for experiments, since it provides state-of-the-art performance across multiple biomedical information extraction tasks. For multi-label decoding, we tune the threshold τ for each setup (yielding τ_MT = 0.5 and τ_ST = 0.7). Other hyper-parameter values and tuning details are provided in Appendix A.2.
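As an illustration of this splitting step, the sketch below emits one standoff-style event per THEME; the identifiers and helper function are hypothetical, and the real converter also handles nested and multi-argument events.

```python
# Illustrative sketch of the post-processing split mentioned above: a predicted
# event trigger with k Theme arguments becomes k events with one Theme each.
from itertools import count

def split_simple_event(event_id_counter, trigger_id, event_type, theme_ids):
    """Emit one standoff-style event line per Theme argument."""
    lines = []
    for theme in theme_ids:
        eid = f"E{next(event_id_counter)}"
        lines.append(f"{eid}\t{event_type}:{trigger_id} Theme:{theme}")
    return lines

ids = count(1)
# A single predicted Gene_expression trigger T3 with two Theme entities
for line in split_simple_event(ids, "T3", "Gene_expression", ["T1", "T2"]):
    print(line)
# E1   Gene_expression:T3 Theme:T1
# E2   Gene_expression:T3 Theme:T2
```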
Evaluation In line with previous work, we evaluate BEESL in terms of precision (P), recall (R), and F1 score according to the approximate recursive span matching criterion (Kim et al., 2011) using the official BioNLP online evaluation service. 4 For early stopping during training, we employ the simpler span-based F1 score (as used in named entity recognition) as our proxy metric. We found it highly correlates with the approximate recursive span based F1 official metric.
No gold entities In biomedical event extraction, entities are typically given in advance. To evaluate BEESL in a setup with predicted entities (Section 6.3), we first employ our model as a single-task sequence labeler for BIO-tagged entity mentions, using default settings and a standard CRF decoder. Note that for comparison purposes, in all other experiments we assume entity mentions are gold-tagged. Then, we evaluate BEESL with raw texts and predicted entities as input, thus indirectly penalizing events that take over-predicted entities or that miss entities since they are under-predicted.
Results
First, we evaluate the MTL and multi-label decoding strategies on the development set to determine the best setup (Sections 5.1, 5.2). Then, we compare BEESL to the results obtained by the top performing systems on the official test set (Section 5.3). Finally, we gauge its speed (Section 5.4).

Multi-task settings

Option 1 outperforms the other MTL options, particularly in recall. These results show that a multi-task setup with separate tasks for mention detection and head labeling, respectively, is the most useful. Option 1, i.e., ⟨d⟩, ⟨r, h⟩, defaults to the multi-task option for BEESL (Figure 3) used in the following experiments.

Adding the multi-label decoder
We evaluate the multi-label decoder for both single-task (BEESL_ST) and multi-task (BEESL_MT) setups (Table 3, bottom). Multi-label decoding is beneficial, as the data contains many multi-headed tokens, and modeling them improves both setups. Single-task performance increases substantially, from 61.13 to 63.34 F1 score. Similar significant performance gains are observed for multi-task learning, from 62.37 to 65.04 F1 score. Regardless of the multi-label modeling, the multi-task setup provides the highest overall performance.
Comparison to the state of the art
We now compare the multi-task multi-label BEESL to the top performing systems (hereafter, simply BEESL). As shown in Table 2, BEESL outperforms the state-of-the-art by a large margin, i.e., an absolute improvement of 1.57 points in F1 score over the KB-Tree LSTM model (Li et al., 2019) (hereafter, KBTL). It improves over both precision and recall, and yields a new state of the art with an F1 score of 60.22%, yet being conceptually simple. Table 4 compares F1 scores of BEESL to the previous best model on a per-event level (precision and recall are provided in Appendix A.3). BEESL outperforms the KBTL approach (Li et al., 2019) overall on 7 out of the 9 event types. From a coarsegrained perspective, BEESL outperforms KBTL on simple, binding, and complex event categories. Particularly, improvements over KBTL on simple events are as large as +13% F1 score. Furthermore, noticeable are also the improvements for binding and nested, complex events, for which our model achieves 50.19% and 48.32% F1 score. From a closer look, the recall of BEESL on simple events is substantially higher than KBTL, which ease a correct identification of complex events.
Next, we look at performance per text type (i.e., abstract and full-text subsets). BEESL achieves 62.14% F1 score on abstracts-only documents, and 55.59% F1 score on full-texts. This confirms that full-texts are harder to process than abstracts, due to the differences in structural and content aspects (Cohen et al., 2010).
To sum up, BEESL handles events well, and unlike most prior work, does not use knowledge bases or dependency parsers as pre-processing step. BEESL uses multi-task learning with a contextual encoder and multi-label aware decoding, herewith bringing progress to the biomedical event extraction task as illustrated in Figure 1.
Speed comparison
We compare BEESL to TEES, the Turku Event Extraction System (Björne and Salakoski, 2018), to compare their speed at inference time on commodity hardware. TEES is the 2nd top-performing system (Figure 1), and its code is freely available. To the best of our knowledge, the source code of Li et al. (2019) is not yet available. Results in Table 5 show that BEESL is ∼2x faster and ∼5x faster on a consumer grade CPU 5 than the TEES single and ensemble systems, respectively. In terms of sentences per minute, BEESL processes ∼500 sents/min, compared to 255 sents/min and 101 sents/min for TEES single (3.42% lower F1) and ensemble (2.12% lower F1), respectively.

Table 6: Ablation study on BEESL when removing the multi-task capability (i.e., replacing MTL with independent classifiers) and the multi-label handling.
Analysis and Discussion
To gain insights about BEESL, we shed more light on several aspects. Firstly, we analyze how much BEESL gains from multi-task learning, compared to using a powerful contextualized BERT encoder alone in a single-task learning setup and a formulation with two independent classifiers (Section 6.1). Then, we quantify the stability of the threshold τ of the multi-label decoder (Section 6.2). We also aim to get deeper insight on model performance without gold entities (Section 6.3), and qualitatively study the sources of prediction errors of BEESL (Section 6.4).
How important is multi-task learning?
As opposed to running one single model which models d and r, h jointly in a multi-task setup, we also compare to single-task (ST) and an experiment in which we formulate two classifiers which predict the two labels from the best MTL setup separately. This allows us to gauge the effectiveness of the multi-task learning approach compared to local classifiers which use strong BERT-based encoding, and compared to predicting an atomic label in ST.
Results in Table 6 confirm that leveraging a shared encoder and multi-task learning for both triggers and heads is crucial. Without multi-task learning and multi-label decoding, the F1 score drops to 61.44 (independent classifiers) and 61.13 (single-task, cf. Table 3). Adding multi-label decoding helps, as expected. However, the full power of BEESL is only achieved by using both the multi-task and the multi-label approach, which leads to the novel state of the art.

Table 7: Ablation study on the threshold τ of the multi-label decoder ("with best-only prediction": τ = 1.0): multi-label 65.04 vs. with best-only prediction 64.54 (-0.50).
How brittle is BEESL to the threshold τ ?
As shown in Table 3, using a multi-label decoder largely increases the performance over a system with a single-label decoder (from 62.37 to 65.04 F1 score). What remains to be seen is how much the threshold τ impacts the performance. To get insights on this, we first performed an ablation study setting τ = 1.0. As introduced in Section 3.2, this reduces to predicting the highest scoring label only, albeit in a reduced label space induced by the multi-label aware decoder. We found only part of the improvement is due to the threshold τ, in both multi-task and single-task settings (+0.50% and +0.47%, respectively) (Table 7).
Moreover, we evaluated BEESL with different τ values. As shown in Figure 4, a threshold in the range 0.3-0.7 only marginally alters the results, which are still better than predicting the highest scoring label only (τ = 1.0).
What is the effect of using gold entities?
The standard in biomedical event extraction is to evaluate the performance of a system on gold entities. In real-world situations it is unlikely that the data is annotated for entities. We believe it is important to estimate the impact non-gold entities have on system performance (hereafter, silver entities). The performance of the entity prediction on the development set is 87.95 span-based F1 score. The results on the event extraction task using silver entities are shown in Table 8. The overall drop in F1 amounts to around 5%, and it is well-balanced across precision and recall. This shows that BEESL's performance is clearly affected, but that the system is relatively robust to noisy, non-gold silver entities. We believe that this performance gap can be further minimized by using jackknifing (Agić and Schluter, 2017) to reduce data mismatch; however, this requires aligning the predicted entities with the existing events in the training data, which is non-trivial, and we leave this for future work.
What are the sources of errors?
We randomly sampled 30 documents (comprising 168 gold events) from the development set for manual scrutiny of the sources of errors. We classified errors into two broad categories, namely trigger and argument errors. Further, we classified them into fine-grained categories based on the type of error, namely under-prediction, over-prediction, and wrong type. Table 9 summarizes the results.
We notice that the largest fraction of errors is due to trigger errors. On closer inspection, under-predicted triggers account for 31.43% of the total, whereas over-predicted triggers account for 28.57%. We investigated the reasons for these errors, finding that over-predicted triggers are often due to generic words used very frequently to indicate specific trigger types. For instance, BEESL identifies a +REGULATION event anchored at "activated" in the following sentence: "Tax [...] maximally activated HTLV-I-LTR-CAT and kappa B-fos-CA", although the gold standard does not contain the event in this instance. However, from a semantic point of view we believe these errors are acceptable. Other cases include words such as "detected" and "influences", which are often used as EXPRESSION and REGULATION event triggers, respectively. Under-prediction of triggers is instead due to a variety of reasons. Both rare words (e.g., a +REGULATION event centered on "co-transfected") and uncertain events account for a large fraction of this error type. An example of an uncertain event is the +REGULATION trigger "importance" in the sentence "[...] importance of NF-kappa B in LT gene expression", which BEESL does not predict.
Wrongly typed triggers represent only 10% of the errors. These are typically caused by ambiguous trigger types. In the sentence "T cells upregulates A3G mRNA levels", BEESL classifies "levels" as an EXPRESSION trigger, while the gold annotation indicates it is a TRANSCRIPTION trigger. On closer inspection, we found that some triggers in the corpora are annotated as EXPRESSION and TRANSCRIPTION types interchangeably. This is due to the fact that a TRANSCRIPTION is a kind of gene EXPRESSION.
Regarding the identification of arguments, over-predictions are quite uncommon. The main errors we found may benefit from syntactic information, which we aim to integrate in a multi-task setup in future work. We found no misclassification of arguments in our document samples. Under-predictions of arguments are instead mostly due to under-predicted events.
Related Work
Biomedical event extraction has a long-standing tradition (Miwa et al., 2012; Vlachos and Craven, 2012; Venugopal et al., 2014; Majumder et al., 2016). Current work has explored neural methods and uses multiple classification stages, namely first identifying trigger mentions and then evaluating all entity pairs (Li et al., 2019; Björne and Salakoski, 2018). These approaches come with the shortcomings of traditional pipeline methods. Many studies use dependency parsers to obtain features or for guidance of Tree-LSTMs (Li et al., 2019; Björne and Salakoski, 2018).
Recent work in syntactic parsing has shown that reducing parsing to sequence labeling is a viable alternative for both constituent and dependency parsing (Spoustová and Spousta, 2010; Gómez-Rodríguez and Vilares, 2018; Strzyz et al., 2019), which we took as inspiration. Moreover, earlier work framed biomedical event extraction as syntactic and semantic tree- or graph-parsing (McClosky et al., 2011; Rao et al., 2017). In particular, McClosky et al. (2011) perform dependency parsing followed by a second-stage parse reranker model for event extraction, and Rao et al. (2017) cast the problem as a subgraph identification problem.
Joint learning for biomedical event extraction was explored in early work (Venugopal et al., 2014; Vlachos and Craven, 2012). Contemporaneously with ours, a very recent study proposes oneIE, a joint learning model for event extraction (Lin et al., 2020). It proposes a single end-to-end model for event extraction using four stages, paired with a beam search, obtaining good results on ACE data. Processing multiple heads has previously been done for relation extraction using multi-head selection (Bekoulis et al., 2018a,b), and sequence labeling has been employed for joint entity and relation classification with inter-token attention (Dai et al., 2019). We employ it at the token level for multi-label sequence labeling.
Conclusion
This paper proposes BEESL, a new end-to-end biomedical event extraction system which is both efficient and accurate. BEESL is broadly applicable to event extraction and other tasks that can be recast as sequence labeling. The system's strength comes from the joint multi-task modeling paired with multi-label decoding, which exploits interdependencies between the tasks and is superior to alternative decoders based on strong contextualized BERT embeddings. BEESL is fast, and achieves state-of-the-art performance on the Genia 2011 event extraction benchmark without the need for external tools for features or resources such as knowledge bases. Our analysis shows that BEESL works very well across event types.
We release the code freely, to foster research on using BEESL for other NLP tasks as well, e.g., enhanced dependency parsing, fine-grained named entity recognition, and semantic parsing.
A Appendix
A.1 Data and formal event definitions
Events on the Genia 2011 benchmark follow the formal specification detailed in Table 10. The full data can be downloaded from the official portal (http://bionlp-st.dbcls.jp/GE/2011/downloads/).
A.2 Hyper-parameters
The list of hyper-parameter values and the search space are presented in Table 11; the number of trainable parameters in BEESL is ≈110M. For tuning, we started from the values reported in previous works on multi-task learning for NLP evaluation benchmarks, e.g., UDify (Kondratyuk and Straka, 2019). We performed 32 search trials via grid search, in which "batch size" and "base learning rate" were coupled: (32, 1e-3) and (64, 1e-2). An additional 9 search trials were performed for threshold τ selection for the BEESL multi-task multi-label model. We used the official approximate recursive span-matching-based F1 score for model selection, whereas the sum of span-based F1 scores of the tasks was employed to determine early stopping of the training process.
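The early-stopping criterion mentioned above (patience of 5 epochs over the summed span-based F1 of the two tasks, cf. Table 11) can be sketched as follows; this is only an illustration, not the actual training loop.

```python
# Stop training once the summed span-based F1 has not improved for
# `patience` consecutive epochs.
def should_stop(f1_history, patience=5):
    """f1_history: per-epoch values of f1_task_d + f1_task_rh, oldest first."""
    if len(f1_history) <= patience:
        return False
    best_before = max(f1_history[:-patience])
    return all(score <= best_before for score in f1_history[-patience:])
```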
A.3 Miscellaneous
Technical details: Texts have been tokenized and segmented using scispaCy 0.2.4 (Neumann et al., 2019). In our data it is uncommon for multiple contiguous triggers to have the same type, so BIO encoding is not needed. In the rare case of overlapping event triggers of different types, we create a single label d concatenating their types. Similarly to previous work, for BINDING events with multiple THEME arguments we employ a simple heuristic to convert them into the BioNLP-ST standoff format (Vlachos and Craven, 2012). Upper bound of the encoding: We quantified the upper bound of our encoding strategy by directly evaluating the performance of the encoded development set. The results (P: 95.76%, R: 91.30%, F1: 93.48%) indicate that the strategy is sound, and that the ≈6% that is missing is due to cross-sentence arguments, which we disregard similarly to previous work.
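For illustration, sentence segmentation and tokenization with scispaCy can be done as sketched below; the model name "en_core_sci_sm" is an assumption, since the text only states that scispaCy 0.2.4 was used.

```python
# Tokenize and sentence-split a biomedical sentence with scispaCy.
import spacy

nlp = spacy.load("en_core_sci_sm")   # assumed small scispaCy model
doc = nlp("NF-kappa B activation is required for LT gene expression.")
for sent in doc.sents:
    print([token.text for token in sent])
```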
Figure 4: Stability of the threshold τ. Values in the range 0.3-0.7 only minimally alter BEESL scores (ST setup, BEESL ST in Table 3).

Table 3 (top) summarizes the main results for the MTL experiments. They confirm our hypothesis that ⟨d⟩, ⟨r, h⟩ (option 1) is the most viable representation; it leads to the highest F1 score, largely outperforming the alternative label groupings.
Work                          Method                                       P      R      F1
Riedel et al. (2011)          FAUST - Model combination (joint+parsing)   64.75  49.41  56.04
Miwa et al. (2012)            EventMine - SVM pipeline (+coref)            63.48  53.35  57.98
Venugopal et al. (2014)       BioMLN - SVM pipeline & MLN (joint)          63.61  53.42  58.07
Majumder et al. (2016)        Stacked generalization                       66.46  48.96  56.38
Björne and Salakoski (2018)   TEES - CNN pipeline (single model)           64.86  50.53  56.80
Björne and Salakoski (2018)   TEES - CNN pipeline (5x ensemble)            68.76  49.97  57.87
Björne and Salakoski (2018)*  TEES - CNN pipeline (mixed 5x ensemble)      69.45  49.94  58.10
Li et al. (2019)              BiLSTM pipeline                              62.18  48.44  54.46
Li et al. (2019)              Tree-LSTM pipeline                           64.56  50.28  56.53
Li et al. (2019)              KB-driven Tree-LSTM pipeline                 67.01  52.14  58.65
BEESL                         Multi-task neural sequence labeling          69.72  53.00  60.22

Table 2: Performance comparison on the test set of BioNLP Genia 2011. * indicates that the system was trained on training plus part of development data; BEESL uses the official training portion only. Top: traditional ML systems; middle: state-of-the-art neural systems; bottom: proposed multi-task sequence labeling system.
Multi-task setting     P      R      F1
⟨d⟩, ⟨r, h⟩            71.28  55.44  62.37
⟨d, r⟩, ⟨h⟩            72.35  51.31  60.04
⟨d, h⟩, ⟨r⟩            73.51  49.49  59.16
⟨d⟩, ⟨r⟩, ⟨h⟩          73.05  51.34  60.30

Multi-label setting    P      R      F1
BEESL ST               73.30  52.42  61.13
  with multi-label     71.74  56.71  63.34
BEESL MT               71.28  55.44  62.37
  with multi-label     71.84  59.42  65.04

Table 3: Performance of diverse settings for BEESL (multi-task and multi-label) on the development set.
Table 4: Per-event performance of BEESL and KBTL (KB-driven Tree-LSTM) (Li et al., 2019) on the test set.

Table 5: Speed comparison to TEES (Björne and Salakoski, 2018) single and ensemble models at inference time. Results are sents/min, averaged over 5 runs.
Setting          P      R      F1
BEESL            71.84  59.42  65.04
- multi-task     71.66  56.95  63.47
- multi-label    74.28  52.39  61.44

Table 6: Ablation of BEESL's multi-task and multi-label components on the development set.

Table 8: Performance of BEESL with no gold entities.

Error type            Fraction
Trigger
  Under-prediction    31.43%
  Over-prediction     28.57%
  Wrong type          10.00%
Argument
  Under-prediction    22.86%
  Over-prediction      7.14%
  Wrong type           0.00%

Table 9: Error analysis on a random sample of 30 documents from the development set.
Event type             Arguments
Simple events
  Gene expression      Theme(P)
  Transcription        Theme(P)
  Protein catabolism   Theme(P)
  Phosphorylation      Theme(P)
  Localization         Theme(P)
  Binding              Theme(P)+
Complex events
  Regulation           Theme(P/E), Cause(P/E)
  Positive regulation  Theme(P/E), Cause(P/E)
  Negative regulation  Theme(P/E), Cause(P/E)

Table 10: Formal definition of events. P: PROTEIN, E: any event type, +: 1 or more arguments.
Hyper-parameter        Value      Search space
Optimizer              Adam       -
β1, β2                 0.9, 0.99  -
Weight decay           0.01       -
Gradient clipping      10         -
Dropout                0.5        0.1, 0.3, 0.5
BERT dropout           0.1        0.1, 0.2
Mask probability       0.1        0.1, 0.15, 0.2
Layer dropout          0.1        -
Batch size             64         32, 64
Base learning rate     1e-2       1e-3, 1e-2
BERT learning rate     5e-5       -
Epochs                 50         -
Patience               5          -
Multi-label threshold  0.5        0.1, 0.2, ..., 1.0

Table 11: Hyper-parameter values and search space.
For speed experiments with TEES (Björne and Salakoski, 2018), we removed extra modules for a fair comparison.
Detailed per-event scores: We present in Table 12 a complementary view of scores (i.e., with precision and recall) of BEESL and the previous state of the art (Li et al., 2019) on a per-event level.

                       BEESL                  KBTL
Event type             P      R      F1       P      R      F1
Simple events          84.17  74.98  79.31    85.95  72.62  78.73
  Gene expression      84.55  77.54  80.90    87.24  74.35  80.28
  Transcription        72.50  66.67  69.46    82.31  69.54  75.39
  Protein catabolism   83.33  66.67  74.07    87.50  46.67  60.87
  Phosphorylation      94.05  85.41  89.52    87.28  81.62  84.36
  Localization         83.21  59.69  69.51    80.28  59.69  68.47
  Binding              65.36  40.73  50.19    53.16  37.68  44.10
Complex events         58.54  41.14  48.32    55.73  41.73  47.72
  Regulation           62.22  36.36  45.90    53.61  36.62  43.52
  Positive regulation  60.14  41.93  49.41    57.90  41.37  48.26
  Negative regulation  53.19  42.38  47.17    52.39  46.06  49.02
All events             69.72  53.00  60.22    67.01  52.14  58.65

Table 12: Detailed per-event performance of BEESL and KBTL (KB-driven Tree-LSTM) on the test set.
In preliminary experiments we found this mitigates the label sparsity problem of other positional encodings, e.g., relative positional encoding(Strzyz et al., 2019). We additionally found relative head mention positions ≥ 2 are rare in our data.
In case τ = 0 ∨ τ = 1, we adopt the same strategy, since all or no labels would be potentially predicted, respectively.
http://bionlp-st.dbcls.jp/GE/2011/eval-test/.
Intel Core i5-6360U (2 cores).
Acknowledgments
This research was supported by the Fondazione The Microsoft Research - University of Trento Centre for Computational and Systems Biology, Italy, an Amazon Research Award, the Independent Research Fund Denmark (Sapere Aude grant 9063-00077B), and NVIDIA Corporation, which sponsored Titan GPUs.
Zeljko Agić and Natalie Schluter. 2017. How (not) to train a dependency parser: The curious case of jackknifing part-of-speech taggers. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 679-684, Vancouver, Canada. Association for Computational Linguistics.
Christoph Alt, Marc Hübner, and Leonhard Hennig. 2019. Improving relation extraction by pre-trained language representations. In Proceedings of AKBC 2019.
Sophia Ananiadou, Sampo Pyysalo, Jun'ichi Tsujii, and Douglas Kell. 2010. Event extraction for systems biology by text mining the literature. Trends in Biotechnology, 28(7):381-390.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018a. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836, Brussels, Belgium. Association for Computational Linguistics.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018b. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications, 114:34-45.
Jari Björne and Tapio Salakoski. 2011. Generalizing biomedical event extraction. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 183-191, Portland, Oregon, USA. Association for Computational Linguistics.
Jari Björne and Tapio Salakoski. 2018. Biomedical event extraction using convolutional neural networks and dependency parsing. In Proceedings of the BioNLP 2018 Workshop, pages 98-108, Melbourne, Australia. Association for Computational Linguistics.
K. Bretonnel Cohen, Helen L. Johnson, Karin Verspoor, Christophe Roeder, and Lawrence E. Hunter. 2010. The structural and content aspects of abstracts versus bodies of full text journal articles are different. BMC Bioinformatics, 11(1):492.
Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, and Haifeng Wang. 2019. Joint extraction of entities and overlapping relations using position-attentive sequence labeling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6300-6308.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, Melbourne, Australia. Association for Computational Linguistics.
Carlos Gómez-Rodríguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314-1324, Brussels, Belgium. Association for Computational Linguistics.
Rob van der Goot, Ahmet Üstün, Alan Ramponi, and Barbara Plank. 2020. Massive choice, ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. arXiv preprint arXiv:2005.14672.
Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Akinori Yonezawa. 2011. Overview of Genia event task in BioNLP shared task 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 7-15, Portland, Oregon, USA. Association for Computational Linguistics.
Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
Diya Li, Lifu Huang, Heng Ji, and Jiawei Han. 2019. Biomedical event extraction based on knowledge-driven tree-LSTM. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1421-1430, Minneapolis, Minnesota. Association for Computational Linguistics.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
Amit Majumder, Asif Ekbal, and Sudip Kumar Naskar. 2016. Biomolecular event extraction using a stacked generalization based classifier. In Proceedings of the 13th International Conference on Natural Language Processing, pages 55-64, Varanasi, India. NLP Association of India.
David McClosky, Mihai Surdeanu, and Christopher Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1626-1635, Portland, Oregon, USA. Association for Computational Linguistics.
Makoto Miwa, Paul Thompson, and Sophia Ananiadou. 2012. Boosting automatic event extraction from the literature using domain adaptation and coreference resolution. Bioinformatics, 28(13):1759-1765.
Makoto Miwa, Paul Thompson, Ioannis Korkontzelos, and Sophia Ananiadou. 2014. Comparable study of event extraction in newswire and biomedical domains. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2270-2279, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017. Biomedical event extraction using abstract meaning representation. In BioNLP 2017, pages 126-135, Vancouver, Canada. Association for Computational Linguistics.
Sebastian Riedel and Andrew McCallum. 2011. Robust biomedical event extraction with dual decomposition and minimal domain adaptation. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 46-50, Portland, Oregon, USA. Association for Computational Linguistics.
Sebastian Riedel, David McClosky, Mihai Surdeanu, Andrew McCallum, and Christopher D. Manning. 2011. Model combination for event extraction in BioNLP 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 51-55, Portland, Oregon, USA. Association for Computational Linguistics.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.
Drahomíra Spoustová and Miroslav Spousta. 2010. Dependency parsing as a sequence labeling task. The Prague Bulletin of Mathematical Linguistics, 94(1):7-14.
Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2019. Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 717-723, Minneapolis, Minnesota. Association for Computational Linguistics.
Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 831-843, Doha, Qatar. Association for Computational Linguistics.
David Vilares, Mostafa Abdou, and Anders Søgaard. 2019. Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3372-3383, Minneapolis, Minnesota. Association for Computational Linguistics.
Andreas Vlachos and Mark Craven. 2012. Biomedical event extraction from abstracts and full papers using search-based structured prediction. BMC Bioinformatics, 13(Suppl 11):S5.
34,333,073 | Design and Development of the MERLIN 1 Learner Corpus Platform | In this paper we report on the design and development of an online search platform for the MERLIN 1 corpus of learner texts in Czech, German and Italian. It was created in the context of the MERLIN project, which aims at empirically illustrating features of the Common European Framework of Reference (CEFR) for evaluating language competences based on authentic learner text productions compiled into a learner corpus. Furthermore, the project aims at providing access to the corpus through a search interface adapted to the needs of multifaceted target groups involved with language learning and teaching. This article starts by providing a brief overview on the project ambition, the data resource and its intended target groups. Subsequently, the main focus of the article is on the design and development process of the platform, which is carried out in a user-centred fashion. The paper presents the user studies carried out to collect requirements, details the resulting decisions concerning the platform design and its implementation, and reports on the evaluation of the platform prototype and final adjustments. | [] | Design and Development of the MERLIN 1 Learner Corpus Platform
Verena Lyding [email protected]
Institute for Specialised Communication and Multilingualism
EURAC research
Bozen/Bolzano, Italy
Karin Schöne [email protected]
Multimediales Sprachlernzentrum
Technische Universität Dresden
Germany
Design and Development of the MERLIN 1 Learner Corpus Platform
learner corpora, corpus search tools, user-centred development
In this paper we report on the design and development of an online search platform for the MERLIN 1 corpus of learner texts in Czech, German and Italian. It was created in the context of the MERLIN project, which aims at empirically illustrating features of the Common European Framework of Reference (CEFR) for evaluating language competences based on authentic learner text productions compiled into a learner corpus. Furthermore, the project aims at providing access to the corpus through a search interface adapted to the needs of multifaceted target groups involved with language learning and teaching. This article starts by providing a brief overview on the project ambition, the data resource and its intended target groups. Subsequently, the main focus of the article is on the design and development process of the platform, which is carried out in a user-centred fashion. The paper presents the user studies carried out to collect requirements, details the resulting decisions concerning the platform design and its implementation, and reports on the evaluation of the platform prototype and final adjustments.
Introduction
This article describes the design and development process of a search platform for a trilingual learner corpus with multifaceted target groups. The MERLIN project addresses the need for illustrating the CEFR levels of language proficiency with concrete and authentic examples of language use, and tackles two related research questions. On a theoretical level, that is with regard to linguistics and language didactics, it investigates the best ways to pinpoint and describe relevant characteristics of learner language in relation to CEFR evaluation dimensions. On a practical level, that is with regard to system development, it investigates how to encode, make accessible and present this information to a varied set of target groups. This paper describes how the practical research question has been approached. It details the user-centred design and development process and discusses the decisions taken as well as the potential and limits of the developed system.
MERLIN Project
The overall objective of the MERLIN 1 project is to address the need for an empirical back-up of the CEFR levels by providing concrete examples of learner language features. Within an interdisciplinary trans-European project team, three major tasks were approached in MERLIN:
1. to assemble a trilingual learner corpus,
2. to carefully evaluate relevant parameters for describing learner language and to annotate related features on the learner texts, and
3. to develop a search platform for providing the texts and means for their analysis to a diversified group of users.
MERLIN Corpus
The MERLIN corpus consists of written productions of foreign language learners of Czech, German and Italian, which are annotated for features of learner language and characteristics of the learners. The MERLIN corpus is a comprehensive collection of 2,286 authentic foreign language learner texts in Czech, German and Italian, produced in standardized language tests and collected by the established test institutions telc 2 and ÚJOP 3 . The corpus is annotated for learner language features on several linguistic levels, including orthography, grammar, coherence/cohesion etc. (see Abel et al. 2014, Boyd et al. 2014). In addition, personal characteristics of the tested learners, including their L1, age, gender, etc., as well as meta information on the texts, including underlying test task, CEFR level of the test, ratings according to the CEFR 4 , are recorded and associated with each text.
Target Groups
The MERLIN project targets professionals involved with analyzing learner language, who were grouped into four profiles: teachers (incl. material writers), teacher trainers, testers and linguists (including second language acquisition researchers and lexicographers). The target groups differ with regard to how their work relates to the CEFR, what perspectives they take on the data and how familiar they are with corpus interfaces.
Requirements Analysis for the MERLIN Platform
In order to inform the design of the MERLIN platform, a study to determine specific requirements of the different target groups was carried out at the start of the project. By means of an online questionnaire we investigated the users' needs regarding content (i.e. relevant linguistic annotations, metadata, and quantitative text characteristics) as well as interface aspects, including search and display functionalities as well as technical features. Overall, 55 people from all target groups and covering the three working languages participated in the survey. Most participants indicated that they belong to more than one target group 5, and the overall distribution of participants is shown in Figure 1.
The requirement analysis revealed their specific information needs and possible usage scenarios. Teachers and teacher trainers expressed the need for illustration of the CEFR descriptors with examples from learner texts. They would appreciate the option to extract sample productions to use them for training purposes and for the preparation of teaching materials, e.g. for self-reflection activities with advanced learner groups. Testers would use a corpus of standardized test samples for training purposes (e.g. have samples from the corpus re-rated and compared with the MERLIN rating) as well as a reference for the assessment of borderline performances. Linguists and SLA researchers would benefit from a thoroughly annotated corpus of learner language to explore L2-competence and to trace errors or features of learner language on different performance levels.
Results indicated that the majority of users consider it important to have search and filtering of learner texts based on different learner language features and metadata. The linguistic features vocabulary (87%) and grammar (76%) were rated most relevant, followed by text characteristics (67%) and sociolinguistic criteria/text type (67%). With respect to metadata, level (87%) and type (80%) of the test tasks as well as quantitative text characteristics like the size of vocabulary (82%) and sentence complexity (76%) were assigned the highest relevance. Compared by user group, metadata are most relevant for linguists (81%) and least relevant for teachers (35%). Users indicated that groups of texts (78%) are the primary unit for analysis, followed by single texts (69%), and that data exports are of high relevance (73%).
Regarding the technical working environments, the survey showed that the Windows operating system (95%) and the browsers Internet Explorer, Firefox or Chrome are part of the most used setups. More than 65% of the respondents need technical assistance for installations, and the majority of respondents was not familiar with using non-office style file types like XML.
In addition to the large-scale survey, semi-guided interviews were carried out with one participant per target language in order to enquire about concrete use cases and related demands. The interviewees expressed a need for looking up prototypical examples of evaluations according to CEFR and to learn about typical learner language features for different groups of learners (e.g. common L1) and different levels. Accordingly a filtering by learner language features as well as by metadata was considered important. The annotated texts were expected to be useful for raising the learner's awareness on his competence level as well as for discussing evaluation criteria and measures with teachers and testers.
Corpus Preparation and Storage
Within the project all texts were manually transcribed (using the XML editor XMLmind 6) and annotated (using the annotation tool MMAX 7) for learner language features. Furthermore, for each text a minimally error-corrected version ('minimal target hypothesis') and, for the A2 and B2 subset of texts, also a fully error-corrected version ('extended target hypothesis') have been created (cf. Reznicek et al., 2012). In addition, texts were automatically annotated for lemma and part-of-speech, and various statistical measures were computed for German texts (e.g. average sentence complexity, lexical density and diversity, finite verb ratio). In order to provide the required search functionalities, the MERLIN corpus was transformed and imported into two tools for corpus management and retrieval: the search platform Lucene/SOLR 8 and the search and visualization architecture ANNIS 9. MERLIN employs Lucene/SOLR for handling string-based searches on the plain texts and target hypotheses, as well as the filtering of texts by metadata and the creation of subcorpora. ANNIS is used to enable targeted searches on learner language annotations (e.g. capitalization error), as well as on words, lemmas and parts-of-speech, as the ANNIS architecture is particularly adapted to querying and displaying multilayer annotated corpora.
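To illustrate the kind of metadata filtering Lucene/Solr supports in such a setup, the sketch below issues a query against a hypothetical Solr core; the core name, field names and endpoint are invented for illustration and do not reflect the actual MERLIN schema.

```python
# Hypothetical Solr query: string search on learner texts, filtered by metadata.
import requests

params = {
    "q": "text:aber",                           # word search in the learner texts
    "fq": ["cefr_of_test:B1", "l1:Italian"],    # metadata filters (subcorpus)
    "rows": 20,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/merlin/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("cefr_of_test"))
```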
Design of the MERLIN Platform Structure and Search Interface
Design Principles
The primary aim of the MERLIN platform is to serve different user groups and usage scenarios. By pursuing a strict target group orientation we thus comply with e-learning standards (Mirbach et al., 2009). Accordingly, the platform design followed two lines: on the basis of the requirement analysis and the expert interviews we modelled target-group specific use cases to determine concrete tasks, data types and display modes of particular relevance to the prospective user groups. For an example see Table 1. As for the design of the technical requirements and format characteristics of the corpus data and annotations, the results of the technical part of the user study as well as general design principles from usability standards (ISO 9241-11) were taken into consideration.
8 http://lucene.apache.org/solr/
9 http://corpus-tools.org/annis/
Figure 1: Professions of participants
In particular, target group orientation has been implemented in the macro- as well as in the micro-structure of the platform in the following ways:
• Implementation of different search areas, which respond to specific needs of the different target groups regarding search as well as results display
• Modelled usage scenarios for different user groups
• Implementation of different help and support structures
Usability standards are respected by providing for self-descriptiveness, controllability of the interaction and error tolerance. Regarding the users' technical requirements, the platform avoids the need to install browser plug-ins or additional software, and has been tested for the most frequently used browsers. Above that, the platform takes into account that teachers and testers are often not familiar with classical corpus interfaces. An end-user study conducted by Campillos Llanos among teachers of Spanish as a foreign language who were to evaluate the interface of an oral learner corpus revealed a need for explanation of error descriptors and related terms, and the wish for a visual simplification of the search interface (Campillos Llanos 2012, p. 245). As a conclusion, Campillos Llanos recommends presenting search options in a more dynamic way and including a comprehensive glossary of terms (ibid., p. 246). For the MERLIN platform we decided to support the user in several regards: search options are presented in a task-oriented fashion, e.g. "Search for words in the learner texts and display them in context". Example queries present typical searches in a descriptive way and reveal results with just one click. To make sure that the interface does not appear overcrowded, help is contextualized and available in the interface as tooltips. Above that, users who are not familiar with corpus-linguistic terms can refer to a glossary. Finally, material related to the learner texts, i.e. test tasks, rating criteria and scales and the annotation rules, can be looked up at every point of the search process without interrupting the search.
Description of the Platform
The overall structure of the platform (see Figure 2) gives access to the search interface (1) and to an area providing background information on the corpus and specific usage scenarios (2), which present search functionalities not in a corpus-linguistics style but rather in a task-oriented fashion. In addition, the home page offers quick info for getting started (including a video tutorial) (3). The search interface (1) combines four different areas, which respond to specific needs of the different target groups regarding search as well as results display. In particular, the user study indicated that language teachers, testers and teacher trainers have similar demands that focus on the grouping and retrieval of texts and learner language features, while linguists demand finer-grained linguistic search options. Initially, the four areas were subdivided as follows (see Figure 3):
• A simple search for words and phrases in the learner texts and target hypotheses.
• An advanced search on words, lemmas, parts-of-speech and learner language annotations.
• A search for documents by metadata and learner language features in order to create subcorpora, for reuse in the simple and advanced searches.
• A search by learner language features to derive corresponding statistics for individual texts or text groups.
Depending on the search mode, results are displayed as KWIC with or without linguistic annotations, as listings of texts and full text views, or as frequency tables for selected learner language features. Furthermore, metadata can be displayed for all results and texts are provided for download, with the option to include target hypotheses and metadata.
Evaluation of the Platform Prototype and Final Adjustments
In a pilot phase the platform prototype was tested and evaluated by targeted future users who were addressed via the project consortium's distribution lists in a direct mailing campaign. The online survey addressed the interface structure, functionality and content of the platform. Regarding the interpretation of results it should be noted that the major part of the 61 respondents were teachers and testers, with fewer teacher trainers and linguists. 10 The overall distribution of participants is shown in Figure 2. 11 Overall, the aims for using the platform matched the indicated user profiles, e.g. 81% of the teachers managed to use the platform for preparing teaching material. 80% of testers would use the platform for preparing test material, but only 40% as a reference for rating. Only a small percentage of teachers and almost no teacher trainers were interested in doing linguistic studies.
10 By using mailing lists in order to maximize the spreading of the questionnaire, the team was unable to control the exact proportions of participants by target group.
In general, full learner texts and words in sentence context were considered the most important types of data. This is true mainly for teacher trainers and testers. Linguists were much more interested in learner language features and ratings. Surprisingly, metadata were considered important only by about half of the participants. However, distinguishing the responses of the different user groups, it turned out that mainly teachers were little interested in metadata, while all linguists indicated a strong interest in metadata, as did 75% of the trainers and 80% of the testers. This was taken as an indication that metadata might need better explanation, as teachers might not be familiar enough with the concept of metadata. More than 80% of the pilot users were satisfied with the subdivision into four search areas, but the single search options were assessed differently, e.g. the document search was well appreciated by teachers and teacher trainers, whereas it was of lower value to almost half of the polled linguists.
The simple search proved to be well accepted in general, but was less valued by linguists. The learner language feature search was the most difficult to understand. 79% of the respondents were positive about the help provided on the interface and the explanations of the corpus and the annotations. In addition to sample searches, the MERLIN platform offers information on concrete use cases and presents didactically motivated procedures for interacting with the provided learner data. Despite this information, some users indicated that possible usage scenarios are not clear, which suggests that the given information needs further improvement to be easily found and understood. Furthermore, comments revealed that more sample searches and clear guidance on the differences and connectivity between the four search areas would be helpful.
In particular, the users had difficulty understanding that the 'document search' serves to create subcorpora for use within the simple and advanced search modes. The pilot stage was followed by a comprehensive revision process in which, to name an example, the 'document search' was renamed 'define a subcorpus' and an introductory explanation was added.
Conclusion
Results of the pilot study showed that the multiple access modes are suitable to match different target user needs and that it is necessary to reduce complexity when presenting richly annotated data by grouping and faceting search options, offering sample searches and context-sensitive help, and giving clear guidance on what kind of information can be retrieved. The MERLIN platform aims at bridging the gap between technology development, multi-layer annotations and pedagogical applications by offering four approaches to the data: a simple and an advanced search, a metadata and feature-driven document search allowing for defining subcorpora at the same time, and a separate section for exploring frequency information. Within MERLIN, it was not feasible to implement functionalities of the collaborative web, but both studies clearly revealed that future corpus users would appreciate support for sharing and commenting search results, subcorpora and best practices.
Acknowledgements
The MERLIN project has been funded with support from the European Commission, Lifelong Learning Programme.
Figure 3: MERLIN Platform Start Page.
Figure 4: Diagram describing initial search interface design.
Table 1: Example Use Cases and related Tasks and Data Types (columns: aim / usage scenario; use case / task; data types; relevant information / features of learner language (LL); relevant annotation levels; display mode). The example use cases include:
• Explore typical errors in context: search for words, adjacent words and POS in learner texts and target hypotheses; annotated learner productions; POS, lemma, features of LL (grammar, vocabulary, etc.); learner text, TH1; display as LL feature in context.
• Re-adjust teachers oversensitive to special L1 errors: have them re-rate MERLIN texts without showing the MERLIN ratings and then compare and discuss the differences/results, or extract a random sample of written tests on specific tasks, filtering sample texts by metadata such as L1, task type and CEFR level (of the test / rated CEFR level); unannotated learner productions; available metadata, esp. CEFR level of test and fair CEFR level; learner text (TH1/2); display as entire text / text section with metadata.
• Underpin the course schedule with lists of learner language features specific for different CEFR levels and identify typical and relevant milestones/errors: extract a feature list by CEFR level using filter criteria such as L1 and CEFR level; feature list; available metadata, esp. CEFR level of the test and ratings; linguistic annotations (statistical information - features per level); learner text, metadata, EA1, EA2; display as feature list / statistics.
Multiple selects were possible in the questionnaire.
6 http://www.xmlmind.com/xmleditor/
7 https://sourceforge.net/projects/mmax2/
The questionnaire allowed participants to indicate more than one profession.
Abel, A.; Wisniewski, K. et al. (2014). A Trilingual Learner Corpus illustrating European Reference Levels. In: Ricognizioni - Rivista di Lingue, Letterature e Culture Moderne, 2(1), 111-126.
Boyd, A.; Hana, J. et al. (2014). The MERLIN corpus: Learner Language and the CEFR. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, May 26-31, 2014. European Language Resources Association (ELRA).
Campillos Llanos, L. (2012). Designing a search interface for a Spanish learner oral corpus: The end-user's evaluation. In: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey, pp. 241-248. Available at www.lrec-conf.org/proceedings/lrec2012/pdf/574_Paper.pdf (last accessed on 09.03.2016).
Council of Europe. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge University Press.
ISO 9241-11 (1998). Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability.
Mirbach, H.; Sohn, H.M.P.; Pawlowski, J.M.; Reß, L.; Sonnberger, J.; Stracke, C.M. & Strahwald, B. (2009). QPL Qualitätsplattform Lernen: Das Instrument zur Qualitätssicherung in der Bildungsbranche. In: Fachausschuss Qualität des D-ELAN Deutsches Netzwerk der E-Learning Akteure e.V., Essen.
Reznicek, M.; Lüdeling, A. et al. (2012). Das Falko-Handbuch. Korpusaufbau und Annotationen. Version 2.01. Berlin.
Figure 2: Participants by profession.
251,253,119 | Exploring the GLIDE model for Human Action-effect Prediction | We address the following action-effect prediction task. Given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should have the same scene context as the input image. We explore the use of the recently proposed GLIDE model for performing this task. GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text. Our idea is to mask-out a region of the input image where the effect of the action is expected to occur. GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image has the same background context as the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions. | [
51880918
] | Exploring the GLIDE model for Human Action-effect Prediction
Fangjun Li
School of Computing
University of Leeds
UK
David C Hogg
School of Computing
University of Leeds
UK
Anthony G Cohn [email protected]
School of Computing
University of Leeds
UK
Luzhong Institute of Safety
Qingdao University of Science and Technology
China
College of Electronic and Information Engineering
Tongji University
China
School of Mechanical and Electrical Engineering
Qingdao University of Science and Technology
China
School of Control Science and Engineering
Shandong University
China {scfli, D.C.Hogg
Exploring the GLIDE model for Human Action-effect Prediction
diffusion, GLIDE, inpainting, action-effect prediction
We address the following action-effect prediction task. Given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should have the same scene context as the input image. We explore the use of the recently proposed GLIDE model for performing this task. GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text. Our idea is to mask-out a region of the input image where the effect of the action is expected to occur. GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image has the same background context as the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions.
Introduction
The purpose of this study is to investigate the potential of a generative model to reason about human actions occurring in a complex physical environment. The model will be given a textual description for an action and an initial world state depicted in an image; it needs to predict an image depicting the final world state following the action. E.g., given an initial image depicting someone holding a carrot and a knife, and the action 'peel carrot', the model should predict an image in which 'peelings' have been separated from the carrot. For our action-effect task, the challenge is to generate an output image that both depicts the effect of the action and retains the scene context from the input image. In other words, when peeling the carrot, the kitchen should remain the same before and after.
One way to approach the task would be to treat this as conditional video prediction, extending an input video into the future as a sequence of new video frames guided by the provided action. We explore an alternative approach based on a new generative model. GLIDE is a recent neural network model that has two modes of working. In the first, GLIDE generates an image given a piece of text. In the second, GLIDE inpaints a masked region of an image given a piece of text. This second mode can be used to edit images through delineating regions (masked areas) and describing the new content in natural language.
We use the second mode of operation to undertake the action-effect task. In doing this, there are two critical sub-tasks: (1) delineate the region in which we expect the effects of the action to be visible; and (2) express the effects of an action as a short textual description. Typically, action datasets provide annotations for actions expressed only as verb-noun pairs, emphasising the action rather than the effect of the action. The contributions of our work are as follows:
- Application of the image synthesis model GLIDE to the action-effect task;
- Consideration of how to select masked regions for inpainting;
- Consideration of how to map actions into action-effect textual descriptions;
- Qualitative experiments evaluating the approach on the EPIC dataset.
Background on the action-effect prediction task
Human action prediction has been a prevalent topic in recent years, with the goal of predicting forthcoming actions from temporally incomplete action videos. There are two primary research directions: predicting the category of a subsequent action and predicting a motion trajectory. Our action-effect prediction is distinct from both of these and can be regarded as a new kind of action prediction task. Here we give a general formulation of the task. Given the following:
- An image depicting the initial world state before a human action.
- A linguistic description of the action.
Produce an image depicting the final world state following the action. For example, in Figure 1, for the action 'crack egg', "the end result is that the entire contents of the egg will be in the bowl, with the yolk unbroken, and that the two halves of the shell are held in the cook's fingers" (Davis, 1998). We expect that given a reference image depicting the action's start state and a text prompt about the action 'crack egg', the generative model can predict a future frame depicting the action's effect, that is, the end world state after the action.
Our task can be viewed as a conditional image prediction problem. Thus it may benefit from architectures designed for image synthesis. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have gained great attention since their introduction in 2014. Variational auto-encoders (VAEs), which were put forward around the same time, have also increased in popularity over recent years. Recent work on image synthesis using a VAE includes DALL-E (Ramesh et al., 2021). Inspired by simulated annealing and diffusion processes, the use of diffusion models in image synthesis (Ho et al., 2020; Dhariwal and Nichol, 2021) has recently achieved high quality results. The following logic leads us to focus on the GLIDE model (Nichol et al., 2021). To begin with, we consider generative models that can take both visual and textual input. Following that, we concentrate on diffusion-based methods because they have shown superior performance in terms of image sample quality and have well-established model structures that make use of recent advances in transformers and diffusion methods. Finally, we choose GLIDE among diffusion-based models since it is trained on billions of images and can be used to perform image editing (inpainting).
Datasets
In general, there are two types of human action video datasets: those taken in the third person and those taken in the first person (egocentric). Third-person human action datasets include UCF101, KTH, UCFsports, Human3.6M, Sports1m, Penn Action and THUMOS-15 (Zhou et al., 2020). These datasets cover human actions like dancing, climbing, walking, etc., all viewed from a third-person standpoint.
First-person (egocentric) video datasets include Extended GTEA Gaze+ (Li et al., 2021) and EPIC-Kitchens-100 (Damen et al., 2020). The majority of the actions in these datasets involve first-person observers holding or manipulating objects. The actions in these two datasets are all about the preparation of meals in a realistic kitchen scenario. We selected egocentric videos for two reasons:
1. In egocentric videos, most actions are close-ups of hand movements, so the regions of manipulated objects are prominent within the image, which allows sufficient information regarding object state changes to survive resizing to 64×64 as required for the GLIDE model;
2. The publicly available version of GLIDE ('filtered') was trained on a filtered version of a dataset that excluded all images of humans, so it may have poor performance on whole-body state change prediction.
EPIC-Kitchens was utilised as the reference dataset in our experiments because the video quality is better (full HD over 1280 × 920 and brighter lighting) and the dataset covers 100 hours of recording, more than three times the amount of Extended GTEA Gaze+.
Method
The proposed method for action-effect prediction using GLIDE depends on two key elements described in the following sections.
Setting of Mask Areas
The success in using GLIDE in the action-prediction task depends critically on the choice of the mask region for inpainting. We consider two alternatives: defining a fixed mask and generating a mask tailored to the content of the given image.
Using a fixed mask
The direct and easiest way to define a masked region for inpainting is to fix the mask area for all input images. For example, as shown in Figure 2 (left), the mask covers the lower two thirds of the image. The problem with a fixed mask is that the chosen region may not be appropriate for every instance of an action. If the mask region is too big, it may not include sufficient information about the scene context, and the generated image may not resemble the original scene context, except in the area of the fixed portion. If we set the mask region too small, we cannot be certain that the whole state changes occur in that area.
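To make the fixed-mask setting concrete, the lower-two-thirds mask can be defined in a few lines. This is only a minimal sketch: the 64×64 resolution matches the input size assumed for the GLIDE model, and the function name is illustrative.

```python
import numpy as np

def fixed_mask(height=64, width=64, keep_top_fraction=1/3):
    """Binary inpainting mask: 1 = region GLIDE may repaint, 0 = region kept.

    The top `keep_top_fraction` of the image is preserved as scene context;
    the lower part is masked out for inpainting.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    top = int(round(height * keep_top_fraction))
    mask[top:, :] = 1  # mask the lower two thirds of the frame
    return mask
```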
Using a generated mask around a region of interest
For action-effect prediction, the region of interest in an image is the area in which actions are performed. Ideally, we would set the inpainting mask to be this region.
The detection of such a region is meaningful because it indicates a zone around the centre of attention, that is where to look for action-relevant items in the scene in order to identify state changes. We adopt two methods for finding masks around regions of interest. In both cases, the regions have already been provided for the EPIC-KITCHENS-100 dataset 1 2 to delineate the prominent objects within the scene.
In the first method, we define object segmentation masks from the regions produced by Mask- RCNN (He et al., 2017).
In the second method, we define hand and object masks from detection boxes around the hands and the manipulated objects using a system (Shan et al., 2020) based on Faster-RCNN. In our experiments, we filter the detections to accept only those above a significance threshold of 0.1.
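A minimal sketch of this box-based masking is given below. The detection output format (a list of boxes with scores) is an assumption for illustration, while the 0.1 threshold follows the setting described above.

```python
import numpy as np

def boxes_to_mask(detections, height, width, score_threshold=0.1):
    """Union of hand/object detection boxes as a binary inpainting mask.

    `detections` is assumed to be a list of dicts with keys
    'box' = (x1, y1, x2, y2) in pixel coordinates and 'score' in [0, 1].
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    for det in detections:
        if det["score"] < score_threshold:
            continue  # keep only reasonably confident detections
        x1, y1, x2, y2 = (int(v) for v in det["box"])
        mask[max(y1, 0):min(y2, height), max(x1, 0):min(x2, width)] = 1
    return mask
```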
Generating the text prompt
The inpainted output image from GLIDE is generated in response to a text prompt, which is a description of the effect of an action. We generate this textual description automatically from the action phrase. To do this, we use the pre-trained auto-regressive language model GPT-3 (Brown et al., 2020) to obtain textual descriptions of future world states from action phrases. The input to GPT-3 is a sequence of randomly chosen pairs of action phrases with the corresponding textual effect descriptions (two pairs in our experiments), followed by the given action phrase. The continuation of this sequence predicted by GPT-3 provides the textual description we require. We randomly selected the examples from the human-collected action-effect pairs dataset of Gao et al. (2018). For example, for the action 'cut apple', the generated action effect description is 'Apple is cut in half with a knife'.
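The few-shot prompt can be assembled as in the following sketch; the prompt template and helper name are illustrative assumptions, and the in-context pairs stand in for the examples sampled from the action-effect dataset.

```python
def build_effect_prompt(action, examples):
    """Build a few-shot prompt mapping an action phrase to an effect description.

    `examples` is a list of (action_phrase, effect_description) pairs,
    e.g. [("cut apple", "Apple is cut in half with a knife."), ...].
    """
    lines = []
    for act, effect in examples:
        lines.append(f"Action: {act}\nEffect: {effect}\n")
    lines.append(f"Action: {action}\nEffect:")
    # The returned string is sent to GPT-3; the model's continuation is then
    # used as the text prompt for GLIDE inpainting.
    return "\n".join(lines)
```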
In experiments, we compare performance with an approach in which the action phrase is input directly to GLIDE as the text prompt.
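Putting the two elements together, the overall procedure can be sketched as follows. The function glide_inpaint is a placeholder for a call to GLIDE in its inpainting mode; the exact sampler interface is not reproduced here.

```python
def predict_action_effect(start_image, action, mask_fn, describe_effect, glide_inpaint):
    """Predict an image of the world state after `action`.

    start_image     : HxWx3 array depicting the initial state
    mask_fn         : start_image -> binary mask (fixed, box-based or segmentation-based)
    describe_effect : action phrase -> effect description (e.g. via GPT-3)
    glide_inpaint   : (image, mask, text) -> inpainted image (GLIDE in inpainting mode)
    """
    mask = mask_fn(start_image)       # where the effect is expected to appear
    prompt = describe_effect(action)  # e.g. "After add chicken, there are now chicken in the pot."
    return glide_inpaint(start_image, mask, prompt)
```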
Results
We visually compare performance on the action-effect prediction task with the three mask settings and two ways of generating text prompts.
Influence of Mask Areas
In Figure 3 we show three different types of action: add, cut, and remove. We set the fixed mask to the region that is perceived as the foreground in the majority of action instances. We observe that the GLIDE model with a fixed mask is capable of refilling the masked image with manipulated objects. But the generated object, which is 'chicken' for 'add chicken', 'apple' for 'cut apple' in Figure 3, takes the whole unmasked area. For the hand and object masks, the mask incorporates more information about the environment in comparison to the fixed mask. The objects in action can be projected to have a reasonable size and form. However, some vital regions may be cropped owing to the rectangular form of the detection boxes. For the action 'remove lid', the object detection area does not fully cover the lid, but rather the movable section.
With segmentation masks, we obtained better results on these three action instances. For the action 'add chicken', apart from the manipulated object (chicken), the potato and the pot are also masked. The masks are more precise, and there is more visual information (part of the hand, the chopping board and the kitchen environment), allowing the model to refill the pot and the chopping board. The resulting picture is more compatible with its environment. For the action 'cut apple', the apple is predicted to be of a suitable size and location, but the hand is not created in a sensible way. For the action 'remove lid', the pot is well detected compared with using the fixed and detection masks. Though the pot shape is not quite round and the borders are not perfectly connected, it best depicts the lid-removed state. While mask design improves prediction, there is still room for improvement: the model cannot use any information about the manipulated object itself, thus the newly produced objects are not exactly those that appeared in the start frame.
Influence of Text Prompts
The effect description for the action "add chicken" shown in Figure 3 comes from GPT-3. In comparison to a pure action phrase, the text prompt "After add chicken, there are now chicken in the pot." contains more detailed information regarding the effects of the action, specifically that the chicken is now in the pot. We can observe that, with this text prompt, the chicken is in the pot in all predicted images. We can also see a noticeable improvement in generated image quality for the action "cut apple" with the fixed mask and for the action "remove lid" with the segmentation mask.
Figure 3: Examples of action-effect prediction on action "add chicken" (left), "cut apple" (middle) and "remove lid" (right) with GLIDE using different masks and text prompts. Within the panel for each action are shown the original start and end frames from the dataset (top row), the three masks (2nd row), the results using the action phrase as the text prompt to GLIDE (3rd row), and the results using the effect description from GPT-3 as the text prompt to GLIDE (4th row)
Failure cases
In Figure 4 we show several failure cases: actions that change the brightness of the environment rather than the attributes of objects, e.g. 'turn on light'; position-changing actions such as 'switch cupboard' (i.e. open or close the cupboard); and object-quantity-increasing actions such as 'cut carrots' and 'peel garlic', where the initial masked area may be too small to accommodate the newly formed pieces.
Figure 4: Failure examples using the segmentation mask and the action phrase as text prompt.
Conclusions and Future Work
We have explored GLIDE's potential on our real-world action-effect prediction task. We have shown that by optimising the mask area design and converting actions into action-effect descriptions as text prompts, the GLIDE model can create more accurate predictions that are consistent with the start world state.
In future work, we plan to fine-tune GLIDE for our action-effect task using a specialised dataset. It would also be interesting to explore whether GLIDE could be developed to avoid the use of a mask and instead revise the whole image based on a text prompt.
Figure 1: Examples of prediction of the world's future state after an action. The images are taken from the EPIC-Kitchen dataset.
Figure 2: Examples of different mask area settings.
1 https://github.com/epic-kitchens/epic-kitchens-100object-masks
2 https://github.com/epic-kitchens/epic-kitchens-100hand-object-bboxes
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Damen, D., Doughty, H., Farinella, G. M., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W., et al. (2020). The EPIC-Kitchens dataset: Collection, challenges and baselines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4125-4141.
Davis, E. (1998). Naive physics perplex. AI Magazine, 19(4):51-51.
Dhariwal, P. and Nichol, A. (2021). Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34.
Gao, Q., Yang, S., Chai, J., and Vanderwende, L. (2018). What action causes this? Towards naive physical action-effect prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 934-945.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969.
Ho, J., Jain, A., and Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851.
Li, Y., Liu, M., and Rehg, J. (2021). In the eye of the beholder: Gaze and actions in first person video. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Liu, X., Park, D. H., Azadi, S., Zhang, G., Chopikyan, A., Hu, Y., Shi, H., Rohrbach, A., and Darrell, T. (2021). More control for free! Image synthesis with semantic diffusion guidance. arXiv preprint arXiv:2112.05744.
Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., and Chen, M. (2021). GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821-8831. PMLR.
Shan, D., Geng, J., Shu, M., and Fouhey, D. F. (2020). Understanding human hands in contact at internet scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9869-9878.
Zhou, Y., Dong, H., and El Saddik, A. (2020). Deep learning in next-frame prediction: A benchmark review. IEEE Access, 8:69273-69283.
6,884,242 | The CLE Urdu POS Tagset | The paper presents a design schema and details of a new Urdu POS tagset. This tagset is designed due to challenges encountered in working with existing tagsets for Urdu. It uses tags that judiciously incorporate information about special morpho-syntactic categories found in Urdu. With respect to the overall naming schema and the basic divisions, the tagset draws on the Penn Treebank and a Common Tagset for Indian Languages. The resulting CLE Urdu POS Tagset consists of 12 major categories with subdivisions, resulting in 32 tags. The tagset has been used to tag 100k words of the CLE Urdu Digest Corpus, giving a tagging accuracy of 96.8%. | [
15348803,
2714227,
14937015
] | The CLE Urdu POS Tagset
Tafseer Ahmed [email protected]
Centre for Language Engineering
Al-Khawarizmi Institute of Compute Science
UET
LahorePakistan
DHA Suffa University
KarachiPakistan
Saba Urooj
Centre for Language Engineering
Al-Khawarizmi Institute of Compute Science
UET
LahorePakistan
Sarmad Hussain
Centre for Language Engineering
Al-Khawarizmi Institute of Compute Science
UET
LahorePakistan
Asad Mustafa
Centre for Language Engineering
Al-Khawarizmi Institute of Compute Science
UET
LahorePakistan
Rahila Parveen
Centre for Language Engineering
Al-Khawarizmi Institute of Compute Science
UET
LahorePakistan
Farah Adeeba
Univ. of Konstanz
KonstanzGermany
Annette Hautli
Univ. of Konstanz
KonstanzGermany
Miriam Butt
The CLE Urdu POS Tagset
POS Tagset, Urdu, Corpus
The paper presents a design schema and details of a new Urdu POS tagset. This tagset is designed due to challenges encountered in working with existing tagsets for Urdu. It uses tags that judiciously incorporate information about special morpho-syntactic categories found in Urdu. With respect to the overall naming schema and the basic divisions, the tagset draws on the Penn Treebank and a Common Tagset for Indian Languages. The resulting CLE Urdu POS Tagset consists of 12 major categories with subdivisions, resulting in 32 tags. The tagset has been used to tag 100k words of the CLE Urdu Digest Corpus, giving a tagging accuracy of 96.8%.
Introduction
Choosing an appropriate tagset is a preliminary and vital task for successful POS tagging. A tagset needs to be able to encode the grammatical distinctions that are of interest for further steps in natural processing or for linguistic research, while allowing for efficient and accurate automatic tagging (MacKinlay, 2005). With respect to the South Asian language Urdu (spoken mainly in Pakistan and India), several different POS tagsets have already been developed. However, in the process of POS tagging the CLE Urdu Digest corpus, the only large generally available corpus for Urdu, 1 we identified several shortcomings with the existing POS tagsets and came to the conclusion that a new revised tagset needed to be designed to: (a) provide access to the kinds of linguistic distinctions we found necessary for further natural language processing such as grammar development, machine translation and generation; (b) improve the automatic tagging. This paper discusses the existing tagsets for Urdu (Muaz, Ali & Hussain, 2009;Sajjad, 2007;Sajjad & Schmid, 2009;Schmid, 1995) and presents a new POS tagset that has been used to tag the CLE Urdu Digest Corpus.
Literature Review
POS tagsets have been reviewed and revised for a variety of languages due to a variety of motivations. Lüdeling & Kytö (2008) provide a detailed comparison of a range of English POS tagsets (including tagsets for the Brown, LOB, UPENN, BNC-C5, BNC-C6, ICE, PoW and LLC corpora) along with their differences. Lüdeling reports that these tagsets differ according to the requirements of the target application of the tagged corpus as well as according to the underlying linguistic theory. For example, the ICE tagging scheme differs from other tagsets mainly because it was developed at the time when syntactic theories like Generalized Phrase Structure Grammar and Lexical-Functional Grammar had proposed the notion that a category is composed of a bundle of features. Therefore, this tagging scheme was more useful for feature-based parsers. It is not uncommon to experiment with different tagset designs and to repeatedly revise an existing tagset in order to capture typological properties in a more linguistically adequate and computationally efficient manner. Some examples come from work on Vietnamese (Tran et al., 2009), Slovene (Dzeroski, Erjavec & Zavrel, 2000), Swedish (Carlberger & Kann, 1999) and Persian (Oroumchian et al., 2006).
1 See http://www.cle.org.pk/clestore/.
South Asian POS tagsets
With respect to South Asian languages, several different tagsets have been designed. These differ in terms of morpho-syntactic features, tag definition and tag granularity. However, South Asian languages form a common linguistic area and therefore share many structural characteristics. This realization is reflected in Baskaran et al. (2008), which contains a proposal for a framework that defines an overall common POS tagset for the languages of India (see also Chandrashekar (2007) on Sanskrit). The framework follows certain principles, i.e., a tagset should be hierarchically organized and include reference to morpho-syntactic features. Further, a balanced approach should be followed in using the form vs. function as criteria for the classification of tags. This framework ensures that common categories across Indian languages are annotated in the same way.
Urdu POS tagsets
The search for a good Urdu POS tagset has already gone through multiple iterations. In 2003, Hardie designed the first POS tagset for Urdu. He followed the EAGLES guidelines (Hardie, 2003). This tagset was based on morpho-syntactic categories of Urdu and contained 350 tags. As a large number of tags is difficult to handle for computational processing (with a small-sized corpus), there has been limited follow-up work based on this tagset, beyond the initial POS tagger through the EMILLE project (Lüdeling & Kytö, 2008; Hardie, 2003). This tagset contains finer distinctive categories for pronouns and demonstratives, but does not do sufficient justice to the Urdu verbal and tense/aspect system. In 2008, another tagset 2 was developed by the Center for Research on Urdu Language Processing (CRULP), following the guidelines of the Penn Treebank; it contains 46 tags. In this tagset, a verb category has multiple tags based on the morphology of the verbs. Similarly, common nouns were also classified with finer distinctions than previously available. Muaz, Ali & Hussain (2009) make a comparison of these tagsets and propose a new tagset with 32 tags. 17 tags are the same as in the previous two tagsets, but differences among types of nouns (for example, with or without case, or compounding) were eliminated as their syntactic distribution is identical.
Tagset Design
As part of a larger effort whose aim it is to develop and tag a balanced corpus of Urdu (Ijaz & Hussain, 2007; Urooj et al., 2012) for use in Urdu linguistic and computational research, a revision of the most recent tagset (Muaz, Ali & Hussain, 2009) has been undertaken. We analyzed design principles and individual tags of the currently available tagsets, and provide a new tagset which combines qualities of all of them. The new CLE Urdu POS Tagset is logically hierarchical, i.e. it provides 12 primary POS categories and then 35 subcategories. For the design of individual tags, our primary inspiration is the tagset by Muaz, Ali & Hussain (2009). However, we added, deleted and merged different tags on the basis of: (a) comparison with other tagsets, (b) syntactic distribution and other linguistic issues (examples provided below in the discussion of the tags) and (c) the tagging of 100K words of the CLE Urdu Digest balanced corpus. Mainly, we improved the tagset by proposing tags that are motivated by a readily identifiable morphosyntactic pattern and distribution. The following is a brief description of the tags. The detailed tagset is available via the CLE website. 3
Noun
Nouns are divided into two sub-categories, common noun (NN) and proper noun (NNP). We decided that a single POS tag will be assigned to multiwords and name entities. For example, "islAm AbAd 4 " (having a space or zero-width-joiner) is tagged as NNP.
Some canonical examples of common nouns are kitAb 'book', pAnI 'water' and yAd 'memory'. However, the category also includes other nouns that display an adverbial nature like time, place, manner, etc. Some examples of these are: andar 'inside' and yahAN 'here' etc. These adverbal nominals can occur with or without specifiers/modifiers.
(1) vuh andar AI
PRP NN VB
3Sg inside come.Perf.Sg
'She came inside.'
(2) vuh [ghar kE andar] AI
PRP [NN PSP NN] VB
3Sg [house of inside] come.Perf.Sg
'She came inside the house.'
There was a disagreement in previous tagsets about these adverbial nominals. The Hindi/Indian language Tagset (Bharati et al., 2006) introduced a new tag category NST (Noun Spatial Temporal) for these words. The previous Urdu tagset (Muaz, Ali & Hussain, 2009) classifies these words as postpositions. We differ from both of these approaches for the following reason. These words allow specifiers/modifiers (cf. example (2) above) and so are different from the case markers and simple postpositions that have a noun or pronoun preceding them. Hence, we do not classify adverbial nominals with the postpositions. The other choice was to create a separate noun (sub-)tag for these words. However, we found that their syntactic behavior is similar to that of common nouns. Hence, we did not create a new tag to cater to the semantic difference between two sets of words and instead subsumed these adverbial nominals under the common noun (NN) tag.
Pronoun
Pronouns are divided into 7 subcategories. The personal pronoun (PRP) appears as a replacement of the noun. Some examples are meN (1Sg.Nom/Erg), mujHE (1Sg.Acc/Dat), vuh (3Sg.Nom) and usE (3Sg.Acc/Dat). The demonstrative (PDM) appears before a noun as its specifier., as in (3).
(3) vuh laRkI AI
PDM NN VB
3Sg girl come.Perf.F.Sg
'That girl came.'
Note that the same form vuh acts as personal pronoun (PRP) or demonstrative (PDM). They can be differentiated on the basis of syntactic context. In (6), vuh is the head of the noun phrase, hence it is tagged as PRP. The possessive pronouns (PRS) are used to show the relation of ownership. Some examples are mErA 'my', tumhArA 'your' and hamArA 'our'. The reflexive pronouns (PRF) are used for referring to oneself. The examples are xud 'self' and apnE Ap 'self'. The reflexive apna (APNA) is used to show self's relation with the noun. An example is given in (4).
(4) mErI apnI gHaRI
PRS APNA NN
my own watch
'my own watch'
There are two separate subcategories for relative pronouns: Relative Personal (PRR) and Relative Demonstrative (PRD). The syntactic behaviour of these pronouns is different from that of personal pronouns and demonstratives. The following example demonstrates the relative personal (PRR) jo 'who'. It was discussed whether we should create separate categories for interrogative pronouns. We found that the interrogative pronoun can replace other related POS tags, e.g. pronoun, adverb and quantifier. Hence no special tag for interrogative pronouns is created, and the interrogative words are merged into the relevant POS category. For example, kon 'who' is a personal pronoun (PRP) and kitnA 'how much' is a quantifier (Q).
Verb
Urdu verbs can be differentiated into canonical main verbs (6), light verbs appearing with a noun or adjective (7), and copular verbs (8). In (9), he comes after the main verb and expresses tense information, hence it is a tense auxiliary. However, in (8) it is functioning as a main verb. For this reason, it is tagged as VB.
There are different morphological forms of Urdu verbs. The root A 'come' has the morphological forms A-tE (imperfective masculine plural), A-tI (imperfective feminine singular), A-ON (subjunctive first person singular) etc. Unlike Hardie (2003) and following Muaz, Ali & Hussain (2009) and Bharati et al. (2006), we do not create separate tags to encode morphological information. There is a single tag VB for all forms of Urdu main verbs. However, there is an exception to this rule. The verb in the infinitive form is tagged as VBI. We provide a special tag for verbal infinitives because these act as verbal nouns and therefore display a syntactic distribution that differs from that of main verbs. We have also found that we would have liked to have been able to conduct a targeted extraction of instances of verbal infinitives in our previous work within Urdu NLP. This has not been possible with existing tagsets.
Auxiliary
The tagset encodes the fine distinctions necessary for the complex nature of the verbal complex in Urdu. There are 4 types of auxiliaries: Aspectual (AUXA), Progressive (AUXP), Tense (AUXT) and Modals (AUXM). An example of a tense auxiliary (AUXT) is given in (9). The examples of the other tags are as follows:
Nominal Modifiers
Nominal modifiers convey information about a noun. These include adjectives (JJ) e.g. accHA 'good', quantifiers (Q) e.g. kucH 'some', cardinals (CD) e.g. dO 'two', ordinals (OD) e.g. dUsrA 'second', fractions (FR) e.g. AdHA 'half' and multiplicatives (QM) e.g. gunA 'times'. We found that there are many adjectives that also appear as a noun. We decided to assign the POS according to the syntactic function. For example, GulAm 'slave' appears as an adjective in (15) and as a noun in (16). As discussed in section 3.1, we consider multiwords as a single token. The superlative and comparative forms of some borrowed adjectives have the Persian suffixes tarIn and tar respectively. A space occurs between the adjective and the suffix, e.g. "AzIm tar" 'greater' and "sust tarIn" 'slowest'. We consider these as multiwords and assign the tag JJ.
Adverb
There are two sub-categories of adverbs: general adverb (RB) and negation (NEG).
Adposition
There are two subcategories of adpositions: pre- and postpositions. Some examples of Urdu prepositions are fI 'in'/'per', az 'from', sivAE 'except' and bajuz 'except' (Raza, 2011). An example with fI (borrowed from Arabic) is given below. Examples of postpositions are nE (the ergative marker), kO (the accusative and dative), tak 'till', liE 'for' and bin 'without'. As discussed in section 3.1, we consider adverbial nominals, e.g. andar 'inside', Upar 'above'/'over' etc., as common nouns.
Conjunction
The category conjunction is divided into the usual coordinate and subordinate conjunction, but also provides for two Urdu-specific categories. Examples of co-ordinating conjunctions (CC) are or 'and' and lEkin 'but'/'however'. Examples of sub-ordinating conjunctions (SC) are kiyUnkah 'because' and tO 'then'. An example of a SC is given below: 'if (you) will work hard then (you) will be successful.'
The above example has agar 'if' as a pre-sentential conjunction (SCP). These words appear before the first clause in subordinating constructions. Following Bharati et al. (2006), we introduced the tag subordinating-conjunction-kar (SCK) for the verb kar (/kE) 'do' appearing at the end of embedded nonfinite clauses. An example of this construction is given below.
Particle
Particles are divided into two subcategories: a general particle tag (PRT) and a VALA tag for a language-specific category ('the X one'). The general particle tag (PRT) includes emphatic particles, e.g. bHI 'also' and hI 'even'. The usages of the particle vAl- are described in detail in Muaz & Khan (2009). An example is given below.
(23) sabzI valA NN VALA vegetable one 'The thing (e.g. meal) that has vegetables'/ 'the person who sells vegetable.'
Symbol
Symbol has two categories: Punctuation (PU) and other symbols (SYM).
Residual
Residual contains one tag for Foreign Fragment (FF) covering all foreign language elements. This tag is assigned only when we cannot assign an Urdu POS tag to that word (or multiword). For example, subh2An Allah 'glory to Allah' is an Arabic fragment, but we assign the interjection tag (INJ) to it. Similarly, the English noun book in the following example is treated as a noun because it has been absorbed into standard Urdu usage via intensive language contact with English:
(24) us nE buk paRHI
PRP PSP NN VB
3Sg Erg book read.Perf.F.Sg
'He/She read the book.'
If a word cannot be assigned an Urdu POS tag in this way, it is tagged as a foreign fragment (FF).
Tagging the CLE Urdu Digest Corpus
The updated tagset was used to tag the CLE Urdu Digest Corpus, covering an 80% training corpus and a 20% testing corpus. The files were selected randomly. The Tree Tagger (Schmid, 1994; Schmid, 1995) was used for automatic tagging, with a machine learning technique of Decision Trees and a smoothing technique of Class Equivalence. The results are given in Table 1; they show a tagging accuracy of 96.8%, indicating that our tagset is performing well.
Discussion and Conclusion
In analyzing the results of the tagger, it was observed that the tagger encounters problems in disambiguating between some particular pairs of tags. While there are two tags for nouns (noun vs. proper noun), Urdu does not make a clear distributional distinction between these nouns. We have decided to nevertheless keep both tags since information about proper nouns is generally important for further natural language processing. Nouns are confused with adjectives when they occur adjacent to one another. The same issue was found by Muaz, Ali & Hussain (2009). Due to the fact that the postposition 'in' and the personal pronoun 'I' are both written mEN in Urdu, the tagger confuses the two when they occur in syntactic positions where both options are possible. Similarly, the tagger finds the Urdu word tO confusing, as it can act both as a discourse particle and as introducing a subordinate clause. On the other hand, the results for the newly added tag Foreign Fragment (FF) show good accuracy as compared to the previous tagsets, where this category was dealt with under expressions (Exp) (Sajjad, 2007; Sajjad & Schmid, 2009) or was ignored (Muaz, Ali & Hussain, 2009).
In conclusion, we have presented a new POS tagset for Urdu. It is based on a critical analysis of several previous iterations of tagset proposals and builds on these. The new CLE Urdu POS Tagset has been used to tag 100k words of the publicly available balanced CLE Urdu Digest corpus. Work is continuing to extend the tagged corpus to 1 million words.
Tag    Total Tokens   Error   Error %   Maximum Misclassification
VBF        2602        119     4.57      30 AUXT/NN
AUXA        760        102    13.42      98 VBF
PDM         428         77    17.99      69 PRP
PRP        1091         72     6.60      53 PDM
NN         6266         65     1.04      11 JJ
JJ         1820         54     2.97      30 NN
PSP        3844         53     1.38      30 PRP
SC          454         52    11.45      35 PRT
AUXT        704         43     6.11      28 AUXA
NNP        1014         40     3.94      37 NN
Q           291         20     6.87      15 NN
RB          462         19     4.11       9 NN
CC          502         17     3.39       6 NN
PRR         139         14    10.07       5 PRP
PRT         395         13     3.29       9 PSP
AUXP        121          9     7.44       7 VBF
PRS         115          7     6.09       6 PDM
AUXM        104          6     5.77       5 AUXA
INJ          17          6    35.29       6 NN
SCK         154          6     3.90       3 RB
SCP          65          6     9.23       5 SC
CD          622          4     0.64       2 PU
PU         2536          4     0.16       2 VBF
VBI         438          4     0.91       2 VBF
FF           72          3     4.17       3 PU
OD          150          3     2.00       2 CD
PRF          14          2    14.29       2 NN
Table 1: Results and Error Analysis
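Per-tag figures of the kind reported in Table 1 can be derived from aligned gold and predicted tag sequences with a short script such as the following sketch; the tag names are those of the CLE tagset, while the data structures and function name are illustrative assumptions.

```python
from collections import Counter, defaultdict

def per_tag_errors(gold_tags, predicted_tags):
    """Total tokens, error counts and the most frequent confusion per gold tag."""
    totals, errors = Counter(), Counter()
    confusions = defaultdict(Counter)
    for gold, pred in zip(gold_tags, predicted_tags):
        totals[gold] += 1
        if gold != pred:
            errors[gold] += 1
            confusions[gold][pred] += 1
    report = {}
    for tag, total in totals.items():
        top = confusions[tag].most_common(1)
        report[tag] = {
            "total": total,
            "errors": errors[tag],
            "error_pct": 100.0 * errors[tag] / total,
            "max_misclassification": top[0] if top else None,  # (predicted tag, count)
        }
    return report
```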
2 See http://www.cle.org.pk/software/ling_resources/UrduNepaliEnglishParallelCorpus.htm.
3 See http://www.cle.org.pk/software/langproc/POStagset.htm.
4 Urdu is written in a modified Persio-Arabic script. In this paper, we present a Latin script transliteration of the Urdu words. The transliteration scheme followed is described in http://www.lrec-conf.org/proceedings/lrec2010/pdf/194_Paper.pdf.
5 See http://cle.org.pk/eulr/.
Acknowledgement
This work has been supported by a DAAD Research Grant, Essential Urdu Linguistic Resources. 5
References
Baskaran S., Bali K., Bhattacharya T., Bhattacharyya P., Jha G. N., Rajendran S., Saravanan K., Sobha L. and Subbarao K. V. (2008). Designing a Common POS-Tagset Framework for Indian Languages. In Proceedings of the 6th Workshop on Asian Language Resources.
Bharati A., Sangal R., Sharma D. M. and Bai L. (2006). AnnCorra: Annotating Corpora. Guidelines for POS and Chunk Annotation for Indian Languages. LTRC-TR31.
Carlberger J. and Kann V. (1999). Implementing an efficient part-of-speech tagger. Software-Practice and Experience, pp. 815-832.
Chandrashekar R. (2007). POS Tagger for Sanskrit. Ph.D. thesis, Jawaharlal Nehru University, New Delhi.
Dzeroski S., Erjavec T. and Zavrel J. (2000). Morphosyntactic Tagging of Slovene: Evaluating Taggers and Tagsets. In Proceedings of the Second International Conference on Language Resources and Evaluation.
Hardie A. (2003). Developing a tag-set for automated part-of-speech tagging in Urdu. In Archer D., Rayson P., Wilson A. and McEnery T. (eds.), Proceedings of the Corpus Linguistics 2003 Conference.
Ijaz M. and Hussain S. (2007). Corpus Based Urdu Lexicon Development. In Proceedings of the Conference on Language Technology (CLT07), University of Peshawar, Pakistan.
Leech G. (1997). Grammatical Tagging. In Garside R., Leech G. and McEnery A. (eds.), Corpus Annotation: Linguistic Information for Computer Text Corpora. Longman, London.
Lüdeling A. and Kytö M. (eds.) (2008). Corpus Linguistics: An International Handbook. Berlin: Walter de Gruyter.
MacKinlay A. (2005). The effects of part-of-speech tagsets on tagger performance. Honours thesis, University of Melbourne.
Muaz A., Ali A. and Hussain S. (2009). Analysis and development of Urdu POS tagged corpus. In Proceedings of the 7th Workshop on Asian Language Resources, IJCNLP'09, Suntec City, Singapore.
Muaz A. and Khan A. N. (2009). The Morphosyntactic Behavior of 'Wala' in Urdu Language. In Proceedings of the 28th Annual Meeting of the South Asian Language Analysis Roundtable, SALA'09, University of North Texas, US.
Oroumchian F., Tasharofi S., Amiri H., Hojjat H. and Raja F. (2006). Creating a Feasible Corpus for Persian POS Tagging. Department of Electrical and Computer Engineering, University of Tehran.
Raza G. (2011). Subcategorization Acquisition and Classes of Predication in Urdu. PhD thesis, University of Konstanz, Germany.
Sajjad H. (2007). Statistical Part of Speech Tagger for Urdu. MS thesis, National University of Computer and Emerging Sciences, Lahore, Pakistan.
Sajjad H. and Schmid H. (2009). Tagging Urdu Text with Parts of Speech: A Tagger Comparison. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL-09).
Schmid H. (1994). Probabilistic Part-of-Speech Tagging Using Decision Trees. In Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK.
Schmid H. (1995). Improvements in Part-of-Speech Tagging with an Application to German. In Proceedings of the ACL SIGDAT-Workshop, Dublin, Ireland.
Tran O. T., Le C. A., Ha T. Q. and Le Q. H. (2009). An Experimental Study on Vietnamese POS Tagging. In Proceedings of Asian Language Processing, pp. 23-27.
Urooj S., Hussain S., Adeeba F., Jabeen F. and Perveen R. (2012). CLE Urdu Digest Corpus. In Proceedings of the Conference on Language and Technology 2012 (CLT12), Lahore, Pakistan.
16,796,126 | Tree Linearization in English: Improving Language Model Based Approaches | We compare two approaches to dependency tree linearization, a task which arises in many NLP applications. The first one is the widely used 'overgenerate and rank' approach which relies exclusively on a trigram language model (LM); the second one combines language modeling with a maximum entropy classifier trained on a range of linguistic features. The results provide strong support for the combined method and show that trigram LMs are appropriate for phrase linearization while on the clause level a richer representation is necessary to achieve comparable performance. | [
13466080,
2680971,
13955192,
6207667
] | Tree Linearization in English: Improving Language Model Based Approaches
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 2009. 2009
Katja Filippova
EML Research gGmbH Schloss
Wolfsbrunnenweg 3369118HeidelbergGermany
Michael Strube
EML Research gGmbH Schloss
Wolfsbrunnenweg 3369118HeidelbergGermany
Tree Linearization in English: Improving Language Model Based Approaches
Proceedings of NAACL HLT 2009: Short Papers
NAACL HLT 2009: Short PapersBoulder, ColoradoAssociation for Computational LinguisticsJune 2009. 2009
We compare two approaches to dependency tree linearization, a task which arises in many NLP applications. The first one is the widely used 'overgenerate and rank' approach which relies exclusively on a trigram language model (LM); the second one combines language modeling with a maximum entropy classifier trained on a range of linguistic features. The results provide strong support for the combined method and show that trigram LMs are appropriate for phrase linearization while on the clause level a richer representation is necessary to achieve comparable performance.
Introduction
To date, many natural language processing applications rely on syntactic representations and also modify them by compressing, fusing, or translating into a different language. A syntactic tree emerging as a result of such operations has to be linearized to a string of words before it can be output to the end-user. The simple and most widely used trigram LM has become a standard tool for tree linearization in English (Langkilde & Knight, 1998). For languages with less rigid word order, LM-based approaches have been shown to perform poorly (e.g., Marsi & Krahmer (2005) for Dutch), and methods relying on a range of linguistic features have been successfully applied instead (see Uchimoto et al. (2000) and Ringger et al. (2004), Filippova & Strube (2007) for Japanese and German resp.). To our knowledge, none of the linearization studies have compared a LM-based method with an alternative. Thus, it would be of interest to draw such a comparison, especially on English data, where LMs are usually expected to work well.
As an improvement to the LM-based approach, we propose a combined method which distinguishes between the phrase and the clause levels:
• it relies on a trigram LM to order words within phrases;
• it finds the order of clause constituents (i.e., constituents dependent on a finite verb) with a maximum entropy classifier trained on a range of linguistic features.
We show that such a differentiated approach is beneficial and that the proposed combination outperforms the method which relies solely on a LM. Hence, our results challenge the widespread attitude that trigram LMs provide an appropriate way to linearize syntactic trees in English but also indicate that they perform well in linearizing subtrees corresponding to phrases.
LM-based Approach
Trigram models are easy to build and use, and it has been shown that more sophisticated n-gram models (e.g., with higher n, complex smoothing techniques, skipping, clustering or caching) are often not worth the effort of implementing them due to data sparseness and other issues (Goodman, 2001). This explains the popularity of trigram LMs in a variety of NLP tasks (Jurafsky & Martin, 2008), in particular, in tree linearization where they have become given a syntactic tree, one needs to consider all possible linearizations and then choose the one with the lowest entropy. Given a projective dependency tree 1 , all linearizations can be found recursively by generating permutations of a node and its children. Unfortunately, the number of possible permutations grows factorially with the branching factor. Hence it is highly desirable to prohibit generation of clearly unacceptable permutations by putting hard constraints encoded in the English grammar. The constraints which we implement in our study are the following: determiners, possessives, quantifiers and noun or adjective modifiers always precede their heads. Conjunctions, coordinated elements, prepositional objects always follow their heads. These constraints allow us to limit, e.g., the total of 96 (2 × 2 × 4!) possibilities for the tree corresponding to the phrase all the brothers of my neighbor (see Figure 1) to only two (all the brothers of my neighbor, the all brothers of my neighbor). Still, even with such constraints, in some cases the list of possible linearizations is too long and has to be reduced to the first N , where N is supposed to be sufficiently large. In our experiments we break the permutation generation process if the limit of 20,000 variants is reached.
Combined Approach
The LM approach described above has at least two disadvantages: (1) long distance dependencies are not captured, and (2) the list of all possible linearizations can be huge which makes the search for the best string unfeasible. However, our combined approach is based on the premise that trigram LMs are well-suited for finding the order within NPs, PPs and other phrases where the head is not a finite verb. E.g., given a noun modified by the words big, red and the, a LM can reliably rank the correct order higher than incorrect ones ( the big red N vs. the red big N, etc.). Next, on the clause level, for every finite verb in the tree we find the order of its dependents using the method which we originally developed for German (Filippova & Strube, 2007), which utilizes a range of such linguistic features as PoS tag, syntactic role, length in words, pronominalization, semantic class, etc. 2 For the experiments presented in this paper, we train two maximum entropy classifiers on all but the semantic features:
1. The first classifier determines the best starting point for a sentence: for each constituent dependent on the verb it returns the probability of this constituent being the first one in a sentence. The subject and also adjuncts (e.g. temporal adjuncts like yesterday) are usually found in the beginning of the sentence.
2. The second classifier is trained to determine whether the precedence relation holds between two adjacent constituents and is applied to all constituents but the one selected by the first classifier. The precedence relation defined by this classifier has been shown to be transitive and thus can be used to sort randomly ordered constituents. Note that we do not need to consider all possible orders to find the best one.
Once the order within clause constituents as well as the order among them is found, the verb is placed right after the subject. The verb placing step completes the linearization process. The need for two distinct classifiers can be illustrated with the following example:
(1a,b) are grammatical while (1c) is hardly acceptable, and no simple precedence rule can be learned from pairs of constituents in (1a) and (1b): the temporal adjunct earlier today can precede or follow each of the other constituents dependent on the verb (she, him, an email). Thus, the classifier which determines the precedence relation is not enough. However, an adequate rule can be inferred with an additional classifier trained to find good starting points: a temporal adjunct may appear as the first constituent in a sentence; if it is not chosen for this position, it should be preceded by the pronominalized subject (she), the indirect object (him) and the short non-pronominalized object (an email).
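The clause-level procedure can be sketched as follows; the classifier interfaces and the constituent attributes are illustrative assumptions, and the pairwise sort is licensed by the transitivity of the learned precedence relation noted above.

```python
from functools import cmp_to_key

def order_clause(constituents, verb, p_first, precedes):
    """Order clause constituents with the two maximum-entropy classifiers.

    p_first(c)     : probability that constituent c starts the sentence
    precedes(a, b) : True if the precedence classifier puts a before b
    """
    first = max(constituents, key=p_first)            # classifier 1: best starting point
    rest = [c for c in constituents if c is not first]
    rest.sort(key=cmp_to_key(lambda a, b: -1 if precedes(a, b) else 1))  # classifier 2
    ordered = [first] + rest
    # the finite verb is placed right after the subject (default: after the first slot)
    subj = next((i for i, c in enumerate(ordered) if getattr(c, "role", None) == "subject"), 0)
    ordered.insert(subj + 1, verb)
    return ordered
```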
Experiments
The goal of our experiments is to check the following hypotheses:
1. That trigram LMs are well-suited for phrase linearization.
2. That there is a considerable drop in performance when one uses them for linearization on the clause level.
3. That an approach which uses a richer representation on the clause level is more appropriate.
Data
We take a subset of the TIPSTER 3 corpus -all Wall Street Journal articles from the period of 1987-92 (approx. 72 mill. words) -and automatically annotate them with sentence boundaries, part of speech tags and dependency relations using the Stanford parser (Klein & Manning, 2003). We reserve a small subset of about 600 articles (340,000 words) for testing and use the rest to build a trigram LM with the CMU toolkit (Clarkson & Rosenfeld, 1997, with Good-Turing smoothing and vocabulary size of 30,000). To train the maximum entropy classifiers we use about 41,000 sentences.
Evaluation
To test the trigram-based approach, we generate all possible permutations of clause constituents, place the verb right after the subject and then rank the resulting strings with the LM taking the information on sentence boundaries into account. To test the combined approach, we find the best candidate for the first position in the clause, then put the remaining constituents in a random order, and finally sort them by consulting the second classifier. 3 Description at http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC93T3A.
The purpose of the evaluation is to assess how good a method is at reproducing the input from its dependency tree. We separately evaluate the performance on the phrase and the clause levels. When comparing the two methods on the clause level, we take the clause constituents as they are presented in the input sentence. Although English allows for some minor variation in word order, and the generated order is not necessarily wrong if it differs from the original one, we do not expect this to happen often and evaluate the performance rigorously: only the original order counts as correct. The default evaluation metric is per-phrase/per-clause accuracy:
acc = |correct| / |total|
Other metrics we use to measure how different a generated order of N elements is from the correct one are:
1. Kendall's τ: τ = 1 − 4t / (N(N − 1)), where t is the minimum number of interchanges of consecutive elements needed to achieve the right order (Kendall, 1938; Lapata, 2006).
2. Edit-distance-related di: di = 1 − m/N, where m is the minimum number of deletions combined with insertions needed to get to the right order (Ringger et al., 2004).
E.g., on the phrase level, the incorrectly generated phrase the all brothers of my neighbor ('1-0-2-3-4-5') gets τ = 0.87, di = 0.83. Likewise, given the input sentence from (1a), the incorrectly generated order of the four clause constituents in (1c) -'1-0-2-3' -gets τ of 0.67 and di of 0.75.
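Both metrics are easy to reproduce. The sketch below computes them for the two examples above; for di it assumes that one deletion plus its re-insertion counts as a single move, so m equals N minus the length of the longest increasing subsequence, which reproduces the values quoted in the text.

```python
from bisect import bisect_left

def kendall_tau(order):
    """tau = 1 - 4t / (N(N-1)); t, the number of inversions, equals the minimum
    number of interchanges of consecutive elements needed to sort the order."""
    n = len(order)
    t = sum(1 for i in range(n) for j in range(i + 1, n) if order[i] > order[j])
    return 1 - 4 * t / (n * (n - 1))

def edit_distance_di(order):
    """di = 1 - m/N, where m is taken to be N minus the length of the longest
    increasing subsequence (one delete-and-reinsert move per misplaced element)."""
    tails = []
    for x in order:                       # O(N log N) longest increasing subsequence
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return 1 - (len(order) - len(tails)) / len(order)

print(round(kendall_tau([1, 0, 2, 3, 4, 5]), 2))       # 0.87, the phrase example
print(round(edit_distance_di([1, 0, 2, 3, 4, 5]), 2))  # 0.83
print(round(kendall_tau([1, 0, 2, 3]), 2), round(edit_distance_di([1, 0, 2, 3]), 2))  # 0.67 0.75
```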
Results
The results of the experiments on the phrase and the clause levels are presented in Tables 1 and 2 respectively. From the total of 5,000 phrases, 55 (about 1%) were discarded because the number of admissible linearizations exceeded the limit of 20,000. In the first row of Table 1 we give the results for cases where, with all constraints applied, there were still several possible linearizations (non-triv; 1,797); the second row is for all phrases which were longer than one word (> 1; 2,791); the bottom row presents the results for the total of 4,945 phrases (all).
Discussion
The difference in accuracy between the performance of the trigram model on the phrase and the clause level is considerable -76% vs. 49%. The accuracy of 76% is remarkable given that the average length of phrases which counted as non-triv is 6.2 words, whereas the average clause length in constituents is 3.3. This statistically significant difference in performance supports our hypothesis that the 'overgenerate and rank' approach advocated in earlier studies is more adequate for finding the optimal order within phrases. The τ value of 0.85 also indicates that many of the wrong phrase linearizations were near misses. On the clause level, where long distance dependencies are frequent, an approach which takes a range of grammatical features into account is more appropriate -this is confirmed by the significantly better results of the combined method (67%).
Conclusions
We investigated two tree linearization methods in English: the mainstream trigram-based approach and the one which combines a trigram LM on the phrase level with two classifiers trained on a range of linguistic features on the clause level. The results demonstrate (1) that the combined approach reproduces the word order more accurately, and (2) that the performance of the trigram LM-based method on phrases is significantly better than on clauses.
Figure 1: A tree of the noun phrase all the brothers of my neighbor
(1) a. [Earlier today] [she] sent [him] [an email].
    b. [She] sent [him] [an email] [earlier today].
    c. *[She] sent [earlier today] [him] [an email].
Table 1: Results of the trigram method on the phrase level

            acc   τ     di
  non-triv  76%   0.85  0.94
  > 1       85%   0.90  0.96
  all       91%   0.94  0.98
Table 2 presents the results of the trigram-based (TRIGRAM) and combined (COMBINED) methods on the clause level. Here, we filtered out trivial cases and considered only clauses which had at least two constituents dependent on the verb (approx. 5,000 clauses in total).

Table 2: Results of the two methods on the clause level

            acc   τ     di
  TRIGRAM   49%   0.49  0.81
  COMBINED  67%   0.71  0.88
Note that a phrase structure tree can be converted into a dependency tree, and some PCFG parsers provide this option.
See the cited paper for the full list of features and implementation details.
Acknowledgments: This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a KTF grant (09.009.2004). We would like to thank the anonymous reviewers for their feedback.
Clarkson, P. & R. Rosenfeld (1997). Statistical language modeling using the CMU-Cambridge toolkit. In Proc. of EUROSPEECH-97, pp. 2707-2710.
Filippova, K. & M. Strube (2007). Generating constituent order in German clauses. In Proc. of ACL-07, pp. 320-327.
Goodman, J. T. (2001). A bit of progress in language modeling. Computer Speech and Language, pp. 403-434.
Jurafsky, D. & J. H. Martin (2008). Speech and Language Processing. Upper Saddle River, N.J.: Prentice Hall.
Kendall, M. G. (1938). A new measure of rank correlation. Biometrika, 30:81-93.
Klein, D. & C. D. Manning (2003). Accurate unlexicalized parsing. In Proc. of ACL-03, pp. 423-430.
Langkilde, I. & K. Knight (1998). Generation that exploits corpus-based statistical knowledge. In Proc. of COLING-ACL-98, pp. 704-710.
Lapata, M. (2006). Automatic evaluation of information ordering: Kendall's tau. Computational Linguistics, 32(4):471-484.
Marsi, E. & E. Krahmer (2005). Explorations in sentence fusion. In Proc. of ENLG-05, pp. 109-117.
Ringger, E., M. Gamon, R. C. Moore, D. Rojas, M. Smets & S. Corston-Oliver (2004). Linguistically informed statistical models of constituent structure for ordering in sentence realization. In Proc. of COLING-04, pp. 673-679.
Uchimoto, K., M. Murata, Q. Ma, S. Sekine & H. Isahara (2000). Word order acquisition from corpora. In Proc. of COLING-00, pp. 871-877. |
18,454,449 | Sub-Word Similarity based Search for Embeddings: Inducing Rare-Word Embeddings for Word Similarity Tasks and Language Modelling | Training good word embeddings requires large amounts of data. Out-of-vocabulary words will still be encountered at test-time, leaving these words without embeddings. To overcome this lack of embeddings for rare words, existing methods leverage morphological features to generate embeddings. While the existing methods use computationally-intensive rule-based (Soricut and Och, 2015) or tool-based (Botha and Blunsom, 2014) morphological analysis to generate embeddings, our system applies a computationally-simpler sub-word search on words that have existing embeddings. Embeddings of the sub-word search results are then combined using string similarity functions to generate rare word embeddings. We augmented pre-trained word embeddings with these novel embeddings and evaluated on a rare word similarity task, obtaining up to 3 times improvement in correlation over the original set of embeddings. Applying our technique to embeddings trained on larger datasets led to on-par performance with the existing state-of-theart for this task. Additionally, while analysing augmented embeddings in a log-bilinear language model, we observed up to 50% reduction in rare word perplexity in comparison to other more complex language models. | [
8796808,
11332377,
806709,
16326127
] | Sub-Word Similarity based Search for Embeddings: Inducing Rare-Word Embeddings for Word Similarity Tasks and Language Modelling
December 11-17 2016
Mittul Singh
Spoken Language Systems (LSV)
Saarbrücken Graduate School of Computer Science
Saarland Informatics Campus
Collaborative Research Center on Information Density and Linguistic Encoding Saarland University
SaarbrückenGermany
Clayton Greenberg
Spoken Language Systems (LSV)
Saarbrücken Graduate School of Computer Science
Saarland Informatics Campus
Collaborative Research Center on Information Density and Linguistic Encoding Saarland University
SaarbrückenGermany
Youssef Oualil
Spoken Language Systems (LSV)
Collaborative Research Center on Information Density and Linguistic Encoding Saarland University
SaarbrückenGermany
Dietrich Klakow
Spoken Language Systems (LSV)
Saarbrücken Graduate School of Computer Science
Saarland Informatics Campus
Collaborative Research Center on Information Density and Linguistic Encoding Saarland University
SaarbrückenGermany
Sub-Word Similarity based Search for Embeddings: Inducing Rare-Word Embeddings for Word Similarity Tasks and Language Modelling
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, December 11-17, 2016
Training good word embeddings requires large amounts of data. Out-of-vocabulary words will still be encountered at test-time, leaving these words without embeddings. To overcome this lack of embeddings for rare words, existing methods leverage morphological features to generate embeddings. While the existing methods use computationally-intensive rule-based (Soricut and Och, 2015) or tool-based (Botha and Blunsom, 2014) morphological analysis to generate embeddings, our system applies a computationally-simpler sub-word search on words that have existing embeddings. Embeddings of the sub-word search results are then combined using string similarity functions to generate rare word embeddings. We augmented pre-trained word embeddings with these novel embeddings and evaluated on a rare word similarity task, obtaining up to 3 times improvement in correlation over the original set of embeddings. Applying our technique to embeddings trained on larger datasets led to on-par performance with the existing state-of-theart for this task. Additionally, while analysing augmented embeddings in a log-bilinear language model, we observed up to 50% reduction in rare word perplexity in comparison to other more complex language models.
Introduction
Word embeddings have been successfully applied to many NLP tasks (Collobert and Weston, 2008;Collobert, 2011;Socher et al., 2011;Socher et al., 2012;Hermann and Blunsom, 2014;Bengio and Heigold, 2014;Yang et al., 2015), and these systems often achieved state-of-the-art performance. This success has been ascribed to embeddings' ability to capture regularities traditionally represented in core NLP features. Most of these embeddings were trained on large amounts of data, allowing them to have good coverage of the relevant vocabularies. However, embeddings often still cannot satisfactorily represent rare words, i.e. words with few occurrences in training data.
To generate useful embeddings for words too rare for standard methods to handle, Luong et al. (2013) and Botha and Blunsom (2014) leveraged the segmentation tool, Morfessor (Creutz and Lagus, 2005), while Cotterell et al. (2016) used morphological lexica to generate rare-word embeddings. In general, these methods added resource-based knowledge to their systems in order to form word vector representations, showing impressive performance gains over methods which did not address the rare words problem.
In contrast, Soricut and Och (2015) applied an automatic method to induce morphological rules and transformations as vectors in the same embedding space. More specifically, they exploited automatically-learned prefix- and suffix-based rules using the frequency of such transformations in the data and induced a morphological relationship-based word graph. Then, they searched over this graph for rules that best infer the morphology of the rare words. The embeddings were then estimated using these rare-word-explaining rules. In this method, creating and tuning this morphological graph could lead to a high initial cost.

Table 1: This table reports various statistics for different language datasets used for language modelling. The last column shows the coverage of our method in percentage.

  Task                             V      #ENF   Coverage
  Rare Word (Luong et al., 2013)   2951   1073   100
  Gur65 (Gurevych, 2005)           49     4      100
  Rare Word + Google News          2951   173    100

In order to overcome this cost and still be able to automatically induce rare word representations, we propose a sub-word similarity-based search. This technique maps a rare word to a set of its morphologically-similar words and combines the embeddings of these similar words to generate the rare word's representation (further discussed in Section 2). These generated embeddings can then be combined with existing word embeddings to be applied in various tasks.
In Section 3, we evaluate our embeddings on word similarity tasks. For further evaluation, in Section 4, we instantiate a log-bilinear language model (Mnih and Hinton, 2007) with our word embeddings and analyse their perplexity performance on rare words over various language modelling corpora. Finally, we summarise our findings in Section 5.
Rare-Word Embeddings
Rare words form a large part of a language's vocabulary. This is illustrated in Table 1, which reports the vocabulary size and the number of rare words (RW) with zero (out-of-vocabulary words) or one training-set occurrence for our corpora. As shown in this table, rare words constitute 10%-50% of the vocabulary. Further, it is widely known that in English, roughly half of all word types in a given corpus occur only once. Thus, it is essential to handle rare words properly to obtain good performance.
In the context of word embeddings-related tasks, training good word embeddings can incur huge computational costs (Al-Rfou et al., 2013). So, in this work, we focus on augmenting readily available embeddings rather than creating new ones from scratch. To increase the availability of resources for many languages, Al-Rfou et al. (2013) released 1 pre-trained word embeddings for more than one hundred languages. These pre-trained word embeddings, namely Polyglot, were constructed by applying the method outlined in Bengio et al. (2009) on Wikipedia text, which vary in size from millions of tokens to a few billion tokens.
Among other available pre-trained word embeddings, Google released word2vec (Mikolov et al., 2013)-based embeddings 2 trained on their English News dataset (about 100 billion tokens). In our experiments, we applied both of these embedding sets to jump-start generating the rare word embeddings for different languages.
Inducing Rare-Word Embeddings
Statistics about the various language modelling corpora and word similarity tasks that we used in our experiments are shown in Table 1 and Table 2. In these tables, along with the vocabulary size and number of rare words, we also report the number of words for which embeddings were not found (ENF = Embedding Not Found) in the pre-trained embedding sets. For most of the language and pre-trained embedding pairs, the number of ENFs formed a large share of the vocabulary for the word similarity tasks, and of the rare-word set size for the language modelling tasks. Hence, we estimated the missing word embeddings before using them in our tasks. We first provide a high-level description of the steps of our method to induce the word embeddings for these missing rare words, followed by a detailed description of each step. For a given set of pre-trained embeddings with a finite vocabulary V_E, applied to a task with vocabulary V_T, and a finite set of given rare words RW = {w | w ∉ V_E and w ∈ V_T}, we apply the following steps:
1. Map every word w ∈ V_E to its sub-word features
2. Index w ∈ V_T using its sub-word features
3. Search the index for matches of w ∈ RW
4. For every w ∈ RW, combine the matched words' embeddings to generate its embedding
Step 1: Map words to sub-words. Although a word may be rare, substrings of that word are, in general, less rare. Hence, we start by breaking down each word w ∈ V into its constituent N-sized sub-word units, D_N(w). For example, given the sub-word size N = 3:

D_3(language) = {lan, ang, ngu, gua, uag, age}

In our experiments, we worked with a value of N = 3. However, it remains to be seen how using differently sized sub-word units or even morphemes affects the performance of this method. Note that our procedure does not formally require that sub-word units be of equal length, so linguistically-sensible morphemes may be used if the resource is available for that language.
Step 2: Index words using their sub-words. Pre-trained sets of embeddings can already cover large numbers of words (for example, Polyglot embeddings have 100K words in their vocabulary), so performing substring searches and comparisons can become quite computationally expensive. To speed up the search for sub-word units, we create an inverted index over words. For each w ∈ V, we treat D_N(w) as a document and feed it into a search-engine-based indexer. In this work, we used Lucene (McCandless et al., 2010) 3 to index the words.
Step 3: Search for matches of a rare word. Next, we break down the rare word w′ ∉ V into its sub-word units D_N(w′) and search for D_N(w′) using the index. We restrict the search results to the top K results, denoted by R_K(w′). R_K(w′) contains words having sub-word units similar to those of w′, hence words which are sub-word similar to w′. In our experiments, we fixed K = 10.
Step 4: Generating rare-word embeddings. To estimate the word embedding of w′ ∈ RW, we compute the weighted average of the embeddings (v) of the rare-word matches. For this weighted average, we employ a string similarity function S, such that

v_{w′} = Σ_{w : D_N(w) ∈ R_K(w′)} S(w′, w) × v_w
The above method particularly hinges on the third step, where we utilise the sub-word similarity of morphologically similar words to search for rare-word alternatives, leading to embedding combination in the fourth step. Hence, we refer to the above technique as Sub-Word Similarity based Search (SWordSS: pronounced swordz). The SWordSS embeddings ({v_{w′} : w′ ∈ RW}) are used along with {v_w : w ∈ V} to perform rare word-related tasks.
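The four steps can be condensed into a short sketch. The plain dictionary index and the raw trigram-overlap score below are simplifications standing in for the Lucene index and its relevance ranking, and sim is whichever string similarity function S is chosen.

```python
from collections import defaultdict

def ngrams(word, n=3):
    """Step 1: break a word into its N-sized sub-word units D_N(w)."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def build_index(vocab_with_embeddings, n=3):
    """Step 2: inverted index mapping each sub-word unit to the words containing it."""
    index = defaultdict(set)
    for w in vocab_with_embeddings:
        for g in ngrams(w, n):
            index[g].add(w)
    return index

def swordss_embedding(rare_word, embeddings, index, sim, n=3, k=10):
    """Steps 3-4: retrieve the top-K sub-word-similar words and combine their
    embeddings, weighted by the chosen string similarity function `sim`."""
    overlap = defaultdict(int)
    for g in ngrams(rare_word, n):
        for w in index.get(g, ()):
            overlap[w] += 1                  # crude stand-in for Lucene's relevance score
    matches = sorted(overlap, key=overlap.get, reverse=True)[:k]
    if not matches:
        return None                          # caller falls back to a random vector
    return sum(sim(rare_word, w) * embeddings[w] for w in matches)
```

In practice, Lucene's own scoring would replace the overlap count, and a word with zero matches receives a random vector, as noted below for the language modelling experiments.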
In the fourth step, we apply different string similarity functions (S), described in the list below, to average different embeddings of matches from the third step. These different similarity functions help provide a more morphologically-sensible scoring of matches and eventually are used to weight the inputs of the final rare word embeddings.
• Jaccard Index, Jaccard (1912) computes the size of the character intersection over the size of the character union. Therefore, order of characters is not considered by this metric. Frequent characters such as vowels lead to uninteresting intersections, and short words could possibly suffer from an unfair floor.
• Jaro similarity, Jaro (1989) considers the number of matching characters in corresponding positions and the number of transpositions detected. So, order of characters does matter for this metric. Insertions and deletions are treated similarly, and the frequency and length effects from Jaccard could also affect this metric.
• Most frequent K Characters similarity, Seker et al. (2014) considers the counts of the top K characters in each string. Thus, if the "root morphemes" are long enough to create nontrivial count statistics, this metric may, too, favor a more linguistic similarity, but as before, shorter strings could have unwanted effects.
• Subsequence Kernels, Lodhi et al. (2002) create automatically-generated features based on sequences of characters within the strings to be compared. Therefore, those sequences that do not cross morpheme boundaries could be especially helpful for estimating morphological similarity.
• Tversky coefficient, Tversky (1977) breaks down the union in the Jaccard index, allowing different weights for the denominator intersection, those characters that only appear in the first string, and those characters that only appear in the second string. These metaparameters allow the metric some flexibility that the others do not.
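Two of these functions are simple enough to write down directly over character sets. The sketch below is illustrative only, and the Tversky weights alpha and beta are example values rather than settings from the paper.

```python
def jaccard(a, b):
    """Jaccard index over character sets: |A ∩ B| / |A ∪ B|."""
    A, B = set(a), set(b)
    return len(A & B) / len(A | B) if A | B else 0.0

def tversky(a, b, alpha=0.5, beta=0.5):
    """Tversky coefficient: re-weights the characters unique to each string."""
    A, B = set(a), set(b)
    inter = len(A & B)
    denom = inter + alpha * len(A - B) + beta * len(B - A)
    return inter / denom if denom else 0.0
```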
In our experiments on rare word-related tasks, we mostly observed that using SWordSS led to high coverage rates, also presented in Table 1 and Table 2. We note that whenever words w resulted in zero matches in our experiments, they were either removed completely (in case of word similarity tasks) or substituted with random vectors (in case of language modelling tasks, Section 4).
Word Similarity Task
To test the efficacy of SWordSS embeddings, we evaluated them on two standard word similarity tasks. In such tasks, the correlation between the human annotator ratings of word pairs and the scores generated using embeddings was calculated. A good set of embeddings would achieve a high correlation. Specifically, we evaluated the SWordSS embeddings on Luong et al. (2013)'s English Rare Words dataset with 2034 word pairs (Luong2034) and also evaluated these embeddings on a German word similarity task (Gurevych, 2005) with 65 word pairs (Gur65).
Experimental Setup
For the German word similarity task, we used only the Polyglot word embeddings, which are 64-dimensional vectors. For English, along with the Polyglot word embeddings, we also used the Google News word2vec embeddings, which are 300-dimensional vectors.

As a baseline, we used the existing pre-trained word embeddings, which are compared to their augmented SWordSS versions. While augmenting the pre-trained set with the SWordSS embeddings, we also explored various string similarity functions to be used in the fourth step (Section 2.1), namely the Jaccard Index (SWordSS_ji), Jaro similarity (SWordSS_jaro), Most Frequent K Characters similarity (SWordSS_mfk), Subsequence Kernels (SWordSS_ssk) and the Tversky Coefficient (SWordSS_tc).

To evaluate the effect of these string similarity functions, we also implemented a constant similarity function (S(w, w′) = 1, where w and w′ are words) in the fourth step, denoting the corresponding embeddings by SWordSS_1. Finally, we also compared the SWordSS embeddings to SO2015 (Soricut and Och, 2015), which likewise applies morphological analysis to generate missing word embeddings, in a manner quite similar to SWordSS.
Results
Using SWordSS embeddings clearly increased the correlation with human judgements in comparison to the original embeddings on the Gur65 task (shown in Table 3), though the different string similarity functions, except the constant function (SWordSS_1), led to correlations in a very close range, showing that, particularly for German, the different similarity functions behave very similarly. Henceforth, we only report the best correlation coefficient obtained with these functions.
Next, we compared the SWordSS versions of the Polyglot embeddings and the Google News embeddings on the Luong2034 task. When the SWordSS versions were compared to the original embeddings (labelled w/o SWordSS), they led to a higher correlation, as shown in Table 4. However, for each set of embeddings, the difference between SWordSS_1 and SWordSS_sim remained small. The correlations for the SWordSS version of Polyglot were still lower than the correlation rates reported by SO2015. This was due to the difference in the initial quality of the embeddings used by each method: as the Polyglot embeddings were trained on less data than SO2015, they were easily outperformed.

In Table 4, we addressed this lower-performance issue by replicating our experiment using the Google News word2vec embeddings to jump-start the SWordSS versions for the Luong2034 task. Using these embeddings, trained on a larger dataset than that used by Polyglot, led to SWordSS versions with on-par results with SO2015 for the Luong2034 task.

Overall, the SWordSS technique was able to drastically improve the performance of pre-trained embeddings on the above word similarity tasks. Even though SWordSS-augmented Google News embeddings did not significantly outperform SO2015, this method provides a simpler, sub-word-search-based alternative to the graph search over morphological relationships performed by SO2015. Furthermore, by applying sub-word search in the third step, as shown in Section 2.1, SWordSS overcomes the need for creating and tuning the graph of morphological relationships required by SO2015.
Word Embeddings in Language Models
Training language models (LMs) using an expanded vocabulary (having more word types than contained in the training corpus) requires assigning probabilities to words which are not present in the training set. Traditionally, these rare words are assigned a default probability value in conventional N-gram and long short-term memory (LSTM)-based recurrent neural network LMs (Sundermeyer et al., 2012). This is usually not beneficial for spoken term detection and automatic speech recognition systems built for low-resourced languages, since the presence of rare words in speech queries is high (Logan et al., 1996; Logan et al., 2005).

To avoid this misrepresentation of rare words, we apply SWordSS embeddings in a language modelling framework, specifically a log-bilinear language model (LBL) (Mnih and Hinton, 2007). In our experiments, when the SWordSS embeddings were used to initialise an LSTM's input layer, the system obtained the same perplexity values as an LSTM initialised with random embeddings. This observation suggests that the LBL framework is better suited than LSTMs to this naïve way of initialising neural language models with SWordSS embeddings and improving perplexity on rare words.
LBL predicts the next word vector p ∈ R^d, given a context of n − 1 words, as a transformed sum of the context word vectors q_j ∈ R^d:

p = Σ_{j=1}^{n−1} q_j C_j

where C_j ∈ R^{d×d} are position-specific transformation matrices. p is compared with the next word w's representation r_w. This comparison is performed using the vector dot product, which is then used in a softmax function to obtain the probability of the next word:

p(w_i | w_{i−n+1}^{i−1}) = exp(p · r_w + b_w) / Σ_{v∈V} exp(p · r_v + b_v)

where b_w is a bias term encoding the prior probability of word type w. First, Q, the collection of context word vectors (q_j), and R, the collection of next-word representations (r_w), are initialised with the pre-trained word embeddings. Thereafter, we train the LBL using stochastic gradient descent.
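A minimal NumPy sketch of this prediction step, assuming the parameter arrays Q and R (initialised from the SWordSS-augmented embeddings), the position matrices C and the bias vector b are already given; the array shapes noted in the comments are assumptions for illustration.

```python
import numpy as np

def lbl_next_word_probs(context_ids, Q, R, C, b):
    """Log-bilinear prediction: p = sum_j Q[context_j] @ C[j], followed by a
    softmax over the dot products with every candidate next-word representation.

    Assumed shapes: Q and R are (|V|, d); C is (n-1, d, d); b is (|V|,).
    """
    p = np.zeros(Q.shape[1])
    for j, w in enumerate(context_ids):      # j runs over the n-1 context positions
        p += Q[w] @ C[j]
    scores = R @ p + b                       # one score per vocabulary word
    scores -= scores.max()                   # numerical stability for the softmax
    probs = np.exp(scores)
    return probs / probs.sum()
```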
Previously, extensions to class-based and factor-based formulations have provided impressive improvements over regular N-gram LMs for morphologically rich languages (Botha and Blunsom, 2014). But these LMs do not provide straightforward ways of incorporating pre-trained word embeddings, so we use the original LBL because of the ease with which it incorporates pre-trained embeddings in its formulation.
Data
To evaluate the SWordSS embeddings for language modelling, we used the Europarl-v7 corpus of the German (de) language as processed by Botha and Blunsom (2014). We also performed language modelling experiments with the SWordSS embeddings on Tagalog (tl), Turkish (tr) and Vietnamese (vi) corpora, which include transcriptions of phone conversations collected under the IARPA Babel Program language collection releases babel106b-v0.2f, babel105-v0.5 and babel107b-v0.7, respectively.

Table 5: Statistical summary of corpora used for the language modelling experiments. Information corresponding to a language is presented in a column.
The German corpus was processed to have no out-of-vocabulary words (OOVs); however, it still had a lot of low-frequency words (see Table 2). In contrast, the Babel corpora have OOVs as well as other low-frequency words.
The Babel corpora were provided with training and development sets. We divided the existing development set into two halves to use one as the test set and the other half as the new development set. The statistics on these corpora are summarised in Table 5.
In Tables 1 & 2, we had shown that even though a lot of rare-word embeddings are missing from the pre-trained set, SWordSS was able to generate and obtain high coverage rates for such words, giving this method added benefit in the context of rare words.
Experimental Setup
Before evaluating the SWordSS embeddings for predicting rare words, we used all the OOVs to expand the corresponding vocabulary. SWordSS embeddings for all the words in the expanded vocabulary were used to initialise the LBL framework as described in Section 4. A bigram version of this LBL (LBL2_SWordSS) was then further trained on the language corpora before being evaluated.

We compare our LBL2_SWordSS model with the conventional Modified Kneser-Ney five-gram LM (MKN5) (Kneser and Ney, 1995; Chen and Goodman, 1996) and also with the bigram log-bilinear LM (LBL2). As a more powerful baseline, we also trained an LSTM-based RNN LM to compare with LBL2_SWordSS. Moreover, we compare LBL2_SWordSS with a character-aware language model (Kim et al., 2015), denoted CCNN-LSTM. The CCNN-LSTM was chosen for comparison because of its ability to use character-based features to implicitly handle OOVs and rare words. For training each of these LMs, we used the same expanded vocabulary as used by LBL2_SWordSS. In training the neural network-based language models, we restricted the number of parameters to be similar to that of LBL2_SWordSS.
Perplexity Experiments
We compare the language models described in Section 4.2 using perplexity values calculated on the test sets of the different languages, shown in Table 6. Overall, in terms of test-set perplexity, CCNN-LSTM outperformed LBL2_SWordSS comfortably on most language corpora. However, on Vietnamese (in which characters represent meaning units rather than sounds) CCNN-LSTM suffered, and the LSTM outperformed the other language models. In comparison to the LSTM and CCNN-LSTM, the lower performance of LBL2_SWordSS on the test data was expected, as the former are more complex, non-linear language models.
However, for tasks like spoken term detection, having low perplexity on the most frequent words is not good enough; hence, we also compare the LMs on the perplexity of a rare-word-based test set. To perform this comparison, we computed perplexity only on rare words present in the test set, i.e. words with a training-set frequency of one (RW1PPL). As shown in Table 6, we observe that LBL2_SWordSS performed better than the LSTM-based LMs across the various languages in terms of RW1PPL.
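Computing such a restricted perplexity is straightforward; the sketch below accumulates log-probabilities only over test tokens whose training-set count is at most one, with log_prob standing in for whichever language model is being evaluated.

```python
import math

def rare_word_perplexity(test_sentences, train_counts, log_prob, max_freq=1):
    """Perplexity restricted to test tokens whose training-set frequency is
    <= max_freq (RW1PPL when max_freq=1; OOVs have a training count of zero).
    log_prob(history, word) is assumed to return a natural log-probability."""
    total, n = 0.0, 0
    for sent in test_sentences:
        for i, w in enumerate(sent):
            if train_counts.get(w, 0) <= max_freq:
                total += log_prob(sent[:i], w)
                n += 1
    return math.exp(-total / n) if n else float("inf")
```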
We note that the CCNN-LSTM model cannot easily incorporate SWordSS embeddings. Hence, it is not directly comparable to LBL2_SWordSS, as the latter has more information at its disposal.
Performance on OOVs and Rare Words
To further compare the performance of the aforementioned language models on rare words, we analyse the perplexities of such words (RWPPL) in the test set as a function of the frequency class of these words in the training set. This variation is displayed in Figure 1.

For OOVs (rare words with zero training-set frequency), LBL2_SWordSS outperformed the other language models built with a similar number of parameters on the Tagalog and Turkish corpora. In these cases, LBL2_SWordSS reduced rare-word perplexities by a factor of two over the character-feature-rich CCNN-LSTM, whose design allows it to implicitly handle rare words.

Even for rare words with a training-set frequency of up to one, LBL2_SWordSS reduced perplexity by up to a factor of 2.5 with respect to CCNN-LSTM on the German, Tagalog and Turkish corpora. Interestingly, on these particular language corpora, Figure 1 shows that the LBL also performed better than both LSTM-based LMs in modelling OOVs and rare words of frequency up to ten.

For Vietnamese, the LBL alone was able to improve the prediction of OOV and RW1 words over the other LMs. We attribute this to the lower coverage of Vietnamese rare words by SWordSS compared to the other languages; here, adding SWordSS embeddings instead harmed the prediction of OOV and RW1 words.

These perplexity improvements started to wane when higher-frequency words were included in the rare-word set, across the different languages. Nevertheless, for languages with rich morphology, initialising the LBL with SWordSS embeddings reduced perplexities on rare words.
Conclusion
In this paper, we introduced SWordSS, a novel sub-word similarity based search for generating rare word embeddings. It leverages the sub-word similarity in morphologically rich languages to search for close 4 when initialised with SWordSS embeddings it obtained the same perplexity values Figure 1: Variation of rare-word perplexity versus threshold on frequency of training-set words on German, Tagalog, Turkish and Vietnamese corpora matches of a rare word, and then combines these close matches to estimate the embedding of a rare word.
Even though SWordSS is an unsupervised approach like that of Soricut and Och (2015), it differs from the latter in the way it utilises morphological information. The latter automatically induces morphological rules and transformations to build a morphological word graph; this graph is then tuned and used to induce the embedding of a rare word. Instead, SWordSS replaces the overhead of rule induction and graph creation by searching a sub-word inverted index to find rare-word matches and combining their embeddings to estimate the rare-word embedding.

To test the SWordSS technique, we augmented pre-trained embeddings and then evaluated them on word similarity tasks. The augmented embeddings drastically outperformed the initial set of embeddings, but lagged behind the state-of-the-art performance of Soricut and Och (2015). However, by employing embeddings trained on larger corpora, SWordSS was able to perform comparably on a rare-word task.

We also investigated the effects of using SWordSS-augmented embeddings for modelling rare words. To perform this experiment, we trained the LBL_SWordSS LM and compared it with language models such as the character-aware LM and an LSTM-based RNN LM restricted to a similar size. On almost all datasets, the character-aware LM outperformed the other LMs with respect to perplexity on the complete test sets. But on rare words, SWordSS showed up to 50% reduced perplexity values in comparison to the other LMs. Hence, SWordSS embeddings contributed substantially to modelling rare words.
In future work, we plan to incorporate SWordSS embeddings into more complex LMs than LBL and further analyse the different string similarity functions used in SWordSS's formulation.
Table 2: This table reports various statistics of a few language word similarity datasets used in our experiments. The last column shows the coverage of our method in percentage.

Table 3: Spearman's rank correlation (%) based evaluation of various string similarity functions used to generate augmented word vectors for the German word similarity task (Gur65)
Table 4: Spearman's rank correlation (%) based evaluation of techniques with and without morphological features used to generate representations for the word similarity task (Luong2034)

  Word Vectors       Polyglot   Google News
  SO2015 w/o morph   -          44.7
  SO2015 w/ morph    -          52.0
  w/o SWordSS        9.7        45.3
  w/ SWordSS_1       28.9       51.3
  w/ SWordSS_sim     30.4       51.4
Table 6: Perplexities on the test set (PPL), RW1 perplexities (RW1PPL) in thousands, and the number of parameters (#PAR) for the LBL and LSTM LMs in millions, presented for the four language corpora

                  German           Tagalog          Turkish          Vietnamese
  Language Model  PPL    RW1PPL    PPL    RW1PPL    PPL    RW1PPL    PPL    RW1PPL
  MKN5            364.2  559K      162.6  420K      478.9  139K      120.8  174K
  LBL2            391.1  404K      171.4  204K      649    94K       137.6  100K
  LSTM 4          323.1  596K      134.7  343K      489.8  110K      102.1  457K
  CCNN-LSTM       315.7  636K      117.4  354K      408.7  168K      182.7  516K
  LBL2_SWordSS    369.4  260K      167.2  167K      513.2  110K      136.4  143K
  #PAR            4.7 M            2.9 M            3.2 M            0.8 M

4 When initialised with SWordSS embeddings, the LSTM obtained the same perplexity values.

As shown in Table 6, LBL2_SWordSS was able to outperform the conventional LBL2 comfortably on all the corpora except Vietnamese. For Vietnamese, LBL2_SWordSS and LBL2 performed comparably; due to SWordSS's low coverage of the Vietnamese vocabulary, initialising LBL2 with SWordSS embeddings led to only a marginal performance gain.
1 https://sites.google.com/site/rmyeid/projects/polyglot
2 https://code.google.com/archive/p/word2vec/
https://lucene.apache.org/
AcknowledgmentsWe would like to thank anonymous reviewers for their comments, which helped improve this paper. We are also immensely grateful to Rose Hoberman for her comments on an earlier version of this manuscript.
Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183-192, Sofia, Bulgaria, August. Association for Computational Linguistics.
Samy Bengio and Georg Heigold. 2014. Word embeddings for speech recognition. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, pages 1053-1057, Singapore, September.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41-48, New York, NY, USA. ACM.
Jan A. Botha and Phil Blunsom. 2014. Compositional morphology for word representations and language modelling. CoRR, abs/1405.4273.
Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 310-318, Santa Cruz, California, USA, June. Association for Computational Linguistics.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 160-167, New York, NY, USA. ACM.
Ronan Collobert. 2011. Deep learning for efficient discriminative parsing. In International Conference on Artificial Intelligence and Statistics.
Ryan Cotterell, Hinrich Schütze, and Jason Eisner. 2016. Morphological smoothing and extrapolation of word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1651-1660, Berlin, Germany, August. Association for Computational Linguistics.
M. Creutz and K. Lagus. 2005. Unsupervised Morpheme Segmentation and Morphology Induction from Text Corpora Using Morfessor 1.0. Technical report, Helsinki University of Technology.
Iryna Gurevych. 2005. Using the structure of a conceptual network in computing semantic relatedness. In Natural Language Processing - IJCNLP 2005: Second International Joint Conference, Jeju Island, Korea, October 11-13, 2005, Proceedings, pages 767-778. Springer Berlin Heidelberg, Berlin, Heidelberg.
Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. CoRR, abs/1404.4641.
Paul Jaccard. 1912. The distribution of the flora in the alpine zone. New Phytologist, 11(2):37-50, February.
Matthew A. Jaro. 1989. Advances in record-linkage methodology as applied to matching the 1985 census of Tampa, Florida. Journal of the American Statistical Association, 84(406):414-420.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. CoRR, abs/1508.06615.
Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pages 181-184. IEEE.
Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. The Journal of Machine Learning Research, 2:419-444.
Beth Logan, Pedro Moreno, Jean-Manuel Van Thong, et al. 1996. An experimental study of an audio indexing system for the web. In Proceedings of the 4th International Conference of Spoken Language Processing. Citeseer.
B. Logan, J. M. Van Thong, and P. J. Moreno. 2005. Approaches to reduce the effects of OOV queries on indexed spoken audio. IEEE Transactions on Multimedia, 7(5):899-906, October.
Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104-113.
Michael McCandless, Erik Hatcher, and Otis Gospodnetic. 2010. Lucene in Action, Second Edition: Covers Apache Lucene 3.0. Manning Publications Co., Greenwich, CT, USA.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 641-648, New York, NY, USA. ACM.
Sadi Evren Seker, Oguz Altun, Ugur Ayan, and Cihan Mert. 2014. A novel string distance function based on most frequent K characters. CoRR, abs/1401.6596.
Richard Socher, Eric H. Huang, Jeffrey Pennin, Christopher D. Manning, and Andrew Y. Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801-809.
Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201-1211, Jeju Island, Korea, July. Association for Computational Linguistics.
Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1627-1637, Denver, Colorado, May-June. Association for Computational Linguistics.
Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In INTERSPEECH, pages 194-197.
Amos Tversky. 1977. Features of similarity. Psychological Review, 84(4):327.
Yiming Yang, Hanxiao Liu, Jaime Carbonell, and Wanli Ma. 2015. Concept graph learning from educational data. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM '15, pages 159-168, New York, NY, USA. ACM. |
6,864,375 | Semantics, Discourse and Statistical Machine Translation | In the past decade, statistical machine translation (SMT) has been advanced from word-based SMT to phrase-and syntax-based SMT. Although this advancement produces significant improvements in BLEU scores, crucial meaning errors and lack of cross-sentence connections at discourse level still hurt the quality of SMT-generated translations. More recently, we have witnessed two active movements in SMT research: one towards combining semantics and SMT in attempt to generate not only grammatical but also meaningpreserved translations, and the other towards exploring discourse knowledge for document-level machine translation in order to capture intersentence dependencies.The emergence of semantic SMT are due to the combination of two factors: the necessity of semantic modeling in SMT and the renewed interest of designing models tailored to relevant NLP/SMT applications in the semantics community. The former is represented by recent numerous studies on exploring word sense disambiguation, semantic role labeling, bilingual semantic representations as well as semantic evaluation for SMT. The latter is reflected in CoNLL shared tasks, SemEval and SenEval exercises in recent years.The need of capturing cross-sentence dependencies for document-level SMT triggers the resurgent interest of modeling translation from the perspective of discourse. Discourse phenomena, such as coherent relations, discourse topics, lexical cohesion that are beyond the scope of conventional sentence-level n-grams, have been recently considered and explored in the context of SMT.This tutorial aims at providing a timely and combined introduction of such recent work along these two trends as discourse is inherently connected with semantics. The tutorial has three parts. The first part critically reviews the phrase-and syntax-based SMT. The second part is devoted to the lines of research oriented to semantic SMT, including a brief introduction of semantics, lexical and shallow semantics tailored to SMT, semantic representations in SMT, semantically motivated evaluation as well as advanced topics on deep semantic learning for SMT. The third part is dedicated to recent work on SMT with discourse, including a brief review on discourse studies from linguistics and computational viewpoints, discourse research from monolingual to multilingual, discourse-based SMT and a few advanced topics.The tutorial is targeted for researchers in the SMT, semantics and discourse communities. In particular, the expected audience comes from two groups: 1) Researchers and students in the SMT community who want to design cutting-edge models and algorithms for semantic SMT with various semantic knowledge and representations, and who would like to advance SMT from sentence-bysentence translation to document-level translation with discourse information; 2) Researchers and students from the semantics and discourse community who are interested in developing models and methods and adapting them to SMT.OutlineSMT Overall Review (30 minutes)• SMT architecture • phrase-and syntax-based SMT 2. Semantics and SMT (1 hour and 15 minutes) | [] | Semantics, Discourse and Statistical Machine Translation
Association for Computational Linguistics. Copyright Association for Computational Linguistics, June 2014.
Deyi Xiong [email protected]
Min Zhang [email protected]
Provincial Key Laboratory for Computer Information Processing Technology
Soochow University
215006SuzhouChina
Description
Semantics, Discourse and Statistical Machine Translation
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Tutorials
Baltimore, Maryland, USA, 22 June 2014. Association for Computational Linguistics.
In the past decade, statistical machine translation (SMT) has been advanced from word-based SMT to phrase-and syntax-based SMT. Although this advancement produces significant improvements in BLEU scores, crucial meaning errors and lack of cross-sentence connections at discourse level still hurt the quality of SMT-generated translations. More recently, we have witnessed two active movements in SMT research: one towards combining semantics and SMT in attempt to generate not only grammatical but also meaningpreserved translations, and the other towards exploring discourse knowledge for document-level machine translation in order to capture intersentence dependencies.The emergence of semantic SMT are due to the combination of two factors: the necessity of semantic modeling in SMT and the renewed interest of designing models tailored to relevant NLP/SMT applications in the semantics community. The former is represented by recent numerous studies on exploring word sense disambiguation, semantic role labeling, bilingual semantic representations as well as semantic evaluation for SMT. The latter is reflected in CoNLL shared tasks, SemEval and SenEval exercises in recent years.The need of capturing cross-sentence dependencies for document-level SMT triggers the resurgent interest of modeling translation from the perspective of discourse. Discourse phenomena, such as coherent relations, discourse topics, lexical cohesion that are beyond the scope of conventional sentence-level n-grams, have been recently considered and explored in the context of SMT.This tutorial aims at providing a timely and combined introduction of such recent work along these two trends as discourse is inherently connected with semantics. The tutorial has three parts. The first part critically reviews the phrase-and syntax-based SMT. The second part is devoted to the lines of research oriented to semantic SMT, including a brief introduction of semantics, lexical and shallow semantics tailored to SMT, semantic representations in SMT, semantically motivated evaluation as well as advanced topics on deep semantic learning for SMT. The third part is dedicated to recent work on SMT with discourse, including a brief review on discourse studies from linguistics and computational viewpoints, discourse research from monolingual to multilingual, discourse-based SMT and a few advanced topics.The tutorial is targeted for researchers in the SMT, semantics and discourse communities. In particular, the expected audience comes from two groups: 1) Researchers and students in the SMT community who want to design cutting-edge models and algorithms for semantic SMT with various semantic knowledge and representations, and who would like to advance SMT from sentence-bysentence translation to document-level translation with discourse information; 2) Researchers and students from the semantics and discourse community who are interested in developing models and methods and adapting them to SMT.OutlineSMT Overall Review (30 minutes)• SMT architecture • phrase-and syntax-based SMT 2. Semantics and SMT (1 hour and 15 minutes)
Description
In the past decade, statistical machine translation (SMT) has advanced from word-based SMT to phrase- and syntax-based SMT. Although this advancement produces significant improvements in BLEU scores, crucial meaning errors and the lack of cross-sentence connections at the discourse level still hurt the quality of SMT-generated translations. More recently, we have witnessed two active movements in SMT research: one towards combining semantics and SMT in an attempt to generate not only grammatical but also meaning-preserved translations, and the other towards exploring discourse knowledge for document-level machine translation in order to capture inter-sentence dependencies.

The emergence of semantic SMT is due to the combination of two factors: the necessity of semantic modeling in SMT and the renewed interest in designing models tailored to relevant NLP/SMT applications in the semantics community. The former is represented by numerous recent studies exploring word sense disambiguation, semantic role labeling, bilingual semantic representations, as well as semantic evaluation for SMT. The latter is reflected in the CoNLL shared tasks and the SemEval and SensEval exercises of recent years.

The need to capture cross-sentence dependencies for document-level SMT has triggered a resurgent interest in modeling translation from the perspective of discourse. Discourse phenomena such as coherence relations, discourse topics and lexical cohesion, which are beyond the scope of conventional sentence-level n-grams, have recently been considered and explored in the context of SMT.

This tutorial aims to provide a timely, combined introduction to such recent work along these two trends, as discourse is inherently connected with semantics. The tutorial has three parts. The first part critically reviews phrase- and syntax-based SMT. The second part is devoted to the lines of research oriented towards semantic SMT, including a brief introduction to semantics, lexical and shallow semantics tailored to SMT, semantic representations in SMT, semantically motivated evaluation, as well as advanced topics on deep semantic learning for SMT. The third part is dedicated to recent work on SMT with discourse, including a brief review of discourse studies from linguistic and computational viewpoints, discourse research from monolingual to multilingual settings, discourse-based SMT, and a few advanced topics.

The tutorial is targeted at researchers in the SMT, semantics and discourse communities. In particular, the expected audience comes from two groups: 1) researchers and students in the SMT community who want to design cutting-edge models and algorithms for semantic SMT with various kinds of semantic knowledge and representations, and who would like to advance SMT from sentence-by-sentence translation to document-level translation with discourse information; 2) researchers and students from the semantics and discourse community who are interested in developing models and methods and adapting them to SMT.
Outline
1. SMT Overall Review (30 minutes)
• SMT architecture
• Phrase- and syntax-based SMT
2. Semantics and SMT (1 hour and 15 minutes)
• Brief introduction of semantics
• Lexical semantics for SMT
• Semantic representations in SMT
• Semantically motivated evaluation
• Advanced topics: deep semantic learning for SMT

Dr. Deyi Xiong is a professor at Soochow University. His research interests are in the area of natural language processing, particularly statistical machine translation and parsing. Previously he was a research scientist at the Institute for Infocomm Research of Singapore. He received the B.Sc. degree from China University of Geosciences (Wuhan, China) in 2002 and the Ph.D. degree from the Institute of Computing Technology (Beijing, China) in 2007, both in computer science. He has published papers in prestigious journals and conferences on statistical machine translation, including Computational Linguistics, IEEE TASLP, JAIR, NLE, ACL, EMNLP, AAAI and IJCAI. He was the program co-chair of IALP 2012 and the CLIA workshop 2011.

Dr. Min Zhang, a distinguished professor and Director of the Research Center of Human Language Technology at Soochow University (China), received his Bachelor degree and Ph.D. degree in computer science from Harbin Institute of Technology in 1991 and 1997, respectively. From 1997 to 1999, he worked as a postdoctoral research fellow at the Korea Advanced Institute of Science and Technology in South Korea. He began his academic and industrial career as a researcher at Lernout & Hauspie Asia Pacific (Singapore) in Sep. 1999. He joined Infotalk Technology (Singapore) as a researcher in 2001 and became a senior research manager in 2002. He joined the Institute for Infocomm Research (Singapore) as a research scientist in Dec. 2003. He joined Soochow University as a distinguished professor in 2012. His current research interests include machine translation, natural language processing, information extraction, social network computing and Internet intelligence. He has co-authored more than 150 papers in leading journals and conferences, and co-edited 10 books/proceedings published by Springer and IEEE. He was the recipient of several awards in China and overseas. He is the vice president of COLIPS (2011-2013), the elected vice chair of SIGHAN/ACL (2014-2015), a steering committee member of PACLIC (2011-now), an executive member of AFNLP (2013-2014) and a member of ACL (since 2006). He supervises Ph.D. students at National University of Singapore, Harbin Institute of Technology and Soochow University.
14,520,031 | SPEAKER INDEPENDENT PHONETIC TRANSCRIPTION OF FLUENT SPEECH FOR LARGE VOCABULARY SPEECH RECOGNITION | Speaker independent phonetic transcription of fluent speech is performed using an ergodic continuously variable duration hidden Markov model (CVDHMM) to represent the acoustic, phonetic and phonotactic structure of speech. An important property of the model is that each of its fifty-one states is uniquely identified with a single phonetic unit. Thus, for any spoken utterance, a phonetic transcription is obtained from a dynamic programming (DP) procedure for finding the state sequence of maximum likelihood. A model has been constructed based on 4020 sentences from the TIMIT database. When tested on 180 different sentences from this database, phonetic accuracy was observed to be 56% with 9% insertions. A speaker dependent version of the model was also constructed. The transcription algorithm was then combined with lexical access and parsing routines to form a complete recognition system. When tested on sentences from the DARPA resource management task spoken over the local switched telephone network, phonetic accuracy of 64% with 8% insertions and word accuracy of 87% with 3% insertions was measured. This system is presently operating in an on-line mode over the local switched telephone network in less than ten times real time on an Alliant FX-80. | [] | SPEAKER INDEPENDENT PHONETIC TRANSCRIPTION OF FLUENT SPEECH FOR LARGE VOCABULARY SPEECH RECOGNITION
S E Levinson
AT&T Bell Laboratories Murray Hill
07974 New Jersey
M Y Liberman
AT&T Bell Laboratories Murray Hill
07974 New Jersey
A Ljolje
AT&T Bell Laboratories Murray Hill
07974 New Jersey
L G Miller
AT&T Bell Laboratories Murray Hill
07974 New Jersey
SPEAKER INDEPENDENT PHONETIC TRANSCRIPTION OF FLUENT SPEECH FOR LARGE VOCABULARY SPEECH RECOGNITION
Speaker independent phonetic transcription of fluent speech is performed using an ergodic continuously variable duration hidden Markov model (CVDHMM) to represent the acoustic, phonetic and phonotactic structure of speech. An important property of the model is that each of its fifty-one states is uniquely identified with a single phonetic unit. Thus, for any spoken utterance, a phonetic transcription is obtained from a dynamic programming (DP) procedure for finding the state sequence of maximum likelihood. A model has been constructed based on 4020 sentences from the TIMIT database. When tested on 180 different sentences from this database, phonetic accuracy was observed to be 56% with 9% insertions. A speaker dependent version of the model was also constructed. The transcription algorithm was then combined with lexical access and parsing routines to form a complete recognition system. When tested on sentences from the DARPA resource management task spoken over the local switched telephone network, phonetic accuracy of 64% with 8% insertions and word accuracy of 87% with 3% insertions was measured. This system is presently operating in an on-line mode over the local switched telephone network in less than ten times real time on an Alliant FX-80.
INTRODUCTION
Though rarely explicitly stated, a fundamental assumption on which many speech recognition systems are implicitly based is that speech is literate. That is, it is a code for communication having a small number of discrete phonetic symbols in its alphabet. These symbols are, however, merely mental constructs and, as such, are not directly accessible but are, instead, observable only in their highly variable acoustic manifestation. It is also well-known but equally seldom expressed that a hidden Markov model comprises a finite set of discrete inaccessible states observable only via a set of random processes, one associated with each hidden state. When these two simple ideas are juxtaposed, it seems to us inescapable that the most natural representation of speech by a hidden Markov model is one in which the hypothetical phonetic symbols are identified with the hidden states of the Markov chain and the variability of the measurable acoustic signal is captured by the observable, state-dependent random processes.
The mathematical details of just such a model are given in [6]. Its application to a small-vocabulary continuous speech recognition system and a large-vocabulary isolated word recognition system are described in [7] and [8], respectively. Here we present a brief overview of the use of this approach in large vocabulary continuous speech recognition and some preliminary results of two experiments performed with it on the TIMIT [4] and DARPA [9] databases.
THE MODEL
We have constructed two models, a 51 state model on which the speaker-independent phonetic transcription results are based, and a 43 state model on which the speaker-dependent recognition of sentences from the DARPA resource management task is founded. Both models are of the same form, CVDHMM, as described in the reference cited earlier. The state transition matrices define ergodic Markov chains and weakly capture the phonotactic structure of English. The acoustic measurements are represented by 26-dimensional Gaussian density functions. The first twelve coordinates are LPC based cepstra; the second twelve, delta-cepstra [2], and the last two, log energy and its time-derivative, respectively. The temporal structure of the acoustic signal is reflected in the durational densities, which are of the two-parameter gamma family. Because of the presence of the durational densities, self-transitions are forbidden.
PARAMETER ESTIMATION
The parameters for both the 51 and 43 state models were estimated in the same way although on different training data. In both cases, the state transition matrix was computed from bigram statistics extracted from the Collins dictionary. No attempt was made to count bigrams resulting from word junctures. Also, in both cases, the respective databases were segmented by hand and labeled with respect to the appropriate phonetic alphabet. Acoustic observations were sorted into sets corresponding to the phonetic symbols. The necessary parameters, spectral means and covariances and durational means and standard deviations, were then calculated for each set independently. No parameter optimization was applied to these estimates.
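As a rough illustration of this closed-form estimation step, the minimal sketch below assumes the hand-labeled frames have already been grouped by phone label; the data layout and names are illustrative, not taken from the original system.

```python
import numpy as np

def estimate_state_params(segments_by_phone):
    """Estimate per-phone observation and duration parameters.

    segments_by_phone maps a phone label to a list of segments, where each
    segment is a (T x 26) array of frame-level acoustic features.
    """
    params = {}
    for phone, segments in segments_by_phone.items():
        frames = np.vstack(segments)                  # pool all frames labeled with this phone
        durations = np.array([len(s) for s in segments], dtype=float)
        params[phone] = {
            "mean": frames.mean(axis=0),              # 26-dim spectral mean
            "cov": np.cov(frames, rowvar=False),      # 26x26 spectral covariance
            "dur_mean": durations.mean(),             # mean duration in frames
            "dur_std": durations.std(ddof=1),         # duration standard deviation
        }
        # A two-parameter gamma durational density can then be set from these
        # moments: shape = (dur_mean / dur_std) ** 2, scale = dur_std ** 2 / dur_mean.
    return params
```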
The 51 state speaker-independent model was trained on 4020 sentences of TIMIT data. Ten different sentences were selected from each of 402 different speakers. The 43 state speaker-dependent model was trained on one reading of the 450 sentences in the TIMIT phonetically balanced list by a single male speaker. These utterances were recorded over the local switched telephone network with a conventional telephone handset.
At this writing, we have yet to train a speaker-independent model using the DARPA training material. Although we expect to do so, we are concerned about its utility since the phonetic contexts in this database are rather restrictive compared with those of the TIMIT sentences.
PHONETIC TRANSCRIPTION
Phonetic transcription is accomplished by means of a DP technique for finding the state sequence that maximizes the joint likelihood of state, duration and observation sequences. The details of this algorithm are given in [7]. Note that this procedure makes no use of lexical or syntactic structure. The algorithm runs in approximately twice real time on an Alliant FX-80.
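A simplified sketch of this kind of duration-explicit, segment-level Viterbi search is given below; it is illustrative only, omits the pruning and numerical details of the actual system, and uses placeholder names throughout.

```python
import numpy as np

def transcribe(log_obs, log_dur, log_trans, log_init, max_dur=50):
    """Segment-level Viterbi search for a duration-explicit (CVDHMM-style) model.

    log_obs[s, t]   : log-likelihood of acoustic frame t under phone state s
    log_dur[s, d]   : log-probability that state s lasts exactly d frames (1 <= d <= max_dur)
    log_trans[s, r] : log transition probability from state s to r (diagonal -inf: no self-transitions)
    log_init[s]     : log-probability of starting the utterance in state s
    Returns the maximum-likelihood phone sequence as (state, segment end frame) pairs.
    """
    n_states, n_frames = log_obs.shape
    # Cumulative per-state observation scores so each segment's score is a difference of two entries.
    cum = np.concatenate([np.zeros((n_states, 1)), np.cumsum(log_obs, axis=1)], axis=1)
    delta = np.full((n_frames + 1, n_states), -np.inf)   # best score of a path ending at frame t in state s
    back = np.zeros((n_frames + 1, n_states, 2), dtype=int)
    for t in range(1, n_frames + 1):
        for s in range(n_states):
            for d in range(1, min(max_dur, t) + 1):
                seg = cum[s, t] - cum[s, t - d] + log_dur[s, d]
                if t == d:                                # first segment of the utterance
                    score, prev = log_init[s] + seg, (0, 0)
                else:
                    r = int(np.argmax(delta[t - d] + log_trans[:, s]))
                    score, prev = delta[t - d, r] + log_trans[r, s] + seg, (t - d, r)
                if score > delta[t, s]:
                    delta[t, s], back[t, s] = score, prev
    # Trace back the best-scoring phone sequence.
    s, t, path = int(np.argmax(delta[n_frames])), n_frames, []
    while t > 0:
        path.append((s, t))
        t, s = int(back[t, s, 0]), int(back[t, s, 1])
    return list(reversed(path))
```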
EXPERIMENTAL RESULTS ON TRANSCRIPTION
The transcription algorithm was tested on 180 sentences from the TIMIT database. Neither the sentences nor the speakers were used in the training. Transcription accuracy was determined by computing the Levenshtein distance between the derived transcription and the standard transcription supplied with the database. By this measure, the 51 state model yielded a phonetic recognition rate of 56% with a 9% insertion rate. The 43 state model resulted in a 64% recognition rate with an 8% insertion rate on 48 sentences from the DARPA task collected from the male speaker on whose speech the model had been trained.
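For reference, phone accuracy against a standard transcription can be computed with a straightforward Levenshtein alignment; the sketch below is a generic implementation, and the exact scoring conventions of the original evaluation may differ.

```python
def levenshtein(ref, hyp):
    """Minimum number of substitutions, insertions and deletions turning ref into hyp."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1]

def phone_accuracy(ref_phones, hyp_phones):
    """Per-utterance phonetic accuracy: 1 - (edit distance / reference length)."""
    return 1.0 - levenshtein(ref_phones, hyp_phones) / len(ref_phones)
```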
The reader should bear in mind that these are the very first experiments performed with this system. We fully expect that the performance will improve greatly as a result of refinements we are presently making to the model. These include accounting for coarticulation, making the durational densities more faithful and using parameter reestimation techniques.
THE SPEECH RECOGNITION SYSTEM
The phonetic transcription algorithm described above is the first stage of a complete speech recognition system. The architecture of the system is unchanged from that described in [8] but the details of the lexical access procedure and the parser are utterly different from those given in the reference.
The lexical access procedure is simply that of computing the likelihood of every word in the lexicon over every sub-interval of the observation sequence. We define the likelihood of a word on an observation sub-sequence to be the joint likelihood of the standard phonetic transcription for that word as given in the lexicon and the phonetic transcription of that subsequence provided by the transcription algorithm. Because the standard transcription need not have the same length as the one computed for an arbitrary observation sub-sequence, the calculation is carried out by means of a DP algorithm. Note that this procedure is synchronized at the segment rate, not the frame rate.
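One concrete way to realize such a variable-length word score is a Levenshtein-style alignment between the lexicon's standard phone string and a hypothesized phone sub-sequence; the sketch below is an illustrative stand-in, not the original algorithm, and the confusion scores and penalties are assumed inputs.

```python
import numpy as np

def word_log_likelihood(lex_phones, hyp_phones, log_confusion, ins_pen=-5.0, del_pen=-5.0):
    """Best joint log-likelihood of a lexicon phone string against a hypothesized phone sub-sequence.

    log_confusion[p][q] is the log-probability of observing hypothesis phone q
    when the lexicon (standard) phone is p, e.g. estimated from a phone confusion matrix.
    """
    n, m = len(lex_phones), len(hyp_phones)
    d = np.full((n + 1, m + 1), -np.inf)
    d[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:   # align lexicon phone i with hypothesis phone j
                d[i + 1, j + 1] = max(d[i + 1, j + 1],
                                      d[i, j] + log_confusion[lex_phones[i]][hyp_phones[j]])
            if i < n:             # lexicon phone not realized in the hypothesis (deletion)
                d[i + 1, j] = max(d[i + 1, j], d[i, j] + del_pen)
            if j < m:             # spurious hypothesis phone (insertion)
                d[i, j + 1] = max(d[i, j + 1], d[i, j] + ins_pen)
    return d[n, m]
```

The word lattice is then populated by evaluating such a score for every lexicon entry over every sub-interval of the hypothesized transcription.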
The parser takes as input, the word lattice constructed by the lexical access procedure and finds the well-formed sentence of maximum likelihood. Here, well-formed means with respect to the strict DARPA resource management task grammar. This is a finite state grammar having 4767 states, 60433 state transitions, 90 final states and a maximum entropy of 4.4 bits/word. The parser itself is yet another DP algorithm. The search it effects is not pruned in any way.
The system has been tested in an on-line mode over the switched local telephone network. Under these conditions, we obtained an 87% correct word recognition rate and a 3% insertion rate. On an Alliant FX-80, a sentence is recognized in less than ten times real time. A sample of the recognizer output is shown in figure 2.
PHONETIC TRANSCRIPTION: h@riRriEZUzpl>grUDENw^ndTWz&ndSEdED&nd>rTtUsiZ &kObS&n
DURATIONS: 5 5 7 4 8 8 7 51017 912 4 613 8 8 7 6 9 7 6 6 7 4 9 1910 3 6 3 11 17 5 12 4 4 5 3 9 6 7 6 7 14 5 8 3 10 12 31375
CONCLUSION
We have presented some very early results of experiments on phonetic transcription and recognition of fluent speech based on a novel use of a hidden Markov model. While our error rates are substantially higher than those achieved by more conventional systems [5,3,10], we believe that, by improving the acoustic/phonetic model (the only adjustable part of the system), results comparable to those obtained by other investigators can be realized.
The 51 states in the first model correspond to 51 of the phonetic symbols used in the standard transcriptions of the TIMIT sentences. The 43 states of the second model are associated with the 43 symbols used in the pronunciation guide of the Collins English dictionary [1]. The phonetic units are listed in figure 1. Flap and closure units are not included in the 43 state model.
Figure 1: Phonetic Units and Symbols
RECOGNIZED SENTENCE: are there any ships longer than one thousand feet in the north pacific ocean
LOG LIKELIHOOD = 0.22506502151489E+02
RECOGNITION TIME = 49.78 CPU-SECONDS
LOG LIKELIHOOD = 0.23880190715663E+04
Figure 2: Sample of Sentence Recognition Results

POSITION  BEGIN  END  STATE  LOG LIKELIHOOD  WORD
1         49     53   19    0.2250650E+02   ocean
2         42     48   394   0.2147619E+02   pacific
3         37     41   344   0.1887782E+02   north
4         35     36   265   0.1787559E+02   the
5         34     34   378   0.1733334E+02   in
6         30     33   299   0.1590989E+02   feet
7         24     29   926   0.1379514E+02   thousand
8         21     23   838   0.1118440E+02   one
9         18     20   758   0.1093698E+02   than
10        13     17   691   0.9208550E+01   longer
11        9      12   623   0.6723166E+01   ships
12        6      8    557   0.4362227E+01   any
13        3      5    513   0.3019794E+01   there
14        1      2    491   0.1371470E+01   are
[1] Hanks, P., ed., Collins Dictionary of the English Language, Collins, London, 1972.
[2] Juang, B. H., Rabiner, L. R. and Wilpon, J. G., "On the Use of Bandpass Liftering in Speech Recognition", IEEE Trans. Acoust. Speech and Signal Processing, ASSP-35 (7), pp. 947-954, July 1987.
[3] Kubala, F. et al., "Continuous Speech Recognition Results of the BYBLOS System on the DARPA 1000-Word Resource Management Database", Proc. ICASSP-88, New York, NY, pp. 291-294, April 1988.
[4] Lamel, L. F., Kassel, R. H. and Seneff, S., "Speech Database Development: Design and Analysis of the Acoustic-Phonetic Corpus", Proc. DARPA Speech Recognition Workshop, Palo Alto, CA, pp. 100-109, Feb. 1986.
[5] Lee, K. F. and Hon, H. W., "Large Vocabulary Speaker-Independent Speech Recognition System using HMM", Proc. ICASSP-88, New York, NY, pp. 123-126, April 1988.
[6] Levinson, S. E., "Continuously Variable Duration Hidden Markov Models for Automatic Speech Recognition", Computer Speech and Language, 1 (1), pp. 29-45, 1986.
[7] Levinson, S. E., "Continuous Speech Recognition by means of Acoustic-Phonetic Classification Obtained from a Hidden Markov Model", Proc. ICASSP-87, Dallas, TX, pp. 93-96, April 1987.
[8] Levinson, S. E., Ljolje, A. and Miller, L. G., "Large Vocabulary Speech Recognition using a Hidden Markov Model for Acoustic Phonetic Classification", Proc. ICASSP-88, New York, NY, pp. 505-508, April 1988.
[9] Price, P., Fisher, W., Bernstein, J. and Pallett, D., "The DARPA 1000-Word Resource Management Database for Continuous Speech Recognition", Proc. ICASSP-88, New York, NY, pp. 651-654, April 1988.
[10] Pieraccini, R., Lee, C. H., Rabiner, L. R. and Wilpon, J. G., "Some Preliminary Results on Speaker Independent Recognition of the DARPA Resource Management Task", in this proceedings.
236,459,918 | Evaluating morphological typology in zero-shot cross-lingual transfer | Cross-lingual transfer has improved greatly through multi-lingual language model pretraining, reducing the need for parallel data and increasing absolute performance. However, this progress has also brought to light the differences in performance across languages. Specifically, certain language families and typologies seem to consistently perform worse in these models. In this paper, we address what effects morphological typology has on zero-shot cross-lingual transfer for two tasks: Part-of-speech tagging and sentiment analysis. We perform experiments on 19 languages from four language typologies (fusional, isolating, agglutinative, and introflexive) and find that transfer to another morphological type generally implies a higher loss than transfer to another language with the same morphological typology. Furthermore, POS tagging is more sensitive to morphological typology than sentiment analysis and, on this task, models perform much better on fusional languages than on the other typologies. | [
10674977,
8910754,
14760908,
102353837,
201639088,
33164500,
212415221,
6698104,
220045406,
184488346,
235211772,
174798142,
208513183
] | Evaluating morphological typology in zero-shot cross-lingual transfer
August 1-6, 2021
Antonio Martínez-García
Universitat de Barcelona
Universitat Pompeu Fabra
University of Oslo
Toni Badia [email protected]
Universitat de Barcelona
Universitat Pompeu Fabra
University of Oslo
Jeremy Barnes [email protected]
Universitat de Barcelona
Universitat Pompeu Fabra
University of Oslo
Evaluating morphological typology in zero-shot cross-lingual transfer
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, August 1-6, 2021
Cross-lingual transfer has improved greatly through multi-lingual language model pretraining, reducing the need for parallel data and increasing absolute performance. However, this progress has also brought to light the differences in performance across languages. Specifically, certain language families and typologies seem to consistently perform worse in these models. In this paper, we address what effects morphological typology has on zero-shot cross-lingual transfer for two tasks: Part-of-speech tagging and sentiment analysis. We perform experiments on 19 languages from four language typologies (fusional, isolating, agglutinative, and introflexive) and find that transfer to another morphological type generally implies a higher loss than transfer to another language with the same morphological typology. Furthermore, POS tagging is more sensitive to morphological typology than sentiment analysis and, on this task, models perform much better on fusional languages than on the other typologies.
Introduction
Cross-lingual transfer uses available annotated resources in a source language to learn a model that will transfer to a target language. Earlier work used machine translation (Mihalcea et al., 2007), parallel data (Padó and Lapata, 2009), or delexicalized models (Zeman and Resnik, 2008; McDonald et al., 2011; Søgaard, 2011) to bridge the gap between languages. However, recent improvements (Devlin et al., 2019) have reduced the need for parallel data, instead relying on multi-lingual language models, trained on the concatenation of monolingual corpora. Fine-tuning these multilingual language models on a task in a source language can lead to strong performance when applied directly to the target-language task (zero-shot transfer). This progress has uncovered gaps in performance, as transfer is generally easier between similar languages, and some language families consistently perform worse (Artetxe et al., 2020; Conneau et al., 2020a). So far, however, the analysis of these differences has only been anecdotal, rather than centered as a research question of its own merit. For these cases, linguistic typology has important implications, as it gives us ways to quantify the similarity of languages along certain variables, such as shared morphological or syntactic features (Bender, 2013). While previous work has studied the effects of morphological typology on language modeling (Gerz et al., 2018; Cotterell et al., 2018; Mielke et al., 2019), this effect on cross-lingual transfer has not been looked at in detail.
In this paper we attempt to answer (RQ1) to what degree morphological typology affects the performance of state-of-the-art cross-lingual models, (RQ2) whether morphological typology has a stronger effect than other variables, e.g., the amount of data for pretraining the LM or domain mismatches between source and target, (RQ3) whether there is a different effect on a low-level structural task (POS tagging) vs. a semantic task (sentiment analysis).
To answer these questions we experiment with two state-of-the-art cross-lingual models: multilingual BERT and XLM RoBERTa. We fine-tune the models for part-of-speech tagging and sentiment analysis on 19 languages from four morphologically diverse typologies. Our results show that POS tagging is more sensitive to morphological typology than sentiment analysis and that the models perform much better on fusional languages, such as German, than on the other typologies. We release the code and data 1 in order to reproduce the experiments and facilitate future work in this area.
Related Work
Cross-lingual transfer has become ubiquitous in recent years, including cross-lingual POS tagging (Täckström et al., 2013;Huck et al., 2019) and cross-lingual sentiment analysis (Mihalcea et al., 2007;Balahur and Turchi, 2014;Barnes and Klinger, 2019). While earlier research focused on annotation projection (Yarowsky et al., 2001;Banea et al., 2008) or cross-lingual embeddings (Kim et al., 2017;Artetxe et al., 2017;Barnes et al., 2018b), multi-lingual pretraining currently leads to state-of-the-art results (Devlin et al., 2019;Lample and Conneau, 2019). These approaches rely on training transformer-based language models (Vaswani et al., 2017) on unlabeled data from multiple languages, while using careful data selection methods to avoid the over-representation of larger languages.
Although these approaches have led to large improvements on many cross-lingual tasks, it is clear that the success of zero-shot cross-lingual transfer depends on the typological similarity of the source and target language (Conneau et al., 2020b;Libovický et al., 2020). Pires et al. (2019) find POS performance correlates with word order features taken from the World Atlas of Language Structures (WALS) database (Dryer and Haspelmath, 2013). Similarly, morphologically complex languages tend to achieve poorer performance (Artetxe et al., 2020;Conneau et al., 2020a).
Similar to this work, Lauscher et al. (2020) perform zero-shot and few-shot transfer on 20 languages and 5 tasks. However, their choice of languages does not allow one to isolate the effect of morphological typology.
The effect of morphological typology on NLP tasks is well known (Ponti et al., 2019), with several dedicated workshop series (Nicolai et al., 2020;Zampieri et al., 2018). More recently, attention has turned to larger scale analyses of morphological typology effects on language modeling (Gerz et al., 2018;Cotterell et al., 2018;Mielke et al., 2019).
In contrast to these previous works, we are interested in how morphological typology affects crosslingual transfer for two supervised tasks, namely part of speech (POS) tagging and sentiment analysis. We choose these two tasks as 1) they both have data available in typologically diverse languages, and 2) represent a lower-level structural and higher-level semantic task, respectively. Our experimental setup reduces some of the complexity of comparing test results across languages, as we compare relative differences, instead of absolute differences. At the same time, it is necessary to take into account several other variables, i.e., presence of the language in pretraining, the amount of training data, the effect of byte-pair tokenization, the length of train and test examples, and any domain mismatches across languages.
Although it is a simplification of the variation in morphological features (Plank, 1999), languages have traditionally been grouped into four morphological categories, i.e., isolating, fusional, introflexive, and agglutinative. 2 These categories describe a language's tendency to group concepts together into a single word or disperse them into separate words. Pure isolating languages have maximally one morpheme per word. In agglutinative languages, morphemes tend to be neatly segmentable and carry a single feature, whereas in fusional languages, a single morpheme often carries multiple grammatic, syntactic, and semantic features. Finally, in introflexive languages root words are based on consonant stems, where vowels introduced around and between them lead to syntactic and semantic changes (see Plank (1999);Bickel and Nichols (2005); Gerz et al. (2018) for a more in-depth discussion).
Data
We select five languages from each category except introflexive (four), shown in Table 1.
Part-of-speech
We obtain the data for the part-of-speech tagging task from the Universal Dependencies project (Zeman et al., 2020), which currently gathers data annotated with universal POS tags for more than 90 languages, although there are differences in size and domain. For Algerian we use the annotations from Seddah et al. (2020). We found no training sets available for Thai and Cantonese, hence we use them for testing only. For more details on these datasets, see Table 5 in the Appendix.
Sentiment Analysis
For sentiment analysis, however, there is no centralized repository of similar data. Therefore, we collect data from a number of sources and process them to create binary (positive, negative) sentence-level sentiment datasets. For convenience, we list the origin of each dataset in Table 2 and their full characteristics in Table 6 in the Appendix.
3 Including https://github.com/dimitrakatseli/review_sentiment_analysis
4 https://github.com/ljw9609/SentimentAnalysis
5 https://github.com/e9t/nsmc
6 https://github.com/Darkmap/japanese_sentiment
7 Including https://github.com/ozturkaslii/analyze-turkish-sentiment
Methods
We fine-tune both multilingual BERT (mBERT) (Xu et al., 2019) and XLM RoBERTa (XLM-R) (Conneau et al., 2020a) models on the available training data in each language, using a shared set of hyperparameters selected from recommended values according to the characteristics of our data. We set the learning rate to 2e-5, maximum sequence length of 256, batch size of 8 or 16 8 , and perform early stopping once the validation score has not improved in the last epochs, saving the model that performs best on the dev set. We then test each model on all languages, giving us a matrix of test scores, where the diagonal is in-language, and all others are cross-lingual. We use accuracy as our metric for POS and macro F 1 for sentiment, as the latter often contains unbalanced classes, and define a baseline as the result of predicting the majority class.
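Schematically, the fine-tune-then-test loop that produces the matrix of scores can be written as follows; fine_tune and evaluate are placeholders for the actual training and prediction code, and only the metric choices (accuracy vs. macro F1, majority-class baseline) follow the setup described above.

```python
from collections import Counter
from sklearn.metrics import accuracy_score, f1_score

def score(task, y_true, y_pred):
    # Accuracy for POS tagging, macro F1 for the (often unbalanced) sentiment data.
    if task == "pos":
        return accuracy_score(y_true, y_pred)
    return f1_score(y_true, y_pred, average="macro")

def majority_baseline(task, y_train, y_test):
    majority = Counter(y_train).most_common(1)[0][0]
    return score(task, y_test, [majority] * len(y_test))

def build_score_matrix(task, languages, fine_tune, evaluate):
    """Fine-tune one model per source language and test it on every target language.

    fine_tune(lang) -> model                (placeholder for the actual training loop)
    evaluate(model, lang) -> (y_true, y_pred)
    Returns scores[src][tgt]; the diagonal holds the in-language scores.
    """
    scores = {}
    for src in languages:
        model = fine_tune(src)
        scores[src] = {}
        for tgt in languages:
            y_true, y_pred = evaluate(model, tgt)
            scores[src][tgt] = score(task, y_true, y_pred)
    return scores
```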
Results
Once our scores matrix is built, we average the score of each fine-tuned model, which we refer to as language-to-language cross-lingual scores, over the other languages in each morphological group, thus obtaining each model's average cross-lingual performance per target group (language-to-group cross-lingual scores). Next, we average again for each source language group. This yields the average cross-lingual performance values per training and testing language groups (group-to-group cross-lingual scores), which we report in Table 3.
In the part-of-speech task, the best group-to-group cross-lingual performance always corresponds to models fine-tuned in a language of the same morphological group, regardless of the model's architecture. Fusional models, in particular, obtain a remarkably higher score when tested on other fusional languages (over 80%). On the other hand, the group-to-group cross-lingual scores where the target language is introflexive are considerably lower than the rest (always below 50%).
In contrast, both model architectures show different patterns in the sentiment analysis task. For the XLM-R models, the best group-to-group cross-lingual scores are all achieved by those trained on a fusional language, while for mBERT it is mainly models trained on an isolating language that achieve the best scores. In any case, all scores are within a similar range of values. In fact, the main difference in this task seems to be due to XLM-R's considerably higher scores.
In order to capture the cross-lingual phenomenon more accurately, we introduce transfer loss, a relative metric defined in Equation 1:
$$TL_{x \rightarrow y} = S_{x \rightarrow x} - S_{x \rightarrow y} \qquad (1)$$

where $TL_{x \rightarrow y}$ is the transfer loss experienced by a model fine-tuned in language x when transferring to language y (language-to-language transfer loss) and $S_{x \rightarrow y}$ is the score achieved when testing a model fine-tuned in language x on language y. Thus, it is a measure of the performance lost in the zero-shot transfer process: the better the transfer between both languages, the lower it will be. We also define its averaged variants:

$$TL_{x \rightarrow A} = S_{x \rightarrow x} - \frac{1}{N_A} \sum_{\substack{i \in A \\ i \neq x}} S_{x \rightarrow i} \qquad (2)$$

$$TL_{A \rightarrow B} = \frac{1}{N_A} \sum_{i \in A} TL_{i \rightarrow B} \qquad (3)$$

where $TL_{x \rightarrow A}$ denotes the average transfer loss from language x to languages belonging to morphological type A (language-to-group transfer loss), $TL_{A \rightarrow B}$ refers to the average transfer loss experienced by languages from morphological group A to languages from group B (group-to-group transfer loss) and $N_A$ is the number of languages (other than x) included in the experiment that belong to group A. Table 4 shows the resulting group-to-group transfer loss values for each task.

Table 3: Group-to-group cross-lingual accuracy scores (%) in part-of-speech tagging (top) and macro F1 scores (%) in sentiment analysis (bottom) for each fine-tuning (column) and testing (row) morphological group, and each model architecture. Maximum values in each test group and architecture are highlighted. Higher is better.
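A small sketch of Equations 1-3 applied to the score matrix described above (the grouping of languages into morphological types and all names are illustrative):

```python
import numpy as np

def transfer_loss(scores, src, tgt):
    """Equation 1: in-language score minus cross-lingual score."""
    return scores[src][src] - scores[src][tgt]

def lang_to_group_loss(scores, src, group):
    """Equation 2: average transfer loss from one language to a morphological group."""
    targets = [t for t in group if t != src]
    return float(np.mean([transfer_loss(scores, src, t) for t in targets]))

def group_to_group_loss(scores, group_a, group_b):
    """Equation 3: average, over source languages in group A, of their loss towards group B."""
    return float(np.mean([lang_to_group_loss(scores, s, group_b) for s in group_a]))
```

Intra-group transfer loss then corresponds to group_to_group_loss(scores, A, A), while inter-group transfer loss averages the values obtained for pairs of distinct groups.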
Models fine-tuned in all groups except agglutinative experience the lowest performance drop when transferring to fusional languages in the part-of-speech task, whereas in the sentiment analysis task there is no clear pattern. It is also worth noting that the XLM-R models tend to transfer better compared to mBERT, only slightly in part-of-speech tagging but more drastically in sentiment analysis. Additionally, the cases of worst transfer happen when the target language is introflexive (especially for XLM-R).
Next, to address RQ1 more directly, we compare two different types of transfer: intra-group transfer, where both the fine-tuning and target languages belong to the same morphological group, and inter-group transfer, where the two differ in morphological type. We calculate an average for both types of transfer and for each training group, model architecture and task. We present the resulting values in Figure 1.
Generally, transfer to another morphological type implies a higher cost in terms of performance, except for the introflexive models. This difference in transfer loss appears to be similar for all groups in the sentiment task, yet it varies considerably in the part-of-speech task. More specifically, there are two extremes in this latter case: fusional models suffer large performance drops when switching morphological groups, whereas isolating models experience similar transfer losses in both conditions.
Finally, we average again to obtain a single transfer loss value for each task and model, and use it to establish a comparison in Figure 2. Here we observe that: (1) the difference in transfer loss between an intra-group and inter-group transfer is higher on the part-of-speech task, (2) transfer is also generally worse on this task, (3) XLM-R models perform better cross-lingual transfers in general (especially on the sentiment analysis task), and (4) the difference between intra-group and inter-group transfer is similar on both model architectures.
Analysis
In this section, we run several statistical tests to verify our conclusion to RQ1 and detail several points of analysis that relate to RQ2 and RQ3. Namely, to what degree do other variables contribute to effects on cross-lingual transfer.
Testing the effect of transfer type
We run a set of statistical tests to validate the observations made from Figure 2 in Section 5. In the part-of-speech tagging task, an analysis of variance (ANOVA) reveals there is a statistically significant, although weak, difference in transfer loss between the intra- and inter-group conditions, for both model architectures (η² ≈ 0.06, p < 0.01 in both cases). In contrast, a Kruskal-Wallis analysis of variance finds no significant difference between the two types of transfer in the sentiment analysis task, in neither mBERT nor XLM-R models (p > 0.01 in both cases). We also test for differences in transfer loss between model architectures and find a significant difference in the sentiment analysis task (Kruskal-Wallis, p < 0.01), but not in the part-of-speech tagging task (ANOVA, p > 0.01). This is all consistent with our previous observations.

Table 4: Group-to-group transfer loss (in percentage points) in the part-of-speech tagging (top) and sentiment analysis (bottom) tasks for each fine-tuning (column) and testing (row) language's morphological group, as well as each model architecture. Minimum values in each fine-tuning group and architecture are highlighted. Lower is better.
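These comparisons can be reproduced with standard scipy routines; a minimal sketch, assuming the intra- and inter-group transfer-loss values have been collected into two lists:

```python
import numpy as np
from scipy import stats

def eta_squared(groups):
    """ANOVA effect size: between-group sum of squares over total sum of squares."""
    values = np.concatenate(groups)
    grand_mean = values.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((values - grand_mean) ** 2).sum()
    return ss_between / ss_total

def compare_transfer_types(intra_losses, inter_losses):
    """intra_losses / inter_losses: transfer-loss values for same-type and cross-type transfers."""
    intra, inter = np.asarray(intra_losses), np.asarray(inter_losses)
    f_stat, p_anova = stats.f_oneway(intra, inter)      # parametric test (used for POS)
    h_stat, p_kruskal = stats.kruskal(intra, inter)     # non-parametric test (used for sentiment)
    return {"eta_squared": eta_squared([intra, inter]),
            "p_anova": p_anova, "p_kruskal": p_kruskal}
```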
Linear regression model for transfer loss
Additionally, we model language-to-language transfer loss with a linear regression model, using transfer type, as well as other variables, as possible predictors. This allows us to (a) test whether the intra-/inter-group difference retains its statistical significance in the presence of other variables and (b) evaluate its effect in comparison to other predictors. First, we select a set of variables that might be relevant in cross-lingual transfer, and remove those that are highly correlated with the rest to avoid multicollinearity in the model (see Table 7 in the Appendix for the final list of selected variables). We standardize all of the remaining features so that their units are comparable and, consequently, so are their regression coefficients.
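A minimal sketch of this regression step with statsmodels is shown below; the DataFrame layout and predictor names are placeholders rather than the exact Table 7 feature set.

```python
import statsmodels.api as sm

def fit_transfer_loss_model(df, predictors):
    """df: one row per (source, target) language pair with a 'transfer_loss' column
    and the candidate predictor columns (column names here are placeholders)."""
    X = df[predictors].astype(float)
    X = (X - X.mean()) / X.std()          # standardize so coefficients are directly comparable
    X = sm.add_constant(X)
    return sm.OLS(df["transfer_loss"], X).fit()

# Example usage with hypothetical predictor names:
# model = fit_transfer_loss_model(df, ["inter_group", "target_in_pretraining",
#                                      "in_language_score", "avg_test_example_length"])
# print(model.summary())                  # standardized coefficients and p-values
```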
Again, we find transfer type (intra-/inter-group) to be a significant predictor in both regression models for part-of-speech tagging (p < 0.01), but not in sentiment analysis. In the former case, it has the second strongest effect with a standardized coefficient of 8.6 13 , the first being presence of the target language in pretraining with a coefficient of -25.9. In other words, transferring to a language on which the model has not been pretrained implies an additional performance drop of 25.9 percentage points, while transferring to another morphological group incurs an additional 8.6.
The remaining predictors for this task are average test example length (measured in tokens, coefficient of 4.0) and in-language score (3.3). The first is a complex variable because differences in text length can be due to their domain or to the languages themselves but, in either case, its coefficient confirms our intuition that longer sequences generally make the task more difficult. The second could indicate some overfitting to the fine-tuning language, as higher in-language score entails slightly poorer transfer.
XLM-R adds another predictor: the proportion of words that have been split into subword tokens in the test data (2.1). This variable is related to the size of the pretraining corpus for each language 14 : a richer pretraining vocabulary will ensure more words are considered frequent during Byte Pair Encoding and, therefore, assigned a single token, instead of being broken down into subword tokens by the tokenizer. This means that high-resource languages will have a lower word split probability and, hence, it will be slightly easier to transfer to them. However, it is worth pointing out that this bias has little effect and is only statistically significant in XLM-R.
In the case of sentiment analysis, relevant predictors are: presence of the fine-tuning (coefficient of -11.8 for mBERT and -18.7 for XLM-R) and target (-10.3 and -16.3) languages in pretraining, in-language score (6.8 and 6.5), proportion of words split into subword tokens in the training data (3.3 and 2.7) and proportion of examples labeled as positive in the test set (-2.8, XLM-R only).
Curiously, sentiment analysis is more sensitive to variables related to the training data compared to part-of-speech tagging, whereas sequence length only affects the latter. On the other hand, language inclusion in pretraining and in-language score are useful predictors in both tasks, yet the former is far stronger in POS and the latter is more relevant in sentiment analysis. In summary, we verify that transferring to a different morphological type has a relevant effect in part-of-speech tagging but not in sentiment analysis, regardless of the model architecture.
Testing pretrained languages only
Given the considerable effect pretraining seems to have on transfer loss (discussed in Section 6.2), we re-evaluate our results after removing the languages that were not present during the pretraining of either of the two model architectures (Cantonese, Algerian and Maltese) and check whether there are relevant differences with our previous results.
Of course, we observe an improvement in cross-lingual scores involving either an isolating or an introflexive language, because these are the groups the excluded languages belong to. Overall, however, re-running the statistical tests does not modify our previous conclusions (see Figure 3).
Balanced in-language scores
Since in-language score is relevant in all regression models considered in Section 6.2 (and the value of transfer loss is relative to it), we decide to re-train all models, this time preventing them from increasing said score above a fixed threshold value (we choose the minimum in-language score achieved previously in each task and model architecture) and re-evaluate our previous conclusions. The intra-/inter-group difference in transfer loss is still statistically significant in part-of-speech tagging and not in sentiment analysis. Similarly, there is still a statistically significant difference in transfer loss between both models only in the sentiment analysis task. All of this can be seen in Figure 3. The only remarkable difference is in the part-of-speech task, where the average inter-group transfer loss values for all morphological groups seem to converge to the same value (see Figure 5 in the Appendix). For more information, see Figures 5 and 6, as well as Tables 8 and 9, all of which can be found in the Appendix.

We also test the effect that training with considerably more data has on cross-lingual transfer. We select two languages, each with around 150,000 examples available: German for the part-of-speech tagging task and Korean for sentiment analysis. We train four models with increasingly more data and then test them on all languages.
In German, we notice an important decline in cross-lingual scores when increasing data size from 80,000 to 150,000 examples (see Figure 4). More specifically, in mBERT models there is an average decrease of 15.6 and 9.0 points when the cross-lingual transfer is intra- and inter-group, respectively. In XLM-R, the corresponding values are 25.4 and 19.5. Hence, it appears that a phenomenon of language specialization takes place, one to which XLM-R is more susceptible and that has more important consequences in intra-group transfer. To ensure this is a language and not a domain/dataset specialization, we test these models on another German dataset (PUD) and find no decrease in performance.
In contrast, average Korean cross-lingual scores remain relatively constant (see Figure 4). Therefore, the language specialization phenomenon could be more characteristic of part-of-speech tagging than sentiment analysis.
Domain effects
Conneau et al. (2020b) find that domain mismatch in pretraining of multilingual LMs is more problematic than domain mismatch in fine-tuning. Yet given the variety of domains present in the sentiment data, we decided to test its effect. Proxy A-distance (Glorot et al., 2011) measures the generalization error of a linear SVM trained to discriminate between two domains. We translate 1000 sentences from each dataset to English using Google Translate and then compute the proxy A-distance.
15 Implementation adapted from the code available at https://github.com/rpryzant/proxy-a-distance.
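A rough sketch of the proxy A-distance computation is given below (a linear SVM domain classifier over bag-of-words features; the 2(1 - 2ε) form follows the common convention, and the featurization here is an assumption rather than the exact setup of the referenced implementation).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def proxy_a_distance(sentences_a, sentences_b):
    """Train a linear SVM to tell the two domains apart; higher distance = more separable."""
    X = TfidfVectorizer().fit_transform(sentences_a + sentences_b)
    y = np.array([0] * len(sentences_a) + [1] * len(sentences_b))
    acc = cross_val_score(LinearSVC(), X, y, cv=5, scoring="accuracy").mean()
    error = 1.0 - acc                      # generalization error of the domain classifier
    return 2.0 * (1.0 - 2.0 * error)
```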
Discussion and Future Work
In this paper, we have conducted an extensive analysis of the effects of morphological typology on cross-lingual transfer and attempted to isolate these factors from other variables. We have compared performance of two state-of-the-art zero-shot cross-lingual models on two tasks (part-of-speech tagging and sentiment analysis) for 19 languages across four morphological typologies. We have found that transfer to another morphological type generally implies a higher performance loss than transfer to another language with the same morphological typology. Additionally, part-of-speech tagging is more sensitive to morphological differences than sentiment analysis, while sentiment analysis is more sensitive to variables related to the fine-tuning data and is less predictable in general.
We have tested this sensitivity to morphology after balancing other influential factors, such as in-language score, and, still, the intra-/inter-group difference remains. However, the effect of morphological typology, while significant, is not strong, given that most of the variability in transfer loss is due to other factors.
We have also confirmed that XLM-R generally transfers better than mBERT, especially on sentiment analysis. In part-of-speech tagging, we have reported considerably better transfer within fusional languages, as well as easier transfer from the other groups towards the fusional type. Moreover, we have found a case that suggests that finetuning on large training sets might lead to language specialization and, consequently, be detrimental to cross-lingual transfer.
It is worth noting that we do not explore whether the type of script used by the languages has an effect on cross-lingual transfer. This is hard to control in our experimental setup, as there are some scripts that are either unique to a language or only have one with enough data to represent it, making it impossible to make comparisons.
The recent cross-lingual suite Xtreme (Hu et al., 2020) includes a number of benchmark tasks in 40 languages. While this dataset is a useful collection of cross-lingual tasks, it is unfortunately not sufficient for our purposes. The POS data is the same as we use, while other tasks either a) do not contain a representative sample of language typologies b) use translation, introducing problems of 'translationese', or c) are automatically created and not manually curated Named Entity Recognition data. Our experimental setup avoids these problems by focusing on binary sentiment analysis, which is a task that has data available in many languages and does not require translation to get multilingual data.
Finally, this work ties in with the increasing interest in typological questions in NLP (Takamura et al., 2016;Ponti et al., 2019;Bjerva et al., 2019;Nooralahzadeh et al., 2020;Bjerva and Augenstein, 2021), which often try to directly predict typological features, or use these to analyze model performance.
In the future, it would be interesting to train multi-lingual language models on specific language families in order to find maximal benefits from shared morphology. Finally, as typology seems to affect tasks differently, it would be interesting to explore other tasks, e.g., dependency parsing or semantic role labeling.

Table 6: Detailed description of the data used in sentiment analysis. "Train %" and "Dev/Test %" indicate what percentage of the language's training and validation/test data, respectively, comes from the dataset in question.
Table 9: Group-to-group transfer loss (in percentage points) in POS (top) and sentiment analysis (bottom) tasks (after balancing in-language scores) for each fine-tuning (column) and testing (row) language's morphological group, as well as each model architecture. Minimum values in each fine-tuning group and architecture are highlighted. Lower is better.
Figure 1: Average transfer loss (in percentage points) to other languages of the same group (intra-group) and to languages that belong to the other groups (inter-group) in the part-of-speech tagging (top) and sentiment analysis (bottom) tasks. Lower is better.
Figure 2: Comparison across tasks of the average transfer loss (in percentage points) to other languages of the same group (intra-group) and to languages that belong to the other groups (inter-group). Lower is better.
Figure 3: Comparison across tasks of the average transfer loss (in percentage points) to other languages of the same group (intra-group) and to languages that belong to the other groups (inter-group) after removing languages that were not present during pretraining (top) and after balancing in-language scores (bottom). Lower is better.
Figure 4: Average cross-lingual score achieved by models trained with varying German part-of-speech (top) and Korean sentiment (bottom) data sizes. Higher is better.
Figure 5: Average transfer loss (in percentage points) to other languages of the same group (intra-group) and to languages that belong to the other groups (inter-group) in the part-of-speech tagging task after balancing in-language scores. Lower is better.
Figure 6: Average transfer loss (in percentage points) to other languages of the same group (intra-group) and to languages that belong to the other groups (inter-group) in the sentiment analysis task after balancing in-language scores. Lower is better.
Table 2: Origin of the data for sentiment analysis.
Train            ○ Fusional        Isolating         Agglutinative     Introflexive
Test             mBERT   XLM-R     mBERT   XLM-R     mBERT   XLM-R     mBERT   XLM-R
○ Fusional        16.6    15.3      28.8    27.7      34.2    33.2      26.3    26.7
Isolating         45.0    39.4      37.4    32.6      42.6    37.2      40.6    35.2
Agglutinative     38.5    35.8      34.9    32.8      34.3    30.5      35.7    34.7
Introflexive      54.6    54.2      51.7    52.3      56.5    56.3      45.5    46.9

Train            ○ Fusional        Isolating         Agglutinative     Introflexive
Test             mBERT   XLM-R     mBERT   XLM-R     mBERT   XLM-R     mBERT   XLM-R
○ Fusional        26.5    13.5      31.2    22.8      26.5    19.4      33.0    22.7
Isolating         32.7    11.6      29.2    20.6      30.1    15.0      41.3    28.6
Agglutinative     29.4    10.3      33.2    22.8      31.0    17.7      37.5    20.6
Introflexive      33.2    27.1      34.9    33.8      33.3    31.0      33.3    26.3
| Language | Text Type | Domain | Annotation | Examples | Train % | Dev/Test % |
| ○ German | Social media | Trains | Manual | 8706 | 100 | 100 |
| ○ Spanish | Reviews | Hotels | Manual | 1472 | 100 | 100 |
| ○ Slovak | Reviews | Services | Manual | 5124 | 100 | 100 |
| ○ Norwegian | Reviews | Many | Manual | 3608 | 100 | 100 |
| ○ Greek | Social media | Politics | Manual | 661 | 3 | 39 |
|  | Social media | Many | Manual | 519 | 5 | 22 |
|  | Reviews | Mobile phones | User scores | 5906 | 92 | 39 |
| Mandarin | Reviews | Many | User scores | 19835 | 100 | 100 |
| Vietnamese | Reviews | Technology | Manual | 3400 | 100 | 100 |
| Thai | Social media | Product reviews | Manual | 11600 | 100 | 100 |
| Cantonese | Reviews | Food | User scores | 41578 | 100 | 100 |
| Indonesian | Reviews | Many | Manual | 11324 | 100 | 100 |
| Finnish | Social media | Many | Manual | 6332 | 100 | 100 |
| Basque | Reviews | Food/lodging | Manual | 1129 | 100 | 100 |
| Korean | Reviews | Movies | User scores | 40000 | 100 | 100 |
| Japanese | Reviews | Many | User scores | 14060 | 100 | 100 |
| Turkish | Reviews | Food | Manual | 1052 | 16 | 100 |
|  | Reviews | Many | User scores | 3750 | 84 | 0 |
| Arabic | Social media | Many | Manual | 1589 | 45 | 45 |
|  | Social media | Many | Manual | 1951 | 55 | 55 |
| Hebrew | Social media | Politics | Manual | 10110 | 100 | 100 |
| Algerian | Social media | Many | Manual | 731 | 100 | 100 |
| Maltese | Social media | Many | Manual | 718 | 84 | 84 |
|  | Social media | Politics | Manual | 133 | 16 | 16 |
Table 7: Variables considered in the linear regression model after eliminating multicollinearity. "Language" indicates whether the predictor was measured on the fine-tuning language (train) or the target language (test), "SA" stands for sentiment analysis.
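As a purely hypothetical sketch of the kind of regression this caption describes (the predictor names and data below are invented for illustration and are not the paper's actual variables), a cross-lingual transfer score could be regressed on properties of the fine-tuning and target languages as follows:

```python
# Hypothetical sketch only: regress a cross-lingual transfer score on
# properties of the fine-tuning (train) and target (test) languages.
# The predictors and the synthetic target are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 2, n),    # hypothetical: train language is agglutinative (0/1)
    rng.integers(0, 2, n),    # hypothetical: test language is introflexive (0/1)
    rng.normal(9.0, 1.0, n),  # hypothetical: log size of the fine-tuning dataset
    rng.uniform(0.0, 1.0, n)  # hypothetical: train/test word-order similarity
])
# Synthetic target so that the example runs end to end.
y = 50 + 5 * X[:, 0] - 8 * X[:, 1] + 2 * X[:, 2] + 10 * X[:, 3] + rng.normal(0, 3, n)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)  # inspect which predictors drive transfer
```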
Part-of-speech tagging (top):

| Test \ Train | ○ Fusional mBERT | ○ Fusional XLM-R | Isolating mBERT | Isolating XLM-R | Agglutinative mBERT | Agglutinative XLM-R | Introflexive mBERT | Introflexive XLM-R |
| ○ Fusional | 67.6 | 71.3 | 51.8 | 51.4 | 51.0 | 52.0 | 54.2 | 54.1 |
| Isolating | 46.5 | 51.9 | 48.8 | 49.2 | 47.5 | 49.6 | 45.7 | 47.0 |
| Agglutinative | 54.4 | 55.2 | 53.3 | 50.9 | 55.2 | 54.8 | 49.7 | 46.7 |
| Introflexive | 39.9 | 41.9 | 37.4 | 36.7 | 36.9 | 34.8 | 42.2 | 43.2 |

Sentiment analysis (bottom):

| Test \ Train | ○ Fusional mBERT | ○ Fusional XLM-R | Isolating mBERT | Isolating XLM-R | Agglutinative mBERT | Agglutinative XLM-R | Introflexive mBERT | Introflexive XLM-R |
| ○ Fusional | 48.3 | 42.8 | 46.5 | 45.4 | 45.4 | 44.5 | 41.7 | 42.4 |
| Isolating | 49.8 | 44.2 | 51.9 | 43.0 | 37.6 | 42.1 | 36.3 | 43.0 |
| Agglutinative | 46.4 | 47.1 | 48.0 | 50.7 | 40.1 | 47.3 | 41.6 | 43.5 |
| Introflexive | 48.0 | 42.6 | 45.5 | 41.8 | 43.4 | 45.4 | 45.0 | 45.2 |

Table 8: Group-to-group cross-lingual accuracy scores (%) for part-of-speech tagging (top) and macro F1 scores (%) in the sentiment analysis task (bottom) (after balancing in-language scores) for each fine-tuning (column) and testing (row) morphological group, and each model architecture. Maximum values in each test group and architecture are highlighted. Higher is better.
| Test \ Train | ○ Fusional mBERT | ○ Fusional XLM-R | Isolating mBERT | Isolating XLM-R | Agglutinative mBERT | Agglutinative XLM-R | Introflexive mBERT | Introflexive XLM-R |
| ○ Fusional | 14.9 | 12.0 | 31.0 | 31.3 | 30.4 | 29.9 | 28.0 | 28.8 |
| Isolating | 36.0 | 31.5 | 34.0 | 33.5 | 33.8 | 32.4 | 36.5 | 35.9 |
| Agglutinative | 28.1 | 28.2 | 29.6 | 31.8 | 26.1 | 27.2 | 32.5 | 36.2 |
| Introflexive | 42.6 | 41.5 | 45.5 | 46.0 | 44.5 | 47.2 | 40.0 | 39.7 |

| Test \ Train | ○ Fusional mBERT | ○ Fusional XLM-R | Isolating mBERT | Isolating XLM-R | Agglutinative mBERT | Agglutinative XLM-R | Introflexive mBERT | Introflexive XLM-R |
| ○ Fusional | 21.4 | 20.0 | 24.5 | 16.1 | 25.3 | 18.0 | 28.8 | 20.1 |
| Isolating | 19.9 | 18.6 | 19.1 | 18.5 | 33.1 | 20.5 | 34.2 | 19.4 |
| Agglutinative | 23.3 | 15.7 | 23.0 | 10.7 | 30.5 | 15.2 | 28.9 | 19.0 |
| Introflexive | 21.7 | 20.2 | 25.5 | 19.6 | 27.3 | 17.1 | 25.4 | 17.3 |
Code and data available at https://github.com/ jerbarnes/typology_of_crosslingual.
Depending on the size of the training set, model architecture and available GPU memory.
Note that, throughout this paper, when we average across morphological groups, we do so with a weighted average so that all groups are equally represented regardless of how many languages they include.
The score metric will depend on the task: accuracy in POS and macro F1 in sentiment analysis.
Strictly speaking, we use different metrics for both tasks, which are not necessarily comparable.
The normality condition for ANOVA is not met.
Since the regression models for mBERT and XLM-R are quite similar, we report the averaged coefficients here.
In fact, we do not include pretraining data size as a predictor because of its correlation with the variable in question.
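One of the footnotes above notes that averages across morphological groups are weighted so that every group counts equally, regardless of how many languages it contains. This amounts to a group-balanced (macro) average: scores are first averaged within each group and the group means are then averaged. A minimal sketch of that computation (not the authors' code; the per-language scores below are invented for illustration):

```python
# Group-balanced averaging: average within each morphological group first,
# then across groups, so large groups do not dominate the overall mean.
from collections import defaultdict

def group_balanced_average(scores, groups):
    """scores: language -> score; groups: language -> morphological group."""
    per_group = defaultdict(list)
    for lang, score in scores.items():
        per_group[groups[lang]].append(score)
    group_means = [sum(v) / len(v) for v in per_group.values()]
    return sum(group_means) / len(group_means)

# Invented numbers, for illustration only.
scores = {"de": 0.70, "es": 0.68, "sk": 0.66, "ar": 0.45, "he": 0.50}
groups = {"de": "fusional", "es": "fusional", "sk": "fusional",
          "ar": "introflexive", "he": "introflexive"}
print(group_balanced_average(scores, groups))  # (0.68 + 0.475) / 2 = 0.5775
```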
|
38,111,428 | Translating and the Computer 20 | In this paper, we examine current trends in MT in Europe and in Japan. Our comparison is based on types of user profiles, issues of standardisation and localisation, role of MT providers and use of translation aids.Recently, however, MT has changed drastically in Japan. Today, a different type of MT system is popular in the Japanese market: MT enabled WWW browsers which target the novice/casual user. Recent studies of the Japanese market (AAMT, 1997; ASCI 1,1996) conclude that users engaged in web-browsing employ MT systems as they are looking for speedy access to roughly translated information through user friendly interfaces, whereas high quality translation is the object of a different type of MT (and user).Japanese Internet-oriented MT systems are typically bilingual (and largely English-Japanese), PC-based and low in price -the lowest price is around ¥6,000 (roughly 26 pounds) according to AAMT (1997). Some studies report that the needs for MT in Japan are as high as the needs for word processing. | [
29039632
] | Translating and the Computer 20
13 November 1998
Sophia Ananiadou [email protected]
Department of Computing & Mathematics
Manchester Metropolitan University
Translating and the Computer 20
Proceedings from Aslib conference
12-13 November 1998
In this paper, we examine current trends in MT in Europe and in Japan. Our comparison is based on types of user profiles, issues of standardisation and localisation, role of MT providers and use of translation aids. Recently, however, MT has changed drastically in Japan. Today, a different type of MT system is popular in the Japanese market: MT-enabled WWW browsers which target the novice/casual user. Recent studies of the Japanese market (AAMT, 1997; ASCII, 1996) conclude that users engaged in web-browsing employ MT systems as they are looking for speedy access to roughly translated information through user-friendly interfaces, whereas high quality translation is the object of a different type of MT (and user). Japanese Internet-oriented MT systems are typically bilingual (and largely English-Japanese), PC-based and low in price - the lowest price is around ¥6,000 (roughly 26 pounds) according to AAMT (1997). Some studies report that the needs for MT in Japan are as high as the needs for word processing.
Introduction
In this paper, we examine current trends in machine translation (MT) in Europe and in Japan. The market for MT systems has changed drastically in the past few years in both regions. The main reasons for this change have been: the urgent demand for keeping abreast with business needs; the voracious appetite of information systems (IS) which process increasingly vast amounts of documentation in a multilingual environment; and the unabated growth of the WWW. In order to compare trends and foresee future developments, we have concentrated on the following aspects:
I. What types of MT users are there in Japan and in Europe?
II. What types of user are being targeted by MT providers?
III. How do attitudes to issues of localisation and standardisation compare?
IV. How advanced are the translator's aids, i.e. multilingual dictionaries, terminology management systems, translation memories, alignment tools?
All four standpoints are interdependent. On the one hand, the predominance of a specific type of user will force the market to cater for their needs. Ultimately, this will affect the types of aids and tools provided for translation. On the other hand, new research developments, such as the use of corpus-based techniques, example-based MT, statistical techniques and measures, e.g. similarity measures for word sense disambiguation, will eventually be used by MT systems. ATR Interpreting Telecommunications Research Laboratories in Japan has recently developed a prototype spoken language system called Chat Translation 2 which is capable of two-way translation between Japanese, English, Korean and German using just such research developments (Mima et al. 1997).
Types of user profile
Translation has changed its image as the amount of text to be translated has increased and the number and range of people involved with translation have grown. More people want to have fast access to the main contents of documents without necessarily aiming at stylistic or indeed linguistic accuracy. Users of MT systems range from large organisations to casual, novice users. Translation varies depending on the quantity of translated text and the type of translation work, the intended use of the translated text, the text type and related terminology, the languages involved, and so on. Products can be customised for different types of organisations, e.g. an MT system can be intended for a translation company, to be used there by a professional translator with experience of MT products. The type of MT required or the type of customisation required is further dependent on the answers to such questions as: What is the nature of a company's international activity? Is it concerned with using MT for localisation, for export, etc.? What is the business of the company? Is it concerned only with domestic markets? Is it a public organisation, or an international or multinational organisation?
All these factors are important when we examine types of user profiles and how these affect the expected demand for translation quality. Different levels of translation quality range from raw translation to high quality translation or even adaptation of the original text (product localisation).
Surveys of Japanese MT products (ASCII, 1996; AAMT, January 1997) show the predominance of raw translation, where the central meaning of the original text is conveyed. In the Japanese market, users employ MT predominantly to get outline information from English sources for quick reference purposes.
Differences between users of MT in Japan and in Europe
In Japan, MT has been the language processing technology with the highest profile since the early 80s (promoted by the Mu project (Nagao et al., 1985) in collaboration with major IT companies). Several of Japan's largest industrial companies have developed MT systems and market them commercially. In the past, in Japan, MT systems were designed for professional translators or professional post-editors in large organisations. Other users included public service organisations such as JICST, the Japan Information Center for Science and Technology. The driving force behind the introduction of MT systems in Japan was cost reduction. However, as the cost of human post editors escalated, MT users reported varying degrees of success. On average, overall productivity gains were observed, although greater success was reported for MT in restricted domains.
Overall, there is more commercial commitment to MT in Japan than in Europe, given the early interest expressed by private companies. Nevertheless, Europe has made significant advances. A Japan Electronic Industry Development Association (JEIDA) questionnaire on the use of MT-enabled Web browsers revealed that the main reason for using them is information gathering (90%) rather than dissemination. (http://www.jeida.or.jp/committee/textsyori/sec-0.html).
In Europe, MT is used much more for dissemination than information gathering. A high percentage of MT products target the casual user in Japan, whereas in Europe they target companies, organisations and translators (Equipe, 1996).
There are many MT products on the Japanese market compared to Europe, some promising 'high quality' translation results. As ever, though, one has to be realistic in terms of what to expect from most MT products. There are to our knowledge no easily accessible in-depth evaluations of available products. However, a few computer magazines have published evaluations of these systems. Mostly, such evaluations are informal, e.g. there is no specification of exact parameters of comparison between systems. Nevertheless, there is agreement among evaluators that one should not expect full translations (especially for languages as diverse in linguistic structure and culture as Japanese and English). Reviews such as that by Myers (1996) of 4 Japanese-to-English Microsoft Windows-based translation packages reveal the limitations of commercial Internet MT systems. These include some necessary amount of pre-editing of the original Japanese text, e.g. to shorten or simplify sentences. This often extensive pre-editing is a time-consuming prerequisite of most such systems.
The European Commission has a keen interest in assessing the conditions under which MT is used in-house. MT is freely available to all in-house Commission translators and other administrators via the EC intranet. As Senez (1997) reports, translator users see benefits such as speed and terminology assistance, and see MT as a worthwhile tool but with limitations (heavy post-editing required). A different EC user group, administrators, use MT for scanning in languages unknown to them, and since they do not aim at high quality translation, they find MT is a very valuable tool, saving them time, with an acceptable quality of output.
From a wider European perspective, in 1996, Equipe conducted a survey of MT products and services. In their findings, they observe that users come mostly from organisations, i.e. telecommunications companies, government organisations, etc. They are mostly professional translators, having an average experience of using MT of 2.4 years, rather than casual users. As the majority of the work is being carried out on technical material, terminology dictionaries and terminology handling tools are very important. Most users employ the terminology packages provided with their MT product, others rely on other technical dictionaries. The annual volume of MT output per organisation is quite small, ranging from 300 pages to a high of 30,000 pages. It was rather difficult to measure the translation throughput, the raw MT turnaround, or post-edited turnaround time between users and to compare this situation with that of casual users. Most users reported that they were happy with the interface, the helpsystem and the documentation provided with their system (Equipe, 1996).
Attitudes to localisation and standardisation
Localisation
MT is closely related with the issue of localisation. Product localisation demands high quality translation which takes into account not only the linguistic but also the cultural aspects of the country concerned. This could not be more true with respect to Japan. In Japan, the actual use of MT for localisation is still quite limited, although many MT developers are well aware of its potential benefits. Much work has been carried out on the localisation of software from elsewhere to suit the Japanese market but not on the process of how Japanese software will become global, despite strong demand for this. One reason for this slow adaptation lies in the complex Japanese writing system (a mixture of 2,000 Kanji characters and the phonetic Kana) but the most important one lies in the linguistic and cultural complexities related with Japanese to English translation. As the human resources available to undertake this job are scarce, MT systems geared for the localisation of Japanese software products not only into English but into other languages are much needed.
European interest and investment in localisation on the other hand is impressive. Ireland is the world centre for software localisation, involving publishers, software companies (Lotus, Oracle, Microsoft), and translation service providers (Berlitz, Mendez). The Localisation Resources Centre and the Software Localisation Interest Group (SLIG) in Ireland bring together interest groups from industry, translator associations and research institutes. Lisa (the Localisation Industry Standards Association) recently organised a forum in Japan to address the issue of localisation of Japanese products.
Standardisation
In Japan, more than 20 companies are engaged in developing their own MT products (whether MT-enabled web products or more traditional varieties), which are delivered with basic resources: users typically have to build their own dictionaries. Moreover, in order to use systems effectively, users must rely heavily on pre-editing. Systems cannot re-use the results of pre-editing and the process itself really requires experienced users. Post-editing of the output is considered too time-consuming for it to be widely supported. In the search to improve the quality of translation results, interest has focussed on the sharing of user dictionaries through use of common formats.
A group of MT companies (NEC, Toshiba, Nova, Sharp, Fujitsu, Matsushita) is working with the Asian Association for MT (AAMT) to design standard formats for sharing and exchanging user dictionaries among different MT systems. This initiative is supported by the Information Technology Promotion Agency (IPA) of Japan. Their Universal Platform (UPF) aims at providing a common format for user dictionaries and making available to the public the electronic environment for the sharing of dictionaries (Kamei et al. 1997).
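The UPF specification itself is not given in this paper. Purely as an illustrative sketch of the general idea of a shared, system-neutral user-dictionary exchange format (the field names below are invented and do not reflect the actual UPF design), entries could be exported and re-imported as simple delimited records:

```python
# Illustrative sketch only: a hypothetical, system-neutral exchange format for
# user dictionary entries. Field names are invented and do not reflect UPF.
FIELDS = ["src_lang", "tgt_lang", "source_term", "target_term", "pos", "domain"]

def export_entries(entries, path):
    """Write entries as one tab-separated record per line, with a header row."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("\t".join(FIELDS) + "\n")
        for e in entries:
            f.write("\t".join(e[field] for field in FIELDS) + "\n")

def import_entries(path):
    """Read records back into dictionaries keyed by field name."""
    with open(path, encoding="utf-8") as f:
        header = f.readline().rstrip("\n").split("\t")
        return [dict(zip(header, line.rstrip("\n").split("\t"))) for line in f]

entries = [{"src_lang": "ja", "tgt_lang": "en", "source_term": "kikai honyaku",
            "target_term": "machine translation", "pos": "noun", "domain": "computing"}]
export_entries(entries, "user_dict.tsv")
print(import_entries("user_dict.tsv"))
```

Any system that can read the agreed field inventory can then merge such entries into its own internal dictionary format, which is the practical point of a common exchange format.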
In Europe, projects like OTELO aim to integrate existing translation resources.
(http://www2.echo.lu/langeng/en/le1/otelo/otelo.html)
One of the objectives is to allow users to combine local and remote translation products such as MT systems and translation memories (TMs). Another is to define standardised common lexical resources and text-handling formats. There is much awareness of the benefits of defining an interchange format for groupware support. Such concerns rely fundamentally on standardisation and localisation.
We note however that, in contrast to Japan, standardisation efforts in the language engineering field are not new in Europe, especially in the field of lexical resources, where there have been several initiatives and projects whose genesis can be traced back to 1986. We refer especially to past and ongoing efforts of the EAGLES group to provide guidelines for the standardisation of lexical encoding. EAGLES (Expert Advisory Group on Language Engineering Standards) has at any one time about 200 people from across the European Union working, largely voluntarily, on a set of topics that are widely agreed to be ripe for de facto standardisation, including corpus annotation, evaluation of language engineering products, resources for speech processing and lexical resources, to name but a few areas.
Recommendations and guidelines developed by EAGLES are widely disseminated (http://www.ilc.pi.cnr.it/EAGLES/home.html) and feedback incorporated from the user and developer community. Regarding lexical resources, EAGLES addresses the problem of finding a protocol which will help to normalise and structure the information needed for the creation of reusable lexical resources. The aim is to improve the performance of MT and other document management applications such as information retrieval, information extraction and summarisation. Our own experience in EAGLES has been most positive, especially as EAGLES recognises that standardisation relies ultimately on sustained commitment from industry and has been very successful in driving standardisation via a strong industry-academia partnership. We conclude that Europe has then taken the lead in pushing towards standardisation in the language engineering area, but that in Japan the ongoing efforts are promising (it is noticeable, by the way, that US efforts in this area are some way behind both Europe and Japan).
Changing role of MT providers
In Japan, it has largely been the MT developers who have guided trends in MT. User involvement in the development of MT is still scarce. As a result, potentially useful technologies, such as translation memories, alignment tools, etc., have not been fully exploited for the user's benefit. In Europe, in contrast, there is more awareness of the user's role in the development of MT. This can be traced in large part to the role of the EC, where the focus of EC investment in language engineering has become increasingly user centred. User requirements are reflected by the need for up-to-date technology, tools and MT services. MT suppliers thus necessarily become ever more closely engaged in taking into consideration user requirements. Moreover, MT suppliers in Europe are keen to invest in new markets by integrating language engineering tools to improve the quality of MT. Most European MT suppliers, for example, now incorporate some form of translation memory in their products (examples of TMs are Trados Translator's Workbench, Langenscheidt's T1 Professional developed for PCs by Gesellschaft für Multilinguale Systeme (GMS), IBM Translation Manager, etc.).

For those who do not know what a translation memory is, it can broadly be defined as "a multilingual text archive containing (segmented, aligned, parsed and classified) multilingual texts, allowing storage and retrieval of aligned multilingual text segments against various search conditions" (EAGLES, 1995).

If we turn, however, to look at Japan, there is a marked absence of user involvement in the development of systems. This absence of user involvement may explain the relatively low interest there in TM systems, as they are not found as widely integrated in translation environments as they are in Europe.
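To make the EAGLES definition concrete, the following is a minimal sketch of the translation memory idea, and not any vendor's implementation: aligned source/target segments are stored, and a new source segment is matched approximately against the stored ones so that an existing translation can be re-used or post-edited.

```python
# Minimal sketch of a translation memory: store aligned source/target segment
# pairs and retrieve the closest stored source segment for a new input.
from difflib import SequenceMatcher

class TranslationMemory:
    def __init__(self):
        self.segments = []  # list of (source_segment, target_segment) pairs

    def add(self, source, target):
        self.segments.append((source, target))

    def lookup(self, query, threshold=0.7):
        """Return the best (source, target, score) match above the threshold."""
        best = None
        for source, target in self.segments:
            score = SequenceMatcher(None, query.lower(), source.lower()).ratio()
            if best is None or score > best[2]:
                best = (source, target, score)
        return best if best and best[2] >= threshold else None

tm = TranslationMemory()
tm.add("Press the start button.", "Appuyez sur le bouton de démarrage.")
print(tm.lookup("Press the stop button."))  # fuzzy match re-uses the stored translation
```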
MT related tools
There is in addition a knock-on effect of the lack of user involvement in Japan in terms of the absence of other MT-related tools in Japanese MT environments. For example, text alignment tools, which can among other things be used for generating translation memories, are largely absent from Japanese systems. Another largely lacking component of a translation environment in Japan is the terminology management system. Such systems consist of a terminology database, lookup software and utilities for maintaining and updating the database. Some include automatic term recognition tools to capture terms from running texts.
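As a rough illustration of what an automatic term recognition component does (this is a toy sketch, not a description of any particular terminology management product), recurring multi-word candidates can be pulled out of running text by filtering and counting word n-grams:

```python
# Toy sketch of term candidate extraction: count recurring word bigrams and
# trigrams that neither start nor end with a function word. Real terminology
# tools add part-of-speech patterns and statistical measures.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is", "are", "with"}

def term_candidates(text, min_freq=2):
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in (2, 3):
        for i in range(len(tokens) - n + 1):
            ngram = tokens[i:i + n]
            if ngram[0] in STOPWORDS or ngram[-1] in STOPWORDS:
                continue  # a term should not start or end with a function word
            counts[" ".join(ngram)] += 1
    return [(t, c) for t, c in counts.most_common() if c >= min_freq]

sample = ("The translation memory stores segments. A translation memory "
          "retrieves aligned segments for the translator.")
print(term_candidates(sample))  # e.g. [('translation memory', 2)]
```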
Terminology is demonstrably important for quality translation. However, the integration of terminology tools in Japanese translation environments is still at an early stage. Although lack of user involvement is again a crucial factor here, another factor that must not be neglected is the different attitudes in Europe and Japan to the types of expert involved in the design and building of MT products. In Europe, it is not unusual to find linguists, lexicographers, terminologists and translators working together with computer scientists and computational linguists on the development of MT systems. It is recognised in Europe that an interdisciplinary approach is required to help automate the translation process. Europe has indeed a long history of interdisciplinary collaboration in this area.

However, in Japan, the development of MT systems is largely driven by engineers and computer scientists. Academic input, since the early days of the Mu system, has been peripheral to Japan's MT effort and mainly restricted to contributions by computer science and electrical engineering experts. There has been a dearth of contributions, whether academic or industrial, from theoretical linguists, lexicographers and terminologists. Here we then have a crucial difference in attitudes to MT development in Europe and Japan.
However, the boot is on the other foot when it comes to advances in sharing and the collection of lexical resources. This is apparently impressive in Japan, while it is hardly nascent in Europe. In particular, dictionaries developed by EDR (Japan Electronic Dictionary Research Institute), NTT and IPA are used by many companies as common lexical linguistic resources. However, there is a lack of bilingual or multilingual terminology databases. While computerised collections of English-Japanese pairs of technical terms are available, there is little control over their terminological quality. In Europe, terminological collections are arguably of higher quality (again largely due to greater use of professional terminologists in Europe). Above, we referred to Japanese efforts in the sharing of lexicons as being 'apparently impressive'. They are indeed impressive in terms of size. However, evaluations of the vast EDR resource in particular, carried out by EAGLES and foregoing EC lexical projects, reveal that the design of this resource leaves much to be desired from a formal linguistic point of view. This is a prime example of a resource built by computer scientists. Moreover, questions were raised about its actual level of reuse in Japan: that is, it was hard to discover to what extent the resource is actually being used (reportedly predominantly academic) and with what degree of success (in terms of being able to provide the type and quality of translation that would satisfy an end user).
MT services
Remote translation services over the Internet from a central server have become popular in both Europe and Japan. We mention, on an indicative basis only, a few of the translation services. We distinguish these services from the MT enabled Web browsers we have mentioned earlier.
1. ATLAS MT Service was developed by Fujitsu, with 22 technical dictionaries available, where the original text is sent by e-mail to a remote MT server and the translation returned in the same manner. The input documents have to be pre-edited according to such guidelines as: limit the original document to 12,000 characters, use word wrapping instead of splitting words by hyphenation at the end of a line, keep sentences as short as possible, etc. (a sketch of such a pre-submission check appears after this list).
2. JST (Japan Science and Technology Corporation) offers an on-line Japanese-English MT service through e-mail (the service is free for STA associated organisations but not for the general public). For more information see:
http://www-jmt.jst.go.jp/index-E.html One of the restrictions imposed is that the documents to be translated cannot exceed 20,000 Japanese characters.
3. AltaVista Translation with Systran (http://babelfish.altavista.digital.com/). Currently, the languages offered are French, German, Italian, Spanish and Portuguese with English. The service is free and allows Web users to translate Websites from and into English. SYSTRAN offers an online translation service called SYSTRANET, which is available on subscription from the company's URL, http://www.systransoft.com

4. Globalink's on-line translation service, Comprende, provides real-time Website translation services to and from English and French, German, Italian, Portuguese and Spanish. Additional languages will include Japanese, Chinese and Russian. Users can access Comprende at http://comprende.globalink.com for a free beta-test period. After that period, the monthly fee is $19.95 for basic Website content translation only, and $49.95 for a premium service including newsgroup translations of chat and email.
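Several of the services listed above impose pre-editing guidelines on submitted documents (length limits, no hyphenation at line ends, short sentences). As the hypothetical sketch referred to from item 1 above, and not any provider's actual software, a client-side check before sending a document by e-mail might look like this; the thresholds are illustrative:

```python
# Hypothetical pre-submission check for an e-mail MT service, based on the
# kinds of guidelines quoted above. Thresholds and rules are illustrative.
import re

def check_document(text, max_chars=12000, max_sentence_words=25):
    problems = []
    if len(text) > max_chars:
        problems.append(f"document exceeds {max_chars} characters")
    if re.search(r"\w-\n", text):
        problems.append("word split by hyphenation at end of line")
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = len(sentence.split())
        if words > max_sentence_words:
            problems.append(f"long sentence ({words} words): {sentence[:40]}...")
    return problems

doc = "The ATLAS ser-\nvice accepts plain text. Keep sentences short."
for issue in check_document(doc):
    print("WARNING:", issue)
```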
Conclusions
As a result of our investigation of MT trends in Japan and Europe, we are able to state the following major conclusions:
• There is a difference in typical user type of MT between Japan and Europe: in Japan, the casual user predominates; in Europe, the professional. Reasons for this lie mainly in the fact that most major Japanese IT companies saw the viability of MT as a mass commercial product, which is not true of Europe, where this market has yet to be exploited. Nevertheless, we must be careful to note that the quality of MT offered for a mass market of casual users is necessarily poor. However, it is clear that Japanese IT companies have invested a great deal in this market and are making reasonable if not large profits from it. Other reasons for this difference in user type lie in the different purpose that MT serves in each region: in Japan, MT is mainly used for information gathering; in Europe, it is used mainly for dissemination.
• Although localisation is of great interest in both areas, there are major differences: in Japan, interest in localisation is of recent date and progress is apparently slow; in Europe, there is already a highly developed localisation industry, which has in addition recently begun to turn its attention to localising its products for the Japanese market.

• Standardisation issues are equally of interest in both regions. However, again differences may be noted: Europe has a clear lead over both Japan (and indeed North America) in the drive for language engineering standards, with well-coordinated industry-academic initiatives such as EAGLES, whose results are widely disseminated and endorsed; in Japan, though, leading MT providers have recently been working with the AAMT to design standards for user dictionary formats and these will undoubtedly have a positive impact on the development and market penetration of Japanese MT products as users become better able to reuse their lexical resources in different systems.
• There are differences in the level of provision of MT-related tools. These again are due to differences in the predominant type of user in each area. Thus, in Europe, we find such tools as translation memories, terminology management packages and the like being delivered with or used in conjunction with MT systems, and intended for the professional user, the predominant user type. In Japan, in contrast, there is not so much interest in providing such tools for the casual user. However, emphasis on the casual user leads also to the exploitation of different kinds of strategies for MT: for example, template-based MT can successfully be deployed for the casual user where it is typically inappropriate for the professional. It is important nevertheless to note that our conclusions relate to broad trends that can be detected. This is not to claim that certain types of user or certain types of MT do not exist in either Japan or Europe. Thus, for example, there are highly-regarded Japanese MT systems that are oriented to the needs of the professional user (e.g. systems offered by Hitachi, Fujitsu, NEC and Toshiba, among others).
What further conclusions may we draw regarding the future of MT in Japan and Europe? It is likely that casual users of MT will become increasingly predominant in Europe through growing demand for web-based information. Thus, we can expect greater numbers of MT-enabled web browsers to become available in Europe. It remains to be seen how the question of quality of translation of such browsers will be tackled in Europe and in Japan. In Japan, there is growing realisation that the needs of the professional user are being neglected due to the commercial impact of the casual user. It is hard to say, however, whether the professional user community can attain enough commercial weight to persuade Japanese MT providers to cater more fully for their needs.
In looking at trends in MT in both Japan and Europe, it is not appropriate to engage in adverse criticism of some approach that is different to some other: differences are rooted largely in historical developments and in alternative commercial paths. This is why we believe that Europe will begin to move in the direction of current Japanese trends and Japan will similarly adopt European practices as each region strives to broaden the impact of language engineering for the benefit of their respective societies, encompassing many kinds of user.
Dr Ananiadou is also a senior research fellow at UMIST and at the University of Tokyo.
2 All registered trademarks are duly acknowledged.
AcknowledgementsI would like to thank Dr Hideki Mima (MMU, UK) for his invaluable help inproviding information and translation about the Japanese systems.Appendix of commercial MT systems from AAMT report(January 1997 http://www.jeida.or.jp/aamt/list-j.html)¥12,000.6. Pivot/JE , Pivot /EJ, CROSSROAD, Translation Adapter 2, developed by NEC, run on Windows95, Windows98, NT4.0, Unix. Crossroad translates Web pages and also preserves document format(rtf,doc). NEC also provides a service called "timer translation" which translates documents overnight at a cheaper rate. They also provide dictionaries of 100,000 words for each language pair (E/J and J/E) with bilingual technical dictionaries. The program allows the user to define their own dictionary. An interesting aspect of Crossroad is that it offers a space for exchanging user dictionaries. It also provides an interactive translation interface converting a sentence gradually from Kana to English, i.e. initially the sentence has a mixture of English and Japanese, with the Japanese word order, then the object-subject markers are removed and lastly the sentence is put into English word order. The cheapest version is sold at ¥9,000. Translation Adapter 2 offers a bi-directional English-Japanese and Japanese-English system for browsing and translating Web pages and email. Besides translation and dictionary look-up, it includes an example retrieval utility which retrieves model sentences to help Japanese users to write letters. 7. ATLAS EJ/JE for Windows95/NT by Fujitsu. The professional version ranges from 12,000 to ¥35,000 and includes 24 technical dictionaries containing 1,200,000 words. Fujitsu also markets TransLinGO! (and Plus) which searches Japanese Webpages in English. With the Plus version the user can input English keywords to search Japanese homepages by using Japanese search engines. Prices range from ¥10,800 to ¥17,800 for the Plus version.Translation Surfin 1.0 provides English to Japanese translations using Netscape Navigator. The product offers four translation modes: on-line to Web sites, off-line, partial translation and title translation. Fujitsu sells lexical resources such as Denjikai V2.0 for Windows, which comprises a basic dictionary of 320,000 words. This can be put together with technical dictionaries to reach 3 million words. The basic dictionary sells at ¥24,000 and each technical dictionary at ¥50,000. Fujitsu also sells the EDR dictionary (730,000 words) at ¥50,000. Kuno. Some contextual information is used by the system. The Logo VistaPersonal sells at 39,800. It has the same engine as the PRO version but does not include the alternative translations capability. Internet Plus was voted best choice by the DOS/V Magazine(1997.2.1) compared with 12 other products.
. Report on Commercial MT. AAMT The Asia-Pacific Association for Machine Translationin JapaneseAAMT The Asia-Pacific Association for Machine Translation (1997) Report on Commercial MT (in Japanese) http://ww.jeida.or.jp/aamt/list-j.html
Machine Translation Trends in Japan. S Ananiadou, Lisa Newsletter. 2Ananiadou S. (1998) Machine Translation Trends in Japan, Lisa Newsletter, Volume VII No 2, June 1998, pp.10-14.
. Ascii July, ASCII July 1996.
A Collection of Technical Publications. ATR Interpreting Telecommunications Research LaboratoriesDepartment 3 & 4. EAGLES Lexicon Interest GroupATR Interpreting Telecommunications Research Laboratories (1997) A Collection of Technical Publications, Department 3 & 4. EAGLES Lexicon Interest Group: http://www.ilc.pi.cnr.it/EAGLES96/rep2/ (on semantic encoding, ongoing) http://www.ilc.pi.cnr.it/EAGLES96/synlex/synlex.html (on syntactic subcategorisation)
Survey of MT products and services. Equipe Consortium, Japan Science and Technology Corporation. Equipe Consortium (1996) Survey of MT products and services, in http://www2.echo.lu/langeng/en/reps/mtsurvey/mtsun/ey.html JST (Japan Science and Technology Corporation) http://www-jmt.jst.go.jp/index-E.html
Kamei, S. et al. (1997) Sharable Formats and their supporting environments for exchanging user dictionaries among different MT systems as a part of AAMT activities. In MT Summit VI, November 1997.
Mima, H., Furuse, O., Iida, H., Wakita, Y. (1997) Multi-lingual Spoken Dialog Translation System using Transfer-Driven Machine Translation. In Proceedings of MT Summit VI, pp. 148-155.
MT News International, Newsletter of the International Association for Machine Translation, Issues 15, 16, 17, 18 and 19.
Myers, S. (1996) Can Computers Translate? Computing Japan magazine, April 1996.
Nagao, M., Tsujii, J. and Nakamura, J. (1985) The Japanese Government Project for Machine Translation. Computational Linguistics, 11(2-3), pp. 91-110.
Ostler, N. (ed.) (1996) Lying in Wait at the Heart of the Web. Language Technology DTI-OSTEMS, September 1996.
OTELO: http://www2.echo.lu/langeng/en/le1/otelo/otelo.html
Senez, D. (1997) Users and Research in Europe. In MT News International, Issue No 17, June/July 1997, pp. 9-10.
Tanaka, H. (1997) MT R&D in Asia. In MT Summit VI, November 1997.
Yakushi nyorai by CSK, operates on Windows95, NT3; language pairs English to Japanese and Japanese to English / French / German; dictionary of 200,000 words, optional terminology, price from ¥5,825.
Perfect Ver.2 by AlSoft, available on Windows95, Windows98, NT4, from Japanese to English. Translation tools (PRO version only) include a 124,000 word dictionary and 27 specialised dictionaries (the system can support the use of 6 specialised dictionaries at a time). These are based on an interlingua which is adapted for technical domains. Translation tools require pre-edited input (they incorporate a pre-editing checker for Japanese). Prices start from ¥12,800 for the standard version and ¥19,800 for the PRO version.
TransLand / JE Ver.2.0 & EUDORA by Brother, runs on Windows95, Windows98, NT4.0, MacOS8. This system supports semantic transfer. The base model sells for ¥29,800. TransLand offers technical dictionaries which contain up to 811,600 entries.
Word Kokusaijin Versions 4.1, 2.0 by Sanyo, runs on Windows 95. The MT-enabled Web browser version sells for ¥9,800; the professional version costs ¥29,800 and includes a 120,000 word dictionary.
5. WD-01 SW, Power E/J Ver.3.0 by Sharp, runs on Windows 95, Windows98. The system supports a template function for creating English letters and includes a 114,000 word dictionary and a 96,000 word terminological dictionary. Mono-lingual and bilingual dictionaries are also offered.
10. ASTRANSAC (Sun WS, C/S, for Windows Ver.3.0, for Internet Ver.2.0) by Toshiba for E/J and J/E. The professional version supports users' dictionaries and translation patterns and has an alternative translations capability like LogoVista. Price ¥98,000. Toshiba also sells MT for email and news with a dictionary of 240,000 words, priced at ¥12,800 for the E/J pair and ¥16,800 for the J/E pair. ASTRANSAC for Internet provides Web translation without changing the original layout. The basic dictionary has 190,000 words and is sold at ¥12,800.
J London J/E, E/J Ver.3, WorldNet/EJ, developed by Kodensha Corporation, runs on Windows95, NT4.0. At ¥78,000, it includes OCR software. There are 34 specialised dictionaries for the J/E pair, including a 204,400 term medical dictionary, and 31 specialised dictionaries for the E/J pair. J London supports a template function. The price for the specialised dictionaries starts at ¥29,800.
Dr Surf for Windows, Deluxe for Windows, for Macintosh Ver.2.0 by MediaVision, has a standard dictionary of 430,000 words, 18 technical dictionaries, and 8,500 words of Internet terminology (i.e. terms frequently used on WWW home pages). The system supports a mechanism for learning users' grammar and dictionary preferences. The price starts at ¥34,000. The system adopts a UPF standard.
Translation Manager/2 by IBM runs on Windows95, Windows98, NT4.0, OS/2. The cheapest product starts at ¥7,800. IBM sells a package that includes homepage building software, email authoring and an MT-enabled Web browser with an "overnight" facility. The Translation Manager employs a pattern-based translation which allows users to define patterns. The pattern-based approach is geared to idiomatic, collocational, contextual and domain-specific translations. The standard dictionary has 160,000 words and 66,000 patterns and the technical dictionary 7,000 terms and 45,000 patterns. IBM also has a Summariser which translates and makes summaries.
Net Surfer/ej Ver.3.0, PC-Transer /ej /je Ver.5.0, 4.0 by Nova Corporation runs on Windows95, Windows98, NT3.51, NT4.0, MacOS8. PC-Transer/JE is one of the best known and most widely used products. The PC-ej product uses a dictionary of 200,000 words and 18 specialised dictionaries of around 950,000 words. The product includes a spelling and grammar checker and a function that allows users to construct templates of frequently used phrases and to choose among alternative translations. The system speed is 12,000 words per hour and the price is ¥198,000 plus ¥98,000 for the dictionaries. Nova has a patent translation product with automatic pre-editing of lengthy patent sentences (splitting long sentences into shorter ones) and an automatic post-editing facility which involves mainly punctuation.
HICATS by Hitachi runs on Windows 95, Windows98, NT4, Unix. HICATS has a standard dictionary of 85,000 words, technical dictionaries of 170,000 terms and a business dictionary of 60,000 words. The starting price is ¥9,800. Hitachi has been developing an MT system for translating manuals and patents; this client-server system was released in 1998. As for on-line translation aids, HICATS/JE includes a function for diagnosing input Japanese sentences. It supports users in pre-editing by detecting morphological, syntactic and semantic ambiguities in input sentences, as well as long sentences. |
200,060,858 | [] | Towards discourse annotation and sentiment analysis of the Basque Opinion Corpus
Jon Alkorta [email protected]
Koldo Gojenola [email protected]
Mikel Iruskieta [email protected]
Ixa Group / UPV/EHU
Towards discourse annotation and sentiment analysis of the Basque Opinion Corpus
Proceedings of Discourse Relation Parsing and Treebanking (DISRPT 2019), pages 144-152, Minneapolis, MN, June 6, 2019. © 2019 Association for Computational Linguistics
Discourse information is crucial for a better understanding of text structure, and it is also necessary to describe which part of an opinionated text is more relevant or to decide how a text span can change the polarity (strengthen or weaken it) of another span by means of coherence relations. This work presents the first results on the annotation of the Basque Opinion Corpus using Rhetorical Structure Theory (RST). Our evaluation results and analysis show the main avenues for improving a future annotation process. We have also extracted the subjectivity of several rhetorical relations, and the results show the effect of sentiment words in relations and the influence of each relation on the semantic orientation value.
Introduction
Sentiment analysis is a task that extracts subjective information from texts. There are different objectives and challenges in sentiment analysis: i) document-level sentiment classification, which determines whether an evaluation is positive or negative (Pang et al., 2002; Turney, 2002); ii) subjectivity classification at sentence level, which determines whether a sentence carries subjective or objective (factual) information (Wiebe et al., 1999); and iii) aspect and entity level, in which the target of a positive or negative opinion is identified (Hu and Liu, 2004).
In order to attain those objectives, some resources and tools are needed. Apart from basic resources such as a sentiment lexicon, a corpus with subjective information is indispensable for sentiment analysis. Moreover, such corpora are necessary for the two main approaches to sentiment analysis. One approach is based on linguistic knowledge, where a corpus is needed to analyze different linguistic phenomena related to sentiment analysis. The second approach is based on statistics and, in this case, the corpus is useful for extracting patterns of different linguistic phenomena.
The aim of this work is to annotate the rhetorical structure of an opinionated corpus in Basque in order to examine the semantic orientation of rhetorical relations. This annotation was performed following Rhetorical Structure Theory (RST) (Mann and Thompson, 1988). We have used the Basque version of the SO-CAL tool to analyze the semantic orientation of this corpus (Taboada et al., 2011). This paper is organized as follows: after presenting related work in Section 2, Section 3 describes the theoretical framework, the corpus under study and the annotation methodology, as well as the analysis of the corpus that was carried out. Then, Section 4 reports the results of the annotation process, the inter-annotator agreement and the results of the subjectivity analysis of the corpus. After that, Section 5 discusses the results. Finally, Section 6 concludes the paper and proposes directions for future work.
Related work
The creation of a specific corpus and its annotation at different linguistic levels has been a very common task in natural language processing. As far as corpora for sentiment analysis are concerned, information related to subjectivity and to different grammatical levels has been annotated in different projects. Refaee and Rieser (2014) annotate the Arabic Twitter Corpus for subjectivity and sentiment analysis. They collect 8,868 tweets in Arabic by random search, and two native speakers of Arabic annotated the tweets. On the one hand, they annotate the semantic orientation of each tweet. On the other hand, they also annotate different grammatical characteristics of tweets, such as syntactic, morphological and semantic features, as well as stylistic and social features. They do not annotate any discourse-related feature. They obtain a Kappa inter-annotator agreement of 0.84.
The majority of corpora for sentiment analysis are annotated with subjectivity information; fewer corpora are annotated with discourse information for the same task. Chardon et al. (2013) present a corpus for sentiment analysis annotated with discourse information. They annotate the corpus using Segmented Discourse Representation Theory (SDRT), creating two corpora: i) movie reviews from AlloCiné.fr and ii) news reactions from Lemonde.fr. They collect 211 texts, annotated at EDU and document level. At the EDU level, subjectivity is annotated, while at the document level, subjectivity and discourse relations are annotated. Results on subjectivity show that, at EDU level, Cohen's Kappa varies between 0.69 and 0.44 depending on the corpus and, at document level, Kappa is between 0.73 and 0.58, respectively. They do not give results regarding the annotation of discourse relations. Asher et al. (2009) create a corpus with discourse and subjectivity annotation. They categorize opinions into four groups (REPORTING, JUDGMENT, ADVISE and SENTIMENT), using SDRT as the annotation framework for discourse. Specifically, they use five types of rhetorical relations (CONTRAST/CORRECTION, EXPLANATION, RESULT and CONTINUATION). They collect three corpora (movie reviews, letters and news reports) in English and French: 150 texts in French and 186 texts in English. According to the Kappa measure, the inter-annotator agreement is 95% for opinion categorization and 82% for discourse segmentation. Mittal et al. (2013) follow a similar methodology. By annotating negation and discourse relations in a corpus, they measure the improvement made in sentiment classification. They collect 662 reviews in Hindi from review websites (380 with a positive opinion and 282 with a negative one). Regarding discourse, they annotate violating-expectation conjunctions that oppose or refute the current discourse segment. According to their results, after adding negation and discourse information to HindiSentiWordNet (HSWN), the accuracy of the tool increases from 50.45 to 80.21. They do not mention the inter-annotator agreement of violating-expectation conjunctions.
To sum up, this section gives a general overview of discourse-annotated corpora for sentiment analysis. These corpora have been built for specific aims, annotating only some characteristics or features related to discourse and discourse relations. This differs from our work, which describes the annotation of the relational discourse structure and how the function of text spans in rhetorical relations affects the analysis of semantic orientation.
3 Theoretical framework and methodology
3.1 Theoretical framework: Rhetorical Structure Theory
We have annotated the opinion text corpus following the principles of Rhetorical Structure Theory (RST) (Mann and Thompson, 1988; Taboada and Mann, 2006), as it is the most widely used framework for the annotation of discourse structure and coherence relations in Basque, for which there are some tools (Iruskieta et al., 2013, 2015b) to study rhetorical relations. According to this framework, a text is coherent when it can be represented as one discourse tree (RS-tree). In a discourse tree, there are elementary discourse units (EDUs) that are interrelated. The relations are called coherence relations and the sum of these coherence relations forms a discourse tree. Moreover, the text spans present in a discourse relation may enter into new relations, so relations can form compound and recursive structures. Elementary discourse units are text spans that usually contain a verb, except in some specific situations. The union of two or more EDUs creates a coherence relation. There are initially 25 types of coherence relations in RST. In some cases, one EDU is more important than the other; the most important EDU in the relation is called the nucleus unit (basic information), while the less important or auxiliary EDU is called the satellite unit (additional information). Coherence relations of this type are called hypotactic relations. In contrast, in other relations the EDUs have the same importance and, consequently, all of them are nuclei. Relations between EDUs of the same rank are called paratactic relations. The task of selecting the nucleus in a relation is called nuclearity.
Hypotactic relations are also divided into two groups according to their effect on the reader. Some relations are subject-matter relations, related to the content of the text spans; for example, CAUSE, CONDITION and SUMMARY are subject-matter relations. The aim of other relations is to create some effect on the reader; they are more rhetorical in the way they function. EVIDENCE, ANTITHESIS and MOTIVATION belong to this group. Figure 1 presents a partial discourse tree of an opinion text (tagged with the code LIB29). The text is segmented and each text span is a discourse unit (EDU). The discourse units are linked by different types of rhetorical relations. For instance, the EDUs numbered 15 and 16 are linked by an ELABORATION relation and the EDUs ranging from 15 to 20 are linked by LIST (a multinuclear relation). On the other hand, the EDU numbered 2 is the central unit of this text because the other relations in the text are linked to it and this text span is not attached to another one (with the exception of multinuclear relations).
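To make the notions of EDU, nucleus, satellite and central unit more concrete, the sketch below shows one minimal way such an annotation could be represented in code. It is an illustration only: the class and field names are our own and do not correspond to any annotation tool used in this work.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EDU:
    edu_id: int
    text: str

@dataclass
class Relation:
    label: str             # e.g. "ELABORATION", "LIST", "CONCESSION"
    nuclei: List[int]      # EDU ids; paratactic relations have several nuclei
    satellites: List[int]  # empty for multinuclear (paratactic) relations

def central_unit_candidates(edus: List[EDU], relations: List[Relation]) -> List[EDU]:
    """EDUs that never appear as a satellite are candidates for the central unit."""
    as_satellite = {i for r in relations for i in r.satellites}
    return [e for e in edus if e.edu_id not in as_satellite]

# Toy fragment mirroring the LIB29 tree described above:
edus = [EDU(15, "..."), EDU(16, "...")]
rels = [Relation("ELABORATION", nuclei=[15], satellites=[16])]
print([e.edu_id for e in central_unit_candidates(edus, rels)])  # -> [15]
```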
According to Taboada and Stede (2009), there are three steps in RST-based text annotation:
1- Segmentation of the text into text spans. Spans are usually clauses.
2- Examination of clear relations between the units. If there is a clear relation, then mark it. If not, the unit belongs to a higher-level relation; in other words, the text span is part of a larger unit.
3- Continue linking the relations until all the EDUs belong to one relation.
Following Iruskieta et al. (2014), we think it is advisable, after segmenting the corpus, to first identify the central unit and then mark the relations between the different text spans.
The Basque Opinion Corpus
The corpus used for this study is the Basque Opinion Corpus (Alkorta et al., 2016). This corpus has been created with 240 opinion texts collected from different websites. Some of them are newspapers (for instance, Berria and Argia) while others are specialized websites (for example, Zinea for movies and Kritiken Hemeroteka for literature).
The corpus is multidomain and, in total, there are opinion texts of six different domains: sports, politics, music, movies, literature books and weather. The corpus is doubly balanced. That is, each domain has the same quantity of opinion texts (40 per domain) and each semantic orientation (positive or negative subjectivity) has the same quantity of opinion texts per each domain (20 positive and 20 negative texts per domain). We extract preliminary corpus information using the morphosyntactical analysis tool Analhitza (Otegi et al., 2017): 52,092 tokens and 3,711 sentences.
We made preliminary checks to decide whether the corpus is useful for sentiment analysis. Since the opinion texts are subjective, the frequency of first-person forms should be high. The results show that first-person forms account for 1.21% of an objective Basque corpus (Basque Wikipedia), whereas they account for 8.37% of the Basque Opinion Corpus. As far as the presence of adjectives is concerned, both corpora show similar results: considering all grammatical categories, 8.50% of the words are adjectives in Basque Wikipedia and 9.82% in the corpus under study. Other features of interest for sentiment analysis, such as negation, irrealis blocking and discourse markers, have also been found in the corpus.
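As an illustration of the kind of preliminary check described above, the sketch below computes a first-person rate and an adjective rate from POS-tagged tokens. The tuple layout and tag names are hypothetical and do not reflect Analhitza's actual output format.

```python
from collections import Counter

def subjectivity_indicators(tagged_tokens):
    """tagged_tokens: iterable of (surface, pos, person) tuples; the field
    names and tag values ("ADJ", "1") are illustrative only."""
    tokens = list(tagged_tokens)
    total = len(tokens)
    pos_counts = Counter(pos for _, pos, _ in tokens)
    first_person = sum(1 for _, _, person in tokens if person == "1")
    return {
        "first_person_rate": 100.0 * first_person / total,
        "adjective_rate": 100.0 * pos_counts["ADJ"] / total,
    }

# e.g. subjectivity_indicators([("nire", "DET", "1"), ("liburu", "NOUN", "-"), ("ederra", "ADJ", "-")])
```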
Methodological steps
We have followed several steps to annotate the Basque Opinion Corpus using the RST framework:
Number of texts annotated by each annotator (the data of Table 1; "+ n" marks texts annotated by both annotators):
Domain      A1      A2      Total
Movie       21 + 9  9       30
Weather     10 + 5  5       15
Literature  5       20 + 5  25
Total       50      39      70
2- Annotation procedure and process. We decided to follow the annotation guidelines proposed by Das and Taboada (2018). Each person annotated four or five texts per day over two or three weeks. The time needed to annotate documents varied according to the domain. The texts of the weather domain are shorter and, consequently, easier to annotate, while texts about movies and those of the literature domain are more difficult because their writing style is more implicit (fewer indicators and relation signals) and complex (at least longer). Approximately, each weather text was annotated in 20 minutes, while movie and literature texts were annotated in one hour.
3-Measurement of inter-annotator agreement.
In order to check the quality of the annotation process, inter-annotator agreement was measured. This was calculated manually following the qualitative evaluation method (Iruskieta et al., 2015a), using F-measure. In this measurement, in contrast with the automatic tool, the central subconstituent factor was not taken into account. Table 2 shows the inter-annotator agreement on rhetorical relations (RR) between both annotators, calculated following the qualitative method (Iruskieta et al., 2015a). According to these results, the highest agreement was reached in the weather domain, where 17 of 39 relations (43.59%) were annotated with the same relation label. After that, inter-annotator agreement in literature is 41.67% (70 of 168). Finally, the movie domain obtained the lowest results, with an agreement of 37.73% (83 of 220). Taking all domains into account, 39.81% of the rhetorical relations have been annotated in the same way (170 relations of 427). The disagreements are due to different reasons: i) both annotators need more training to reach a higher agreement and obtain better results; ii) opinionated texts are more open than news or scientific abstracts, so there is more room for different interpretations.
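A rough illustration of label agreement over aligned relations is sketched below. It is not a re-implementation of the qualitative evaluation method of Iruskieta et al. (2015a); the tuple layout and the toy example are our own.

```python
def relation_agreement(rels_a, rels_b):
    """rels_a, rels_b: sets of (attachment_point, nuclearity, label) tuples,
    one per annotator, for relations that have already been aligned."""
    matched = rels_a & rels_b
    precision = len(matched) / len(rels_a) if rels_a else 0.0
    recall = len(matched) / len(rels_b) if rels_b else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example with invented tuples:
a1 = {("2", "NS", "ELABORATION"), ("15-20", "NN", "LIST")}
a2 = {("2", "NS", "ELABORATION"), ("15-20", "NN", "CONJUNCTION")}
print(relation_agreement(a1, a2))  # (0.5, 0.5, 0.5)
```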
Results
Inter-annotator agreement
Subjectivity extraction from rhetorical relations
The annotation of the corpus using Rhetorical Structure Theory allows us to check the usefulness of the corpus. We have extracted the subjectivity from different types of rhetorical relations using the Basque version of the SO-CAL tool and we have been able to check the distribution of words with sentiment valence in each type of rhetorical relation and domain.
We have analyzed how words with sentiment valence appear in the nuclei and satellites of CONCESSION and EVALUATION relations in three domains (see footnote 1). The results are presented in Table 3 (see footnote 2). In the case of CONCESSION, the presence of words with sentiment valence in nuclei (47.21%) and satellites (52.79%) is similar in the three domains, although satellites show a higher proportion. In contrast, in the case of EVALUATION, words with sentiment valence are more concentrated in satellites (55.00%) than in nuclei (45.00%). The only exception is weather, where nuclei prevail over satellites as far as the concentration of words with sentiment valence is concerned (see footnote 3).
This contrast between discourse and sentiment information gives us the opportunity to understand what happens here. For example, in CONCESSION, the nucleus presents a situation affirmed by the author and the satellite presents a situation which is apparently inconsistent but also affirmed by the author (Mann and Taboada, 2005). In other words, the probability of an opinion appearing is similar in both. The sentiment valence of the nucleus prevails over that of the satellite, but the application of Basque SO-CAL does not give the correct result because the tool does not apply any discourse processing and, consequently, in a CONCESSION relation, nucleus and satellite are given the same weight.
In Example (1), the semantic orientation of the nucleus is positive while the semantic orientation of the satellite is negative. The sum is positive and, in this case, SO-CAL correctly assigns the semantic orientation of the overall rhetorical relation. In contrast, in Example (2), according to SO-CAL, the sentiment orientation of the relation is negative, but it should be positive because the semantic orientation of the nucleus is positive. This example clarifies why discourse information is needed in lexicon-based sentiment classifiers. In Example (3), the nucleus, the satellite and the rhetorical relation all have positive semantic orientation and SO-CAL correctly assigns the semantic orientation.
Another type of rhetorical relation is EVALUATION, where the satellite makes an evaluative comment about the situation presented in the nucleus (Mann and Taboada, 2005). This means that words with subjective information are more likely to appear in the satellite. Here we can see some specific characteristics of each rhetorical relation: unlike CONCESSION, words with sentiment valence concentrate in the satellite, while they have little presence in the nucleus. In fact, the sentiment valence of nuclei is never higher than +1, whereas satellites have a sentiment valence higher than ±3 in all cases. In the three Examples (4, 5 and 6), the Basque version of the SO-CAL tool correctly guesses the semantic orientation of the rhetorical relations. For example, in Example (6), the semantic orientation of the nucleus is positive and that of the satellite is negative; the sum of the two EDUs is negative and SO-CAL correctly assigns a −3.4 sentiment valence. This does not happen in all cases because the tool has not implemented any type of discourse processing. In any case, the tool provides the information about semantic orientation that is needed to study the relation between sentiment analysis and rhetorical relations.
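The sketch below shows, purely as an illustration, how span valences could be accumulated separately for nucleus and satellite with a lexicon of word valences. The toy lexicon entries are invented, and the code ignores intensification and negation, which the real SO-CAL tool handles.

```python
TOY_LEXICON = {"atsegina": 2, "hunkigarria": 2, "entretenigarria": 2, "galduko": -2}  # invented valences

def span_valence(tokens, lexicon=TOY_LEXICON):
    """Sum of lexicon valences over a tokenised EDU (no negation or intensification)."""
    return sum(lexicon.get(tok.lower(), 0) for tok in tokens)

def relation_valences(nucleus_tokens, satellite_tokens):
    return {
        "nucleus": span_valence(nucleus_tokens),
        "satellite": span_valence(satellite_tokens),
    }

# Rough shape of Example (5): a mildly positive nucleus, strongly positive satellite.
print(relation_valences(["erraz", "ikusten", "den", "filma"],
                        ["atsegina", "hunkigarria", "entretenigarria"]))
```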
Discussion
Inter-annotator agreement
Regarding inter-annotator agreement (Table 2), the agreement ranges from 37.73% to 43.59%. However, some domains do not show regular agreement. For example, in the case of reviews (literature domain), inter-annotator agreement lies between 38% and 48%, except in two texts where the agreement is lower (26% and 30%). In the same line, in the weather domain, some texts show higher agreement than the domain average.
If we evaluate this doubly annotated corpus by automatic means in a stricter scenario (if and only if the central subconstituent is the same), following Iruskieta et al. (2015a), we can observe and evaluate other aspects of rhetorical structure, such as:
• Constituent (C) describes all the EDUs that compose each discourse unit or span.
• Attachment point is the node in the RS-tree to which the relation is attached.
• N-S or nuclearity specifies whether the compared relations share the same direction (NS, SN or NN).
• Relation determines whether both annotators have assigned the same type of rhetorical relation (see footnote 4) to the attachment point of two or more EDUs in order to obtain the same effect.
Another aspect to take into consideration is that the manual and the automatic evaluation do not show the same results with regard to inter-annotator agreement on the type of relation. According to the manual evaluation, inter-annotator agreement is 39.81%, while the automatic evaluation shows an agreement of 31.72%. As we have noted before, this difference arises because the automatic comparison is made in a strict scenario and some relations are not compared, since the description of the central subconstituent of such relations is slightly different.
The inter-annotator agreement results given by the automatic tool offer complementary information about the annotation of the corpus. As Table 4 shows, the inter-annotator agreement is low for the type of relation, but the results are better for other aspects of rhetorical relations, such as constituent and nuclearity. The agreement on attachment point reaches 0.40, which is still low, but constituent and nuclearity achieve inter-annotator agreements of 0.52 and 0.66, respectively.
On the other hand, another interesting aspect is that there is no difference between domains as far as agreement on the different aspects related to writing style is concerned. This is surprising because the type of opinions and the way they are expressed are very different in each domain. In the weather domain, texts are short and clear and the language is direct. In contrast, in literature and movies, texts are longer and more diffuse, and they often use figurative expressions. Even so, the weather domain obtains the lowest results in three of the aspects mentioned in Table 4, while the type of relation obtains a better result compared to the other domains.
The interpretation of the inter-annotator agreement suggests that agreement is lower in the evaluation of some rhetorical relations, while other aspects related to rhetorical relations, such as constituent and nuclearity, obtain better agreement. We have also found that ELABORATION, EVALUATION and some multinuclear relations in particular show higher disagreement.
Relevant RR disagreement: confusion matrix
In order to understand these disagreements, we have also measured which types of rhetorical relations show the highest disagreement. With that aim, we calculated a confusion matrix and then identified the most controversial rhetorical relations. Results are shown in Table 5. According to Table 5, ELABORATION has been used by one annotator whereas the other has employed a more informative relation. In two cases, the first annotator (A1) annotated an EVALUATION relation while the other annotator (A2) annotated MOTIVATION and INTERPRETATION. In another case, A2 annotated ELABORATION whereas A1 tagged RESULT. In total, there are 19 instances in which ELABORATION has been annotated by one of the annotators. Moreover, there are 4 instances of disagreement between INTERPRETATION and JUSTIFICATION. Finally, there are also disagreements in multinuclear relations: while A2 annotated CONTRAST in 10 relations, A1 employed CONCESSION and EVALUATION, and there are 4 instances of disagreement between LIST and CONJUNCTION.
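A confusion matrix over relation labels can be tallied as below; the alignment of relations between annotators is assumed to be given, and the example pairs are invented rather than taken from Table 5.

```python
from collections import Counter

def relation_confusion(aligned_pairs):
    """aligned_pairs: iterable of (label_by_a1, label_by_a2) for relations
    that both annotators attached at the same point."""
    return Counter(aligned_pairs)

pairs = [("EVALUATION", "MOTIVATION"), ("RESULT", "ELABORATION"),
         ("CONCESSION", "CONTRAST"), ("LIST", "CONJUNCTION")]
for (a1, a2), n in relation_confusion(pairs).most_common():
    print(f"A1={a1:15s} A2={a2:15s} count={n}")
```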
Our interpretation of these results is that one annotator (A1) tends to annotate more general rhetorical relations (e.g. ELABORATION) while the other annotator (A2) annotates more precise relations. When it comes to multinuclear relations, it seems that annotator A1 has a tendency not to annotate them.
Checking the usefulness of the corpus for sentiment analysis
The second aim of this work has been to check the usefulness of the corpus for sentiment analysis. Firstly, the results have shown that in some cases the Basque version of SO-CAL does not assign a suitable semantic orientation to all the rhetorical relations, even when the semantic orientation of the EDUs in the relation is correct. This means that information about rhetorical relations would be needed in order to perform lexicon-based sentiment classification. In other words, it would be advisable to assign weights to the EDUs of rhetorical relations in order to model their effect on sentiment analysis. Each type of rhetorical relation has different characteristics and, consequently, the way to assign weights to EDUs in each relation must be different.
For that reason, we have made a preliminary study with the purpose of checking how different types of rhetorical relations present semantic orientation and how words with sentiment valence are distributed in rhetorical relations. The study of CONCESSION has shown that i) the probability of sentiment words appearing in nuclei and in satellites is similar, and that ii) the nucleus always prevails over the satellite and, consequently, the semantic orientation of the nucleus must determine the semantic orientation of the whole rhetorical relation. However, the semantic orientation of the satellite must also be taken into consideration in the semantic orientation of the whole relation, although it has to be less important than that of the nucleus.
The opposite situation happens in EVALUATION. Here, we can see that words with sentiment valence concentrate more in the satellite, while there are fewer words with sentiment valence in the nucleus. This means that the weight must be assigned to the satellite, because that part of the relation is more important from the point of view of sentiment analysis.
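One way to operationalize this idea is to give each relation type its own nucleus/satellite weights when combining valences, as sketched below. The weights shown are illustrative placeholders, not values proposed or tuned in this work.

```python
RELATION_WEIGHTS = {           # (nucleus_weight, satellite_weight); illustrative only
    "CONCESSION": (1.0, 0.3),  # nucleus prevails, satellite still contributes
    "EVALUATION": (0.3, 1.0),  # sentiment concentrates on the satellite
}

def weighted_orientation(relation_label, nucleus_valence, satellite_valence):
    wn, ws = RELATION_WEIGHTS.get(relation_label, (1.0, 1.0))
    return wn * nucleus_valence + ws * satellite_valence

# With these placeholder weights, a +1 nucleus / -3 satellite CONCESSION stays positive:
print(weighted_orientation("CONCESSION", 1.0, -3.0))  # 0.1
```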
This interpretation of the results suggests that the Basque Opinion Corpus annotated using RST can be useful for different sentiment analysis tasks; in fact, the preliminary analysis of rhetorical relations shows some characteristics and differences that are tied to the type of relation.
Conclusion and Future Work
In this work, we have annotated a part of the Basque Opinion Corpus using Rhetorical Structure Theory. Then, we have measured inter-annotator agreement. The manual evaluation of the results shows that the inter-annotator agreement on the type of rhetorical relation is 39.81%. On the other hand, using an automatic tool we have obtained more fine-grained results regarding aspects of relations and attachment, as well as nuclearity, with an inter-annotator agreement higher than 0.5. We have also identified that ELABORATION, EVALUATION and some multinuclear relations show the highest disagreement.
On the other hand, we have also checked the usefulness of this annotated corpus for sentiment analysis, and the first results show that it is useful for extracting subjectivity information from different rhetorical relations. In CONCESSION relations, the semantic orientation of the nucleus always prevails, but the valence of the satellite must also be taken into consideration. In EVALUATION relations, words with sentiment valence concentrate in the satellite.
In the future, we first plan to build extended annotation guidelines in order to annotate the corpus more reliably. This would be the step prior to annotating the entire corpus. We would also like to continue analyzing how subjective information is distributed across relations.
Figure 1: Part of a discourse tree of the LIB29 review annotated with the RST framework.
4- Semantic orientation extraction. Using the Basque version of the SO-CAL tool (Taboada et al., 2011), we have extracted the subjective information of rhetorical relations in the three domains of the corpus in order to check how the type of rhetorical relation affects their sentiment valence. SO-CAL needs a sentiment lexicon in which words have a sentiment valence between −5 and +5; the Basque version of the sentiment lexicon contains 1,237 entries. We have extracted the sentiment valence of 75 instances of CONCESSION and of EVALUATION relations. Of the 75 CONCESSION relations, 16 come from the weather domain, 34 from literature and 25 from movies. In the case of EVALUATION, 19 come from weather, 31 from literature and 25 from movies.
5- Results. On the one hand, we have calculated the percentage of rhetorical relations annotated with the same label by the two annotators. On the other hand, we have measured accumulated values of sentiment valences in nuclei and satellites in texts of the different domains.
Table 1: Number of texts annotated by the two annotators. The number after the plus sign indicates the quantity of texts with double annotation.
1- Limiting the annotation work. Annotating 240 texts needs a lot of work and time. For that reason, we decided to annotate part of the corpus initially and, if the results of the annotation were acceptable, to continue with the work. Taking into account the previously described data, both annotators have worked with 70 texts (29.16%) from three different domains. 21 texts from the movie domain have been annotated by one annotator and another 9 texts have been annotated by the two annotators. 10 texts from weather have been annotated once and another 5 texts of the same domain by two annotators. Finally, 25 texts of literature reviews have been annotated by one annotator and another 5 texts from the same domain by two. In total, 19 texts out of 70 (27.14%) have been annotated by two annotators.
Table 2: Inter-annotator agreement in different domains of the corpus, measured by hand.
Table 3: Accumulated values of sentiment valences in nuclei and satellites for each domain.
(4) [N[Arrate Mardarasek bere lehen liburua argitaratu du berriki, Pendrive,]0 S[eta apustu ausarta egin du bertan.]+3 ]+3 (SENTBER04)
[N[Arrate Mardaras has published her first book recently, Pendrive,]0 S[and she has made a daring bet there.]+3 ]+3
(5) [N[Bada, erraz ikusten den filma da "The danish girl".]+1 S[Atsegina da, hunkigarria, entretenigarria]+6 ]+7 (ZIN15)
[N[So, "The danish girl" is a film easy to watch.]+1 S[It is nice, touching, entertaining.]+6 ]+7
(6) [N[Talde lana izatetik pertsonaia bakarraren epika izatera pasako da erdialdetik aurrera]+0.5 S[eta horretan asko galduko du filmak.]−3.9 ]−3.4 (ZIN39)
[N[It is going to pass from being team work to the epic of one person]+0.5 S[and in that, the film will lose a lot.]−3.9 ]−3.4
Table 4: Inter-annotator agreement results given by the automatic tool.
Table 5: Disagreement in rhetorical relations.
Footnotes:
1. We decided to choose these rhetorical relations because we think they are more related to opinions and emotions.
2. In order to measure the presence of words with subjectivity, we have calculated the sum of all the sentiment valences without taking their sign into account.
3. In the weather domain, one of the rhetorical relations has a very long nucleus compared to its satellite, which may have influenced the results. In the other cases, the length of nuclei and satellites has been similar.
4. If the central subconstituent is not described with the same span label and compared position (NS or SN), there is no possibility of comparing relations.
Acknowledgments
This research is partially supported by a Basque Government scholarship (PRE 2018 2 0033), the Spanish Ministry of Economy and Competitiveness (MINECO/FEDER, UE) project PROSA-MED (TIN2016-77820-C3-1-R), University of the Basque Country project UPV/EHU IXA Group (GIU16/16) and project Procesamiento automático de textos basado en arquitecturas avanzadas (PES18/28).
Jon Alkorta, Koldo Gojenola, and Mikel Iruskieta. 2016. Creating and evaluating a polarity-balanced corpus for Basque sentiment analysis. In IWoDA16 Fourth International Workshop on Discourse Analysis, pages 58-62.
Nicholas Asher, Farah Benamara, and Yvette Yannick Mathieu. 2009. Appraisal of opinion expressions in discourse. Lingvisticae Investigationes, 32(2):279-292.
Baptiste Chardon, Farah Benamara, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2013. Measuring the effect of discourse structure on sentiment analysis. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 25-37. Springer.
Debopam Das and Maite Taboada. 2018. RST Signalling Corpus: A Corpus of Signals of Coherence Relations. Language Resources and Evaluation, 52(1):149-184. doi:10.1007/s10579-017-9383-x
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168-177. ACM.
Mikel Iruskieta, María Jesus Aranzabe, Arantza Diaz de Ilarraza, Itziar Gonzalez, Mikel Lersundi, and Oier Lopez de Lacalle. 2013. The RST Basque TreeBank: an online search interface to check rhetorical relations. In 4th Workshop RST and Discourse Studies, pages 40-49.
Mikel Iruskieta, Iria Da Cunha, and Maite Taboada. 2015a. A qualitative comparison method for rhetorical structures: identifying different discourse structures in multilingual corpora. Language Resources and Evaluation, 49(2):263-309.
Mikel Iruskieta, Arantza Díaz de Ilarraza, and Mikel Lersundi. 2014. The annotation of the central unit in rhetorical structure trees: A key step in annotating rhetorical relations. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 466-475.
Mikel Iruskieta, Arantza Diaz de Ilarraza, and Mikel Lersundi. 2015b. Establishing criteria for RST-based discourse segmentation and annotation for texts in Basque. Corpus Linguistics and Linguistic Theory, 11(2):303-334.
William C. Mann and Maite Taboada. 2005. RST web site.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.
Namita Mittal, Basant Agarwal, Garvit Chouhan, Nitin Bania, and Prateek Pareek. 2013. Sentiment Analysis of Hindi Reviews based on Negation and Discourse Relation. In Proceedings of the 11th Workshop on Asian Language Resources, pages 45-50.
Arantxa Otegi, Oier Imaz, Arantza Diaz de Ilarraza, Mikel Iruskieta, and Larraitz Uria. 2017. ANALHITZA: a tool to extract linguistic information from large corpora in Humanities research. Procesamiento del Lenguaje Natural, (58):77-84.
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, pages 79-86. Association for Computational Linguistics.
Eshrag Refaee and Verena Rieser. 2014. An Arabic Twitter Corpus for Subjectivity and Sentiment Analysis. In LREC, pages 2268-2273.
Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.
Maite Taboada and William C. Mann. 2006. Rhetorical Structure Theory: looking back and moving ahead. Discourse Studies, 8(3):423-459.
Maite Taboada and Manfred Stede. 2009. Introduction to RST (Rhetorical Structure Theory). ESSLLI 2016.
Peter D. Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 417-424. Association for Computational Linguistics.
Janyce M. Wiebe, Rebecca F. Bruce, and Thomas P. O'Hara. 1999. Development and use of a gold-standard data set for subjectivity classifications. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 246-253. |
||
903,330 | Neural Joint Learning for Classifying Wikipedia Articles into Fine-Grained Named Entity Types | This paper addresses the task of assigning finegrained NE type labels to Wikipedia articles. To address the data sparseness problem, which is salient particularly in fine-grained type classification, we introduce a multi-task learning framework where type classifiers are all jointly learned by a neural network with a hidden layer. In addition, we also propose to learn article vectors (i.e. entity embeddings) from Wikipedia's hypertext structure using a Skipgram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The dataset is available. The results of our experiments show that both ideas gained their own statistically significant improvement separately in classification accuracy. | [
44759394,
219309859,
9672901,
13462726,
16802534,
10145463,
9899754,
5800139,
2021966,
7099509,
7418935,
685945,
5959482,
15388570
] | Neural Joint Learning for Classifying Wikipedia Articles into Fine-Grained Named Entity Types
Masatoshi Suzuki, Koji Matsuda, Satoshi Sekine, Naoaki Okazaki, Kentaro Inui
Tohoku University / Language Craft Inc.
Neural Joint Learning for Classifying Wikipedia Articles into Fine-Grained Named Entity Types
This paper addresses the task of assigning fine-grained NE type labels to Wikipedia articles. To address the data sparseness problem, which is particularly salient in fine-grained type classification, we introduce a multi-task learning framework where type classifiers are all jointly learned by a neural network with a hidden layer. In addition, we also propose to learn article vectors (i.e. entity embeddings) from Wikipedia's hypertext structure using a Skip-gram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The dataset is available. The results of our experiments show that both ideas gained their own statistically significant improvement separately in classification accuracy.
Introduction
Recognizing named entities (NEs) in text is a crucial component of a broad range of NLP applications, including information extraction and question answering. Early work on named entity recognition (NER) defined a small number of coarse-grained entity types such as Person and Location and explored computational models for automating the task. One recent direction for extending this research field is to consider a larger set of fine-grained entity types (Lee et al., 2006; Sekine et al., 2002; Yosef et al., 2012; Corro et al., 2015). Recent studies report that fine-grained NER improves applications such as entity linking (Ling et al., 2015) and question answering (Mann, 2002). Given this background, this paper addresses the issue of creating a large gazetteer of NEs with fine-grained entity type information, motivated by previous observations that a large-coverage gazetteer is a valuable resource for NER (Kazama and Torisawa, 2008; Carlson et al., 2009). Specifically, we consider building such a gazetteer by automatically classifying the articles of Wikipedia, one of the largest collections of NEs, into a predefined set of fine-grained named entity types.
The task of classifying Wikipedia articles into a predefined set of semantic classes has already been addressed by many researchers (Chang et al., 2009; Dakka and Cucerzan, 2008; Higashinaka et al., 2012; Tardif et al., 2009; Toral and Muñoz, 2006; Watanabe et al., 2007). However, most of these studies assume a coarse-grained NE type set (3 to 15 types). Fine-grained classification is naturally expected to be more difficult than coarse-grained classification. One big challenge is how to alleviate the problem of data sparseness when applying supervised machine learning approaches. For example, articles such as "Japan", "Mt. Fuji", and "Tokyo Dome" may be classified as Country, Mountain, and Sports_Facility, respectively, in a fine-grained type set, whereas all of them fall into the same type Location in a common coarse-grained type set. Given the same number of labeled training instances, one may obtain far fewer instances for each fine-grained type. Another challenge is that fine-grained entity types may not be disjoint; for example, "Banana" can be classified as Flora and Food_Other simultaneously.
Figure 1: Automatic assignment of NE labels to Wikipedia articles based on multi-task learning and vector representation of articles
To address these issues, in this paper, we propose two methods (illustrated in Figure 1). First, we adopt the notion of multi-task learning (Caruana, 1997) and solve the whole task using a two-layered neural network. Our model learns training instances of all types jointly, which enables the model to learn combinations of input features commonly effective for multiple NE types with the hidden layer. By sharing effective feature combinations across different NE types, the data scarcity in minority NE types can be alleviated. Furthermore, this model can also naturally realize multi-label classification.
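A minimal sketch of this kind of architecture is shown below, using PyTorch: a single shared hidden layer feeds one sigmoid output per NE type, so all type classifiers are trained jointly and several labels can fire for one article. The dimensions and hyperparameters are illustrative, not those used in the paper.

```python
import torch
import torch.nn as nn

class JointTypeClassifier(nn.Module):
    def __init__(self, n_features, n_hidden, n_types):
        super().__init__()
        self.hidden = nn.Linear(n_features, n_hidden)  # shared across all NE types
        self.output = nn.Linear(n_hidden, n_types)     # one logit per NE type

    def forward(self, x):
        return self.output(torch.tanh(self.hidden(x)))

model = JointTypeClassifier(n_features=10000, n_hidden=200, n_types=200)
loss_fn = nn.BCEWithLogitsLoss()   # multi-label: each type is an independent binary decision
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """x: batch of article feature vectors, y: 0/1 matrix of shape (batch, n_types)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```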
Second, we extend the feature set by exploiting the hypertext structure of Wikipedia. The idea of using hyperlinks for Wikipedia article classification was first reported by Dakka and Cucerzan (2008). In their work, they represented the local context of anchor texts of hyperlinks in Wikipedia as bag-of-words features. However, since the resulting feature space was too sparse, they reported that the new context features had no effect on improving classification performance. Our proposal is to refine the context features using a distributed representation. To do this, we give each article a vector learned from all context words around hyperlinks (i.e. anchor texts) in Wikipedia using the Skip-gram model (Mikolov et al., 2013b). In the Skip-gram model, vector representations of words are learned so that two words that are similar in context have vectors with high similarity. Our intuition is that articles of the same NE type are likely to be mentioned in similar contexts; therefore, we adopt this model for learning article vectors.
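One plausible way to obtain such article vectors, sketched below, is to replace every internal link with a single token naming the target article and then train a Skip-gram model over the resulting token sequences, so that article tokens are embedded in the same space as their surrounding context words. The wiki-markup handling and the gensim call are our own illustration, not the authors' pipeline; the parameter names follow gensim 4.x.

```python
import re
from gensim.models import Word2Vec

LINK = re.compile(r"\[\[([^|\]]+)(?:\|[^\]]*)?\]\]")  # [[Target]] or [[Target|anchor text]]

def to_tokens(wiki_sentence):
    """Turn one sentence of wiki markup into tokens, mapping each link to ENTITY/<Target>."""
    replaced = LINK.sub(lambda m: " ENTITY/" + m.group(1).strip().replace(" ", "_") + " ",
                        wiki_sentence)
    return replaced.split()

sentences = [to_tokens(s) for s in [
    "[[富士山|Mt. Fuji]] is the highest [[mountain]] in [[Japan]] .",
]]
model = Word2Vec(sentences, vector_size=50, window=5, sg=1, min_count=1)
print(model.wv["ENTITY/富士山"][:5])  # the article vector used as an extra input feature
```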
We test our ideas on Japanese Wikipedia articles using the 200-NE type set proposed by Sekine et al. (2002). The results of our experiments show that the proposed methods achieve a 4.94-point improvement in entity-based F1 score. Our methods are particularly effective in labeling infrequent NE types.
Main contributions of this paper are as follows:
• We propose to apply a neural network-based multitask learning method to the fine-grained multi-label classification of Wikipedia articles.
• We also propose to encode the local context of hyperlinks as vectors using the Skip-gram model. We make the obtained vectors publicly available at http://www.cl.ecei.tohoku.ac.jp/~m-suzuki/jawiki_vector/.
• We created a new dataset by manually annotating over 22,000 Japanese Wikipedia articles with fine-grained NE types. The dataset is available upon request from the authors.
• We tested our models on our new dataset and empirically showed their positive impacts on the accuracy of classification.
Related Work
The task of assigning labels of NE types to Wikipedia articles has been addressed in the context of automatic construction of an NE gazetteer from Wikipedia articles. Toral and Muñoz (2006) proposed a method to classify Wikipedia articles into three NE types (Location, Organization, Person) using words included in the body of the article. They used WordNet as an external knowledge base for collecting hypernym information. They also applied weighted voting heuristics to determine the NE types of articles. Dakka and Cucerzan (2008) classified articles into four NE types (PER, ORG, LOC, MISC) defined in ACE (Doddington et al., 2004) using supervised machine learning algorithms based on SVMs and naive Bayes. They used the bag-of-words in the target article as well as context words from the anchor texts linking to the target article. Watanabe et al. (2007) focused on the HTML tree/link structure of Wikipedia articles. They formalized NE categorization as the assignment of NE labels to anchor texts in Wikipedia. They constructed graph-based representations of articles and estimated assignments of NE labels over the graphs using conditional random fields. In addition to these studies, there have been other efforts toward automatic categorization, such as (Tardif et al., 2009; Chang et al., 2009). However, most of these studies assume a relatively small set of coarse-grained NE types (up to only 15 types).
In recent years, several projects such as YAGO (Suchanek et al., 2007) and DBpedia (Auer et al., 2007) have been devoted to providing Wikipedia articles with ontology class labels by applying simple heuristics or hand-crafted rules. However, these approaches rely heavily on metadata (e.g., infobox templates and category labels) and suffer from insufficient coverage of the rules due to the lack of metadata, as reported by Aprosio et al. (2013).
Another line of research that may seem relevant to our work can be found in efforts to automatically annotate entity mentions in text with fine-grained NE type labels defined in an existing type hierarchy such as Freebase (Ling and Weld, 2012; Nakashole et al., 2013; Shimaoka et al., 2016). While these studies focus on the identification and classification of individual mentions, our work aims at the classification of Wikipedia articles. The two tasks are related and may well benefit from each other. However, they are not the same; techniques proposed for mention classification cannot be directly applied to our task, nor can they be directly compared with our methods.
The work closest to our study is that of Higashinaka et al. (2012), who proposed a supervised machine learning model for classifying Wikipedia articles into the 200 fine-grained NE types defined by Sekine et al. (2002). They conducted experiments to determine effective features extracted from article titles, body text, category labels, and infobox templates in Wikipedia. They train a logistic regression-based binary classifier for each type individually, and the overall model chooses the single NE type receiving the highest score from the classifiers, ignoring the possibility that a Wikipedia article may belong to multiple NE categories. In contrast, our model learns classifiers for different NE types jointly and also addresses the issue of multi-label classification.
Data Preparation
Sekine et al.'s Fine-grained NE Type Set
In this study, we use the Extended Named Entity Hierarchy (https://sites.google.com/site/extendednamedentityhierarchy/) proposed by Sekine et al. (2002) as our fine-grained NE type set. This ontology consists of 200 types, structured in a three-layered hierarchy. In this type hierarchy, a Wikipedia article may fall into multiple categories. Consider the following example:
Article title: Godzilla
Article body: Godzilla is a giant monster originating from a series of tokusatsu films of the same name from Japan. ... (excerpted from the corresponding English page of the same title)
It is reasonable to assume that the entity of this article belongs to both Character and Movie.
Manual Annotation
From Japanese Wikipedia as of Nov. 23, 2015, we first extracted 22,667 articles that are hyperlinked at least 100 times from other articles in Wikipedia. We then manually annotated each of the 22,667 articles with one or more NE type labels from Sekine et al.'s type set. (The annotation was done by one annotator, supervised by the curator of Sekine et al.'s type set; verification of the annotation accuracy is left for future work.)
Articles on abstract notions such as "Peace" and "Sleep" do not fall into any particular NE category.
We labeled such articles as CONCEPT. Wikipedia also includes articles or pages specific to Wikipedia, such as "List of characters in The Lion King" and "Wikipedia: Index". Those pages need to be discarded as well, so we decided to label them as IGNORED. Among our 22,667 articles, 2,660 articles are labeled as CONCEPT and 611 as IGNORED.
Overall, our task is to classify Wikipedia articles into the 202 categories (Sekine et al.'s 200 types and the two additional categories). Table 1 lists 10 most frequent labels that appear in the annotated articles and Table 2 shows examples of infrequent labels. As shown in these tables, the distribution of NE types in our data set is highly skewed. This makes the data sparseness problem salient particularly for the long tail of infrequent NE types. Table 3 shows the distribution of the number of labels assigned to one article in the annotated data.
Most of the articles have only one label, whereas 4.6% of the articles were assigned multiple labels. This figure may seem small. However, given that the error rate of our model is already below 20% (see Section 5.2), accounting for this 4.6% is essential for further improvements.
Proposed Methods
Joint Learning of Multi-Label Classifiers
As a baseline approach to multi-label classification, we construct a classifier for each NE type; each classifier independently decides whether an article should be tagged with the corresponding NE type. We model this setting using binary classifiers based on logistic regression (Figure 2a). We call this model Indep-Logistic.
While Indep-Logistic is a simple model, it may not work well for infrequent NE types because of the sparseness of the training data. This problem is crucial particularly in our task setting because the distribution of NE types in Wikipedia is highly skewed, as reported above. To address this problem, we propose a method based on multi-task learning (Caruana, 1997) and jointly train the classifiers of all NE types. Concretely, we construct a neural network with a hidden layer (Figure 2b) and train it so that each node in the output layer yields the probability of assigning the label of an NE type. Note that the activation function of the output layer is the sigmoid function, not the softmax function. This means that the model can output multiple NE type labels for each article. With this method, we aim to learn effective combinations of input features which can also be used for labeling infrequent NE types. We call this model Joint-NN.
Note that there are two changes from Indep-Logistic to Joint-NN: the incorporation of a hidden layer and the application of joint learning. To examine the effect of each individual change separately, we also consider an intermediate model, Indep-NN (Figure 2c). Similarly to Indep-Logistic, this model trains a classifier for each label, but it has a hidden layer.
Formally, the Indep-Logistic model estimates the conditional probability that a given Wikipedia article represented by an n-dimensional feature vector x ∈ R n belongs to NE type c:
$$p_{\mathrm{Indep\text{-}Logistic}}(y_c = 1 \mid \mathbf{x}) = \sigma(\mathbf{w}_c \cdot \mathbf{x} + b_c), \qquad (1)$$
where $\mathbf{w}_c \in \mathbb{R}^n$ and $b_c \in \mathbb{R}$ denote a weight vector and a bias term for NE type $c$, respectively, and $\sigma(x) = \frac{1}{1 + e^{-x}}$ is the sigmoid function.
The Joint-NN model maps an input feature vector to a hidden layer with a matrix W whose parameters are shared across all the types:
$$p_{\mathrm{Joint\text{-}NN}}(y_c = 1 \mid \mathbf{x}) = \sigma\big(\mathbf{w}_c \cdot \sigma(W\mathbf{x} + \mathbf{b}) + b_c\big), \qquad (2)$$
where $W \in \mathbb{R}^{n \times k}$ and $\mathbf{b} \in \mathbb{R}^k$ denote the weight matrix and bias vector of the $k$-dimensional hidden layer, and $\mathbf{w}_c \in \mathbb{R}^k$ and $b_c \in \mathbb{R}$ denote the weight vector and bias term of the output layer for each NE type $c$.
In contrast, the Indep-NN model maps an input feature vector to a hidden layer by using a matrix W c whose parameters are trained for each NE type independently:
$$p_{\mathrm{Indep\text{-}NN}}(y_c = 1 \mid \mathbf{x}) = \sigma\big(\mathbf{w}_c \cdot \sigma(W_c\mathbf{x} + \mathbf{b}_c) + b_c\big), \qquad (3)$$
where $W_c \in \mathbb{R}^{n \times k}$ and $\mathbf{b}_c \in \mathbb{R}^k$ denote the weight matrix and bias vector of the $k$-dimensional hidden layer for NE type $c$, and $\mathbf{w}_c \in \mathbb{R}^k$ and $b_c \in \mathbb{R}$ denote the weight vector and bias term of the output layer for NE type $c$.
The training data with $N$ articles and $C$ NE types is represented as $\{\mathbf{x}^{(i)}, \mathbf{y}^{(i)}\}_{i=1}^{N}$, where $\mathbf{x}$ is the feature vector of an article and $\mathbf{y} = \{y_c\}_{c=1}^{C}$ is an array of binary variables indicating whether the article belongs to NE type $c$. With this data set, we minimize the cross-entropy loss $L$ of each model by using the Adam gradient-based optimization algorithm (Kingma and Ba, 2014):
$$L = -\sum_{\mathbf{x}, c} \Big\{ y_c \log p(y_c = 1 \mid \mathbf{x}) + (1 - y_c) \log\big(1 - p(y_c = 1 \mid \mathbf{x})\big) \Big\} \qquad (4)$$
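To make Equations (1)-(4) concrete, the following is a minimal sketch of the Joint-NN forward pass and the multi-label cross-entropy loss in plain NumPy. This is not the authors' Chainer implementation; the dimensions and the toy article below are hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class JointNN:
    """Joint multi-label classifier: one shared hidden layer, C sigmoid outputs (Equation 2)."""
    def __init__(self, n, k, C, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(k, n))   # shared hidden weights
        self.b = np.zeros(k)                           # shared hidden bias
        self.Wc = rng.normal(scale=0.01, size=(C, k))  # per-type output weights w_c
        self.bc = np.zeros(C)                          # per-type output biases b_c

    def predict_proba(self, x):
        """p(y_c = 1 | x) for all C types at once."""
        h = sigmoid(self.W @ x + self.b)
        return sigmoid(self.Wc @ h + self.bc)

def cross_entropy(p, y):
    """Multi-label cross-entropy loss of Equation (4) for one article."""
    eps = 1e-12
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy usage: 10,000 baseline features + 200-dim article vector, 200 hidden units, 202 types.
model = JointNN(n=10200, k=200, C=202)
x = np.zeros(10200); x[[3, 57, 10001]] = 1.0   # a sparse example article (hypothetical)
y = np.zeros(202); y[5] = 1.0                   # gold labels; multiple 1s are allowed
loss = cross_entropy(model.predict_proba(x), y)
```

Indep-NN and Indep-Logistic follow the same pattern with per-type (or no) hidden layers; in practice the loss would be minimized with Adam over mini-batches, as described in Section 5.1.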
Input Features
We used two sets of features for building the models; one is a reproduction of the previous study (Higashinaka et al., 2012), and the other is our novel proposal.
Baseline Features
As a baseline feature set, we reproduced the features proposed by Higashinaka et al. (2012). Table 4 lists all of the basic features. (Note that although features such as "Last n character(s) in the title" are effective in labeling NE types of Japanese article titles, Higashinaka et al. (2012) report that "Last two characters in the title" is not so useful in combination with other features.) We were not able to reproduce features T8, T12, T14, and M22 described in the original paper because those features require the authors' internal resources to implement. For similar reasons, we used MeCab (Kudo et al., 2004) as a morphological analyzer instead of JTAG (Fuchi and Takagi, 1998), which was unavailable to us. For extracting text from the Wikipedia dump, we used Wikipedia Extractor (http://medialab.di.unipi.it/wiki/Wikipedia_Extractor). We denote this baseline feature set as F_b.
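As an illustration of how the F_b features of Table 4 might be assembled for one article, here is a hedged sketch. The tokenized title, POS tags, headings, and category lists are assumed to be pre-extracted (the paper uses MeCab for morphological analysis); all argument names and the POS tag prefix check are hypothetical.

```python
def baseline_features(title_tokens, title_pos, first_sentence_nouns,
                      headings, categories, upper_categories):
    """Collect F_b indicator features for one article (cf. Table 4); returns a set of feature strings."""
    feats = set()
    feats.update(f"title_uni={w}" for w in title_tokens)
    feats.update(f"title_bi={a}_{b}" for a, b in zip(title_tokens, title_tokens[1:]))
    feats.update(f"title_posbi={a}_{b}" for a, b in zip(title_pos, title_pos[1:]))
    title = "".join(title_tokens)
    if title:
        feats.update(f"title_charbi={title[i:i+2]}" for i in range(len(title) - 1))
        feats.add(f"title_last_char={title[-1]}")
        feats.add(f"title_last3={title[-3:]}")
    nouns_in_title = [w for w, p in zip(title_tokens, title_pos) if p.startswith("N")]
    if nouns_in_title:
        feats.add(f"title_last_noun={nouns_in_title[-1]}")
    if first_sentence_nouns:
        feats.add(f"first_sent_last_noun={first_sentence_nouns[-1]}")
    feats.update(f"heading={h}" for h in headings)
    feats.update(f"category={c}" for c in categories)
    feats.update(f"upper_category={c}" for c in upper_categories)
    return feats
```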
Article Vectors
To extend the aforementioned basic feature set, we hypothesize that the way each article (i.e. named entity) is mentioned in other articles can also be a useful clue for classifying that article. To test this hypothesis, we introduce distributed representations of Wikipedia articles.
Consider an article "Mount Everest". This article is hyperlinked from other articles as follows:
(1) ... After his ascent of Everest on 29 May 1953 ...
(2) ... reached the summit of Everest for the twenty-first time ...
(3) ... fatalities of the 2014 Mount Everest avalanche ...
In these examples, words near the anchor text, such as summit and avalanche, can be useful for estimating the semantic category of "Mount Everest" and assigning the label Mountain to the article "Mount Everest". While a number of approaches have been proposed for learning distributed representations of words, we simply adopt the Skip-gram model (Mikolov et al., 2013a) in this study.
Skip-gram trains a model so that it can predict context words from a centered word in a document. We apply this model to learn the embeddings of Wikipedia articles. To this end we need to address the following issues:
• An anchor text is not always identical to the article title to which the anchor refers. For this reason, we need to normalize an anchor text to the title of the article linked by the anchor.
• Article titles often consist of multiple words such as "White House". Therefore, we need a special treatment for tokenizing article titles.
• Not all mentions of other articles are marked as anchor texts in Wikipedia articles. Typically, when an article mentions an entity multiple times, a hyperlink is inserted only at the first mention of the entity (see https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Linking).
To address these problems, we designed the following preprocessing steps. First, we replace every anchor text with the title of the article referred to by the hyperlink of the anchor text. Next, we assume all occurrences of a phrase identical to an anchor text to have hyperlinks to the article linked by that anchor text. This is based on the one-sense-per-discourse assumption. In addition, all white spaces in article titles are replaced with "_" to prevent article titles from being separated into words. In this way, we jointly learn vectors of words and articles. We use word2vec (https://code.google.com/p/word2vec/) to obtain 200-dimensional vectors. We denote the 200-dimensional article vector as F_v.
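The link normalization and vector learning can be approximated with the sketch below, which uses gensim's Skip-gram implementation rather than the original word2vec tool. The link regular expression and the input file name are hypothetical, the one-sense-per-discourse propagation step and Japanese word segmentation are omitted for brevity, and the parameter name vector_size assumes gensim 4.x (older versions use size).

```python
import re
from gensim.models import Word2Vec

LINK = re.compile(r"\[\[([^|\]]+)(?:\|([^\]]+))?\]\]")  # [[Title]] or [[Title|anchor]]

def normalize_links(text):
    """Replace every anchor text with the linked article title, with spaces -> '_',
    so that a multi-word title stays a single token for the Skip-gram model."""
    return LINK.sub(lambda m: m.group(1).strip().replace(" ", "_"), text)

# One (already tokenized, whitespace-separated) sentence per line from the Wikipedia dump.
with open("jawiki.txt", encoding="utf-8") as f:
    sentences = [normalize_links(line).split() for line in f]

model = Word2Vec(sentences, vector_size=200, sg=1, window=5, min_count=5, workers=4)
f_v = model.wv["Mount_Everest"]   # 200-dimensional article vector used as the F_v feature
```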
Experiments
To demonstrate the effectiveness of our models, we conducted experiments for labeling NE types to Japanese Wikipedia articles.
Settings
Table 6: NE labels whose weight vectors in the output layer of (Joint-NN, F_b) have high similarity to that of the NE label in the header line of the table, accompanied by the improvement in label-based F1 score between (Indep-NN, F_b) and (Joint-NN, F_b). The number of articles assigned each NE label is given in brackets.
We tested the three classifier models (Indep-Logistic, Indep-NN, and Joint-NN) with two different feature sets (F_b and F_b + F_v). For each combination of model and feature set, we evaluated classification performance by measuring entity-based/type-based precision, recall, and F1 value (Godbole and Sarawagi, 2004; Tsoumakas et al., 2009) over 10-fold cross validation. Entity-based precision, recall, and F1 value are calculated as below:
$$\text{Precision} = \frac{1}{N} \sum_{i=1}^{N} \frac{|Y^{(i)} \cap Z^{(i)}|}{|Z^{(i)}|} \qquad (5)$$
$$\text{Recall} = \frac{1}{N} \sum_{i=1}^{N} \frac{|Y^{(i)} \cap Z^{(i)}|}{|Y^{(i)}|} \qquad (6)$$
$$F_1 = \frac{1}{N} \sum_{i=1}^{N} \frac{2\,|Y^{(i)} \cap Z^{(i)}|}{|Z^{(i)}| + |Y^{(i)}|} \qquad (7)$$
Here, $Y^{(i)}$ and $Z^{(i)}$ denote the set of correct labels and the set of predicted labels of article $i$, respectively, and $N$ denotes the number of documents. For type-based evaluation, we calculated the precision, recall and F1 value of each named entity type. For Indep-Logistic, we used scikit-learn (Pedregosa et al., 2011) to train the classifiers, with an L2 penalty for regularization. For Indep-NN and Joint-NN, we used Chainer (Tokui et al., 2015) to implement the neural networks. The dimension of the hidden layer was set to $k = 200$. When training the models, we used the 10,000 most frequent baseline features (F_b) and the 200-dimension article vectors (F_v) as input features of the classifiers. For optimization, we used Adam with a learning rate of 0.001 and a mini-batch size of 10, and iterated over the training data until the cross-entropy loss per document became smaller than $1.0 \times 10^{-4}$.
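A direct implementation of the entity-based metrics in Equations (5)-(7) could look like the following sketch; articles with an empty gold or predicted label set are simply skipped in the corresponding sum to avoid division by zero.

```python
def entity_based_prf(gold, pred):
    """Entity-based precision, recall and F1 (Equations 5-7).
    gold, pred: lists of label sets, one set per article."""
    assert len(gold) == len(pred)
    n = len(gold)
    precision = sum(len(y & z) / len(z) for y, z in zip(gold, pred) if z) / n
    recall = sum(len(y & z) / len(y) for y, z in zip(gold, pred) if y) / n
    f1 = sum(2 * len(y & z) / (len(y) + len(z)) for y, z in zip(gold, pred) if y or z) / n
    return precision, recall, f1

# Example: entity_based_prf([{"Person"}, {"Flora", "Food_Other"}], [{"Person"}, {"Flora"}])
```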
Indep-Logistic was implemented as a baseline model intended to reproduce the model proposed by Higashinaka et al. (2012). Note, however, that the results of our experiments cannot be compared directly with those reported in their paper because some of the features they used are not reproducible and the training/test data sets are not identical.
Results
The overall results are summarized in Table 5. We conducted binomial tests to determine the statistical significance of the results, confirming that the improvement between any pair of settings is statistically significant at p < 0.01, except that the improvement from (Indep-Logistic, F_b) to (Indep-NN, F_b) is significant at p < 0.05.
Comparing the results between the baseline method (Indep-Logistic, F_b) and our full model (Joint-NN, F_b + F_v), the entity-based F1 score improved by about 5 points (83.34% to 88.28%), which is about a 30% reduction of the error rate. Table 5 also indicates that our two proposed methods, multi-task learning and the article vector features, each yield a separate significant gain.
Figure 3: Improvement in F1 score per type between (Indep-Logistic, F_b) and (Joint-NN, F_b + F_v). Only types with more than 30 articles are shown (the number of articles per type is given in brackets).
To see the improvement in labeling performance per NE type, we compared label-based F1 score of each NE type between (Indep-Logistic, F b ) and (Joint-NN, F b + F v ). Figure 3 shows the improvement in F1 score for each NE type, where NE types are sorted by the number of articles in descending order. The figures indicate that our full model tends to obtain a larger gain particularly for infrequent NE types, which means our model addresses the data sparseness problem for infrequent NE types.
We made a deeper analysis of how our full model learns to label NE types. Our joint learning model is designed to learn combinations of features effective for multiple NE types. If two NE types share common combinations of features, they will have similar weight vectors at the output layer. We therefore inspected clusters of the learned weight vectors at the output layer of (Joint-NN, F_b) and discovered that many clusters comprise NE types that are semantically related to each other. Some example clusters are shown in Table 6. For example, the NE type Book has such neighbors as Broadcast_Program, Movie and Show. These NE types had similar weight vectors and gained considerable improvements together. This demonstrates that our joint learning model learned combinations of input features and utilized them effectively for multiple NE types, which led to the improvements observed particularly for infrequent NE types.
Conclusion
We have addressed the task of assigning fine-grained NE type labels to Wikipedia articles. To address the data sparseness problem, which is particularly salient in fine-grained type classification, we have introduced multi-task learning, in which all the type classifiers are jointly learned by a neural network with a hidden layer. Additionally, to extend the input feature set, we have proposed to learn article vectors (i.e. entity embeddings) from Wikipedia's hypertext structure using the Skip-gram model and incorporate them into the input feature set. We created a new dataset containing over 22,000 manually labeled instances and conducted experiments on that dataset to evaluate the practical impact of our ideas. The results show that each of the two ideas yields its own statistically significant improvement in classification accuracy. The labeled dataset we created is available upon request from the authors.
For future work, we aim to incorporate the hierarchical structure of NE types into classification. Also, each type in Sekine et al.'s NE type set has attributes; for example, Mountain has attributes such as Height and People who reached the summit. We aim to address the task of assigning correct attributes to each entity using the results of named entity classification.
Figure 2: The three models for labeling the types of articles.
Figure 1: Automatic assignment of NE labels to Wikipedia articles based on multi-task learning and vector representation of articles.
Table 1: 10 most frequent labels within the annotated dataset
Label name | Frequency | Example
Person | 4,041 | Isaac Asimov, Hillary Clinton, J. K. Rowling
Broadcast_Program | 2,395 | Sesame Street, Star Wars, Glee (TV series)
Company | 1,701 | Sony, IBM, Apple Inc., Rakuten
City | 975 | New York, Tokyo, Melbourne
Product_Other | 964 | Microsoft Windows, Apple II
Date | 916 | 1977, January 3
Book | 909 | Gutenberg Bible, The Lord of the Rings
Game | 625 | Lacrosse, Soccer, Table tennis
Pro_Sports_Organization | 484 | New York Yankees, Japan national baseball team
Position_Vocation | 462 | Physiotherapist, Prosecutor, Professor
Table 2: Infrequent labels within the annotated dataset
Frequency | Number of labels | Examples
0 | 5 | URL, Temperature, Paintings
1 | 8 | Ship, Star, Time
2-5 | 16 | Canal, Market, Bridge
6-10 | 23 | Earthquake, Treaty, School_Age
11-20 | 23 | Public_Institution, Religious_Festival, Nationality
Table 3: Distribution of the number of labels per article
Number of labels assigned | Number of articles
1 | 21,624
2 | 850
3 | 187
4 | 4
6 | 2
Table 4: List of features used for learning
Word unigram of the title
Word bigram of the title
POS bigram of the title
Character bigram of the title
Last noun in the title
Last single character in the title
Last three characters in the title
Last character type in the title
Last noun in the first sentence
Headings of the article
Direct categories defined in Wikipedia
Upper categories defined in Wikipedia
Table 5: Entity-based precision, recall and F1 of the models with different settings.
Model | F_b Precision | F_b Recall | F_b F1 | F_b+F_v Precision | F_b+F_v Recall | F_b+F_v F1
Indep-Logistic | 83.59 | 83.57 | 83.34 | 85.79 | 86.76 | 85.84
Indep-NN | 84.00 | 84.68 | 83.94 | 86.90 | 88.05 | 87.00
Joint-NN (our model) | 86.32 | 86.54 | 86.14 | 88.48 | 88.63 | 88.28
Acknowledgments
This work was partially supported by Research and Development on Real World Big Data Integration and Analysis, MEXT, and JSPS KAKENHI Grants 15H05318 and 15H01702.
Alessio Palmero Aprosio, Claudio Giuliano, and Alberto Lavelli. 2013. Extending the coverage of DBpedia properties using distant supervision over Wikipedia. In Proceedings of ICON 2013.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In Proceedings of ISWC'07/ASWC'07.
Andrew Carlson, Scott Gaffney, and Flavian Vasile. 2009. Learning a named entity tagger from gazetteers with the partial perceptron. In Proceedings of the 2009 AAAI Spring Symposium on Learning by Reading and Learning to Read.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28:41-75.
Joseph Chang, Richard Tzong-Han Tsai, and Jason S. Chang. 2009. Wikisense: Supersense tagging of Wikipedia named entities based on WordNet. In Proceedings of PACLIC 23.
Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: Context-aware fine-grained named entity typing. In Proceedings of EMNLP, pages 868-878.
Wisam Dakka and Silviu Cucerzan. 2008. Augmenting Wikipedia with named entity tags. In Proceedings of the 3rd IJCNLP.
George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program: tasks, data, and evaluation. In Proceedings of LREC.
Takeshi Fuchi and Shinichiro Takagi. 1998. Japanese morphological analyzer using word co-occurrence: JTAG. In Proceedings of ACL '98 and COLING '98.
Shantanu Godbole and Sunita Sarawagi. 2004. Discriminative methods for multi-labeled classification. In Advances in Knowledge Discovery and Data Mining: 8th Pacific-Asia Conference (PAKDD 2004), pages 22-30. Springer Berlin Heidelberg.
Ryuichiro Higashinaka, Kugatsu Sadamitsu, Kuniko Saito, Toshiro Makino, and Yoshihiro Matsuo. 2012. Creating an extended named entity dictionary from Wikipedia. In Proceedings of COLING.
Jun'ichi Kazama and Kentaro Torisawa. 2008. Inducing gazetteers for named entity recognition by large-scale clustering of dependency relations. In Proceedings of ACL-08: HLT, pages 407-415.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.
Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of EMNLP, pages 230-237.
Changki Lee, Yi-Gyu Hwang, Hyo-Jung Oh, Soojong Lim, Jeong Heo, Chung-Hee Lee, Hyeon-Jin Kim, Ji-Hyun Wang, and Myung-Gil Jang. 2006. Fine-grained named entity recognition using conditional random fields for question answering. In Proceedings of AIRS 2006, pages 581-587.
Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the 26th AAAI Conference on Artificial Intelligence.
Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. TACL, pages 315-328.
Gideon S. Mann. 2002. Fine-grained proper noun ontologies for question answering. In Proceedings of SEMANET '02.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of the ICLR Workshop.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119.
Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013. Fine-grained semantic typing of emerging entities. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1488-1497.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Proceedings of LREC.
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction (AKBC) 2016.
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge. In Proceedings of WWW '07, pages 697-706.
Sam Tardif, James R. Curran, and Tara Murphy. 2009. Improved text categorisation for Wikipedia named entities. In Proceedings of the ALTA Workshop, pages 104-108.
Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: A next-generation open source framework for deep learning. In Proceedings of the Workshop on Machine Learning Systems (LearningSys) at NIPS 2015.
Antonio Toral and Rafael Muñoz. 2006. A proposal to automatically build and maintain gazetteers for named entity recognition by using Wikipedia. In Proceedings of the Workshop on New Text, EACL.
Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2009. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook, pages 667-685. Springer.
Yotaro Watanabe, Masayuki Asahara, and Yuji Matsumoto. 2007. A graph-based approach to named entity categorization in Wikipedia using conditional random fields. In Proceedings of EMNLP-CoNLL.
Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: Hierarchical type classification for entity names. In Proceedings of COLING 2012: Posters, pages 1361-1370.
5,235,435 | Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid | Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model. | [
13573624,
7105713,
631855
] | Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid
© 2010 Association for Computational Linguistics, October 2010.
Wayne Xin Zhao
School of Electronics Engineering and Computer Science
Peking University
China
Jing Jiang [email protected]
School of Information Systems
Singapore Management University
Singapore
Hongfei Yan
School of Electronics Engineering and Computer Science
Peking University
China
Xiaoming Li
School of Electronics Engineering and Computer Science
Peking University
China
Jointly Modeling Aspects and Opinions with a MaxEnt-LDA Hybrid
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
MIT, Massachusetts, USA, 9-11 October 2010. Association for Computational Linguistics.
Discovering and summarizing opinions from online reviews is an important and challenging task. A commonly-adopted framework generates structured review summaries with aspects and opinions. Recently topic models have been used to identify meaningful review aspects, but existing topic models do not identify aspect-specific opinion words. In this paper, we propose a MaxEnt-LDA hybrid model to jointly discover both aspects and aspect-specific opinion words. We show that with a relatively small amount of training data, our model can effectively identify aspect and opinion words simultaneously. We also demonstrate the domain adaptability of our model.
Introduction
With the dramatic growth of opinionated usergenerated content, consumers often turn to online product reviews to seek advice while companies see reviews as a valuable source of consumer feedback. How to automatically understand, extract and summarize the opinions expressed in online reviews has therefore become an important research topic and gained much attention in recent years (Pang and Lee, 2008). A wide spectrum of tasks have been studied under review mining, ranging from coarse-grained document-level polarity classification (Pang et al., 2002) to fine-grained extraction of opinion expressions and their targets (Wu et al., 2009). In particular, a general framework of summarizing reviews of a certain product is to first identify different aspects (a.k.a. features) of the given product and then extract specific opinion expressions for each aspect. For example, aspects of a restaurant may include food, staff, ambience and price, and opinion expressions for staff may include friendly, rude, etc. Because of the practicality of this structured summary format, it has been adopted in several previous studies (Hu and Liu, 2004;Popescu and Etzioni, 2005;Brody and Elhadad, 2010) as well as some commercial systems, e.g. the "scorecard" feature at Bing shopping 1 .
Different approaches have been proposed to identify aspect words and phrases from reviews. Previous methods using frequent itemset mining (Hu and Liu, 2004) or supervised learning (Wu et al., 2009) have the limitation that they do not group semantically related aspect expressions together. Supervised learning also suffers from its heavy dependence on training data. In contrast, the unsupervised, knowledge-lean topic modeling approach has been shown to be effective in automatically identifying aspects and their representative words (Titov and McDonald, 2008; Brody and Elhadad, 2010). For example, words such as waiter, waitress, staff and service are grouped into one aspect.
We follow this promising direction and extend existing topic models to jointly identify both aspect and opinion words, especially aspect-specific opinion words. Current topic models for opinion mining, which we will review in detail in Section 2, still lack this ability. But separating aspect and opinion words can be very useful. Aspect-specific opinion words can be used to construct a domain-dependent sentiment lexicon and applied to tasks such as sentiment classification. They can also provide more informative descriptions of the product or service being reviewed. For example, using more specific opinion words such as cozy and romantic to describe the ambience aspect in a review summary is more meaningful than using generic words such as nice and great. To the best of our knowledge, Brody and Elhadad (2010) are the first to study aspect-specific opinion words, but their opinion word detection is performed outside of topic modeling, and they only consider adjectives as possible opinion words.
In this paper, we propose a new topic modeling approach that can automatically separate aspect and opinion words. A novelty of this model is the integration of a discriminative maximum entropy (Max-Ent) component with the standard generative component. The MaxEnt component allows us to leverage arbitrary features such as POS tags to help separate aspect and opinion words. Because the supervision relies mostly on non-lexical features, although our model is no longer fully unsupervised, the number of training sentences needed is relatively small. Moreover, training data can also come from a different domain and yet still remain effective, making our model highly domain adaptive. Empirical evaluation on large review data sets shows that our model can effectively identify both aspects and aspect-specific opinion words with a small amount of training data.
Related Work
Pioneered by the work of Hu and Liu (2004), review summarization has been an important research topic. There are usually two major tasks involved, namely, aspect or feature identification and opinion extraction. Hu and Liu (2004) applied frequent itemset mining to identify product features without supervision, and considered adjectives collocated with feature words as opinion words. Wu et al. (2009) used supervised learning that requires hand-labeled training sentences to identify both aspects and opinions. A common limitation of these methods is that they do not group semantically related aspect expressions together. Furthermore, supervised learning usually requires a large amount of training data in order to perform well and is not easily domain adaptable.
Topic modeling provides an unsupervised and knowledge-lean approach to opinion mining. Titov and McDonald (2008) show that global topic models such as LDA (Blei et al., 2003) may not be suitable for detecting rateable aspects. They propose multigrain topic models for discovering local rateable aspects. However, they do not explicitly separate aspect and opinion words. Lin and He (2009) propose a joint topic-sentiment model, but topic words and sentiment words are still not explicitly separated. Mei et al. (2007) propose to separate topic and sentiment words using a positive sentiment model and a negative sentiment model, but both models capture general opinion words only. In contrast, we model aspect-specific opinion words as well as general opinion words.
Recently Brody and Elhadad (2010) propose to detect aspect-specific opinion words in an unsupervised manner. They take a two-step approach by first detecting aspect words using topic models and then identifying aspect-specific opinion words using polarity propagation. They only consider adjectives as opinion words, which may potentially miss opinion words with other POS tags. We try to jointly capture both aspect and opinion words within topic models, and we allow non-adjective opinion words.
Another line of related work is about how to incorporate useful features into topic models (Zhu and Xing, 2010;Mimno and McCallum, 2008). Our MaxEnt-LDA hybrid bears similarity to these recent models but ours is designed for opinion mining.
Model Description
Our model is an extension of LDA (Blei et al., 2003) but captures both aspect words and opinion words. To model the aspect words, we use a modified version of the multi-grain topic models from (Titov and McDonald, 2008). Our model is simpler and yet still produces meaningful aspects. Specifically, we assume that there are T aspects in a given collection of reviews from the same domain, and each review document contains a mixture of aspects. We further assume that each sentence (instead of each word as in standard LDA) is assigned to a single aspect, which is often true based on our observation.
To understand how we model the opinion words, let us first look at two example review sentences from the restaurant domain:
The food was tasty. The waiter was quite friendly. We can see that there is a strong association of tasty with food and similarly of friendly with waiter. While both tasty and friendly are specific to the restaurant domain, they are each associated with only a single aspect, namely food and staff, respectively. Besides these aspect-specific opinion words, we also see general opinion words such as great in the sentence "The food was great!" These general opinion words are shared across aspects, as opposed to aspect-specific opinion words which are used most commonly with their corresponding aspects. We therefore introduce a general opinion model and T aspect-specific opinion models to capture these different opinion words.
Generative Process
We now describe the generative process of the model. First, we draw several multinomial word distributions from a symmetric Dirichlet prior with parameter β: a background model φ B , a general aspect model φ A,g , a general opinion model φ O,g , T aspect models {φ A,t } T t=1 and T aspect-specific opinion models {φ O,t } T t=1 . All these are multinomial distributions over the vocabulary, which we assume has V words. Then for each review document d, we draw a topic distribution θ d ∼Dir(α) as in standard LDA. For each sentence s in document d, we draw an aspect assignment z d,s ∼Multi(θ d ).
Now for each word in sentence s of document d, we have several choices: the word may describe the specific aspect (e.g. waiter for the staff aspect), or a general aspect (e.g. restaurant), or an opinion either specific to the aspect (e.g. friendly) or generic (e.g. great), or it may be a commonly used background word (e.g. know). To distinguish between these choices, we introduce two indicator variables, y_{d,s,n} and u_{d,s,n}, for the nth word w_{d,s,n}. We draw y_{d,s,n} from a multinomial distribution over {0, 1, 2}, parameterized by π_{d,s,n}. y_{d,s,n} determines whether w_{d,s,n} is a background word, an aspect word or an opinion word. We will discuss how to set π_{d,s,n} in Section 3.2. We draw u_{d,s,n} from a Bernoulli distribution over {0, 1} parameterized by p, which in turn is drawn from a symmetric Beta(γ). u_{d,s,n} determines whether w_{d,s,n} is general or aspect-specific. We then draw w_{d,s,n} as follows:
$$w_{d,s,n} \sim \begin{cases} \text{Multi}(\phi^{B}) & \text{if } y_{d,s,n} = 0 \\ \text{Multi}(\phi^{A,z_{d,s}}) & \text{if } y_{d,s,n} = 1,\ u_{d,s,n} = 0 \\ \text{Multi}(\phi^{A,g}) & \text{if } y_{d,s,n} = 1,\ u_{d,s,n} = 1 \\ \text{Multi}(\phi^{O,z_{d,s}}) & \text{if } y_{d,s,n} = 2,\ u_{d,s,n} = 0 \\ \text{Multi}(\phi^{O,g}) & \text{if } y_{d,s,n} = 2,\ u_{d,s,n} = 1 \end{cases}$$
Figure 1 shows our model using the plate notation.
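For intuition, the word-level choices of this generative story can be sketched as below. Here π is treated as a given probability vector rather than the MaxEnt output of Section 3.2, and the word distributions are passed in as plain arrays; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def draw_word(rng, pi, p_general, phi_B, phi_A, phi_O, phi_Ag, phi_Og, z):
    """Draw one word index for a sentence assigned to aspect z.
    pi: length-3 probabilities over {background, aspect, opinion} for this word.
    p_general: Bernoulli parameter p for the 'general vs. aspect-specific' switch u.
    phi_B / phi_A[z] / phi_O[z] / phi_Ag / phi_Og: multinomial word distributions."""
    y = rng.choice(3, p=pi)
    if y == 0:
        return rng.choice(len(phi_B), p=phi_B)          # background word
    u = rng.random() < p_general
    if y == 1:
        dist = phi_Ag if u else phi_A[z]                 # aspect word (general or specific)
    else:
        dist = phi_Og if u else phi_O[z]                 # opinion word (general or specific)
    return rng.choice(len(dist), p=dist)

# Usage: rng = np.random.default_rng(0); then call draw_word(...) once per word position.
```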
Setting π with a Maximum Entropy Model
A simple way to set π d,s,n is to draw it from a symmetric Dirichlet prior. However, as suggested in (Mei et al., 2007;Lin and He, 2009), fully unsupervised topic models are unable to identify opinion words well. An important observation we make is that aspect words and opinion words usually play different syntactic roles in a sentence. Aspect words tend to be nouns while opinion words tend to be adjectives. Their contexts in sentences can also be different. But we do not want to use strict rules to separate aspect and opinion words because there are also exceptions. E.g. verbs such as recommend can also be opinion words.
In order to use information such as POS tags to help discriminate between aspect and opinion words, we propose a novel idea as follows: We set π d,s,n using a maximum entropy (MaxEnt) model applied to a feature vector x d,s,n associated with w d,s,n . x d,s,n can encode any arbitrary features we think may be discriminative, e.g. previous, current and next POS tags. Formally, we have
$$p(y_{d,s,n} = l \mid \mathbf{x}_{d,s,n}) = \pi_{d,s,n}^{l} = \frac{\exp(\boldsymbol{\lambda}_{l} \cdot \mathbf{x}_{d,s,n})}{\sum_{l'=0}^{2} \exp(\boldsymbol{\lambda}_{l'} \cdot \mathbf{x}_{d,s,n})},$$
where $\{\boldsymbol{\lambda}_{l}\}_{l=0}^{2}$ denote the MaxEnt model weights, which can be learned from a set of training sentences with labeled background, aspect and opinion words. This MaxEnt-LDA hybrid model is partially inspired by (Mimno and McCallum, 2008).
As for the features included in x, currently we use two types of simple features: (1) lexical features, which include the previous, the current and the next words $\{w_{i-1}, w_{i}, w_{i+1}\}$, and (2) POS tag features, which include the previous, the current and the next POS tags $\{POS_{i-1}, POS_{i}, POS_{i+1}\}$.
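A hedged sketch of this component: extract the lexical and POS context features for each token and fit a logistic regression (MaxEnt) classifier over the three labels. scikit-learn is used here purely for illustration; the paper does not specify which MaxEnt implementation was used, and the training-data format below is hypothetical.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def token_features(words, tags, i):
    """Previous/current/next word and POS tag for the i-th token of a sentence."""
    pad_w = ["<s>"] + words + ["</s>"]
    pad_t = ["<s>"] + tags + ["</s>"]
    return {f"w-1={pad_w[i]}": 1, f"w0={pad_w[i+1]}": 1, f"w+1={pad_w[i+2]}": 1,
            f"t-1={pad_t[i]}": 1, f"t0={pad_t[i+1]}": 1, f"t+1={pad_t[i+2]}": 1}

def train_maxent(train_sents):
    """train_sents: list of (words, pos_tags, labels) with labels in
    {0: background, 1: aspect, 2: opinion}."""
    X, y = [], []
    for words, tags, labels in train_sents:
        for i, lab in enumerate(labels):
            X.append(token_features(words, tags, i))
            y.append(lab)
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000)  # multi-class logistic regression (softmax)
    clf.fit(vec.fit_transform(X), y)
    return vec, clf  # clf.predict_proba(vec.transform([...])) then yields pi_{d,s,n}
```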
Inference
We use Gibbs sampling to perform model inference. Due to the space limit, we leave out the derivation details and only show the sampling formulas. Note that the MaxEnt component is trained first independently of the Gibbs sampling procedure, that is, in Gibbs sampling, we assume that the λ parameters are fixed.
We use w to denote all the words we observe in the collection, x to denote all the feature vectors for these words, and y, z and u to denote all the hidden variables. First, given the assignment of all other hidden variables, to sample a value for z d,s , we use the following formula:
$$P(z_{d,s} = t \mid \mathbf{z}_{\neg(d,s)}, \mathbf{y}, \mathbf{u}, \mathbf{w}, \mathbf{x}) \propto \frac{c_{d}(t) + \alpha}{c_{d}(\cdot) + T\alpha} \times \frac{\Gamma\big(c^{A,t}(\cdot) + V\beta\big)}{\Gamma\big(c^{A,t}(\cdot) + n^{A,t}(\cdot) + V\beta\big)} \cdot \prod_{v=1}^{V} \frac{\Gamma\big(c^{A,t}(v) + n^{A,t}(v) + \beta\big)}{\Gamma\big(c^{A,t}(v) + \beta\big)} \times \frac{\Gamma\big(c^{O,t}(\cdot) + V\beta\big)}{\Gamma\big(c^{O,t}(\cdot) + n^{O,t}(\cdot) + V\beta\big)} \cdot \prod_{v=1}^{V} \frac{\Gamma\big(c^{O,t}(v) + n^{O,t}(v) + \beta\big)}{\Gamma\big(c^{O,t}(v) + \beta\big)}.$$
Here c d (t) is the number of sentences assigned to aspect t in document d, and c d (·) is the number of sentences in document d. c A,t (v) is the number of times word v is assigned as an aspect word to aspect t, and c O,t (v) is the number of times word v is assigned as an opinion word to aspect t. c A,t (·) is the total number of times any word is assigned as an aspect word to aspect t, and c O,t (·) is the total number of times any word is assigned as an opinion word to aspect t. All these counts represented by a c variable exclude sentence s of document d. n A,t (v) is the number of times word v is assigned as an aspect word to aspect t in sentence s of document d, and similarly, n O,t (v) is the number of times word v is assigned as an opinion word to aspect t in sentence s of document d.
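In log space, the unnormalized sampling weight for z_{d,s} = t can be computed from these counts roughly as follows. This is a sketch that assumes the count arrays are maintained incrementally by the sampler; scipy's gammaln supplies the log-Gamma terms.

```python
import numpy as np
from scipy.special import gammaln

def log_weight_z(t, d, alpha, beta, V, T, c_doc, c_A, c_O, n_A, n_O):
    """Unnormalized log P(z_{d,s} = t | ...) for one sentence.
    c_doc[d, t]: sentences of doc d assigned to aspect t (excluding this sentence);
    c_A[t, v], c_O[t, v]: corpus-level aspect/opinion counts of word v under aspect t;
    n_A[v], n_O[v]: counts of word v used as an aspect/opinion word inside this sentence."""
    lw = np.log(c_doc[d, t] + alpha) - np.log(c_doc[d].sum() + T * alpha)
    for c, n in ((c_A[t], n_A), (c_O[t], n_O)):
        lw += gammaln(c.sum() + V * beta) - gammaln(c.sum() + n.sum() + V * beta)
        vs = np.nonzero(n)[0]  # words absent from the sentence contribute a zero term
        lw += np.sum(gammaln(c[vs] + n[vs] + beta) - gammaln(c[vs] + beta))
    return lw

# Sampling: compute log_weight_z for every t, subtract the max, exponentiate, normalize, draw.
```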
Then, to jointly sample values for y d,s,n and u d,s,n , we have
$$P(y_{d,s,n} = 0 \mid \mathbf{z}, \mathbf{y}_{\neg(d,s,n)}, \mathbf{u}_{\neg(d,s,n)}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\boldsymbol{\lambda}_{0} \cdot \mathbf{x}_{d,s,n})}{\sum_{l'} \exp(\boldsymbol{\lambda}_{l'} \cdot \mathbf{x}_{d,s,n})} \cdot \frac{c^{B}(w_{d,s,n}) + \beta}{c^{B}(\cdot) + V\beta},$$
$$P(y_{d,s,n} = l, u_{d,s,n} = b \mid \mathbf{z}, \mathbf{y}_{\neg(d,s,n)}, \mathbf{u}_{\neg(d,s,n)}, \mathbf{w}, \mathbf{x}) \propto \frac{\exp(\boldsymbol{\lambda}_{l} \cdot \mathbf{x}_{d,s,n})}{\sum_{l'} \exp(\boldsymbol{\lambda}_{l'} \cdot \mathbf{x}_{d,s,n})} \cdot g(w_{d,s,n}, z_{d,s}, l, b),$$
where the function $g(v, t, l, b)$ ($1 \le v \le V$, $1 \le t \le T$, $l \in \{1, 2\}$, $b \in \{0, 1\}$) is defined as follows:
$$g(v, t, l, b) = \begin{cases} \frac{c^{A,t}(v) + \beta}{c^{A,t}(\cdot) + V\beta} \cdot \frac{c^{(0)} + \gamma}{c^{(\cdot)} + 2\gamma} & \text{if } l = 1, b = 0 \\ \frac{c^{O,t}(v) + \beta}{c^{O,t}(\cdot) + V\beta} \cdot \frac{c^{(0)} + \gamma}{c^{(\cdot)} + 2\gamma} & \text{if } l = 2, b = 0 \\ \frac{c^{A,g}(v) + \beta}{c^{A,g}(\cdot) + V\beta} \cdot \frac{c^{(1)} + \gamma}{c^{(\cdot)} + 2\gamma} & \text{if } l = 1, b = 1 \\ \frac{c^{O,g}(v) + \beta}{c^{O,g}(\cdot) + V\beta} \cdot \frac{c^{(1)} + \gamma}{c^{(\cdot)} + 2\gamma} & \text{if } l = 2, b = 1 \end{cases}$$
Here the various $c$ variables denote counts that exclude the nth word in sentence s of document d. Due to the space limit, we do not give a full explanation here.
Experiment Setup
To evaluate our MaxEnt-LDA hybrid model for jointly modeling aspect and opinion words, we used a restaurant review data set previously used in (Ganu et al., 2009;Brody and Elhadad, 2010) and a hotel review data set previously used in (Baccianella et al., 2009). We removed stop words and used the Stanford POS Tagger 2 to tag the two data sets. Only reviews that have no more than 50 sentences were used. We also kept another version of the data which includes the stop words for the purpose of extracting the contextual features included in x. Some details of the data sets are given in Table 1 50/T , β = 0.1 and γ = 0.5. We also experimented with other settings of these priors and did not notice any major difference. For MaxEnt training, we tried three labeled data sets: one that was taken from the restaurant data set and manually annotated by us 3 , and two from the annotated data set used in (Wu et al., 2009). Note that the latter two were used for testing domain adaptation in Section 6.3. Some details of the training sets are shown in Table 2.
In our preliminary experiments, we also tried two variations of our MaxEnt-LDA hybrid model. (1) The first is a fully unsupervised model where we used a uniform Dirichlet prior for π. We found that this unsupervised model could not separate aspect and opinion words well. (2) The second is a bootstrapping version of the MaxEnt-LDA model where we used the predicted values of y as pseudo labels and re-trained the MaxEnt model iteratively. We found that this bootstrapping procedure did not boost the overall performance much and even hurt the performance a little in some cases. Due to the space limit we do not report these experiments here.
Evaluation
In this section we report the evaluation of our model. We refer to our MaxEnt-LDA hybrid model as ME-LDA. We also implemented a local version of the standard LDA method where each sentence is treated as a document. This is the model used in (Brody and Elhadad, 2010) to identify aspects, and we refer to this model as LocLDA.
Qualitative Evaluation
For each of the two data sets, we show four sample aspects identified by ME-LDA in Table 3 and Table 5. Because the hotel domain is somewhat similar to the restaurant domain, we used the labeled training data from the restaurant domain for the hotel data set as well. From the tables we can see that the aspect words are generally coherent and meaningful, and the opinion words correspond to the aspects very well. For comparison, we also applied LocLDA to the restaurant data set and present the aspects in Table 4. We can see that ME-LDA and LocLDA give similar aspect words. The major difference between these two models is that ME-LDA can separate aspect words and opinion words, which can be very useful. ME-LDA is also able to separate general opinion words from aspect-specific ones, giving more informative opinion expressions for each aspect.
Evaluation of Aspect Identification
Table 3: Sample aspects and opinion words of the restaurant domain using ME-LDA.
We also quantitatively evaluated the quality of the automatically identified aspects. Ganu et al. (2009) provide a set of annotated sentences from the restaurant data set, in which each sentence has been assigned one or more labels from a gold standard label set S = {Staff, Food, Ambience, Price, Anecdote, Misc}. To evaluate the quality of our aspect identification, we chose from the gold standard labels three major aspects, namely Staff, Food and Ambience. We did not choose the other aspects because (1) Price is often mixed with other aspects such as Food, and (2) Anecdote and Misc do not show clear patterns in either word usage or writing styles, making it hard even for humans to identify them. Brody and Elhadad (2010) also only used these three aspects for quantitative evaluation. To avoid ambiguity, we used only the single-labeled sentences for evaluation. About 83% of the labeled sentences have a single label, which confirms our observation that a sentence usually belongs to a single aspect. We first ran ME-LDA and LocLDA each to get an inferred aspect set T. Following (Brody and Elhadad, 2010), we set the number of aspects to 14 in both models. We then manually mapped each inferred aspect to one of the six gold standard aspects, i.e., we created a mapping function f(t): T → S. For sentence s of document d, we first assign it to an inferred aspect as follows:
$$t^{*} = \arg\max_{t \in T} \sum_{n=1}^{N_{d,s}} \log P(w_{d,s,n} \mid t).$$
We then assign the gold standard aspect f(t*) to this sentence. We then calculated the F1 score for each of the three aspects: Staff, Food and Ambience. The results are shown in Table 6. Generally, ME-LDA gives competitive results compared with LocLDA. For Food and Ambience, ME-LDA outperformed LocLDA, while for Staff, ME-LDA is a little worse than LocLDA. Note that ME-LDA is not designed to compete with LocLDA for aspect identification.
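A sketch of the sentence-to-aspect assignment used in this evaluation, assuming per-aspect word distributions phi learned by the topic model and a manually constructed mapping f from inferred aspects to gold-standard labels; the argument names are hypothetical.

```python
import numpy as np

def assign_gold_aspect(sentence_word_ids, phi, f, eps=1e-12):
    """Assign a sentence to the inferred aspect t* maximizing the summed log word
    probabilities, then map it to a gold-standard aspect via the manual mapping f.
    phi: (T, V) array of per-aspect word distributions; f: dict {t: gold_label}."""
    log_scores = np.log(phi[:, sentence_word_ids] + eps).sum(axis=1)
    t_star = int(np.argmax(log_scores))
    return f[t_star]
```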
Evaluation of Opinion Identification
Since the major advantage of ME-LDA is its ability to separate aspect and opinion words, we further quantitatively evaluated the quality of the aspect-specific opinion words identified by ME-LDA. Brody and Elhadad (2010) constructed a gold standard set of aspect-specific opinion words for the restaurant data set. In this gold standard set, they manually judged eight out of the 14 automatically inferred aspects they had: J = {Ambiance, Staff, Food-Main Dishes, Atmosphere-Physical, Food-Baked Goods, Food-General, Drinks, Service}. Each word is assigned a polarity score ranging from -2.0 to 2.0 in each aspect. We used their gold standard words whose polarity scores are not equal to zero. Because their gold standard only includes adjectives, we also manually added more opinion words to the gold standard set. To do so, we took the top 20 opinion words returned by our method and the two baseline methods, pooled them together, and manually judged them. We use precision at n (P@n), a commonly used metric in information retrieval, for evaluation. Because top words are more important in opinion models, we set n to 5, 10 and 20. For both ME-LDA and BL-1 below, we again manually mapped each automatically inferred aspect to one of the gold standard aspects. Since LocLDA does not identify aspect-specific opinion words, we consider the following two baseline methods that can identify aspect-specific opinion words.
BL-1: In this baseline, we start with all adjectives as candidate opinion words, and use mutual information (MI) to rank these candidates. Specifically, given an aspect t, we rank the candidate words according to the following scoring function:
$$\mathrm{Score}_{\mathrm{BL\text{-}1}}(w, t) = \sum_{v \in V_t} p(w, v) \log \frac{p(w, v)}{p(w)\,p(v)},$$
where V_t is the set of the top-100 frequent aspect words from φ_{A,t}.

BL-2: In this baseline, we first use LocLDA to learn a topic distribution for each sentence. We then assign each sentence to the aspect with the largest probability and hence obtain sentence clusters. We manually map these clusters to the eight gold standard aspects. Finally, for each aspect we rank adjectives by their frequencies in the aspect and treat these as aspect-specific opinion words.

The basic results in terms of the average precision at n over the eight aspects are shown in Table 7. We can see that ME-LDA outperformed the two baselines consistently. In particular, for P@5, ME-LDA gave more than 100% relative improvement over BL-1. The absolute value of 0.825 for P@5 also indicates that the top opinion words discovered by our model are indeed meaningful.

Table 7: Average P@n of aspect-specific opinion words on restaurant. The markers indicate that the improvement hypothesis is accepted at confidence level 0.9 for BL-1 and BL-2, respectively.
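For illustration only, a small sketch of the BL-1 mutual-information scoring and the P@n metric described above; the joint and marginal probabilities are assumed to have been estimated beforehand (e.g., from sentence-level co-occurrence counts), which is our assumption rather than a detail given in the text:

```python
import math

def mutual_information_score(word, aspect_words, p_joint, p_word):
    """Score_BL-1(w, t) = sum over v in V_t of p(w, v) * log(p(w, v) / (p(w) * p(v)))."""
    score = 0.0
    for v in aspect_words:
        p_wv = p_joint.get((word, v), 0.0)
        if p_wv > 0.0:
            score += p_wv * math.log(p_wv / (p_word[word] * p_word[v]))
    return score

def precision_at_n(ranked_words, gold_words, n):
    """P@n: fraction of the top-n ranked words that appear in the gold set."""
    return sum(w in gold_words for w in ranked_words[:n]) / float(n)
```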
Evaluation of the Association between Opinion Words and Aspects
The evaluation in the previous section shows that our model returns good opinion words for each aspect. It does not, however, directly judge how aspect-specific those opinion words are. This is because the gold standard created by (Brody and Elhadad, 2010) also includes general opinion words. E.g., friendly and good may both be judged to be opinion words for the staff aspect, but the former is more specific than the latter. We suspect that BL-2 has comparable performance with ME-LDA for this reason. So we further evaluated the association between opinion words and aspects by directly looking at how easy it is to infer the corresponding aspect by only looking at an aspect-specific opinion word. We selected four aspects for evaluation: Ambiance, Staff, Food-Main Dishes and Atmosphere-Physical. We chose these four aspects because they are quite different from each other and thus manual judgments on them can be more objective. For each aspect, similar to the pooling strategy in IR, we pooled the top 20 opinion words identified by BL-1, BL-2 and ME-LDA. We then asked two human assessors to assign an association score to each of these words as follows: if the word is closely associated with an aspect, a score of 2 is given; if it is marginally associated with an aspect, a score of 1 is given; otherwise, 0 is given.

Table 8: Average nDCG performance of BL-2 and ME-LDA. Because only four aspects were used for evaluation, we did not perform a statistical significance test. We found that in all cases ME-LDA outperformed BL-2 for either all aspects or three out of four aspects.

We calculated the Kappa statistic of agreement, and we obtained quite high Kappa values of 0.8375 and 0.7875 for the restaurant data set and the hotel data set, respectively. Then for each word in an aspect, we took the average of the two assessors' scores. We used an nDCG-like metric to compare the performance of our model and of BL-2. The metric is defined as follows:
$$\mathrm{nDCG@}k(t, \mathcal{M}) = \frac{\sum_{i=1}^{k} \frac{\mathrm{Score}(\mathcal{M}_{t,i})}{\log_2(i+1)}}{\mathrm{iDCG@}k(t)},$$
where M_{t,i} is the i-th aspect-specific opinion word inferred by method M for aspect t, Score(M_{t,i}) is the association score of this word, and iDCG@k(t) is the ideal DCG measure at k for aspect t, that is, the maximum DCG score assuming an ideal ranking. We chose k = 5 and k = 10. The average nDCG over the four aspects is presented in Table 8. We can see that ME-LDA outperformed BL-2 by a clear margin on the restaurant data set, which conforms to our hypothesis that ME-LDA generates aspect-specific opinion words with stronger association with aspects. For the hotel data set, ME-LDA outperformed BL-2 only slightly. This may be due to the fact that we used the restaurant training data for the hotel data set.
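A hedged sketch of the nDCG@k computation defined above; it assumes the human association scores of a method's top-ranked words are available as a list, and normalizes by the ideal DCG obtained by re-sorting those same scores:

```python
import math

def ndcg_at_k(assoc_scores_in_method_order, k):
    """nDCG@k for one aspect: the i-th ranked word's score is discounted by
    log2(i + 1) with 1-based i (hence log2(i + 2) for the 0-based index here);
    the ideal DCG re-sorts the same scores in descending order."""
    def dcg(scores):
        return sum(s / math.log2(i + 2) for i, s in enumerate(scores))
    ideal = dcg(sorted(assoc_scores_in_method_order, reverse=True)[:k])
    return dcg(assoc_scores_in_method_order[:k]) / ideal if ideal > 0 else 0.0

# e.g. ndcg_at_k([2.0, 1.0, 0.0, 2.0, 1.0], 5)
```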
Further Analysis of MaxEnt
In this section, we perform some further evaluation and analysis of the MaxEnt component in our model.
Feature Selection
Table 9: Comparison of the average F-1 using different feature sets for aspect identification on restaurant.
Methods | Average F-1
LocLDA | 0.690
ME-LDA + A | 0.631
ME-LDA + B | 0.695
ME-LDA + C | 0.705

Previous studies have shown that simple POS features and lexical features can be very effective for discovering aspect words and opinion words (Hu and Liu, 2004; Wu et al., 2009; Brody and Elhadad, 2010). For POS features, since we observe that aspect words tend to be nouns while opinion words tend to be adjectives but sometimes also verbs or other parts of speech, we can expect POS features to be quite useful. As for lexical features, words from a sentiment lexicon can also be helpful in discovering opinion words. However, lexical features are more diverse, so presumably we need more training data in order to detect useful lexical features. Lexical features are also more domain-dependent. On the other hand, we hypothesize that POS features are more effective when the amount of training data is small and/or the training data comes from a different domain. We therefore compare the following three sets of features:
• A: w_{i-1}, w_i, w_{i+1}
• B: POS_{i-1}, POS_i, POS_{i+1}
• C: A + B
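As a concrete (hypothetical) illustration of these feature sets, the following sketch builds MaxEnt features for a token position from a +/-1 window of words and POS tags; the feature naming scheme is our own choice, not the authors':

```python
def token_features(words, pos_tags, i, feature_set="C"):
    """Build features for position i: set A uses a word window, set B a POS
    window, and set C the union of both."""
    def window(seq, tag):
        feats = {}
        for offset in (-1, 0, 1):
            j = i + offset
            value = seq[j] if 0 <= j < len(seq) else "<PAD>"
            feats[f"{tag}[{offset}]={value}"] = 1.0
        return feats

    features = {}
    if feature_set in ("A", "C"):
        features.update(window(words, "w"))
    if feature_set in ("B", "C"):
        features.update(window(pos_tags, "POS"))
    return features

# e.g. token_features(["many", "young", "student"], ["JJ", "JJ", "NN"], 2, "C")
```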
We show the comparison of the performance in Table 9 using the average F-1 score defined in Section 5.2 for aspect identification, and in Table 10 using the average P@n measure defined in Section 5.3 for opinion identification. We can see that Set B plays the most important part, which conforms to our hypothesis that POS features are very important in opinion mining. In addition, we can see that Set C performs a bit better than Set B, which indicates that some lexical features (e.g., general opinion words) may also be helpful. Note that here the training data is from the same domain as the test data, and therefore lexical features are likely to be useful.
Examine the Size of Labeled Data
As we have seen, POS features play the major role in discriminating between aspect and opinion words. Because there are much fewer POS features than word features, we expect that we do not need many labeled sentences to learn the POS-based patterns. We now examine the sensitivity of the performance with respect to the amount of labeled data. We generated four smaller training data sets with 10, 20, 30 and 40 sentences each from the whole training data set we have, which consists of 46 labeled sentences. The results are shown in Table 11 and Table 12. We can see that generally the performance stays above the baselines when the number of training sentences is 20 or more. This indicates that our model needs only a relatively small number of high-quality training sentences to achieve good results.

Table 10: Comparison of the average P@n using different feature sets for opinion identification on restaurant.
Methods | P@5 | P@10 | P@20
BL-2 | 0.725 | 0.650 | 0.563
ME-LDA + A | 0.150 | 0.200 | 0.231
ME-LDA + B | 0.775 | 0.688 | 0.569
ME-LDA + C | 0.825 | 0.700 | 0.569

Table 11: Average F-1 with different sizes of training data on restaurant.
Method | F-1
LocLDA | 0.690
ME-LDA + 10 | 0.629
ME-LDA + 20 | 0.692
ME-LDA + 30 | 0.691
ME-LDA + 40 | 0.726
ME-LDA + 46 | 0.705
Domain Adaption
Since we find that the MaxEnt supervision relies more on POS features than on lexical features, we also hypothesize that if the training sentences come from a different domain, the performance can still remain relatively high. To test this hypothesis, we tried two quite different training data sets, one from the cell phone domain and the other from the DVD player domain, both used in (Wu et al., 2009).

Table 12: Average P@n of aspect-specific opinion words with different sizes of training data on restaurant.
Method | P@5 | P@10 | P@20
BL-2 | 0.725 | 0.650 | 0.563
ME-LDA + 10 | 0.700 | 0.563 | 0.488
ME-LDA + 20 | 0.875 | 0.650 | 0.600
ME-LDA + 30 | 0.825 | 0.700 | 0.569
ME-LDA + 40 | 0.825 | 0.688 | 0.581
ME-LDA + 46 | 0.825 | 0.700 | 0.569
We consider two feature sets defined in Section 6.1 for domain adaption, namely B and C. The results are shown in Table 13 and Table 14.
For aspect identification, using out-of-domain training data performed worse than using in-domain training data, but the absolute performance is still decent. And interestingly, we can see that using B is better than using C, indicating that lexical features may hurt the performance in the cross-domain setting. It suggests that lexical features are not easily adaptable across domains for aspect identification.
For opinion identification, we can see that there is no clear difference between using out-of-domain training data and using in-domain training data, which may indicate that our opinion identification component is robust in domain adaption. Also, we cannot easily tell whether B has advantage over C for opinion identification. One possible reason may be that those general opinion words are useful across domains, so lexical features may still be useful for domain adaption.
Conclusions
In this paper, we presented a topic modeling approach that can jointly identify aspect and opinion words, using a MaxEnt-LDA hybrid. We showed that by incorporating a supervised, discriminative maximum entropy model into an unsupervised, generative topic model, we could leverage syntactic features to help separate aspect and opinion words. We evaluated our model on two large review data sets from the restaurant and the hotel domains. We found that our model was competitive in identifying meaningful aspects compared with previous models. Most importantly, our model was able to identify meaningful opinion words strongly associated with different aspects. We also demonstrated that the model could perform well with a relatively small amount of training data or with training data from a different domain.
Our model provides a principled way to jointly model both aspects and opinions. One of the future directions we plan to explore is to use this model to help sentence-level extraction of specific opinions and their targets, which previously was only tackled in a fully supervised manner. Another direction is to extend the model to support polarity classification.
Table 1: Some statistics of the data sets.

Table 2: Some statistics of the labeled training data.
data set | #sentences | #tokens
restaurant | 46 | 634
cell phone | 125 | 4414
DVD player | 180 | 3024
Table 3: Sample aspects and opinion words of the restaurant domain using ME-LDA.

Table 4: Sample aspects of the restaurant domain using LocLDA. Note that the words in bold are opinion words which are mixed with aspect words.

Table 5: Sample aspects and opinion words of the hotel domain using ME-LDA.

Table 6: Results of aspect identification on restaurant.
Table 13: Average F-1 performance for domain adaption on restaurant.
Method | Average F-1
restaurant + B | 0.695
restaurant + C | 0.705
cell phone + B | 0.662
cell phone + C | 0.629
DVD player + B | 0.686
DVD player + C | 0.635

Table 14: Average P@n of aspect-specific opinion words for domain adaption on restaurant.
http://www.bing.com/shopping
http://nlp.stanford.edu/software/tagger.shtml
We randomly selected 46 sentences for manual annotation.
ACKNOWLEDGMENT
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2009. Multi-facet rating of product reviews. In Proceedings of the 31st ECIR.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3.
Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Gayatree Ganu, Noemie Elhadad, and Amelie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In Proceedings of the 12th International Workshop on the Web and Databases.
Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Wei Jin and Hung Hay Ho. 2009. A novel lexicalized HMM-based learning framework for web opinion mining. In Proceedings of the 26th International Conference on Machine Learning.
Wei Jin, Hung Hay Ho, and Rohini K. Srihari. 2009. OpinionMiner: A novel machine learning system for web opinion mining and extraction. In Proceedings of the 15th ACM SIGKDD.
Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the Eighteenth ACM Conference on Information and Knowledge Management.
Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: Modeling facets and opinions in weblogs. In Proceedings of the 16th International Conference on World Wide Web.
David Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Conference on Uncertainty in Artificial Intelligence.
Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2).
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing.
Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the HLT-EMNLP.
Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the 17th International Conference on World Wide Web.
Yuanbin Wu, Qi Zhang, Xuangjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
Jun Zhu and Eric P. Xing. 2010. Conditional topic random fields. In Proceedings of the 27th International Conference on Machine Learning. |
7,361,735 | Detecting and Correcting Syntactic Errors in Machine Translation Using Feature-Based Lexicalized Tree Adjoining Grammars | Statistical machine translation has made tremendous progress over the past ten years. The output of even the best systems, however, is often ungrammatical because of the lack of sufficient linguistic knowledge. Even when systems incorporate syntax in the translation process, syntactic errors still result. To address this issue, we present a novel approach for detecting and correcting ungrammatical translations. In order to simultaneously detect multiple errors and their corresponding words in a formal framework, we use feature-based lexicalized tree adjoining grammars, where each lexical item is associated with a syntactic elementary tree, in which each node is associated with a set of feature-value pairs to define the lexical item's syntactic usage. Our syntactic error detection works by checking the feature values of all lexical items within a sentence using a unification framework. In order to simultaneously detect multiple error types and track their corresponding words, we propose a new unification method which allows the unification procedure to continue when unification fails and also to propagate the failure information to relevant words. Once error types and their corresponding words are detected, one is able to correct errors based on a unified consideration of all related words under the same error types. In this paper, we present some simple mechanism to handle part of the detected situations. We use our approach to detect and correct translations of six single statistical machine translation systems. The results show that most of the corrected translations are improved. | [
7972355,
60785379,
33703974,
17962063,
6158237
] | Detecting and Correcting Syntactic Errors in Machine Translation Using Feature-Based Lexicalized Tree Adjoining Grammars
December 2012
Wei-Yun Ma [email protected]
Department of Computer Science
Columbia University
New YorkUSA
Kathleen Mckeown
Department of Computer Science
Columbia University
New YorkUSA
Wei-Yun Ma
Department of Computer Science
Columbia University
New YorkUSA
Kathleen Mckeown
Department of Computer Science
Columbia University
New YorkUSA
Detecting and Correcting Syntactic Errors in Machine Translation Using Feature-Based Lexicalized Tree Adjoining Grammars
Computational Linguistics and Chinese Language Processing
Vol. 17, No. 4, December 2012. Keywords: Machine Translation, Syntactic Error, Post Editing, Tree Adjoining Grammar, Unification
Statistical machine translation has made tremendous progress over the past ten years. The output of even the best systems, however, is often ungrammatical because of the lack of sufficient linguistic knowledge. Even when systems incorporate syntax in the translation process, syntactic errors still result. To address this issue, we present a novel approach for detecting and correcting ungrammatical translations. In order to simultaneously detect multiple errors and their corresponding words in a formal framework, we use feature-based lexicalized tree adjoining grammars, where each lexical item is associated with a syntactic elementary tree, in which each node is associated with a set of feature-value pairs to define the lexical item's syntactic usage. Our syntactic error detection works by checking the feature values of all lexical items within a sentence using a unification framework. In order to simultaneously detect multiple error types and track their corresponding words, we propose a new unification method which allows the unification procedure to continue when unification fails and also to propagate the failure information to relevant words. Once error types and their corresponding words are detected, one is able to correct errors based on a unified consideration of all related words under the same error types. In this paper, we present some simple mechanism to handle part of the detected situations. We use our approach to detect and correct translations of six single statistical machine translation systems. The results show that most of the corrected translations are improved.
Introduction
Statistical machine translation has made tremendous progress over the past ten years. The output of even the best systems, however, is often ungrammatical because of the lack of sufficient linguistic knowledge. Even when systems incorporate syntax in the translation process, syntactic errors still result. We have developed a novel, post-editing approach which features: 1) the use of XTAG grammar, a rule-based grammar developed by linguists, 2) the ability to simultaneously detect multiple ungrammatical types and their corresponding words by using unification of feature structures, and 3) the ability to simultaneously correct multiple ungrammatical types based on the detection information. To date, we have developed the infrastructure for this approach and demonstrated its utility for agreement errors.
As illustrative examples, consider the following three ungrammatical English sentences:
1. Many young student play basketball.
2. John play basketball and Tom also play basketball.
3. John thinks to play basketball.
In 1 and 2 above, number agreement errors between the subjects and verbs (and quantifier) cause the sentences to be ungrammatical, while in 3, the infinitive following the main verb makes it ungrammatical. One could argue that an existing grammar checker could do the error detection for us, but if we use Microsoft Word 2010 (MS Word)'s grammar checker (Heidorn, 2000) to check the three sentences, the entire first sentence will be underlined with green wavy lines without any indication of what should be corrected, while no errors are detected in 2 and 3.
The grammar we use is based on a feature-based lexicalized tree adjoining grammars (FB-LTAG) English grammar, named XTAG grammar (XTAG group, 2001). In FB-LTAG, each lexical item is associated with a syntactic elementary tree, in which each node is associated with a set of feature-value pairs, called Attribute Value Matrices (AVMs). AVMs define the lexical item's syntactic usage. Our syntactic error detection works by checking the AVM values of all lexical items within a sentence using a unification framework. Thus, we use the feature structures in the AVMs to detect the error type and corresponding words. In order to simultaneously detect multiple error types and track their corresponding words, we propose a new unification method which allows the unification procedure to continue when unification fails and also to propagate the failure information to relevant words. We call the modified unification a fail propagation unification.
Related Work
Grammar checking is mostly used in word processors as a writing aid. Three methods are widely used for grammar checking given a sentence: statistic-based checking, rule-based checking and syntax-based checking. In statistic-based checking, POS tag sequences (Atwell & Elliot, 1987) or an N-gram language model (Alam et al., 2006; Wu et al., 2006) is trained from a training corpus, and uncommon sequences in the training corpus are considered incorrect. Huang et al. (2010) extracted erroneous and correct patterns of consecutive words from the data of an online-editing diary website. In rule-based checking, a set of hand-crafted rules over words, POS tags and chunks (Naber, 2003) or parsing results (Heidorn, 2000) is designed to detect errors. In syntax-based checking, Jensen et al. (1993) utilize a parsing procedure to detect errors: each sentence must be syntactically parsed, and a sentence is considered incorrect if parsing does not succeed.
Focusing on grammar checking for machine translation, Stymne and Ahrenberg (2010) utilized an existing rule-based Swedish grammar checker as a post-processing tool for their English-Swedish translation system. They tried to fix the ungrammatical parts of translations by applying the grammar checker's correction suggestions. In contrast to their use of an existing grammar checker, we developed our own novel grammar checker for translated English in order to better control the quality of error detection, the error types, and the directions of error correction in the translation context.
Our approach is a mix of rule-based checking and syntax-based checking: the XTAG English grammar is designed by linguists, while the detecting procedure is based on syntactic operations which dynamically reference the grammar. The work can be regarded as an extension of (Ma & McKeown, 2011), in which grammatical error detection based on the XTAG English grammar is carried out to filter out ungrammatical combined translations in their framework of system combination for machine translation. In contrast to (Ma & McKeown, 2011), our approach is not only capable of detecting grammatical errors, but is also able to identify error types and the causes of errors, and to correct certain cases of errors.
Background
We briefly introduce the FB-LTAG formalism and XTAG grammar in this section.
Feature-Based Lexicalized Tree Adjoining Grammars
FB-LTAG is based on the tree adjoining grammar (TAG) proposed in (Joshi et al., 1975). The TAG formalism is a formal tree rewriting system, which consists of a set of elementary trees corresponding to minimal linguistic structures that localize dependencies, such as specifying the predicate-argument structure of a lexeme. Elementary trees are divided into initial and auxiliary trees. Initial trees are those for which all non-terminal nodes on the frontier are substitutable, marked with "↓". Auxiliary trees are defined like initial trees, except that exactly one frontier non-terminal node must be a foot node, marked with "*", with the same label as the root node. Two operations, substitution and adjunction, are provided in TAG to combine elementary trees.
FB-LTAG has two important characteristics. First, it is a lexicalized TAG (Schabes, 1988); thus each elementary tree is associated with at least one lexical item. Second, it is a feature-based lexicalized TAG (Vijay-Shanker & Joshi, 1988). Each node in an elementary tree is constrained by two sets of feature-value pairs (two AVMs). One AVM (the top AVM) defines the relation of the node to its super-tree, and the other AVM (the bottom AVM) defines the relation of the node to its descendants. We use Fig 1 and Fig 2 to illustrate the substitution and adjunction operations with the unification framework, respectively.
Figure 1. Substitution of FB-LTAG
Figure 2. Adjunction of FB-LTAG
In Fig 1, we can see that the feature structure of a new node created by substitution inherits the union of the features of the original nodes. The top feature of the new node is the union of the top features of the two original nodes, while the bottom feature of the new node is simply the bottom feature of the top node of the substituting tree. In Fig 2, we can see that the node being adjoined into splits, and its top feature unifies with the top feature of the root adjoining node, while its bottom feature unifies with the bottom feature of the foot adjoining node.
XTAG English Grammar
The XTAG English grammar (XTAG group, 2001) is designed using the FB-LTAG formalism and was released by UPENN in 2001. The range of syntactic phenomena that can be handled is large. It defines 57 major elementary trees (tree families) and 50 feature types, such as agreement, case, mode (mood), tense, passive, etc., for its 20,027 lexical entries. Each lexical entry is
associated with at least one elementary tree, and each elementary tree is associated with at least one AVM. For example, Fig 3 shows the simplified elementary tree of "saw". "<number>" indicates that two features share the same value; for example, the feature "arg_3rdsing" in the bottom AVM of the root S should have the same value as "arg_3rdsing" in the top AVM of VP. In our implementation, this is coded using the same object in an object-oriented programming language. Since the feature value of mode in the top AVM of "S↓" is "base", we know that "saw" can only be followed by a sentence with a base verb. For example, "He saw me do that", shown in Fig 4(a), is a grammatical sentence, while "He saw me to do that", shown in Fig 4(b), is ungrammatical because "saw" is not allowed to be followed by an infinitive sentence.
Figure 3. Elementary tree for "saw".
Figure 4. (a) Grammatical sentence of "saw"; (b) ungrammatical sentence of "saw".
But if we look at the simplified elementary tree of "asked" shown in Fig 5, we can see that "asked" can only be followed by an infinitive sentence (inf). For example, "He asked me to do that", shown in Fig 6(a), is a grammatical sentence, while "He asked me do that", shown in Fig 6(b), is ungrammatical because "asked" is not allowed to be followed by a sentence with a base verb.
Figure 5. Elementary tree for "asked".
Figure 6. (a) Grammatical sentence of "asked"; (b) ungrammatical sentence of "asked".
Syntactic Error Detection
Our procedure for syntactic error detection includes 1. decomposing each sentence hypothesis parse tree into elementary trees, 2. associating each elementary tree with AVMs through look-up in the XTAG grammar, and 3. reconstructing the original parse tree out of the elementary trees using substitution and adjunction operations along with AVM unifications.
When unification of the AVMs fails, a grammatical error has been detected and its error type is also identified by the corresponding feature in the AVM. In order to simultaneously detect multiple error types and their corresponding words, we adjust the traditional unification definition to allow the unification procedure to continue after an AVM failure occurs and also propagate the failure information to relevant words. We call the modified unification fail propagation unification.
Each step is illustrated in this section.
Decomposing to Elementary trees
Given a translation sentence, we first get its syntactic parse using the Stanford parser (Klein & Manning, 2003) and then decompose the parse into multiple elementary trees by using an elementary tree extractor, a modification of (Chen & Vijay-Shanker, 2000). After that, each lexical item in the sentence is assigned one elementary tree. Taking the sentence "Many young student play basketball" as an example, its parse and extracted elementary trees are shown in Fig 7 and Fig 8, respectively. In Fig 8, the arrows represent relations among the elementary trees, and the relations are either substitution or adjunction. In this example, the two upper arrows are substitutions and the two bottom arrows are adjunctions.
Associating AVMs to Elementary trees
Each elementary tree is associated with AVMs through look-up in the XTAG English grammar. Using the same example sentence, "Many young student play basketball", its elementary trees, their relations and one set of AVMs (simplified version) are shown in Fig 9. To keep track of which word(s) a feature value relates to for the next step of reconstruction, we design a new data structure, a word set named the "word trace". It is represented by "{…}" and attached to each feature value except the value "null", such as "agr_num:pl{play}".
Figure 9. The elementary trees of "Many young student play basketball", their relations and AVMs (simplified version).
When we look up the XTAG English grammar, sometimes one elementary tree can have multiple possible AVM associations. For example, for the verb "are", one of its elementary trees is associated with three different AVMs, one for 2nd person singular, one for 2nd person plural, and one for 3rd person plural. Unless we can reference the context of "are" (e.g., its subject), we are not sure which AVM should be used in the reconstruction. So we postpone this decision until later in the reconstruction process. At this point, we associate each elementary tree with all of its possible AVMs defined in the XTAG English grammar.
Reconstruction Framework
Once the elementary trees are associated with AVMs, they are used to reconstruct the original parse tree through the substitution and adjunction operations indicated during the process of decomposing the parse tree into elementary trees. The reconstruction process is able to decide whether there is any conflict among the AVM values. When a conflict occurs, it causes an AVM unification failure, which corresponds to a certain grammatical error.
We already illustrated how substitution and adjunction, along with AVM unification, work in Section 3.1; one implementation detail is that, once the original parse is constructed, it is necessary to unify every node's top and bottom AVMs in the constructed tree. This is because, in the XTAG grammar, most AVM values are assigned in the anchor nodes of elementary trees and have not yet been unified with others. This final step ensures that all related AVMs are unified.
As we stated in Section 4.2, sometimes we are not sure which AVM association for an elementary tree should be used in the reconstruction. Our strategy is therefore to carry out the reconstruction process for every combination of the elementary trees' possible AVM associations. We choose the combination that causes the fewest grammatical errors as the detection result.
Fail Propagation Unification
Our system detects grammatical errors by identifying unification failures. However, traditional unification does not define how to proceed after a failure occurs, and it also lacks an appropriate structure to record error traces. So we extend it as follows:
(1) [f=x]{t1} U [f=x]{t2} => [f=x]{t1 ∪ t2}
(2) [f=x]{t1} U [ ] => [f=x]{t1}
(3) [ ] U [f=x]{t2} => [f=x]{t2}
(4) [f=x]{t1} U [f=y]{t2} => [f=fail]{t1 ∪ t2}, where x ≠ y
(5) [f=fail]{t1} U [f=x]{t2} => [f=fail]{t1 ∪ t2}
(6) [f=x]{t1} U [f=fail]{t2} => [f=fail]{t1 ∪ t2}
(7) [f=fail]{t1} U [f=fail]{t2} => [f=fail]{t1 ∪ t2}
Where f is a feature type, such as "arg_num"; x and y are two different feature values; U represents the "unify" operation; t1 and t2 are word traces introduced in section 4.2. "fail" is also defined as a kind of value.
(1)~(4) are essentially the traditional unification definitions, except that the word-trace union operation and the special value fail have been added. When a unification failure occurs in (4), the unification procedure does not halt but simply assigns f the value fail and proceeds. (5)~(7) propagate the fail value to the related words' AVMs. We use the following two unifications, which occur in order in Fig 9's adjoining operations, to illustrate the procedure of fail propagation unification:

[arg_num=pl]{many} U [arg_num=sing]{student} => [arg_num=fail]{many, student}
[arg_num=fail]{many, student} U [arg_num=pl]{play} => [arg_num=fail]{many, student, play}
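The following is a minimal sketch of how fail propagation unification could be implemented; it is our illustration, assuming feature values are plain strings and word traces are sets, and is not the authors' code:

```python
FAIL = "fail"

def unify_feature(value1, trace1, value2, trace2):
    """Unify one feature's (value, word-trace) pairs, propagating `fail`
    instead of halting and merging the word traces of the involved words."""
    merged = set(trace1) | set(trace2)
    if value1 is None:                 # unspecified unifies with anything
        return value2, merged
    if value2 is None:
        return value1, merged
    if FAIL in (value1, value2):       # propagate an earlier failure
        return FAIL, merged
    if value1 == value2:
        return value1, merged
    return FAIL, merged                # clash: record the error and continue

def unify_avms(avm1, avm2):
    """AVMs are dicts: feature -> (value, word trace)."""
    result = {}
    for f in set(avm1) | set(avm2):
        v1, t1 = avm1.get(f, (None, set()))
        v2, t2 = avm2.get(f, (None, set()))
        result[f] = unify_feature(v1, t1, v2, t2)
    return result

# the example from the text:
step1 = unify_avms({"agr_num": ("pl", {"many"})}, {"agr_num": ("sing", {"student"})})
step2 = unify_avms(step1, {"agr_num": ("pl", {"play"})})
print(step2)   # {'agr_num': ('fail', {'many', 'student', 'play'})}
```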
Syntactic Error Correction
Once error types and their corresponding words are detected, one is able to correct errors based on a unified consideration of all related words under the same error types.
Given a set of related ungrammatical words, there are two tasks for the correction process: which words should be corrected and how to correct them? To date, we have developed the following simple mechanism to handle the agreement problem: First, the words whose feature value is in the minority will be selected to be corrected. We call this feature-value voting. Take the above example: "student" should be corrected since its agr_num is "sing" and the other two words' agr_num is "plural". When facing cases of equal votes, we tend to correct nouns if there are nouns.
Once the corrected words are selected, we replace them with their variations but with the same elementary tree type, such as replacing the above "student" with "students."
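A small sketch of the feature-value voting step described above (our illustration, with a simplistic tie-breaking rule that prefers nouns when POS tags are supplied):

```python
from collections import Counter

def select_words_to_correct(words_with_values, pos_tags=None):
    """Feature-value voting: words whose feature value is in the minority are
    selected for correction; on an equal vote, prefer correcting nouns if any.
    `words_with_values` is a list of (word, feature_value) pairs."""
    counts = Counter(v for _, v in words_with_values)
    ranked = counts.most_common()
    if len(ranked) < 2:
        return []                                   # no disagreement to fix
    if pos_tags and ranked[0][1] == ranked[-1][1]:  # equal votes
        nouns = [w for (w, _), p in zip(words_with_values, pos_tags) if p.startswith("NN")]
        if nouns:
            return nouns
    minority_values = {v for v, c in ranked if c < ranked[0][1]}
    return [w for w, v in words_with_values if v in minority_values]

# e.g. select_words_to_correct([("many", "pl"), ("student", "sing"), ("play", "pl")])
# -> ["student"]
```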
Experiment
Among the 57 major elementary trees and 50 feature types that XTAG defines, we have implemented 26 major elementary trees and 4 feature types at this point: agr_pers, arg_num, arg_3rdsing and several cases of mode/mood (the first three belong to agreement features). We apply our syntactic error detection and correction to 422 translation sentences from six Chinese-English machine translation systems, A~F, from the DARPA Global Autonomous Language Exploitation (GALE) 2008 evaluation. Every source sentence is provided along with four target references. The six systems are described in Table 1, and the results of syntactic error detection for agreement and mode errors, and correction for agreement errors, are shown in Table 2. From Table 2, although the overall Bleu score over all sentences is not significantly improved, if we take a close look at the sentences corrected for agreement errors and calculate their Bleu scores, we can see that the corrected translations are improved for every system except one (F), which shows the effectiveness and potential of our approach.
Conclusion
This paper presents a new FB-LTAG-based syntactic error detection and correction mechanism along with a novel AVM unification method to simultaneously detect multiple ungrammatical types and their corresponding words for machine translation. The mechanism can also be applied to other languages if the grammar is well defined in the FB-LTAG structure of certain languages.
While the basic design philosophy and algorithm are fully described in this paper, we are continuing to implement more elementary trees and feature types defined in the XTAG grammar, and we are extending our correction mechanism as our future work.
Figure 7. Parse of "Many young student play basketball".
Figure 8. The elementary trees of "Many young student play basketball" and their relations.
All AVMs in Fig 9 after unifications along with reconstruction operations are shown in Fig 10.
Figure 10. Reconstructed parse of the sentence "Many young student play basketball" after unifications with fail propagation.
Table 1. Six MT systems
System | System name | Approach
A | NRC | phrase-based SMT
B | RWTH-PBT | phrase-based SMT
C | RWTH-PBT-AML | phrase-based SMT + source reordering
D | RWTH-PBT-JX | phrase-based SMT + Chinese word segmentation
E | RWTH-PBT-SH | phrase-based SMT + source reordering + rescoring
F | SRI-HPBT | hierarchical phrase-based SMT
Table 2. The results of syntactic error detection and correction
System | Detected sentences (arg error + mode error) | Corrected sentences (arg error) | Bleu for all sentences (before) | Bleu for all sentences (after) | Bleu for corrected sentences (before) | Bleu for corrected sentences (after)
A | 23 | 9 | 32.99 | 32.99 | 26.75 | 27.80
B | 23 | 14 | 27.95 | 27.97 | 22.08 | 23.03
C | 18 | 7 | 34.40 | 34.41 | 32.13 | 32.67
D | 25 | 14 | 32.96 | 32.99 | 31.49 | 32.17
E | 30 | 11 | 34.64 | 34.68 | 29.31 | 30.61
F | 18 | 8 | 34.13 | 34.14 | 29.15 | 28.83
The two figures and their descriptions are based on the XTAG technical report (XTAG group, 2001).
2 http://www.cis.upenn.edu/~xtag/gramrelease.html
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments. This work is supported by the National Science Foundation via Grant No. 0910778 entitled "Richer Representations for Machine Translation". All views expressed in this paper are those of the authors and do not necessarily represent the view of the National Science Foundation.
Alam, M. J., UzZaman, N., & Khan, M. (2006). N-gram based Statistical Grammar Checker for Bangla and English. In Proceedings of the Ninth International Conference on Computer and Information Technology (ICCIT 2006), Dhaka, Bangladesh.
Atwell, E. S., & Elliot, S. (1987). Dealing with Ill-formed English Text. In R. Garside, G. Leech and G. Sampson (Eds.), The Computational Analysis of English: A Corpus-based Approach. London: Longman.
Chen, J., & Vijay-Shanker, K. (2000). Automated extraction of TAGs from the Penn treebank. In Proceedings of the Sixth International Workshop on Parsing Technologies.
Heidorn, G. E. (2000). Intelligent writing assistance. In R. Dale, H. Moisl and H. Somers (Eds.), A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text. Marcel Dekker, New York, 181-207.
Huang, A., Kuo, T. T., Lai, Y. C., & Lin, S. D. (2010). Identifying Correction Rules for Auto Editing. In Proceedings of the 22nd Conference on Computational Linguistics and Speech Processing (ROCLING), 251-265.
Jensen, K., Heidorn, G. E., & Richardson, S. D. (Eds.) (1993). Natural Language Processing: The PLNLP Approach. Kluwer Academic Publishers.
Joshi, A. K., Levy, L. S., & Takahashi, M. (1975). Tree Adjunct Grammars. Journal of Computer and System Science, 10, 136-163.
Klein, D., & Manning, C. D. (2003). Accurate Unlexicalized Parsing. In Proceedings of the 41st Meeting of the Association for Computational Linguistics, 423-430.
Ma, W. Y., & McKeown, K. (2011). System Combination for Machine Translation Based on Text-to-Text Generation. In Proceedings of Machine Translation Summit XIII, Xiamen, China.
Naber, D. (2003). A Rule-Based Style and Grammar Checker. Diploma Thesis, University of Bielefeld, Germany.
Schabes, Y., Abeille, A., & Joshi, A. K. (1988). Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars. In Proceedings of the 12th International Conference on Computational Linguistics (COLING'88), Budapest, Hungary.
Stymne, S., & Ahrenberg, L. (2010). Using a Grammar Checker for Evaluation and Postprocessing of Statistical Machine Translation. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Vijay-Shanker, K., & Joshi, A. K. (1988). Feature structure based tree adjoining grammar. In Proceedings of the 12th International Conference on Computational Linguistics (COLING'88), 714-719.
Wu, S. H., Su, C. Y., Jiang, T. J., & Hsu, W. L. (2006). An Evaluation of Adopting Language Model as the Checker of Preposition Usage. In Proceedings of the Conference on Computational Linguistics and Speech Processing (ROCLING).
XTAG Group. (2001). A Lexicalized Tree Adjoining Grammar for English. Technical Report IRCS 01-03, University of Pennsylvania. |
253,098,851 | Iteratively Prompt Pre-trained Language Models for Chain of Thought | While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling these knowledge to solve tasks requiring complex & multi-step reasoning. Similar to how humans develop a "chain of thought" for these tasks, how can we equip PLMs with such abilities? In this work, we explore an iterative prompting framework, a new prompting paradigm which progressively elicits relevant knowledge from PLMs for multi-step inference. We identify key limitations of existing prompting methods, namely they are either restricted to queries with a single identifiable relation/predicate, or being agnostic to input contexts, which makes it difficult to capture variabilities across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's contexts. Experiments on three datasets involving multi-step reasoning show the effectiveness of the iterative scheme and the context-aware prompter design. | [
222303616,
208117506,
215745286,
230433941,
239016860,
202539551,
250390946,
234797436,
211205183,
226236740,
202773198,
221005781,
233210199,
233231453,
218486753,
40100965,
202538609,
189762556,
221970302
] | Iteratively Prompt Pre-trained Language Models for Chain of Thought
2730 December 7-11, 2022
Boshi Wang
The Ohio State University
ColumbusOH
Xiang Deng
The Ohio State University
ColumbusOH
Huan Sun
The Ohio State University
ColumbusOH
Iteratively Prompt Pre-trained Language Models for Chain of Thought
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2714-2730, December 7-11, 2022
While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling these knowledge to solve tasks requiring complex & multi-step reasoning. Similar to how humans develop a "chain of thought" for these tasks, how can we equip PLMs with such abilities? In this work, we explore an iterative prompting framework, a new prompting paradigm which progressively elicits relevant knowledge from PLMs for multi-step inference. We identify key limitations of existing prompting methods, namely they are either restricted to queries with a single identifiable relation/predicate, or being agnostic to input contexts, which makes it difficult to capture variabilities across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's contexts. Experiments on three datasets involving multi-step reasoning show the effectiveness of the iterative scheme and the context-aware prompter design.
Introduction
Humans can develop a "chain of thought" for complex decision making. For example, when asked the question (Q) shown in Figure 1, which involves composition, an important type of multi-step reasoning, humans apply two consecutive steps to derive the final answer: 1) find "father" of the topic entity "Gwilym Lloyd George" (C1); 2) find "birthplace" of the entity returned in the first step (C2).
Recently, large-scale pre-trained language models (PLMs) have been shown capable of internalizing a great amount of simple factual knowledge such as C1 and C2, yielding competitive performance on a range of knowledge-intensive tasks without resorting to any external knowledge source (Petroni et al., 2019; Shin et al., 2020; Zhong et al., 2021; Roberts et al., 2020; Lee et al., 2020). However, work such as (Talmor et al., 2020a; Kassner et al., 2020; Rae et al., 2021) reveals that PLMs face difficulties in complex, multi-step reasoning. For example, they struggle with answering complex questions like Q without using external sources, no matter whether they are fine-tuned on QA pairs or simply prompted to produce the answer, even if they have memorized C1 and C2.

Figure 1: Our Iterative Prompting approach (on the right), compared with Standard Probing (on the left). In Standard Probing, a question is directly fed to the PLM to output the final answer, which could work for simple factual questions but fails for complex questions that require multi-step reasoning. In contrast, we augment the PLM with a Prompter, which learns to iteratively prompt the PLM to recall a series of knowledge and derive a "chain of thought".
In this paper, we study the following question: How to shepherd a PLM to recall a series of stored knowledge (e.g., C1 and C2) that is necessary for multi-step inference (e.g., answering Q), analogous to how humans develop a "chain of thought" for complex decision making?
A direct way would be to fine-tune the PLM to generate the series of knowledge all at once (assuming such supervision is available), but one soon realizes the practical issue with this approach: PLMs which internalize a great amount of knowledge are inevitably large in scale, and fine-tuning all their parameters becomes more and more costly as they keep scaling up. There is also the concern that fine-tuning PLMs may interfere with their implicit knowledge storage, a phenomenon observed in prior work which is more generally related to the catastrophic forgetting problem of deep learning models (McCloskey and Cohen, 1989; Kirkpatrick et al., 2017; Howard and Ruder, 2018). Therefore, lightweight methods such as prompting, which keep a PLM's parameters intact, would be preferable for our purpose of eliciting knowledge. However, we find that no matter whether it is fine-tuned or prompted to generate the series of knowledge all at once, the PLM tends to lose its "chain of thought" during the process, generating irrelevant facts or suffering from hallucination.
Motivated by the iterative nature of multistep reasoning problems, we explore an iterative prompting framework in this paper, which elicits knowledge from PLMs step by step for a given inference task. We have two desiderata in iterative prompting: (1) At different inference steps, the prompts need to focus on different components of the complex query.
(2) The prompts should appropriately integrate knowledge gathered in previous steps into the current step; for instance, during the 2nd step in the example in Figure 1, the prompts need to combine the entity "David Lloyd George" (from knowledge recalled in the 1st step) with the unresolved part "What is the place of birth of" in the query.
A natural thought is to directly apply existing prompting methods in an iterative fashion. Unfortunately, their prompts are either restricted to queries with a single, identifiable relation/predicate (Jiang et al., 2020;Petroni et al., 2019;Zhong et al., 2021;Shin et al., 2020;Qin and Eisner, 2021), or being agnostic and insensitive to step-wise inputs (Lester et al., 2021;Li and Liang, 2021;Brown et al., 2020), and hence not ideal for our desiderata.
We design a novel iterative prompting method towards that end. We augment the PLM with an iterative Context-Aware Prompter, a model which learns to dynamically synthesize prompts based on the current step context. At each step, the Prompter learns to process the query and previously gathered evidence, and composes a prompt which steers the PLM to recall the next piece of knowledge. Like other prompting methods, the PLM is kept fixed throughout the learning process. In addition, as the PLM size increases, the number of trainable parameters in our method scales comparably with or slower than previous prompting methods.
We conduct experiments on three datasets involving multi-step reasoning, including two recent multi-hop Question Answering datasets: 2Wiki-MultiHopQA (Ho et al., 2020) and R4C (Inoue et al., 2020), and a scientific dataset (Talmor et al., 2020b) for reasoning over taxonomic relations. Our experimental results show (1) effectiveness of the iterative scheme; (2) our proposed Context-Aware Prompter design outperforms existing prompting methods by notable margins; (3) quantitative and qualitative analysis which reveal the faithfulness of our learned prompter.
Methodology
In this section, we first formalize our problem setup ( §2.1), and then introduce our iterative prompting framework ( §2.2), followed by our context-aware prompter design ( §2.3) which addresses key limitations of previous prompting methods when applied in this iterative scheme.
Problem Setup
Given a complex query q, our goal is to drive a PLM M to recall a sequence of simple knowledge statements C_q = [c_1, ..., c_{n_q}] which is sufficient for deciding the response to q. In particular, we focus on developing prompting methods, where the parameters of M are fixed and we aim to construct a prompt T which steers M to recall C_q. Note that here we treat T as a variable, which may or may not depend on other variables based on different modelings. Writing M(T) for M augmented with prompt T, our training objective is to learn how to find a T which maximizes the log-likelihood
L(T) = \sum_{i=1}^{N} \log P(C_{q_i} \mid q_i; M(T)), given a set of training data \{q_i, C_{q_i}\}_{i=1}^{N}.
Our formulation here is general and applicable to all prompting-based methods, where the settings in previous work such as (Zhong et al., 2021; Shin et al., 2020; Lester et al., 2021; Li and Liang, 2021; Qin and Eisner, 2021) correspond to the reduced case where |C_q| = 1 for any query q. In our experiments, we also consider PLM fine-tuning, in which case there is no prompt T in the pipeline, and instead the parameters of M are optimized.
Iterative Prompting Framework
Inspired by the sequential nature of multi-step inference tasks, we approach the problem in an iterative way:
P(C_q \mid q; M(T)) = \prod_{j=1}^{n_q} P(c_j \mid q, c_1, \ldots, c_{j-1}; M(T))
where at each step j, M(T) recalls the next piece of knowledge c_j conditioned on the query q and all previously gathered knowledge c_1, ..., c_{j-1} (concatenated with q).
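The concatenation of the query with previously recalled knowledge is the only moving part of the step-wise input. A minimal sketch of how such inputs could be assembled under this factorization is shown below; the separator string and helper names are illustrative assumptions rather than the exact implementation.

```python
from typing import List

SEP = " ; "  # assumed separator between the query and recalled knowledge statements

def build_step_input(query: str, recalled: List[str]) -> str:
    """Concatenate q with all knowledge recalled so far (c_1, ..., c_{j-1})."""
    return query if not recalled else query + SEP + SEP.join(recalled)

def step_inputs(query: str, chain: List[str]) -> List[str]:
    """Inputs presented to M(T) at steps 1..n_q (here built from the gold chain,
    i.e., teacher forcing); at step j the model is asked to generate c_j."""
    return [build_step_input(query, chain[:j]) for j in range(len(chain))]
```

Under teacher forcing the gold c_1, ..., c_{j-1} are used to build the step-j input, whereas at inference time they are replaced by the model's own previously recalled statements.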
Context-Aware Prompter
Previous prompting methods which take single-relation inputs clearly fail to apply in this iterative setting due to the complexity of the input context q, c_1, ..., c_{j-1}. Task-level prompting methods such as Prompt-Tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021) are applicable here, where T is treated as a static parameter. However, as described earlier, this modeling is not ideal for T to fully capture variabilities across different inference steps. In this work, we model T as the output of our Prompter, a learnable mapping f_W which dynamically synthesizes T w.r.t. the current step input context:
T = f_W(q, c_1, \ldots, c_{j-1}), \quad \forall j
Prompter Instantiation. While there are many plausible design choices for the Prompter f_W, here we instantiate it with a transformer-based language model (shown in Figure 2). The prompts are designed to be contextualizations (by the Prompter) of a set of special tokens w.r.t. the current step input context, linearly projected into the PLM's embedding space by a trainable matrix (omitted in the figure due to space limits). In this work, we adopt an Encoder-Decoder PLM and use prefix-prompts in the implementation; hence we have prompts that are prepended to both the PLM's encoder inputs and decoder inputs (Encoder Prompts & Decoder Prompts in Figure 2). Note that our design could be easily adapted to other types of PLMs (e.g., encoder-only/decoder-only models) and different prompt positionings (e.g., infix, postfix).
Figure 2: Our context-aware prompter design. The prompter contextualizes a set of special tokens w.r.t. the current step context q, c_1, ..., c_{j-1} to get the resulting prompts, which steer the PLM to recall the next piece of knowledge c_j.
Comparison with Prompt/Prefix-Tuning. Both Prompt-Tuning (Lester et al., 2021) and Prefix-Tuning (Li and Liang, 2021) model the prompt T as a context-agnostic parameter. In Prompt-Tuning, T has the same identity as in our approach, i.e., a set of virtual input tokens. In Prefix-Tuning, T is modeled as the set of activations (keys & values in the transformer attention blocks) of the virtual prompt tokens across all PLM layers. Let D be the embedding dimension of the PLM, h be the number of layers in the PLM, d be the embedding dimension of the Prompter (d ≤ D), and l be the length of the prompt tokens (both encoder & decoder prompts). Then the number of trainable parameters is Θ(d · (D + l)) for our proposed method, Θ(l · D) for Prompt-Tuning, and Θ(l · h · D) for Prefix-Tuning. It can thus be seen that our proposed method scales comparably with Prompt-Tuning and more slowly than Prefix-Tuning, and overall maintains a manageable number of trainable parameters as the PLM scales up (which increases D and h).
Continuous vs. Discrete Prompts. While modeling T as discrete tokens in the PLM's vocabulary could increase the readability of the prompts, a discrete space is much less expressive than its continuous counterpart, and optimization over a discrete space could be highly inefficient. Also, despite being inside the vocabulary, the searched discrete prompts could still have low interpretability, as seen in the examples given in (Shin et al., 2020). Hence, we follow prior work (Zhong et al., 2021; Li and Liang, 2021; Lester et al., 2021; Qin and Eisner, 2021) and model the prompts as continuous virtual tokens instead of discrete tokens.
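To make the Prompter-PLM interface more concrete, below is a minimal PyTorch sketch of one prompter step under our reading of Figure 2: a small encoder contextualizes a set of trainable special tokens against the current step context, a trainable linear map projects them into the PLM's embedding space, and the result is prepended to the frozen PLM's encoder (and, analogously, decoder) input embeddings. The class, module wiring, and hyperparameter names are illustrative assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class ContextAwarePrompter(nn.Module):
    """Sketch: contextualize special prompt tokens w.r.t. the step context and
    project them into the (frozen) PLM's embedding space."""

    def __init__(self, n_prompt_tokens: int = 30, plm_dim: int = 1024,
                 prompter_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(prompter_name)
        d = self.encoder.config.hidden_size
        # Trainable special tokens that the prompter contextualizes at every step.
        self.prompt_embeds = nn.Parameter(torch.randn(n_prompt_tokens, d) * 0.02)
        # Trainable projection from prompter space (d) into PLM space (D).
        self.proj = nn.Linear(d, plm_dim)

    def forward(self, context_ids: torch.Tensor, context_mask: torch.Tensor) -> torch.Tensor:
        bsz, n_p = context_ids.size(0), self.prompt_embeds.size(0)
        prompts = self.prompt_embeds.unsqueeze(0).expand(bsz, -1, -1)   # (B, P, d)
        # Token embeddings of the step context [q; c_1; ...; c_{j-1}].
        ctx = self.encoder.embeddings.word_embeddings(context_ids)      # (B, L, d)
        inputs = torch.cat([prompts, ctx], dim=1)
        mask = torch.cat([context_mask.new_ones(bsz, n_p), context_mask], dim=1)
        hidden = self.encoder(inputs_embeds=inputs,
                              attention_mask=mask).last_hidden_state
        # Contextualized prompt tokens, projected into the PLM embedding space.
        return self.proj(hidden[:, :n_p])                               # (B, P, D)
```

The projected vectors would then be concatenated in front of the PLM's own input embeddings (e.g., via the `inputs_embeds` / `decoder_inputs_embeds` arguments of an encoder-decoder model such as BART), with the PLM's parameters kept frozen so that only the prompter and projection are updated.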
Learning and Inference
We use teacher forcing for model training: at each step, the ground truth contexts for that step (the query and previous knowledge pieces) are presented to the model. We maximize L(T) using standard sequence-to-sequence (seq2seq) objectives. During inference, we proceed autoregressively by feeding the knowledge recalled at step t − 1 as additional context at step t, and execute for some predefined number of steps. We also explore jointly training the prompter with a "stopper" which learns to stop the knowledge recall process when it decides that the recalled evidence is adequate; details are included in Appendix A.4.
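A high-level sketch of the resulting inference loop is given below. `recall_next` stands for one pass of the Prompter plus the frozen PLM's generation (as in the sketch above) and is an assumed wrapper, not an actual API; the stopping criterion shown is the fixed step budget described here.

```python
from typing import Callable, List

def iterative_recall(query: str,
                     recall_next: Callable[[str, List[str]], str],
                     max_steps: int = 4) -> List[str]:
    """Autoregressive knowledge recall: at each step, condition on the query plus
    everything recalled so far and ask the (prompted, frozen) PLM for the next
    knowledge statement c_t."""
    recalled: List[str] = []
    for _ in range(max_steps):
        c_t = recall_next(query, recalled)
        recalled.append(c_t)
    return recalled

# The recalled statements are then concatenated as context for a separate reader
# model, which produces the final answer to the query (extrinsic evaluation).
```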
Experimental Setup
Our research question is how to shepherd a PLM to recall a series of knowledge statements and derive a "chain of thought" for multi-step reasoning. To this end, we conduct experiments on several datasets that require complex multi-step reasoning and compare different ways to guide the PLM via prompt/prefix tuning, fine-tuning, and our prompter design. We use both intrinsic and extrinsic metrics to evaluate the quality of recalled knowledge, considering both end answer accuracy and coverage of intermediate evidence.
Datasets & Preprocessing
We conduct experiments on three datasets involving multi-step reasoning which include annotations for knowledge statements relevant to the queries: 2WikiMultiHopQA (abbreviated as 2Wiki) (Ho et al., 2020), R4C (Inoue et al., 2020), and a scientific commonsense reasoning dataset (abbreviated as LoT 2 ) constructed by (Talmor et al., 2020b). 2WikiMultiHopQA (Ho et al., 2020). 2Wiki is a recent large scale multi-hop QA dataset, which contains in total over 192k (167k train, 12.5k development, and 12.5k test) samples constructed jointly from Wikipedia and Wikidata. Since the test set is private, we randomly split the original development set into our development & test set (6k samples each). The dataset format largely follows HotpotQA (Yang et al., 2018), but includes more diverse reasoning types of questions and detailed annotations of evidence paths for each question.
Here, an evidence path is an ordered list of (subject entity, relation, object entity) knowledge base triplets. We use the question as the query q, and use a simple template to convert each triplet in the evidence path into a natural language statement, forming C q . Due to the large training set size and limited computing budget, we randomly sample 10% of the training data to form our final training set, which has the side benefit of largely reducing the test/train overlap (more details in §4.2).
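For illustration, a template of the following form would turn a (subject, relation, object) triplet into a statement in the style of the examples shown in the appendix figures (e.g., "Sohrab Modi is director of Khoon Ka Khoon"); the exact wording used in our preprocessing is an assumption here.

```python
from typing import List, Tuple

def triplet_to_statement(subj: str, rel: str, obj: str) -> str:
    """Verbalize a KB triplet as a simple natural language statement."""
    return f"{obj} is {rel} of {subj}"

def evidence_path_to_statements(path: List[Tuple[str, str, str]]) -> List[str]:
    """Convert an ordered evidence path into the knowledge chain C_q."""
    return [triplet_to_statement(*t) for t in path]

# Example (from the query about Gwilym Lloyd George used earlier):
C_q = evidence_path_to_statements([
    ("Gwilym Lloyd George", "father", "David Lloyd George"),
])
```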
R4C (Inoue et al., 2020). R4C is another recent multi-hop QA dataset containing annotated evidence paths. The dataset contains 4.6k examples (2.4k train, 2.2k development) constructed on top of HotpotQA, where the authors used crowdsourcing efforts to collect the evidence paths in the form of simple subject-verb-object natural language sentences. Again, we randomly split the development set (there's no test set given) into our development and test set (1.1k samples each). We use the question as our query q and use the annotated evidence sentences as C q .
LoT (Talmor et al., 2020b). The dataset involves reasoning over a set of taxonomic relations, constructed from ConceptNet and WordNet. Each example consists of a hypothesis (e.g., "A whale has a belly button"), which we treat as the query q, and a set of simple facts including hypernym rules (e.g., "A whale is a mammal", "A whale is a vertebrate") and properties (e.g., "A mammal has a belly button", "A vertebrate has a tail"). By reasoning over the facts and selecting the correct chain of hypernym rule & property ("A whale is a mammal", "A mammal has a belly button"), one could verify or deny the given hypothesis. One subtle issue with directly using the gold hypernym rule and property as C_q is that, during the first step, it would be difficult to identify the correct object entity without looking ahead at the properties in the second step. Therefore, for the first step, we concatenate all the hypernymic objects appearing in the dataset w.r.t. the same subject to form c_1. We drop samples from the original training set where the relevant facts are not (or only partially) provided, and obtain 9.4k/1.2k/1.2k samples for training/development/testing. For 2Wiki and R4C, the number of inference steps is set to 4, since over 99% of the samples have at most 4 inference steps. For LoT, we set the number of inference steps to 2. Overall, we regard 2Wiki as our "major" evaluation dataset due to its largest scale (despite our downsampling) and diverse types of queries, and use it to conduct a faithfulness study of prompting in §4.2. Some examples of the processed data samples are shown in Appendix A.6.
Compared Methods
We compare our proposed iterative Context-Aware Prompter (iCAP) with Prompt Tuning (Prompt-T), Prefix Tuning (Prefix-T), and PLM fine-tuning (PLM-FT) under both the non-iterative and iterative settings. The iterative setting is described in §2.2; for the non-iterative setting, we simply concatenate all the knowledge statements in C_q to form one single piece of knowledge for each query. In extrinsic evaluation, we also compare with fine-tuning the PLM on (query, answer) pairs without knowledge recall (PLM-QA), which measures how well the PLM can solve these multi-step inference problems directly, a skill at which PLMs are poor as shown by previous work. We additionally report final inference results when feeding ground truth contexts to the reader (Oracle-RD) as an upper bound for extrinsic evaluation. Relation-specific prompting methods such as (Shin et al., 2020; Zhong et al., 2021; Petroni et al., 2019) are not included since they are not directly applicable to our problem setup, as discussed earlier.
Our focus in this work is on knowledge elicitation from PLMs, and hence we do not aim to compare with previous dataset-specific methods, which typically have different problem formulations and foci from ours and utilize other attributes of the datasets which we do not use (e.g., gold & distractor evidence paragraphs).
Evaluation Metric
We use both intrinsic and extrinsic metrics to evaluate the PLM-recalled knowledge.
Intrinsic Evaluation. Here, we directly measure the quality of recalled knowledge. While there are standard metrics for evaluating text generation such as BLEU and ROUGE, these metrics generally fail to capture the entity-centric nature of the recalled knowledge we wish to examine (more details are included in Appendix A.3). Therefore, we propose a set of measures that are better suited for the tasks in our experiments. For 2Wiki and R4C, we evaluate the ratio of samples where the recalled knowledge contains the answer entity (Ans.R); we also compute this ratio among only those samples where the answer entity does not appear in the query (Ans.R̄). For 2Wiki and LoT, we also evaluate the evidence coverage of recalled contexts by computing the average ratio of gold evidence appearing in the recalled context (Evi.R) and, as a stricter measure, the ratio of samples where all gold evidence is recalled (Evi.R*). For 2Wiki, we use the entities from the annotated KB triples as evidence. For LoT, we consider the hypernym rule/property as evidence, where in the 1st step we deem the hypernym rule correct if the gold object is recalled, and use exact match for the recalled property in the 2nd step.
Extrinsic Evaluation. We also conduct extrinsic evaluation by measuring how much the recalled knowledge helps find the response to the query. Similar to reading comprehension, we concatenate all recalled knowledge as the context, and use a reader which tries to infer the answer given the query and context. For 2Wiki and R4C, we first pre-train the reader using the ground truth contexts, and then fine-tune it on the recalled contexts 3; for LoT, we use a rule-based reader directly 4. We report Exact Match (EM) and Answer F1 scores for 2Wiki & R4C, and the EM score for LoT, where the answer is restricted to yes/no.
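A rough sketch of these intrinsic measures is given below; the use of plain substring containment for entity matching is an assumption made for illustration, not necessarily the exact matching rule used in our evaluation scripts.

```python
from typing import List

def ans_recall(recalled: List[str], answer: str) -> bool:
    """Ans.R (per sample): whether the recalled knowledge contains the answer entity.
    Ans.R̄ averages this only over samples whose answer entity is absent from the query."""
    return answer in " ".join(recalled)

def evidence_ratio(recalled: List[str], gold_evidence: List[str]) -> float:
    """Evi.R (per sample): fraction of gold evidence covered by the recalled context."""
    text = " ".join(recalled)
    return sum(e in text for e in gold_evidence) / max(len(gold_evidence), 1)

def evidence_all(recalled: List[str], gold_evidence: List[str]) -> bool:
    """Evi.R* (per sample): stricter measure requiring all gold evidence to be recalled."""
    return evidence_ratio(recalled, gold_evidence) == 1.0
```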
Implementation Details
Architectures & hyperparameters. We use BART-large (Lewis et al., 2020) for our PLM and RoBERTa-base (Liu et al., 2019) for our prompter, which is several times smaller than the PLM 5 . We also include some results & discussion for different prompter scales in Appendix A.7. We use another BART-large for the reader in extrinsic evaluation 6 .
Our implementation is based on Hugging Face Transformers (Wolf et al., 2020). We use the AdamW optimizer (Loshchilov and Hutter, 2019) and a linear learning rate scheduler with a warmup ratio of 0.06 for optimization. For hyperparameters, we use a batch size of 32, 128, and 32 for 2Wiki, LoT, and R4C respectively, and tune the learning rate from {4e-5, 8e-5, 4e-4, 8e-4, 4e-3, 8e-3, 4e-2} and the length of encoder/decoder prompts 7 from {15, 30, 45, 60, 80, 100}; more details are included in Appendix A.1. We run most experiments with three random seeds and report the mean scores.
Knowledge Enhancement for PLM. Since our focus is on how to make PLMs better at recalling relevant knowledge for multi-step inference, we need to make sure the PLM actually memorizes all the relevant knowledge in the first place, so that the results can be attributed solely to the effectiveness of knowledge recall. Hence, we conduct knowledge enhancement for the PLM, where we additionally pre-train the PLM to recover separately masked elements in the triplets which form the knowledge statements, a strategy similar to salient span masking (Roberts et al., 2020; Guu et al., 2020). More details can be found in Appendix A.2. Note that the same PLM after knowledge enhancement is used across all compared methods.
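To illustrate the knowledge enhancement step, the sketch below builds one seq2seq training pair per masked triplet element; the mask token, the verbalization template, and the choice of reconstructing the full statement as the target are assumptions for illustration.

```python
from typing import List, Tuple

MASK = "<mask>"  # assumed single mask token of the seq2seq PLM

def enhancement_pairs(subj: str, rel: str, obj: str) -> List[Tuple[str, str]]:
    """One (masked_input, target) pair per separately masked triplet element."""
    statement = f"{obj} is {rel} of {subj}"
    pairs = []
    for span in (obj, rel, subj):
        pairs.append((statement.replace(span, MASK, 1), statement))
    return pairs

# e.g. "<mask> is father of Gwilym Lloyd George" -> "David Lloyd George is father of ..."
for src, tgt in enhancement_pairs("Gwilym Lloyd George", "father", "David Lloyd George"):
    print(src, "->", tgt)
```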
Results
Effectiveness of iCAP
The results for intrinsic and extrinsic evaluation are summarized in Table 1 and Table 2, respectively. In particular, on the 2Wiki dataset, which has the largest scale and diversity of reasoning types, iCAP achieves more than 15% and 10% absolute gains in F1 over Prompt-Tuning and Prefix-Tuning, respectively. Overall, the results clearly show the effectiveness of the iterative scheme and our proposed context-aware prompter design. However, we note that even the best results (prompting-based or fine-tuning-based) still lag far behind Oracle-RD, which uses ground truth contexts as input, suggesting large room for improvement with better methods for knowledge elicitation from PLMs. Some failure cases of iCAP are included in Appendix A.6.
Helpfulness of Knowledge Recall for Multi-step Inference. The result obtained by fine-tuning the PLM on (query, answer) pairs directly without knowledge recall (PLM-QA) is outperformed by almost all other compared methods, verifying the previous findings that PLMs face difficulties in using their stored knowledge to perform multi-step inference tasks. The large gain obtained by methods based on knowledge recall shows the helpfulness of deriving a "chain of thought" (especially iteratively) from PLMs for multi-step inference.
Faithfulness of Prompting
(Zhong et al., 2021) raised and studied some important questions in optimization-based prompting methods: Are the prompts "really" doing prompting? Is it possible that they capture dataset regularities too? The issue is related to the notion of test-train overlap, where the dataset may contain some underlying spurious patterns that the model exploits, and thus standard evaluations could not truthfully measure their generalization behaviors. Here, we take this concern seriously and conduct a series of analyses to faithfully interpret the results we obtained. We focus on 2Wiki under the iterative setting for our analysis.
Test-Train Overlap. For each development & test sample, we compute the ratio of knowledge statements in C_q that also appear in the training set, meaning that during certain steps for some training samples, the model has "seen" the exact same piece of knowledge. Note that this is a rather strict measure: even if all the knowledge pieces in C_q are seen during training, they may come from completely different samples & steps and hence be organized in different ways. We summarize the overlap ratios of development & test set samples in Table 7 in Appendix A.5. It can be seen that the down-sampling has the side benefit of greatly reducing the test-train overlap; in particular, the percentage of examples where all knowledge statements are seen during training is reduced from almost 30% to less than 2%, and more importantly, over 70% of the samples have no overlap. This suggests a rather low risk for the existence of strong spurious regularities in our setup.
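The per-sample statistic is straightforward; a sketch of how it could be computed is shown below (exact string matching of statements is an assumption).

```python
from typing import List, Set

def overlap_ratio(C_q: List[str], train_statements: Set[str]) -> float:
    """Fraction of a sample's knowledge statements that also occur in the training set."""
    return sum(c in train_statements for c in C_q) / max(len(C_q), 1)
```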
Random Control Experiments.
Examining the data-level statistics is helpful, but still not sufficient in terms of revealing the spurious regularities that different methods may capture. Hence, we follow (Zhong et al., 2021) to conduct two random control experiments. In the Random Model experiment, we re-initialize all parameters of the PLM to clean out its internal knowledge, and proceed with the same training procedure as earlier. In this way, any positive signal obtained could only be attributed to dataset regularities captured by the method. In the Random Embedding experiment, we re-initialize only the input embeddings of the PLM, a setting analogous to the control task introduced in (Hewitt and Liang, 2019) (more discussions can be found in (Zhong et al., 2021)). Here we only proceed with the iterative setting and conduct intrinsic evaluation, where the results are summarized in Table 3. It can be seen that 1) PLM fine-tuning captures significantly more regularities in the dataset than prompting-based methods; 2) While our proposed method captures a bit more regularities than Prompt/Prefix Tuning, they still remain at a very small level. Overall, our random control experiments show that the exploitation of spurious dataset patterns by the evaluated prompting methods is rather mild, and that by PLM fine-tuning could potentially be larger.
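As a sketch of the two control settings (with everything else in the training procedure unchanged), one could reset the relevant weights as follows; the crude normal re-initialization is an assumption standing in for whatever initialization scheme was actually used.

```python
import torch

def random_model(plm):
    """Random Model: re-initialize every parameter, wiping out stored knowledge."""
    for p in plm.parameters():
        torch.nn.init.normal_(p, std=0.02)
    return plm

def random_embedding(plm):
    """Random Embedding: re-initialize only the input token embeddings."""
    torch.nn.init.normal_(plm.get_input_embeddings().weight, std=0.02)
    return plm
```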
Prompter Attention Visualization. To see whether our proposed iCAP behaves in the way we expect, one direct approach is to examine the inner workings of the prompter. Towards this end, we visualize the attentions during the prompter forward pass at different steps. We randomly choose examples in the development/test set, and use BertViz (Vig, 2019) to visualize the attentions within the forward pass of the prompter after the following processing steps: 1) we aggregate the attention weights of different attention heads within the same transformer layer; 2) to better view the prompt tokens as one single unit, we average the attentions across different prompt tokens to form one "master" prompt token; 3) we drop all special tokens (BOS, EOS) for cleaner visualization. One example (the same example which we use in Figure 1) is in Figure 3, and we include more examples in Appendix A.8. As briefly illustrated earlier in §1, during the 1st step towards solving this query, the prompter should focus on the part concerning "father" of "Gwilym Lloyd George"; during the 2nd step, the prompter should integrate the answer "David Lloyd George" from the 1st step evidence and the "place of birth" part in the query to synthesize the prompt. We can see that the attention distributions at different steps accord well with our expectations. However, we note that attention visualization is only a qualitative approach; more systematic ways for examining the inner working behaviors of transformers remain an open challenge.
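The three post-processing steps can be expressed compactly; the sketch below assumes the prompt tokens sit at the front of the prompter input (as in the earlier sketch) and that `attentions` is the per-layer tuple returned with output_attentions=True for a single example.

```python
import torch

def aggregate_attentions(attentions, prompt_len: int, keep_mask: torch.Tensor):
    """Per-layer attention maps with (1) heads averaged, (2) prompt tokens merged
    into one "master" token, and (3) special tokens dropped via keep_mask (a
    boolean mask over the merged sequence)."""
    per_layer = []
    for layer_attn in attentions:                      # each: (1, heads, seq, seq)
        a = layer_attn[0].mean(dim=0)                  # (1) average over heads
        a = torch.cat([a[:prompt_len].mean(0, keepdim=True), a[prompt_len:]], dim=0)
        a = torch.cat([a[:, :prompt_len].mean(1, keepdim=True), a[:, prompt_len:]], dim=1)
        per_layer.append(a[keep_mask][:, keep_mask])   # (3) drop BOS/EOS positions
    return per_layer
```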
Related Work
Memorization and Reasoning in PLMs. With the recent success of large-scale pre-trained language models (PLMs), there has been growing interest in investigating what is captured by these PLMs during pre-training (Talmor et al., 2020a;Rogers et al., 2020;Kassner et al., 2020). Studies have shown that in addition to learning linguistic knowledge about language use, PLMs are capable of memorizing a great amount of world knowledge (Rogers et al., 2020), yielding competitive performance on knowledge probing (Petroni et al., 2019;Shin et al., 2020;Zhong et al., 2021) and other knowledgeintensive tasks such as question answering (Roberts et al., 2020) and fact checking (Lee et al., 2020), without resorting to any external knowledge source.
On the other hand, other work such as (Talmor et al., 2020a;Kassner et al., 2020;Rae et al., 2021) reveals that PLMs face difficulties in recalling their stored knowledge for multi-step inferences (such as answering complex, multi-hop questions), which is also verified in our experiments.
Prompt Learning. One type of method for eliciting knowledge from PLMs is prompting, which has recently been attracting increasing research interest. Prompting methods seek to re-frame queries into prompts which accord with the PLM's input format, and extract useful information from the predicted results. The benefit of not needing to tune PLMs makes prompting especially appealing as PLMs scale up in size. In this work, we are interested in developing prompting methods which enable PLMs to recall a series of relevant knowledge statements for multi-step inference. Previous work along this direction mainly uses manually designed prompts/templates suited for specific datasets (Paranjape et al., 2021; Mishra et al., 2021; Shwartz et al., 2020); instead, we seek to develop a general method which can learn to construct appropriate prompts automatically. Concurrent to our work, Chain-of-Thought (CoT) prompting (Wei et al., 2022b) shares similar high-level ideas, where the authors propose to provide intermediate reasoning steps in the prompts to encourage the PLM to perform step-by-step inference. While CoT shows great success, we note it is one of the emergent abilities of large language models (Wei et al., 2022a) and only works well with extremely large PLMs (typically >100B parameters) such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022). In our work, we use PLMs that are several orders of magnitude smaller than those used in CoT and demand much less computing resources. We hope our efforts can contribute towards developing LM-based systems with strong multi-step reasoning abilities at a moderate scale.
For existing work on learning-based prompting, (Shin et al., 2020) proposes to use gradient-guided search to find appropriate discrete prompt tokens in the PLM's vocabulary to form prompt templates. While the resulting prompts are readable, most of them have very low fluency and interpretability. (Zhong et al., 2021; Qin and Eisner, 2021) propose to optimize the prompts in continuous space instead, which brings large benefits in both effectiveness and optimization efficiency. (Zhong et al., 2021) also raises and studies the question of whether learning-based prompting could exploit spurious dataset regularities that would weaken the validity of standard evaluation results, a concern we address seriously in our work. (Lester et al., 2021; Li and Liang, 2021) follow the continuous prompting paradigm and tune task-level prompts for lightweight adaptation of PLMs. Overall, existing prompt learning methods are either restricted to cases where there exists a single, identifiable relation/predicate within the query (Zhong et al., 2021; Qin and Eisner, 2021; Shin et al., 2020), or are static and not sensitive to sample-wise inputs (Lester et al., 2021; Li and Liang, 2021).
Iterative Knowledge Retrieval. We are also inspired by methods that iteratively retrieve knowledge from explicit knowledge sources for multi-step reasoning, such as (Xiong et al., 2021; Qi et al., 2019; Khattab et al., 2021; Mo et al., 2022b). Our problem setting can be viewed as iterative retrieval over the implicit knowledge in PLMs, instead of from explicit knowledge sources.
Conclusion & Future Work
We explore an iterative prompting framework towards driving a "chain of thought" from PLMs for multi-step reasoning tasks. We show the superiority of this iterative scheme, and also the effectiveness of our proposed context-aware prompter design, which addresses key limitations of previous prompting methods when applied in this new scheme. In addition, we conduct both quantitative & qualitative analysis on the faithfulness of the learned prompting behaviors. In the future, we aim to further extend and apply our ideas to language model pretraining, with the hope that PLMs can be inherently equipped with stronger multi-step reasoning capabilities. The iterative framework we explore here also opens the possibility of human intervention and interaction during inference; namely, a human can track along the PLM's chain of thought and make edits and corrections at different steps, similarly as in (Mo et al., 2022a), which improves the transparency and trustworthiness of inference and also helps reduce error propagation along the reasoning process. We leave these investigations as future work.
Limitations
Experiments with larger-scale models. We explored a novel framework to prompt (or elicit knowledge from) PLMs for multi-step inference. Although our iterative prompting approach outperforms the baselines by a large margin, there is still much room to improve. One promising direction is to experiment with PLMs larger than what is used in our experiments (i.e., BART-large), which have better capacities for internalizing knowledge. However, when the models get larger, the associated computational cost will increase accordingly, which was also the main obstacle for us to pursue this front. We intend to conduct such experiments in the future when we have access to better computing resources.
Datasets with noisy knowledge statements. We used three recently released datasets (2Wiki, R4C, LoT) that require multi-step inference for our experiments. Compared with alternative datasets such as HotpotQA and StrategyQA (Geva et al., 2021), they include knowledge statements that have cleaner formats and are much more suitable for multi-step inference (in fact, this is one of the main motivations behind the construction of 2Wiki & R4C). For HotpotQA & StrategyQA, the knowledge statements are given as raw sentences from the evidence paragraphs and include information irrelevant to the original question. We exercised our best effort to process them (e.g., resolving coreferences, simplifying & decomposing nested sentences, etc.) into our desired formats, but the resulting knowledge statements are still very noisy. All methods including ours cannot be trained well with such knowledge statements. How to use such naturally occurring but noisy knowledge statements as supervision to guide PLMs to develop a chain of thought is an interesting topic to study in the future.
Exploring alternative architectural designs. Another limitation is that we only implemented an intuitive and simple instantiation (Figure 2) of our proposed context-aware prompter to illustrate its promise. It is an interesting future direction to further explore various design choices for iterative prompting, e.g., alternative designs for the Prompter-PLM interface, dynamic prompt length across different inference steps, etc.
A Appendix
A.1 Additional Details on Experiments
Hyperparameters. We set the batch size to be 32, 128, and 32, and train for 70, 50, and 40 epochs for 2Wiki, LoT & R4C respectively. Table 4 summarizes other hyperparameters used in our experiments.

Method | 2Wiki lr | 2Wiki pt_len | LoT lr | LoT pt_len | R4C lr | R4C pt_len
Prompt-T | 8e-3 | 80 | 4e-3 | 80 | 4e-3 | 60
Prefix-T | 8e-4 | 80 | 4e-4 | 60 | 4e-4 | 80
PLM-FT | 4e-5 | - | 4e-5 | - | 4e-5 | -
PLM-QA | 4e-5 | - | 8e-5 | - | 4e-5 | -
iCAP | 8e-5 | 30 | 8e-5 | 60 | 8e-5 | 30

Table 4: Hyperparameter settings for all compared methods. lr: learning rate, pt_len: prompt length.
A.2 More details on PLM Knowledge Enhancement
To make sure the PLM knows all the relevant knowledge for subsequent recall, we further pre-train the PLM to recover separately masked elements in the triplets which form the knowledge statements. For 2Wiki and LoT, we also additionally include knowledge statements that are not used in the dataset to make the setting more challenging; one can think of these extra knowledge statements as "distractors". For 2Wiki, we filter from the processed Wikidata triples provided by (Agarwal et al., 2021) by keeping those with subject entities appearing in the original knowledge statements, and in the end we obtain 383k extra knowledge statements vs. 240k original ones (note that while we downsample the training set during our main experiment, the knowledge enhancement step is performed on the full dataset). For LoT, we directly use the provided distractor knowledge in the original dataset. We don't add distractors for R4C because the provided knowledge statements are in natural language and it's hard to retrieve high-quality knowledge statements as such. We verified that the PLM after knowledge enhancement can indeed recover the masked elements in the knowledge statements with near-perfect accuracy.
A.3 Standard Metrics for Intrinsic Evaluation
The intrinsic evaluation results obtained by using standard text generation metrics (ROUGE 1/2/L & BLEU) for 2Wiki are shown in Table 5. Comparing with results using our proposed metrics (Table 1), it could be seen that while overall they show the same trend, the standard evaluation results tend to group closer due to their lack of focus on the important parts (e.g., entities) of the recalled evidence.
A.4 Prompter with Automatic Stopping
Here we explore augmenting the Prompter with an additional Stopper module which can learn to stop the knowledge recall process appropriately when the recalled evidence pieces are enough to answer the query. Since the representations from the Prompter are already rich, we design the Stopper module to be a simple feed-forward DNN on top of the [CLS] embedding of the Prompter. The DNN has two hidden layers of dimensions 500 and 100 respectively, and outputs the probability of stopping the knowledge recall process. The loss for the Stopper is a standard binary classification loss, which is combined with the original Prompter loss with a weight factor of 0.1. The Prompter and Stopper are jointly trained under this combined objective. We experiment on 2Wiki only and run the experiment once due to efficiency considerations. We first evaluate the frequency with which the Stopper decides to stop the recall at the same number of steps as in the ground truth knowledge pieces. Note that this is not a perfect measure, as the actual recalled knowledge is different from the ground truth knowledge. The frequency is 98.5%, which indicates that the Stopper can learn to stop the recall process appropriately. Then we use our intrinsic measures to see the quality of the recalled evidence after truncation by the Stopper; the results are shown in Table 6. Note that here, the "iCAP" setting (top row) is different from that in Table 1 (despite having the same name) since the prompter is trained together with the stopper for fair comparison. It can be seen from the results that there are small performance drops after truncation by the Stopper, which suggests that the Stopper can learn to stop the knowledge recall process rather appropriately but not perfectly.
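A minimal sketch of such a Stopper head is shown below; the activation functions and the exact way the stopping loss is combined with the recall loss are assumptions beyond the dimensions and weight factor stated above.

```python
import torch
import torch.nn as nn

class Stopper(nn.Module):
    """Feed-forward head on the Prompter's [CLS] representation that predicts the
    probability of stopping the knowledge recall process at the current step."""

    def __init__(self, prompter_dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(prompter_dim, 500), nn.ReLU(),
            nn.Linear(500, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(cls_embedding)).squeeze(-1)

bce = nn.BCELoss()

def joint_loss(recall_loss, stop_prob, stop_label, weight: float = 0.1):
    """Combined objective: seq2seq recall loss plus weighted stopping loss."""
    return recall_loss + weight * bce(stop_prob, stop_label)
```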
A.5 Test-Train Overlap
Table 7 shows the 2Wiki Test-Train knowledge statement overlap, where 2Wiki (full) corresponds to the statistics using the full training set, and 2Wiki (down-sampled) corresponds to the down-sampled training set that we used in our actual experiment. The inference steps in 2Wiki are mostly 2 or 4, so overall there are higher chances for the coverage ratio to be 50%.
A.6 Examples of processed data samples & failure cases of iCAP
Table 9 shows examples of our processed data samples for each dataset and each sub-category, along with some failure cases of our proposed method.
A.7 Variants of Prompter Scales
While we used RoBERTa-base to instantiate the prompter in our main experiments, it is also interesting to see how the performance varies along different scales of the prompter. Towards this end, we conducted experiments on 2Wiki with two smaller scale prompters: BERT-small (28.8 million parameters) & BERT-tiny (4.4 million parameters). The intrinsic evaluation results are shown in Table 8. It can be seen that the performance grows as the prompter scale grows; in addition, BERT-small can also achieve an impressive performance (underperforming RoBERTa-base used in our main experiments by just a small gap) while BERT-tiny basically fails. This suggests that the prompter still needs to be larger than a certain scale for our method to work well.
A.8 More Examples on Prompter Attention Visualizations
Figure 3: Prompter Attention Visualization. Attentions during the forward pass for the 1st & 2nd step are shown on the left & right respectively. Different colors correspond to different transformer layers. More examples of different reasoning types are included in Appendix A.8.
Figures 4, 5, 6, and 7 show additional example prompter attention visualizations in the 2Wiki dataset, each corresponding to a different reasoning type as indicated in the captions.
Figure 4: Prompter attention visualization. Reasoning type: Composition.
Figure 5: Prompter attention visualization. Reasoning type: Comparison.
Figure 6: Prompter attention visualization. Reasoning type: Inference.
q: Which film whose director is younger, Khoon Ka Khoon or Idaho Transfer? Cq: [Sohrab Modi is director of Khoon Ka Khoon, Peter Fonda is director of Idaho Transfer, 2 November 1897 is date of birth of Sohrab Modi, February 23, 1940 is date of birth of Peter Fonda]
Figure 7: Prompter attention visualization. Reasoning type: Bridge-comparison.
Method | 2Wiki Evi.R* | 2Wiki Evi.R | 2Wiki Ans.R | 2Wiki Ans.R̄ | LoT Evi.R* | LoT Evi.R | R4C Ans.R | R4C Ans.R̄
PLM-FT | 10.3 | 33.8 | 12.3 | 45.3 | 41.8 | 70.8 | 38.1 | 43.9
PLM-FT (Iter) | 26.3 | 48.9 | 35.4 | 60.6 | 41.3 | 70.1 | 43.1 | 48.5
Prompt-T | 5.5 | 22.3 | 6.6 | 41.3 | 35.3 | 62.8 | 28.2 | 33.4
Prompt-T (Iter) | 10.8 | 27.5 | 16.7 | 46.2 | 33.3 | 63.4 | 30.6 | 36.0
Prefix-T | 6.7 | 25.9 | 7.6 | 44.2 | 31.8 | 64.0 | 27.2 | 33.9
Prefix-T (Iter) | 14.8 | 33.9 | 22.5 | 53.2 | 31.6 | 64.9 | 33.7 | 39.8
iCAP | 22.0 | 42.1 | 28.6 | 54.6 | 34.1 | 65.0 | 36.8 | 41.5
Table 1: Results for Intrinsic Evaluation, where "(Iter)" indicates the iterative setting. All metrics are defined in §3.3 and overall measure the gold (answer) entity/object coverage of the recalled knowledge from different perspectives.
Method | 2Wiki EM | 2Wiki F1 | LoT EM | R4C EM | R4C F1
Oracle-RD | 97.8 | 98.9 | 100.0 | 75.7 | 86.8
PLM-QA | 24.1 | 29.3 | 68.3 | 22.6 | 28.8
PLM-FT | 33.6 | 37.8 | 76.0 | 25.3 | 36.8
PLM-FT (Iter) | 45.5 | 50.9 | 77.8 | 32.2 | 42.5
Prompt-T | 26.9 | 31.0 | 65.9 | 16.6 | 25.9
Prompt-T (Iter) | 25.0 | 30.2 | 68.8 | 22.4 | 30.4
Prefix-T | 31.6 | 35.6 | 69.0 | 19.2 | 29.2
Prefix-T (Iter) | 31.1 | 36.4 | 72.6 | 24.0 | 34.2
iCAP | 42.8 | 47.9 | 73.8 | 25.7 | 35.2
Table 2: Results for Extrinsic Evaluation, where the recalled knowledge of each method is used for final inference, except for Oracle-RD and PLM-QA.
Method | Random Model Evi.R* | Random Model Evi.R | Random Model Ans.R | Random Model Ans.R̄ | Random Embedding Evi.R* | Random Embedding Evi.R | Random Embedding Ans.R | Random Embedding Ans.R̄
PLM-FT | 1.77 | 5.20 | 3.76 | 37.48 | 4.10 | 11.47 | 6.52 | 37.18
Prompt-T | 0.0 | 0.0 | 0.0 | 0.0 | 0.006 | 0.013 | 0.003 | 0.002
Prefix-T | 0.001 | 0.0 | 0.0 | 0.0 | 0.009 | 0.014 | 0.004 | 0.002
iCAP | 0.001 | 0.001 | 0.0 | 0.0 | 1.49 | 2.83 | 0.98 | 0.59
Table 3: Intrinsic Evaluation Results on Random Control Experiments. Here we only focus on the iterative setting using the 2Wiki dataset.
Method | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU
PLM-FT | 74.3 | 62.4 | 72.7 | 52.9
PLM-FT (Iter) | 83.6 | 76.3 | 82.3 | 70.8
Prompt-T | 68.7 | 55.5 | 66.4 | 45.4
Prompt-T (Iter) | 74.5 | 64.7 | 73.7 | 56.7
Prefix-T | 70.8 | 57.8 | 68.9 | 48.7
Prefix-T (Iter) | 79.0 | 70.3 | 77.6 | 64.0
iCAP | 79.2 | 70.5 | 78.3 | 64.9
Table 5: Intrinsic evaluation on 2Wiki using standard text generation metrics (ROUGE & BLEU).
Method | Evi.R* | Evi.R | Ans.R | Ans.R̄
iCAP | 20.0 | 39.1 | 26.5 | 54.0
iCAP (with stopper) | 18.4 | 37.5 | 22.9 | 51.8
Table 6: Intrinsic evaluation results from jointly training the Prompter and the Stopper, which learns to stop the knowledge recall process when it decides that the recalled knowledge is adequate for answering the query.
Table 7: Test/Train simple knowledge overlap on 2Wiki. The horizontal bar represents the percentage range of simple knowledge statements appearing in the training set (0%, 1%–20%, 21%–40%, 41%–60%, 61%–80%, 81%–99%, 100%), and the content values are the percentages of development & test set examples that fall into the corresponding range.

Prompter | Evi.R* | Evi.R | Ans.R | Ans.R̄
BERT-tiny | 6.0 | 17.7 | 9.0 | 35.3
BERT-small | 21.4 | 41.2 | 29.1 | 54.2
Table 8: 2Wiki intrinsic evaluation results with two smaller-scale prompter instantiations.
1 Our source code is available at https://github.com/sunlab-osu/IterPrompt.
2 The abbreviation here comes from the phrase "Leap-of-Thought" in the paper title of (Talmor et al., 2020b).
3 We found in our preliminary experiments that this approach gives the best results across different methods.
4 LoT is constructed using templates, and therefore a rule-based reader can perfectly solve the inference task (100% accuracy when ground truth contexts are given, see Table 2).
5 While our prompter is also initialized using a Pre-trained Language Model, we'll use the term "PLM" to refer only to the larger & more knowledgeable one.
6 For the reader, we intentionally choose the same architecture as the PLM for a fair comparison with PLM-QA.
7 We set the length of encoder & decoder prompts to be the same, as we do not observe improvements otherwise in preliminary experiments.
AcknowledgementsThe authors would like to thank colleagues from the OSU NLP group for their thoughtful comments. This research was supported in part by Google Faculty Award, Google Research Scholar Award, NSF IIS 1815674, NSF CAREER 1942980, and Ohio Supercomputer Center (Center, 1987).
Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. Oshin Agarwal, Heming Ge, Siamak Shakeri, Rami Al-Rfou, 10.18653/v1/2021.naacl-main.278Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsOshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 3554-3565, Online. As- sociation for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901.
Ohio Supercomputer Center. 1987. Ohio supercomputer center.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant, Transactions of the Association for Computational Linguistics. 9Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346- 361.
Retrieval augmented language model pre-training. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Mingwei Chang, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning119Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.
Designing and interpreting probes with control tasks. John Hewitt, Percy Liang, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsJohn Hewitt and Percy Liang. 2019. Designing and in- terpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics.
Constructing a multihop QA dataset for comprehensive evaluation of reasoning steps. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, Akiko Aizawa, 10.18653/v1/2020.coling-main.580Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, Spain (OnlineInternational Committee on Computational LinguisticsXanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi- hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 6609-6625, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.
Universal language model fine-tuning for text classification. Jeremy Howard, Sebastian Ruder, 10.18653/v1/P18-1031Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.
R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. Naoya Inoue, Pontus Stenetorp, Kentaro Inui, 10.18653/v1/2020.acl-main.602Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsNaoya Inoue, Pontus Stenetorp, and Kentaro Inui. 2020. R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6740-6750, Online. Association for Computational Linguistics.
How can we know what language models know?. Zhengbao Jiang, Frank F Xu, Jun Araki, Graham Neubig, 10.1162/tacl_a_00324Transactions of the Association for Computational Linguistics. 8Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.
Are pretrained language models symbolic reasoners over knowledge?. Nora Kassner, Benno Krojer, Hinrich Schütze, 10.18653/v1/2020.conll-1.45Proceedings of the 24th Conference on Computational Natural Language Learning. the 24th Conference on Computational Natural Language LearningOnline. Association for Computational LinguisticsNora Kassner, Benno Krojer, and Hinrich Schütze. 2020. Are pretrained language models symbolic reasoners over knowledge? In Proceedings of the 24th Confer- ence on Computational Natural Language Learning, pages 552-564, Online. Association for Computa- tional Linguistics.
Baleen: Robust multi-hop reasoning at scale via condensed retrieval. Omar Khattab, Christopher Potts, Matei Zaharia, Advances in Neural Information Processing Systems. 34Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Baleen: Robust multi-hop reasoning at scale via condensed retrieval. Advances in Neural Infor- mation Processing Systems, 34.
Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, 10.1073/pnas.1611835114Proceedings of the National Academy of Sciences. 11413James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, Demis Hassabis, Clau- dia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526.
Question and answer test-train overlap in opendomain question answering datasets. Patrick Lewis, Pontus Stenetorp, Sebastian Riedel, 10.18653/v1/2021.eacl-main.86Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeOnline. Association for Computational LinguisticsPatrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open- domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.
Prefix-tuning: Optimizing continuous prompts for generation. Lisa Xiang, Percy Li, Liang, 10.18653/v1/2021.acl-long.353Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing1Long Papers)Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582- 4597, Online. Association for Computational Lin- guistics.
Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig, arXiv:2107.13586arXiv preprintPengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Decoupled weight decay regularization. Ilya Loshchilov, Frank Hutter, International Conference on Learning Representations. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Confer- ence on Learning Representations.
Catastrophic interference in connectionist networks: The sequential learning problem. Michael Mccloskey, Neal J Cohen, 10.1016/S0079-7421(08)60536-8Psychology of Learning and Motivation. Gordon H. Bower24Academic PressMichael McCloskey and Neal J. Cohen. 1989. Catas- trophic interference in connectionist networks: The sequential learning problem. In Gordon H. Bower, editor, Psychology of Learning and Motivation, vol- ume 24, pages 109-165. Academic Press.
Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi, arXiv:2109.07830Reframing instructional prompts to gptk's language. arXiv preprintSwaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, and Hannaneh Hajishirzi. 2021. Reframing instructional prompts to gptk's language. arXiv preprint arXiv:2109.07830.
Towards transparent interactive semantic parsing via step-by-step correction. Lingbo Mo, Ashley Lewis, Huan Sun, Michael White, Findings of the Association for Computational Linguistics: ACL 2022. Lingbo Mo, Ashley Lewis, Huan Sun, and Michael White. 2022a. Towards transparent interactive seman- tic parsing via step-by-step correction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 322-342.
Knowledge transfer between structured and unstructured sources for complex question answering. Lingbo Mo, Zhen Wang, Jie Zhao, Huan Sun, Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI). the Workshop on Structured and Unstructured Knowledge Integration (SUKI)Lingbo Mo, Zhen Wang, Jie Zhao, and Huan Sun. 2022b. Knowledge transfer between structured and unstruc- tured sources for complex question answering. In Proceedings of the Workshop on Structured and Un- structured Knowledge Integration (SUKI), pages 55- 66.
Prompting contrastive explanations for commonsense reasoning tasks. Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, Luke Zettlemoyer, 10.18653/v1/2021.findings-acl.366Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online. Association for Computational LinguisticsBhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2021. Prompting contrastive explana- tions for commonsense reasoning tasks. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4179-4192, Online. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
Query (2Wiki[Composition]): What is the place of birth of the performer of song La Terre Est Ronde?
Gold Knowledge: Orelsan is performer of La terre est ronde; Alençon is place of birth of Orelsan
Recalled Knowledge: Basshunter is performer of La Terre est ronde; Havana is place of birth of Basshunter

Query (2Wiki[Comparison]): Who was born first out of Emma Kealy and Viktor Podloucký?
Gold Knowledge: 29 May 1977 is date of birth of Emma Kealy; December 3, 1950 is date of birth of Viktor Podloucký
Recalled Knowledge: 30 March 1977 is date of birth of Emma Kealy; 9 October 1964 is date of birth of Viktor Podloucký

Query (2Wiki[Inference]): Who is the maternal grandfather of Vyacheslav Yaroslavich?
Gold Knowledge: Ingegerd Olofsdotter of Sweden is mother of Vyacheslav Yaroslavich; Olof Skötkonung is father of Ingegerd Olofsdotter of Sweden
Recalled Knowledge: Yaroslavlava of Avidia is mother of Vyacheslav Yaroslavich; Sovatoslav is father of Yaroslavlava of Avidia

Query (2Wiki[Bridge comparison]): Which film has the director died later, One Day In The Life Of Andrei Arsenevich or Wolves Of The Range?
Gold Knowledge: Chris Marker is director of One Day in the Life of Andrei Arsenevich; Sam Newfield is director of Wolves of the Range; 29 July 2012 is date of death of Chris Marker; November 10, 1964 is date of death of Sam Newfield
Recalled Knowledge: Chris Marker is director of One Day in the Life of Andrei Arsenevich; Wallace Fox is director of Wolves of the Range; 21 January 2013 is date of death of Chris Marker; March 30, 1999 is date of death of Andrei Arsenevich

Query (LoT): A evergreen is a important food source.
Gold Knowledge: A evergreen is a plant; A plant is not a important food source
Recalled Knowledge: A evergreen is a material, tree; A tree is a important food source

Query (R4C[Comparison]): Which documentary was filmed first, Almost Sunrise or Hail! Hail! Rock 'n' Roll?
Gold Knowledge: Almost Sunrise was filmed in 2016; Hail! Hail! Rock 'n' Roll was filmed in 1986
Recalled Knowledge: Almost Sunrise (album) is credited to American singer-songwriter Taylor Swift;

Query (R4C[Bridge]): Who was the chief executive officer of the second largest US car rental company by sales?
Gold Knowledge: The Hertz Corporation is the second-largest US car rental company; Robert L. Stone was chief executive officer of The Hertz Corporation
Recalled Knowledge: The Hertz Corporation is the second-largest US car rental company; Enterprise Rent-A-Car founder Jack Taylor was chief executive officer of Hertz

Table 9: Examples of our processed data samples for each dataset and sub-category (indicated in brackets), along with failure cases of our method.
7,717,053 | Corpus Effects on the Evaluation of Automated Transliteration Systems | Most current machine transliteration systems employ a corpus of known source-target word pairs to train their system, and typically evaluate their systems on a similar corpus. In this paper we explore the performance of transliteration systems on corpora that are varied in a controlled way. In particular, we control the number and prior language knowledge of human transliterators used to construct the corpora, and the origin of the source words that make up the corpora. We find that the word accuracy of automated transliteration systems can vary by up to 30% (in absolute terms) depending on the corpus on which they are run. We conclude that at least four human transliterators should be used to construct corpora for evaluating automated transliteration systems; and that although absolute word accuracy metrics may not translate across corpora, the relative rankings of system performance remain stable across differing corpora. | [
4093737,
5363670,
16731433
] | Corpus Effects on the Evaluation of Automated Transliteration Systems
Association for Computational Linguistics. June 2007.
Sarvnaz Karimi [email protected]
School of Computer Science and Information Technology
RMIT University
GPO Box 2476V, Melbourne 3001, Australia
Andrew Turpin
School of Computer Science and Information Technology
RMIT University
GPO Box 2476V, Melbourne 3001, Australia
Falk Scholer [email protected]
School of Computer Science and Information Technology
RMIT University
GPO Box 2476V, Melbourne 3001, Australia
Corpus Effects on the Evaluation of Automated Transliteration Systems
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics
The 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic. Association for Computational Linguistics, June 2007.
Most current machine transliteration systems employ a corpus of known source-target word pairs to train their system, and typically evaluate their systems on a similar corpus. In this paper we explore the performance of transliteration systems on corpora that are varied in a controlled way. In particular, we control the number and prior language knowledge of human transliterators used to construct the corpora, and the origin of the source words that make up the corpora. We find that the word accuracy of automated transliteration systems can vary by up to 30% (in absolute terms) depending on the corpus on which they are run. We conclude that at least four human transliterators should be used to construct corpora for evaluating automated transliteration systems; and that although absolute word accuracy metrics may not translate across corpora, the relative rankings of system performance remain stable across differing corpora.
Introduction
Machine transliteration is the process of transforming a word written in a source language into a word in a target language without the aid of a bilingual dictionary. Word pronunciation is preserved, as far as possible, but the script used to render the target word is different from that of the source language. Transliteration is applied to proper nouns and out-of-vocabulary terms as part of machine translation and cross-lingual information retrieval (CLIR) (AbdulJaleel and Larkey, 2003; Pirkola et al., 2006). Several transliteration methods are reported in the literature for a variety of languages, with their performance being evaluated on multilingual corpora. Source-target pairs are either extracted from bilingual documents or dictionaries (AbdulJaleel and Larkey, 2003; Bilac and Tanaka, 2005; Oh and Choi, 2006; Zelenko and Aone, 2006), or gathered explicitly from human transliterators (Al-Onaizan and Knight, 2002; Zelenko and Aone, 2006). Some evaluations of transliteration methods depend on a single unique transliteration for each source word, while others take multiple target words for a single source word into account. In their work on transliterating English to Persian, Karimi et al. (2006) observed that the content of the corpus used for evaluating systems could have dramatic effects on the reported accuracy of methods.
The effects of corpus composition on the evaluation of transliteration systems have not been specifically studied; previous work addresses them only implicitly, for example through the effects of different transliteration models (AbdulJaleel and Larkey, 2003), language families (Lindén, 2005), or application-based (CLIR) evaluation (Pirkola et al., 2006). In this paper, we report our experiments designed to explicitly examine the effect that varying the underlying corpus used in both training and testing systems has on transliteration accuracy. Specifically, we vary the number of human transliterators that are used to construct the corpus, and the origin of the English words used in the corpus.
Our experiments show that the word accuracy of automated transliteration systems can vary by up to 30% (in absolute terms), depending on the corpus used. Despite the wide range of absolute values in performance, the ranking of our two transliteration systems was preserved on all corpora. We also find that a human's confidence in the language from which they are transliterating can affect the corpus in such a way that word accuracy rates are altered.
Background
Machine transliteration methods are divided into grapheme-based (AbdulJaleel and Larkey, 2003;Lindén, 2005), phoneme-based (Jung et al., 2000;Virga and Khudanpur, 2003) and combined techniques (Bilac and Tanaka, 2005;Oh and Choi, 2006). Grapheme-based methods derive transformation rules for character combinations in the source text from a training data set, while phoneme-based methods use an intermediate phonetic transformation. In this paper, we use two grapheme-based methods for English to Persian transliteration. During a training phase, both methods derive rules for transforming character combinations (segments) in the source language into character combinations in the target language with some probability.
During transliteration, the source word s i is segmented and rules are chosen and applied to each segment according to heuristics. The probability of a resulting word is the product of the probabilities of the applied rules. The result is a list of target words sorted by their associated probabilities, L i .
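The generate-and-rank step just described can be illustrated with a small sketch. The segmentation, the rule table and its probabilities below are invented for illustration (placeholder ASCII strings stand in for Persian output); they are not the rules learned by either system.

```python
from itertools import product

# Hypothetical rule table learned in training: each source segment maps to
# candidate target strings with estimated probabilities (all values invented).
RULES = {
    "t":  [("T1", 0.7), ("T2", 0.3)],   # placeholder target strings
    "om": [("OM", 0.6), ("M", 0.4)],
}

def transliterate(segments, rules=RULES):
    """Apply one rule per source segment; a candidate's score is the product
    of the probabilities of the applied rules. Returns L_i, the candidate
    target words sorted by decreasing probability."""
    candidates = []
    for choice in product(*(rules[seg] for seg in segments)):
        target = "".join(t for t, _ in choice)
        prob = 1.0
        for _, p in choice:
            prob *= p
        candidates.append((target, prob))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

print(transliterate(["t", "om"]))   # e.g. [('T1OM', 0.42), ...]
```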
The first system we use (SYS-1) is an n-gram approach that uses the last character of the previous source segment to condition the choice of the rule for the current source segment. This system has been shown to outperform other n-gram based methods for English to Persian transliteration (Karimi et al., 2006).
The second system we employ (SYS-2) makes use of some explicit knowledge of our chosen language pair, English and Persian, and is based on the collapsed-vowel scheme presented by Karimi et al. (2006). In particular, it exploits the tendency for runs of English vowels to be collapsed into a single Persian character, or perhaps omitted from the Persian altogether. As such, segments are chosen based on surrounding consonants and vowels. The full details of this system are not important for this paper; here we focus on the performance evaluation of systems, not the systems themselves.
System Evaluation
In order to evaluate the list L_i of target words produced by a transliteration system for source word s_i, a test corpus is constructed. The test corpus consists of a source word, s_i, and a list of possible target words {t_ij}, where 1 ≤ j ≤ d_i, the number of distinct target words for source word s_i. Associated with each t_ij is a count n_ij, which is the number of human transliterators who transliterated s_i into t_ij.
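As a concrete illustration, each test-corpus entry can be represented as a source word together with its target variants and the count of transliterators who produced each variant. The words and counts below are hypothetical.

```python
# Hypothetical corpus entries: source word s_i with counts n_ij for each
# distinct target variant t_ij (target strings are ASCII placeholders).
CORPUS = [
    {"source": "tom",   "targets": {"TOM_A": 4, "TOM_B": 3}},
    {"source": "sarah", "targets": {"SARAH_A": 6, "SARAH_B": 1}},
]

for entry in CORPUS:
    d_i = len(entry["targets"])           # distinct target words for s_i
    n_i = sum(entry["targets"].values())  # transliterators who handled s_i
    print(entry["source"], d_i, n_i)
```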
Often the test corpus is a proportion of a larger corpus, the remainder of which has been used for training the system's rule base. In this work we adopt the standard ten-fold cross validation technique for all of our results, where 90% of a corpus is used for training and 10% for testing. The process is repeated ten times, and the mean result taken. Forthwith, we use the term corpus to refer to the single corpus from which both training and test sets are drawn in this fashion.
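A minimal sketch of this ten-fold procedure, assuming the corpus is a simple list of entries such as those above and `train_and_evaluate` is a placeholder for training a system on one part and scoring it on the other:

```python
def ten_fold(corpus, train_and_evaluate, k=10):
    """Split the corpus into k folds; for each fold, train on the other k-1
    folds and test on it; return the mean score over the k runs."""
    folds = [corpus[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        scores.append(train_and_evaluate(train, test))
    return sum(scores) / k
```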
Once the corpus is decided upon, a metric to measure the system's accuracy is required. The appropriate metric depends on the scenario in which the transliteration system is to be used. For example, in a machine translation application where only one target word can be inserted in the text to represent a source word, it is important that the word at the top of the system-generated list of target words (by definition the most probable) is one of the words generated by a human in the corpus. More formally, the first word generated for source word s_i, L_{i,1}, must be one of the t_ij, 1 ≤ j ≤ d_i. It may even be desirable that this is the target word most commonly used for this source word; that is, L_{i,1} = t_ij such that n_ij ≥ n_ik, for all 1 ≤ k ≤ d_i. Alternately, in a CLIR application, all variants of a source word might be required. For example, if a user searches for the English term "Tom" in Persian documents, the search engine should try to locate documents that contain both the three-letter and the two-letter Persian renderings of "Tom", two possible transliterations that would be generated by human transliterators. In this case, a metric that counts the number of t_ij that appear in the top d_i elements of the system-generated list, L_i, might be appropriate.
In this paper we focus on the "Top-1" case, where it is important for the most probable target word generated by the system, L_{i,1}, to be either the most popular t_ij (labeled Majority, with ties broken arbitrarily), or just one of the t_ij's (labeled Uniform because all possible transliterations are equally rewarded). A third scheme (labeled Weighted) is also possible, where the reward for t_ij appearing as L_{i,1} is n_ij / ∑_{j=1}^{d_i} n_ij; here, each target word is given a weight proportional to how often a human transliterator chose that target word. Due to space considerations, we focus on the first two variants only.
In general, there are two commonly used metrics for transliteration evaluation: word accuracy (WA) and character accuracy (CA) (Hall and Dowling, 1980). In all of our experiments, CA based metrics closely mirrored WA based metrics, and so conclusions drawn from the data would be the same whether WA metrics or CA metrics were used. Hence we only discuss and report WA based metrics in this paper.
For each source word in the test corpus of K words, word accuracy calculates the percentage of correctly transliterated terms. Hence for the majority case, where every source word in the corpus only has one target word, the word accuracy is defined as
MWA = |{ s_i : L_{i,1} = t_{i,1}, 1 ≤ i ≤ K }| / K
, and for the Uniform case, where every target variant is included with equal weight in the corpus, the word accuracy is defined as
UWA = |{ s_i : L_{i,1} ∈ {t_ij, 1 ≤ j ≤ d_i}, 1 ≤ i ≤ K }| / K.
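With the corpus structure sketched earlier and a system's ranked output list per source word, both variants can be computed directly; the function below is a sketch of these definitions, not code from either system.

```python
def word_accuracy(corpus, system_output, uniform=True):
    """corpus: list of {"source": s_i, "targets": {t_ij: n_ij}}.
    system_output: dict mapping each s_i to its ranked candidate list L_i.
    uniform=True gives UWA (top candidate matches any human variant);
    uniform=False gives MWA (top candidate matches the majority variant)."""
    correct = 0
    for entry in corpus:
        top = system_output[entry["source"]][0]         # L_{i,1}
        variants = entry["targets"]
        if uniform:
            correct += top in variants
        else:
            majority = max(variants, key=variants.get)  # ties broken arbitrarily
            correct += top == majority
    return 100.0 * correct / len(corpus)
```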
Human Evaluation
To evaluate the level of agreement between transliterators, we use an agreement measure based on Mun and Eye (2004).
For any source word s_i, there are d_i different transliterations made by the n_i human transliterators (n_i = ∑_{j=1}^{d_i} n_ij, where n_ij is the number of times source word s_i was transliterated into target word t_ij). When any two transliterators agree on the same target word, there are two agreements being made: transliterator one agrees with transliterator two, and vice versa. In general, therefore, the total number of agreements made on source word s_i is ∑_{j=1}^{d_i} n_ij (n_ij − 1). Hence the total number of actual agreements made on the entire corpus of K words is
A_act = ∑_{i=1}^{K} ∑_{j=1}^{d_i} n_ij (n_ij − 1).
The total number of possible agreements (that is, when all human transliterators agree on a single target word for each source word), is
A_poss = ∑_{i=1}^{K} n_i (n_i − 1).
The proportion of overall agreement is therefore
P_A = A_act / A_poss.
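The agreement proportion follows directly from the same per-word counts; a sketch:

```python
def agreement_proportion(corpus):
    """P_A = A_act / A_poss for entries of the form {"targets": {t_ij: n_ij}}."""
    a_act = a_poss = 0
    for entry in corpus:
        counts = list(entry["targets"].values())
        n_i = sum(counts)
        a_act += sum(n * (n - 1) for n in counts)
        a_poss += n_i * (n_i - 1)
    return a_act / a_poss if a_poss else 0.0
```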
Corpora
Seven transliterators (T1, T2, ..., T7: all native Persian speakers from Iran) were recruited to transliterate 1500 proper names that we provided. The names were taken from lists of names written in English on English Web sites. Five hundred of these names also appeared in lists of names on Arabic Web sites, and five hundred on Dutch name lists. The transliterators were not told of the origin of each word. The entire corpus, therefore, was easily separated into three sub-corpora of 500 words each based on the origin of each word. To distinguish these collections, we use E_7, A_7 and D_7 to denote the English, Arabic and Dutch sub-corpora, respectively. The whole 1500-word corpus is referred to as EDA_7. Dutch and Arabic were chosen under the assumption that most Iranian Persian speakers have little knowledge of Dutch, while their familiarity with Arabic should rank second after English. All of the participants held at least a Bachelors degree. Table 1 summarizes the information about the transliterators and their perception of the given task. Participants were asked to rate the difficulty of the transliteration of each sub-corpus on a scale from 1 (hard) to 3 (easy). Similarly, the participants' confidence in performing the task was rated from 1 (no confidence) to 3 (quite confident). The level of familiarity with second languages was also reported on a scale from 0 (not familiar) to 3 (excellent knowledge).
The information provided by participants confirms our assumption about the transliterators' knowledge of second languages: high familiarity with English, some knowledge of Arabic, and little or no prior knowledge of Dutch. Also, the majority of them found the transliteration of English terms of medium difficulty, while Dutch was considered mostly hard, and Arabic easy to medium.
Transliterator | Second Language Knowledge: English / Dutch / Arabic / Other | Difficulty, Confidence: English / Dutch / Arabic
T1 | 2 / 0 / 1 / - | 1,1 / 1,2 / 2,3
T2 | 2 / 0 / 2 / - | 2,2 / 2,3 / 3,3
T3 | 2 / 0 / 1 / - | 2,2 / 1,2 / 2,2
T4 | 2 / 0 / 1 / - | 2,2 / 2,1 / 3,3
T5 | 2 / 0 / 2 / Turkish | 2,2 / 1,1 / 3,2
T6 | 2 / 0 / 1 / - | 2,2 / 1,1 / 3,3
T7 | 2 / 0 / 1 / - | 2,2 / 1,1 / 2,2
Table 1: Transliterators' language knowledge (0=not familiar to 3=excellent knowledge), perception of difficulty (1=hard to 3=easy) and confidence (1=no confidence to 3=quite confident) in creating the corpus.
Results
Figure 1 shows the values of UWA and MWA for E_7, A_7, D_7 and EDA_7 using the two transliteration systems. Immediately obvious is that varying the corpora (x-axis) results in different values for word accuracy, whether by the UWA or MWA method. For example, if you chose to evaluate SYS-2 with the UWA metric on the D_7 corpus, you would obtain a result of 82%, but if you chose to evaluate it with the A_7 corpus you would receive a result of only 73%. This makes comparing systems that report results obtained on different corpora very difficult. Encouragingly, however, SYS-2 consistently outperforms SYS-1 on all corpora for both metrics except MWA on E_7. This implies that ranking system performance on the same corpus most likely yields a system ranking that is transferable to other corpora.
To further investigate this, we randomly extracted 100 corpora of 500 word pairs from EDA 7 and ran the two systems on them and evaluated the results using both MWA and UWA. Both of the measures ranked the systems consistently using all these corpora ( Figure 2). As expected, the UWA metric is consistently higher than the MWA metric; it allows for the top transliteration to appear in any of the possible variants for that word in the corpus, unlike the MWA metric which insists upon a single target word. For example, for the E 7 corpus using the SYS-2 approach, UWA is 76.4% and MWA is 47.0%.
Each of the three sub-corpora can be further divided based on the seven individual transliterators, in different combinations. That is, construct a sub-corpus from T1's transliterations, T2's, and so on; then take all combinations of two transliterators, then three, and so on. In general we can construct 7Cr such corpora from r transliterators in this fashion, all of which have 500 source words, but may have between one and seven different transliterations for each of those words. Figure 3 shows the MWA for these sub-corpora. The x-axis shows the number of transliterators used to form the sub-corpora. For example, when x = 3, the performance figures plotted are achieved on corpora built by taking all triples of the seven transliterators' transliterations.
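A sketch of how such majority-vote sub-corpora can be formed, assuming each transliterator's answers are stored in a dictionary keyed by transliterator ID (the storage layout is an assumption made for illustration):

```python
from itertools import combinations
from collections import Counter

def majority_subcorpora(per_transliterator, r):
    """per_transliterator: {transliterator_id: {source_word: target_word}},
    with every transliterator covering the same source words. Yields, for
    each combination of r transliterators, a corpus mapping every source
    word to its majority-vote target (ties broken arbitrarily)."""
    ids = sorted(per_transliterator)
    sources = per_transliterator[ids[0]].keys()
    for combo in combinations(ids, r):
        corpus = {}
        for s in sources:
            votes = Counter(per_transliterator[t][s] for t in combo)
            corpus[s] = votes.most_common(1)[0][0]
        yield combo, corpus
```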
From the boxplots it can be seen that performance varies considerably when the number of transliterators used to determine a majority vote is varied. However, the changes do not follow a fixed trend across the languages. For E_7, the range of accuracies achieved is high when only two or three transliterators are involved, ranging from 37.0% to 50.6% for SYS-2 and from 33.8% to 48.0% for SYS-1 (not shown) when only two transliterators' data are available. When more than three transliterators are used, the range of performance is noticeably smaller. Hence, if at least four transliterators are used, it is more likely that a system's MWA will be stable. This finding is supported by Papineni et al. (2002), who recommend that four people should be used for collecting judgments for machine translation experiments.
The corpora derived from A_7 show consistent median increases as the number of transliterators increases, but the median accuracy is lower than for the other languages. The D_7 collection does not show any stable results until at least six transliterators are used.
The results indicate that creating a collection used for the evaluation of transliteration systems based on a "gold standard" created by only one human transliterator may lead to word accuracy results that could show a 10% absolute difference compared to results on a corpus derived using a different transliterator. This is evidenced by the leftmost box in each panel of the figure, which has a wide range of results. Figure 4 shows this box in more detail for each collection, plotting the word accuracy for each user for all sub-corpora for SYS-2. The accuracy achieved varies significantly between transliterators; for example, for the E_7 collections, word accuracy varies from 37.2% for T1 to 50.0% for T5. This variance is more obvious for the D_7 dataset, where the difference ranges from 23.2% for T1 to 56.2% for T3. Origin language also has an effect: accuracy for the Arabic collection (A_7) is generally less than that of English (E_7). The Dutch collection (D_7) shows an unstable trend across transliterators. In other words, accuracy differs in a narrower range for Arabic and English, but in a wider range for Dutch. This is likely due to the fact that most transliterators found Dutch a difficult language to work with, as reported in Table 1.
Transliterator Consistency
To investigate the effect of individual transliterator consistency on system accuracy, we consider the number of Persian characters used by each transliterator on each sub-corpus, and the average number of rules generated by SYS-2 on the ten training sets derived in the ten-fold cross validation process, which are shown in Table 2. For example, when transliterating words from E_7 into Persian, T3 only ever used 21 out of the 32 characters available in the Persian alphabet; T7, on the other hand, used 24 different Persian characters. It is expected that an increase in the number of characters or rules provides more "noise" for the automated system, and hence may lead to lower accuracy. Superficially the opposite seems true for rules: the mean number of rules generated by SYS-2 is much higher for the EDA_7 corpus than for the A_7 corpus, and yet Figure 1 shows that word accuracy is higher on the EDA_7 corpus. A correlation test, however, reveals that there is no significant relationship between either the number of characters used or the number of rules generated and the resulting word accuracy of SYS-2 (Spearman correlation, p = 0.09 for characters and p = 0.98 for rules).
A better indication of "noise" in the corpus may be given by the consistency with which a transliterator applies a certain rule. For example, a large number of rules generated from a particular transliterator's corpus may not be problematic if many of the rules are applied with a low probability. If, on the other hand, there were many rules with approximately equal probabilities, the system may have difficulty distinguishing when to apply some rules and not others. One way to quantify this effect is to compute the self entropy of the rule distribution for each segment in the corpus for an individual. If p_ij is the probability of applying rule j, 1 ≤ j ≤ m, when confronted with source segment i, then H_i = −∑_{j=1}^{m} p_ij log_2 p_ij is the entropy of the probability distribution over the rules for that segment. H is maximized when the probabilities p_ij are all equal, and minimized when the probabilities are very skewed (Shannon, 1948). As an example, consider three rules that map the English segment t to three different Persian characters with probabilities 0.5, 0.3 and 0.2; for this distribution H_t = 0.79.
The expected entropy can be used to obtain a single entropy value over the whole corpus,
E = ∑_{i=1}^{R} (f_i / S) H_i,
where H_i is the entropy of the rule probabilities for segment i, R is the total number of segments, f_i is the frequency with which segment i occurs at any position in all source words in the corpus, and S is the sum of all f_i.
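Both quantities are straightforward to compute from a rule table and segment frequencies; the sketch below uses made-up segments and probabilities purely for illustration.

```python
import math

def rule_entropy(probs):
    """H_i = -sum_j p_ij * log2(p_ij) for one source segment."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_entropy(rule_probs, freq):
    """Frequency-weighted average of per-segment rule entropies.
    rule_probs: {segment: [p_i1, ..., p_im]}; freq: {segment: f_i}."""
    s = sum(freq.values())
    return sum(freq[seg] / s * rule_entropy(ps) for seg, ps in rule_probs.items())

# Illustrative only: two hypothetical segments with invented rule distributions.
print(expected_entropy({"t": [0.7, 0.3], "sh": [0.5, 0.25, 0.25]},
                       {"t": 40, "sh": 10}))
```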
The expected entropy for each transliterator is shown in Figure 5, separated by corpus. Comparison of this graph with Figure 4 shows that, generally, transliterators who have used rules inconsistently generate a corpus that leads to low accuracy for the systems. For example, T1, who has the lowest accuracy for all the collections in both methods, also has the highest expected entropy of rules for all the collections. For the E_7 collection, the maximum accuracy of 50.0% belongs to T5, who has the minimum expected entropy. The same applies to the D_7 collection, where the maximum accuracy of 56.2% and the minimum expected entropy both belong to T3. These observations are confirmed by a statistically significant Spearman correlation between expected rule entropy and word accuracy (r = −0.54, p = 0.003). Therefore, the consistency with which transliterators employ their own internal rules in developing a corpus has a direct effect on system performance measures.
Inter-Transliterator Agreement and Perceived Difficulty
Here we present various agreement proportions (P_A from Section 2.2), which give a measure of consistency in the corpora across all users, as opposed to the entropy measure, which gives a consistency measure for a single user. For E_7, P_A was 33.6%, for A_7 it was 33.3%, and for D_7, agreement was 15.5%. In general, humans agree less than 33% of the time when transliterating English to Persian. In addition, we examined agreement among transliterators based on their perception of the task difficulty shown in Table 1. For A_7, agreement among those who found the task easy was higher (22.3%) than among those who found it of medium difficulty (18.8%). P_A is 12.0% for those who found the D_7 collection hard to transliterate, while the six transliterators who found the E_7 collection of medium difficulty had P_A = 30.2%. Hence, the harder participants rated the transliteration task, the lower the agreement scores tend to be for the derived corpus. Finally, in Table 3 we show word accuracy results for the two systems on corpora derived from transliterators grouped by perceived level of difficulty on A_7. It is readily apparent that SYS-2 outperforms SYS-1 on the corpus comprised of human transliterations from people who saw the task as easy, with both word accuracy metrics; the relative improvement of over 50% is statistically significant (paired t-test on ten-fold cross validation runs). However, on the corpus composed of transliterations that were perceived as more difficult ("Medium"), the advantage of SYS-2 is significantly eroded, but is still statistically significant for UWA. Here again, using only one transliteration, MWA, did not distinguish the performance of each system.

Transliterator | E_7 Char / Rules | D_7 Char / Rules | A_7 Char / Rules | EDA_7 Char / Rules
T1 | 23 / 523 | 23 / 623 | 28 / 330 | 31 / 1075
T2 | 22 / 487 | 25 / 550 | 29 / 304 | 32 / 956
T3 | 21 / 466 | 20 / 500 | 28 / 280 | 31 / 870
T4 | 23 / 497 | 22 / 524 | 28 / 307 | 30 / 956
T5 | 21 / 492 | 22 / 508 | 28 / 296 | 29 / 896
T6 | 24 / 493 | 21 / 563 | 25 / 313 | 29 / 968
T7 | 24 / 495 | 21 / 529 | 28 / 299 | 30 / 952
Mean | 23 / 493 | 22 / 542 | 28 / 304 | 30 / 953
Table 2: Number of characters used and rules generated using SYS-2, per transliterator.
Discussion
We have evaluated two English to Persian transliteration systems on a variety of controlled corpora using evaluation metrics that appear in previous transliteration studies. Varying the evaluation corpus in a controlled fashion has revealed several interesting facts.
We report that human agreement on the English to Persian transliteration task is about 33%. The effect that this level of disagreement on the evaluation of systems has, can be seen in Figure 4, where word accuracy is computed on corpora derived from single transliterators. Accuracy can vary by up to 30% in absolute terms depending on the transliterator chosen. To our knowledge, this is the first paper to report human agreement, and examine its effects on transliteration accuracy.
In order to alleviate some of these effects on the stability of word accuracy measures across corpora, we recommend that at least four transliterators are used to construct a corpus. Figure 3 shows that when a corpus is constructed with four or more transliterators, the range of possible word accuracies achieved is smaller than when fewer transliterators are used.
Some past studies do not use more than a single target word for every source word in the corpus (Bilac and Tanaka, 2005;Oh and Choi, 2006). Our results indicate that it is unlikely that these results would translate onto a corpus other than the one used in these studies, except in rare cases where human transliterators are in 100% agreement for a given language pair.
Given the nature of the English language, an English corpus can contain English words from a variety of different origins. In this study we have used English words from an Arabic and Dutch origin to show that word accuracy of the systems can vary by up to 25% (in absolute terms) depending on the origin of English words in the corpus, as demonstrated in Figure 1.
Table 3: System performance when A_7 is split into sub-corpora based on transliterators' perception of the task (Easy or Medium).
In addition to computing agreement, we also investigated the relationship between the transliterators' perception of the difficulty of the transliteration task and the ensuing word accuracy of the systems. Interestingly, when using corpora built from transliterators who perceive the task to be easy, there is a large difference in word accuracy between the two systems, but on corpora built from transliterators who perceive the task to be more difficult, the gap between the systems narrows. Hence, a corpus applied for the evaluation of transliteration should either be made carefully with transliterators from a variety of backgrounds, or should be large enough and be gathered from various sources so as to simulate the different expectations of its expected non-homogeneous users. The self entropy of rule probability distributions derived by the automated transliteration system can be used to measure the consistency with which individual transliterators apply their own rules in constructing a corpus. It was demonstrated that when systems are evaluated on corpora built by transliterators who are less consistent in their application of transliteration rules, word accuracy is reduced.
Given the large variations in system accuracy that are demonstrated by the varying corpora used in this study, we recommend that extreme care be taken when constructing corpora for evaluating transliteration systems. Studies should also give details of their corpora that would allow any of the effects observed in this paper to be taken into account.
Figure 1: Comparison of the two evaluation metrics using the two systems on four corpora. (Lines were added for clarity, and do not represent data points.)
Figure 2: Comparison of the two evaluation metrics using the two systems on 100 randomly generated sub-corpora.
Figure 3: Performance on sub-corpora derived by combining the number of transliterators shown on the x-axis. Boxes show the 25th and 75th percentiles of the MWA for all 7Cx combinations of transliterators using SYS-2, with whiskers showing extreme values.
Figure 4: Word accuracy on the sub-corpora using only a single transliterator's transliterations.
Figure 5: Entropy of the generated segments based on the collections created by different transliterators.
Acknowledgments
This work was supported in part by the Australian government IPRS program (SK).
Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross-language information retrieval. In Conference on Information and Knowledge Management, pages 139-146.
Yaser Al-Onaizan and Kevin Knight. 2002. Machine transliteration of names in Arabic text. In Proceedings of the ACL-02 Workshop on Computational Approaches to Semitic Languages, pages 1-13.
Slaven Bilac and Hozumi Tanaka. 2005. Direct combination of spelling and pronunciation information for robust back-transliteration. In Conference on Computational Linguistics and Intelligent Text Processing, pages 413-424.
Patrick A. V. Hall and Geoff R. Dowling. 1980. Approximate string matching. ACM Computing Surveys, 12(4):381-402.
Sung Young Jung, Sung Lim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Conference on Computational Linguistics, pages 383-389.
Sarvnaz Karimi, Andrew Turpin, and Falk Scholer. 2006. English to Persian transliteration. In String Processing and Information Retrieval, pages 255-266.
Krister Lindén. 2005. Multilingual modeling of cross-lingual spelling variants. Information Retrieval, 9(3):295-310.
Eun Young Mun and Alexander Von Eye. 2004. Analyzing Rater Agreement: Manifest Variable Methods. Lawrence Erlbaum Associates.
Jong-Hoon Oh and Key-Sun Choi. 2006. An ensemble of transliteration models for information retrieval. Information Processing & Management, 42(4):980-1002.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In The 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Ari Pirkola, Jarmo Toivonen, Heikki Keskustalo, and Kalervo Järvelin. 2006. FITE-TRT: a high quality translation technique for OOV words. In Proceedings of the 2006 ACM Symposium on Applied Computing, pages 1043-1049.
Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379-423.
Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-language applications. In ACM SIGIR Conference on Research and Development in Information Retrieval, pages 365-366.
Dmitry Zelenko and Chinatsu Aone. 2006. Discriminative methods for transliteration. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 612-617.
8,941,443 | TIDES Language Resources: A Resource Map for Translingual Information Access | Continuing improvements in human language algorithms, coupled with improvements in digital storage and processing, inspire growing confidence in multilingual information access systems. Systems exist to transcribe broadcast news, segment broadcasts into individual stories and sort them by topic. These technologies, useful in isolation, are now being combined to produce intelligent multilingual systems. DARPA TIDES combines technologies in detection, extraction, summarization and translation to create systems capable of searching a wide range of streaming multilingual text and speech sources, in real time, to provide effective access for English-speaking users. The broad scope of tasks and languages in programs like TIDES demands close coordination of research and shared resources. These resources includes large collections of raw text and speech; translations and summaries; annotations of topics, named entities and relations, syntactic structures and propositional content; lexicons; annotation specifications and protocols; and distribution formats and standards. The TIDES program has initiated ambitious attacks on difficult problems, with linguistic resources matched to the needs of each piece of the overall research enterprise. This paper will describe the coordinated language resources being created under the TIDES aegis. | [
2700066,
46591940,
12928205,
761999,
870319,
38067901
] | TIDES Language Resources: A Resource Map for Translingual Information Access
Christopher Cieri [email protected]
University of Pennsylvania
Linguistic Data Consortium
3615 Market Street19104-2608PhiladelphiaPAU.S.A
Mark Liberman
University of Pennsylvania
Linguistic Data Consortium
3615 Market Street19104-2608PhiladelphiaPAU.S.A
TIDES Language Resources: A Resource Map for Translingual Information Access
Continuing improvements in human language algorithms, coupled with improvements in digital storage and processing, inspire growing confidence in multilingual information access systems. Systems exist to transcribe broadcast news, segment broadcasts into individual stories and sort them by topic. These technologies, useful in isolation, are now being combined to produce intelligent multilingual systems. DARPA TIDES combines technologies in detection, extraction, summarization and translation to create systems capable of searching a wide range of streaming multilingual text and speech sources, in real time, to provide effective access for English-speaking users. The broad scope of tasks and languages in programs like TIDES demands close coordination of research and shared resources. These resources includes large collections of raw text and speech; translations and summaries; annotations of topics, named entities and relations, syntactic structures and propositional content; lexicons; annotation specifications and protocols; and distribution formats and standards. The TIDES program has initiated ambitious attacks on difficult problems, with linguistic resources matched to the needs of each piece of the overall research enterprise. This paper will describe the coordinated language resources being created under the TIDES aegis.
Introduction
The past 15 years of research in Human Language Technology (HLT) have shown that effective linguistic technology is based on statistical modeling of large amounts of linguistic data, and that the most reliable way to improve linguistic technology is to improve the linguistic resources upon which it is based. For familiar languages, improved HLT technology depends on orderof-magnitude increases in the underlying text and speech corpora, while porting HLT technology to new languages requires creation of similar-sized linguistic resources for them. As database sizes increase, new research methods come into play; at the same time, in order for database sizes to increase, new and more efficient methods of collection and creation, based on earlier research results, are needed. Today's key HLT research challenge is to create and digest linguistic resources on a significantly larger scale than ever before.
The DARPA program in Translingual Information Detection Extraction and Summarization (TIDES) aims to enable users to find and interpret needed information efficiently regardless of language or medium. TIDES research areas include information detection, extraction, summarization and translation; researchers in the program work on one or more areas, their intersection or integration into end-to-end systems. TIDES core languages are English, Mandarin, Arabic; second tier languages are Korean, Spanish and Japanese. The primary medium is text though this includes speech recognition output.
Data in Human Language Technology
Recent improvements in HLT algorithms, coupled with on-going cost/performance improvements in digital storage and processing, inspire growing confidence in the future value of HLT technologies. Systems exist to transcribe broadcast news, segmenting programs into stories and sorting them by topic. There is ongoing research into information extraction from text, question answering, document summarization and machine translation. These technologies, useful in isolation, address current needs even more effectively when combined. Several DARPA-sponsored common-task research programs now focus on extending HLT to new languages, and combining technologies to produce multilingual intelligent systems. The DARPA TIDES program combines technologies in detection, extraction, summarization and translation to create systems capable of searching a wide range of streaming multilingual text and speech sources, in real time, to provide effective access for English-speaking users.
Extension of HLT technology to wider coverage of the world's languages raises some new research problems, which in turn require new kinds of linguistic data. For example, treatment of collections of related languages and dialects requires new kinds of adaptive algorithms, whose development must be based on new kinds of multi-dialect resources.
The Linguistic Data Consortium (LDC) was founded in 1992 at the University of Pennsylvania, with seed money from DARPA, specifically to address the need for shared language resources. Since then, the LDC has created and published more than 209 linguistic databases, and has accumulated considerable experience and skill in managing large-scale, multilingual data collection and annotation projects. Responding to the need for more data in a wider variety of languages with more sophisticated annotation, the LDC has established itself as a center for research into standards and best practices in linguistic resource development, while participating actively in ongoing HLT research.
In the context of DARPA TIDES we have begun to develop methods for creating and using linguistic resources on a larger scale than the HLT research community has previously undertaken, both in terms of the amount of material per language, and the number of languages covered. This work requires close collaboration with other DARPA HLT researchers.
The DARPA TIDES Program
The broad scope of tasks and languages in programs like TIDES demands close coordination of research and shared resources. These resources includes large collections of raw text and speech; translations and summaries; annotations of topics, named entities and relations, syntactic structures and propositional content; lexicons; annotation specifications and protocols; and distribution formats and standards. TIDES has initiated ambitious attacks on difficult problems, with linguistic resources matched to the needs of each piece of the overall research enterprise. LDC's role in TIDES is to coordinate resources for the multiple common-task, metrics-based technology development and evaluation projects.
A recent survey of TIDES participants showed that the most often used resources were the annotated corpora, such as those created for TDT and TREC, followed closely by bilingual lexicons, Treebanks and parallel texts. Participants saw the greatest future need for more and bigger parallel corpora, more and better bilingual lexicons, and more and bigger treebanks and proposition banks.
This survey concluded that resource development should focus on TIDES core languages and give priority to:
1) direct support for common tasks;
2) data essential to the task definitions in detection, extraction, translation and summarization, for example continuations of the TREC, TDT and ACE corpora;
3) indirect support for common tasks by providing data needed by most algorithms, for example bilingual dictionaries for cross-language information retrieval;
4) background resources used to develop better system components, for example texts, bitexts, treebanks and proposition banks.
TIDES resources can be categorized roughly as follows.
Gigaword News Text Corpora
Very large scale text databases of roughly one billion words per language support robust statistical modeling, and provide raw data to be annotated as evaluation resources for all research groups. In addition, comparable corpora of this size permit statistical MT research even in the absence of parallel (translated) text. Current target languages are English, Mandarin and Arabic. The worldwide web contains text in such volume in many languages. However, web data is subject to copyright that constrains the ability to share such material across research groups. Furthermore, the dynamic nature of web pages makes them undesirable as evaluation resources in their natural state. One solution is for a central archiving agent to harvest web material and to establish a legal basis for broad and ongoing research access to the resulting archive. News services are another source of large volumes of text data, and are accustomed to licensing it for external use. Here again, however, a central archiving agent is essential to establish legal foundations and to apply common standards. In each language, we pursue an appropriate strategy to guarantee adequate text access for the research community, in a timely and cost-effective fashion.
There has been considerable progress in building the Gigaword News Text corpora. LDC has acquired rights to distribute more than 1.4 billion words of English, 1.5 billion characters of Chinese and nearly 500 million words of Arabic. Work is underway to format these corpora for broad distribution.
Broadcast News
Broadcast news is an important domain for human language technology both because of the inherent interest in systems that can process news and because its vocabulary is expansive and touches most areas of everyday life. LDC has collected and annotated broadcast news since the mid-1990's for DARPA's common task research projects.
TIDES needs broadcast news in much greater volume, from more sources and over a longer duration than any previous program. This initially involves broadcast radio and television sources and Internet based distributions (webcasts) in English, Chinese and Arabic with possible additions in the coming years.
The problem of finding legal/technological models for creation and distribution of research archives of BN has become an acute one. Standard license fees for broadcast materials, even for non-profit or educational use, are established at between $10 and $80 per second. In the past, LDC has managed to get licenses for thousands of hours of BN at typical costs of $100 per hour. This worked well when research needs were for tens or hundreds of hours. Now, although it is two or three orders of magnitude below the "list price", it is still too much to pay as the research needs escalate to tens of thousands of hours. Even worse, there are serious delays and even exclusions entailed by the arduous process of IPR negotiations in this domain, especially for broadcasts from other countries. Whatever the legal model, LDC will research best practices for BN collection, including identifying sources with appropriate coverage and availability, selecting those that can legally be recorded and archived for research purposes, developing mechanisms to capture the data efficiently and reliably, implementing a local version of the capture systems and testing it on long-term collections. We will carry out these collections completely in the digital domain throughout the duration of the project, yielding broadcast news collection of up to five years in time depth. Both the description of the collection mechanism and the resulting data will be made available generally. The diagram above shows our system for collection of Voice of America and other broadcasts, which is typical of the problem of multi-channel digital data capture. Our satellite antenna captures the digital, multi-channel, broadcast signal directly from Voice of America's worldwide distribution network. That signal is demultiplexed into up to 24 simultaneous channels of MPEG audio and video. A series of digital audio tape recorders capture the output of the MPEG decoders sampling the audio at rates up to 48KHz. The digital audio passes through the Townshend DAT Links and onto spinning disk managed by a small Sun workstation. Raw audio is written to disk in daily clusters where staffers review it for signal quality and sort it by language. The sorted data is written to near-line storage maintained on a 3.5TB tape robot. Broadcast news capture from other sources -cable TV, consumer satellite and the broadcast airwaves will require similar programmable data capture systems that we propose to plan, deploy and evaluate over the course of this project, in collaboration and consultation with other sites that do similar sorts of data capture. Work on the alldigital broadcast news capture facility is still in the planning stage.
Parallel Text
Parallel text describes documents that have been translated from the original source language into one or more target languages. Statistical machine translation technology is based on relatively large volumes of parallel text -at least tens of millions of words for good coverage. Available algorithms require parallel texts to be accurately aligned at the sentence level. Parallel text of adequate quantity and quality is hard to find, and as a result significant experiments have only been attempted in a handful of languages. LDC has experience harvesting parallel text from the WWW, acquiring parallel-text archives from a variety of sources, and managing intellectual property rights for the distribution of these archives to researchers. To solve the problem of locating parallel text, LDC has created technology called Bilingual Internet Text Search (BITS). Recently BITS has unearthed large repositories of parallel Chinese-English, German-English and Korean-English text on the worldwide web.
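To illustrate why sentence-level alignment matters for statistical MT, the sketch below shows a minimal length-based aligner in the spirit of Gale and Church. It is only an illustration, not the BITS algorithm; the cost function and skip penalty are arbitrary placeholders chosen for readability.

```python
# Illustrative sketch (not BITS): align two sentence-length sequences with
# 1-1, 1-0 and 0-1 moves using dynamic programming over a length-difference cost.

def align_sentences(src_lens, tgt_lens, skip_penalty=10.0):
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]   # cost[i][j]: best cost so far
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:                      # 1-1 bead
                c = cost[i][j] + abs(src_lens[i] - tgt_lens[j])
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j, "1-1")
            if i < n:                                # unmatched source sentence
                c = cost[i][j] + skip_penalty
                if c < cost[i + 1][j]:
                    cost[i + 1][j], back[i + 1][j] = c, (i, j, "1-0")
            if j < m:                                # unmatched target sentence
                c = cost[i][j] + skip_penalty
                if c < cost[i][j + 1]:
                    cost[i][j + 1], back[i][j + 1] = c, (i, j, "0-1")
    beads, i, j = [], n, m                           # trace back the decisions
    while (i, j) != (0, 0):
        pi, pj, kind = back[i][j]
        beads.append((pi, pj, kind))
        i, j = pi, pj
    return list(reversed(beads)), cost[n][m]

print(align_sentences([20, 5, 30], [19, 31]))
```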
BITS was developed to find, collect and align parallel text from Internet. Other researchers, especially Resnik and others at the University of Maryland, have proposed indexing the web to create a list of URLs pointing to translation pairs. However, URLs are not durable references, and so this solution does not create databases that can be used over time, as is required by a typical DARPA common-task research paradigm. Stable access is required in order for research progress to be documented, by comparing the performance of different algorithms on the same data, both across research sites and across time. However, given a durable Internet archive, accessible to researchers in the way that library archives are -such as is promised by http://archive.org --Resnik's approach may become a practical one. LDC is cooperating with Resnik and others to explore all feasible approaches to finding and obtaining access to parallel text for TIDES research.
This includes improving the performance of web spiders specialized for finding parallel text, as well as developing a range of legal/technological models for research access and/or distribution. Progress on harvesting Chinese-English parallel text from the Internet has been good. LDC has published the Hong Kong News, Hong Kong Laws and Hong Kong Hansards that were identified during a BITS search. The document-level alignment component of BITS has also been used to identify more than 19,000 translation pairs in the Chinese and English services provided by Xinhua news agency. The parallel Xinhua data will be distributed to TIDES research sites in 2002.
Bilingual Dictionaries and Morphological Analyzers
Multilingual lexical resources are critical for translingual technologies, not only translation but detection, extraction and summarization as well. A minimal lexicon for a new language relates word forms to a set of English glosses for each. Syntactic information, frequency information, pronunciation and so on are also useful. Often the set of word forms is too large to list conveniently, and it is more efficient to provide a list of word stems along with the set of affixes that may be added to each. Traditional methods of creating such resources are both expensive and time-consuming, involving human lexicographers working with concordances of text material and existing dictionaries. There are several ways to acquire lexical resources more efficiently. The first is simply to license such resources where already available. The second method is to apply machine-learning techniques to supplement human lexicographers, or to increase their efficiency. Both techniques have been applied in the HLT research community. However, providing lexicographic resources for new levels of performance and for larger numbers of languages will require improved techniques.
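The stem-plus-affix idea mentioned above can be made concrete with a small sketch: instead of listing every surface form, a lexicon pairs stems with the affixes they accept and expands forms on demand. The stems, affixes and glosses below are invented purely for illustration.

```python
# Minimal stem+affix lexicon sketch; entries are invented examples.
lexicon = {
    "walk": {"affixes": ["", "s", "ed", "ing"], "gloss": "to move on foot"},
    "cat":  {"affixes": ["", "s"],              "gloss": "small feline"},
}

def expand(lexicon):
    """Yield (surface form, stem, gloss) triples from a stem+affix lexicon."""
    for stem, entry in lexicon.items():
        for affix in entry["affixes"]:
            yield stem + affix, stem, entry["gloss"]

for form, stem, gloss in expand(lexicon):
    print(f"{form}\t{stem}\t{gloss}")
```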
For many languages, a key lexicographic roadblock is the need for a system to analyze and synthesize word forms: a computational morphology. There are some promising ideas for the application of machine learning in this area. LDC has begun to confront this problem using an approach that has worked well in other HLT domains: to define and implement realistic test problems where algorithm performance can be evaluated quantitatively, and alternative approaches can be compared.
Our growing test bed will contain language data from approximately ten languages, chosen for their diverse morphological typologies.
The data will include morphologically analyzed texts, dictionaries, and a computationally interpretable morphological grammar (useable by a parser and a generator, also provided). An interface will allow the use of this data in a variety of ways. For example, the texts can be treated as unannotated monolingual text or as glossed interlinear text; and the grammar can be used as a parser, or to simulate a human producing forms from a paradigm on demand.
By providing a standard set of morphological data on a variety of language types, it will be possible to fairly evaluate the capabilities and limitations of morphology learning systems which have been or will be developed, whether at our site or by other HLT researchers. Work on machine learning is progressing apace. Mike Maxwell's report appears in this same volume of proceedings.
X-Banks
Here we use the term X-Bank to refer collectively to Treebanks, Proposition Banks and related resources. Treebanks are collections of text that have been annotated to show the morpho-syntactic properties of sentences and their constituents. Treebanks, like those created for English, Chinese and Korean, are critical for development of many types of HLT technology. LDC is coordinating the extension and distribution of the extended Chinese Treebank and is creating a new Treebank in Arabic targeting one million words of source text. Each Treebank contains some parallel text to support research into transfer grammar.
Research efforts will continue to define the aspects of text meaning whose annotation is most effective in developing new technology, and also to steadily improve the efficiency of X-banking annotation by developing and applying new automatic techniques. The Arabic annotation team has made good progress on the part-of-speech tagging prerequisite to creating the Arabic Treebank. 130,000 words of news text have been part-of-speech tagged and hand-checked by human annotators. Of those, approximately 10,000 words have been syntactically annotated as of the time of writing.
Evaluation Resources, Detection
Research in information detection is often cast in terms either of identifying all documents that discuss a topic of interest or of clustering all documents according to their topic. Annotations that identify news stories or other documents types and consistently categorize them according to topic serve both kinds of research. LDC has created databases to support detection research in English, Chinese and Arabic under both the Topic Detection and Tracking program and the Text Retrieval Conference's Cross-Language Information Retrieval track (TREC CLIR).
TDT-4, currently under development is the most recent of the corpora created to support research in topic detection and tracking. TDT-4 is based upon data collected daily from October 2000 through July 2001 including eight English, seven Chinese sources and three Arabic sources. Over a four-month period, these eighteen sources will include a very large number of stories. We anticipate using about 61,000 stories after sub-sampling. LDC collected data from each source once per day on which the source was available.
To reduce costs associated with licensing and transcription, LDC has sub-sampled the data to select more than half of the days that a source is available per week and to stagger sources so that there are multiple sources per day per language. While collecting the material LDC managed intellectual property negotiations necessary to make this data available for research use in the TDT project and beyond. We also identified external transcription bureaus, negotiated rates and manage the offsite transcription of this material. Transcription quality is similar to that available from commercial closed captioning. No attempt is made to produce transcripts of the quality used in speech engineering research projects such as Hub-4.
With transcripts in hand, LDC segments the text to identify individual stories. The transcription bureaus provide the first-pass story segmentation and LDC annotators perform the second pass during which they will listen to the audio of the entire broadcast while viewing the corresponding waveform display and text intermediary and add, remove or re-position story boundaries. Annotators also classify each boundary as beginning a 1) news story 2) miscellaneous text section or 3) untranscribed section. As a quality check, we use patterns of segment boundaries observed in previous corpora to guide segmentation.
For three months of broadcast news, LDC typically defines 60 topics by a random process that gives each month of data from each source an equal opportunity to contribute a topic. To improve consistency, we perform research on each topic before annotation begins; to maximize annotator efficiency, we use a search-guided annotation procedure developed in 2000.
LDC annotates all English, Mandarin and Arabic stories sampled against all topics defined. The first pass involves submitting the concatenation of all on-topic stories as a query into the corpus. During this first stage, on-topic stories include the seed story itself and any stories found during topic definition and research. The annotators then read through the stories in the relevance-ranked list until reaching the "off-topic threshold", defined as a 2:1 ratio of off-topic to on-topic stories in which the last 10 stories read were off-topic. In the second stage, annotators iterate their searches using the concatenation of all on-topic stories as they continue to find them. During the third stage, annotators issue new queries using the text of the topic research document and topic explications. As before, annotators read the relevance-ranked list of returns until they reach the off-topic threshold before progressing to the next stage. In the fourth stage, annotators think creatively to conduct additional manual searches through the corpus.
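The stopping rule above can be stated compactly in code. The sketch below is only one reading of that rule (stop when off-topic stories outnumber on-topic ones 2:1 and the last 10 stories read were off-topic); the relevance judgments in the example are invented.

```python
# Sketch of the "off-topic threshold" stopping rule described above.
def reached_off_topic_threshold(judgments, ratio=2.0, tail=10):
    """judgments: booleans (True = on-topic) in rank order, as read so far."""
    on = sum(judgments)
    off = len(judgments) - on
    ratio_hit = on > 0 and off >= ratio * on
    tail_hit = len(judgments) >= tail and not any(judgments[-tail:])
    return ratio_hit and tail_hit

# Example: scan an (invented) ranked list and report where annotation would stop.
ranked = [True, True, False, True] + [False] * 12
seen = []
for i, judgment in enumerate(ranked, start=1):
    seen.append(judgment)
    if reached_off_topic_threshold(seen):
        print(f"stop after reading {i} stories")
        break
```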
Corpus creation procedures for TREC differ in several ways. Where TDT topics are based upon a seminal event reported in the news, TREC topics are broader and answer a general question about events in the news. Additionally TREC does not attempt an exhaustive annotation of all stories in a corpus prior to technology evaluation. Instead, TREC research sites submit all of the stories they believe to be on-topic. Human assessors then review the returns compiled from all sites that have the highest probability of being on-topic. LDC performed topic definition and system assessment for TREC's Arabic-English crosslingual track in 2001 and will repeat the annotation and assessment with a larger set of topics in 2002. The Arabic corpus contained over 383,000 stories from 6 years of Agence France Presse newswire. Systems were assessed for 25 topics.
Evaluation Resources, Extraction
The DARPA program in Automatic Content Extraction has developed specifications for the annotation of all entities (person, locations, organizations, etc) and relations mentioned in a text. These specifications are clearly relevant to TIDES sponsored work in information extraction. LDC has joined three other sites in annotating newswire, newspaper text and broadcast news transcripts for entities and relations. The corpora are based on raw data from broadcast news, newswire and stories clipped from printed newspapers. Much of the ACE data can also be found in the TDT corpora. This overlap is useful and will be encouraged throughout TIDES as well.
The types of annotation specified for ACE involve identifying entities (persons, locations, organizations, etc.) in text, including the maximal extent of the string that represents the entity and its type. Because ACE annotators also mark co-reference and metonymy, they are building up a database of all forms in which entities are mentioned in a selected set of texts. Recently some groups began annotating relations among entities while others are beginning the annotation in Chinese. ACE annotation uses MITRE's Alembic Workbench. Starting in FY2002, ACE annotation work is carried out under the TIDES umbrella. In 2002, LDC is the only site performing entity annotation in English. LDC has also begun relation annotation in English and entity annotation in Chinese.
Evaluation Resources, Summarization
Research in information summarization may rely upon two kinds of data: 1) collections of source documents that have been annotated to indicate sections that express information that is new, or deemed important, or that bears on the topicality of the document; 2) collections of full-length documents and separate summaries of those documents. Within the TIDES community, the data specification for summarization research is still under development.
The Johns Hopkins summer workshop in speech and language engineering experimented with one possible approach. LDC created 40 topics from the Hong Kong News Corpus. For each topic, annotators labeled 100 documents for relevance. For ten documents per topic, annotators also labeled each sentence in the document for its "importance" to the topic as a whole; sentence labeling was repeated by three annotators per topic. Query formulation was based upon clusters generated automatically. LDC used story clusters to identify potentially useful topics. Annotators read documents within each cluster. Any cluster that did not discuss an appropriate query was discarded. For any good cluster, annotators removed those documents that were relevant. A topic title was then created based on the content of the documents in the cluster.
For each good topic, annotators performed document-based annotation. Annotators use the EZQuery search engine to create a list of 100 relevance-ranked documents based on a query of all stories previously known to be on-topic. Annotators then read and labeled each story in this list for relevance, using the labels YES (relevant), BRIEF (less than 10% relevant) or NO (irrelevant).
For each of the topics labeled during the second phase of annotation, the annotator who conducted document relevance assessment for that query completed sentence-based relevance judgments for the 10 documents the search engine gave the highest ranking for that topic. In addition to the original annotator, two additional annotators, working independently, completed sentence-based relevance judgments for each of the queries. LDC will release this annotated data in 2002.
Evaluation Material, Machine Translation
The TIDES machine translation effort has until recently focused on formulating system evaluation metrics. To support this work, LDC has produced multiple clusters of 10K words that have each been translated by a number of translators running the gamut from low to high quality. LDC selected documents for these experiments, acquired distribution rights, outsourced Chinese-English and Arabic-English human translations and ran commercial MT systems to produce automatic translations. Source material is sampled from existing corpora to provide overlap with other communities and from new material so that sites can test the generality of their algorithms. LDC has remained in close contact with the MT research community so that the resources we provide will continue to match its evolving needs.
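One family of corpus-based metrics of the kind under discussion scores system output against several human reference translations by clipped n-gram precision, in the spirit of the Papineni et al. work cited in the references. The snippet below is only a schematic illustration of that idea, not the official metric; the example sentences are invented.

```python
# Schematic modified n-gram precision against multiple references (illustrative only).
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, references, n):
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    # Clip each candidate n-gram count by its maximum count in any single reference.
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

hyp = "the data was collected daily".split()
refs = ["the data were collected every day".split(),
        "data was gathered daily".split()]
for n in (1, 2):
    print(n, round(modified_precision(hyp, refs, n), 3))
```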
Our guidelines for this project were based upon the guidelines developed for the translation of the Chinese Treebank, with two important modifications. First, we instructed agencies to describe the translation teams used. Second, we relax the "faithfulness" constraint in our translations. The Chinese Treebank guidelines emphasize "faithfulness" in terms of vocabulary, syntactic structures and, of course, semantic content, providing specific examples. This, coupled with the relatively constrained writing in the original documents, threatens to produce less variation than desired. The Chinese Treebank guidelines were intended for a single translation of a very precious data set and require uniformly excellent quality.
In this project, quality assurance means making sure the translation agencies understand the task and checking returning translations for coverage and format. The LDC does not in any way modify the content of the translation because variation in quality is desired.
Intellectual Property Arrangements
Much of the material described above is based upon large volumes of text and speech best collected from commercial providers. Commercial sources may require the negotiation of agreements that permit the distribution of data to researchers while constraining the use of the material to linguistic education, research and technology development. LDC has negotiated such agreements since 1992. Other arrangements are possible and warrant consideration. Some material may be in the public domain. Some uses of material may fit under the doctrine of Fair Use. Furthermore, Copyright Law is dynamic and responds to changes in the information technology. LDC endeavors to keep abreast of changes in copyright law as they affect information distribution. In any case, LDC coordinates all necessary intellectual property arrangements for multiple research programs including TIDES to make resources gathered in this way available to the broader research communities.
Resource Distribution Infrastructure
Researchers in speech and language technologies have realized for at least three decades that shared linguistic resources are an important stimulant to progress. DARPA-sponsored common task research programs rely heavily upon shared resources. LDC was in fact created specifically to facilitate research sharing. With the support of the TIDES sponsors, LDC has extended its role in research sharing by coordinating all resource acquisition for TIDES. We focus our efforts on resources that can be shared at least as broadly as the sponsoring program, but typically to all communities working in linguistic education, research and technology development.
Figure 1: Current configuration of multi-channel broadcast news capture facility.
References
ACE, 2000, Automatic Content Extraction [www.nist.gov/speech/tests/ace].
Bird, Steven, Kazuaki Maeda, Xiaoyi Ma and Haejoong Lee, 2002, MultiTrans and TableTrans: Annotation Tools Based on the Annotation Graph Toolkit (AGTK), Proceedings of the Third International Language Resources and Evaluation Conference, Las Palmas, Spain, May-June 2002.
Bird, Steven and Mark Liberman, 2002, A Call for Open Source Lexicons, Proceedings of the Third International Language Resources and Evaluation Conference, Las Palmas, Spain, May-June 2002.
Bird, Steven, Hans Uszkoreit and Gary Simons, 2002, The Open Language Archives Community, Proceedings of the Third International Language Resources and Evaluation Conference, Las Palmas, Spain, May-June 2002.
Bird, Steven and Mark Liberman, 1999, Linguistic Annotation Page [www.ldc.upenn.edu/annotation].
Cieri, Christopher, Dave Graff, Mark Liberman, Nii Martey and Stephanie Strassel, 2000, Large Multilingual Broadcast News Corpora for Cooperative Research in Topic Detection and Tracking: The TDT2 and TDT3 Corpus Efforts, In Proceedings of the Second International Language Resources and Evaluation Conference, Athens, Greece, May 2000.
Doddington, G., 1999, The 1999 Topic Detection and Tracking (TDT) Task Definition and Evaluation Plan. Available at http://www.nist.gov/TDT.
Lamel, Lori, Fabrice Lefevre, Jean-Luc Gauvain and Gilles Adda, 2001, Portability Issues for Speech Recognition Technologies, HLT 2001: Proceedings of the First International Conference on Human Language Technology Research, San Diego, CA, March 18-21, 2001.
LDC, 2000, Linguistic Data Consortium Homepage [http://www.ldc.upenn.edu].
Ma, Xiaoyi and Mark Liberman, 1999, BITS: A Method for Bilingual Text Search over the Web, presented at Machine Translation Summit VII, September 13th, 1999, Kent Ridge Digital Labs, National University of Singapore [www.ldc.upenn.edu/Papers/MTSVII1999/BITS.ps].
Papineni, Kishore, Salim Roukos, Todd Ward, John Henderson and Florence Reeder, 2002, Corpus-Based Comprehensive and Diagnostic MT Evaluation: Initial Arabic, Chinese, French and Spanish Results, HLT 2002: Proceedings of the Second International Conference on Human Language Technology Research, San Diego, CA, March 24-27, 2002.
Resnik, Philip, 1999, Mining the Web for Bilingual Text, ACL 1999: 37th Annual Meeting of the Association for Computational Linguistics (ACL'99), College Park, Maryland, June 1999.
Saggion, Horacio, Dragomir Radev, Simone Teufel, Wai Lam and Stephanie Strassel, 2002, Developing Infrastructure for the Evaluation of Single and Multi-document Summarization Systems in a Multilingual Environment, Proceedings of the Third International Language Resources and Evaluation Conference, Las Palmas, Spain, May-June 2002.
Strassel, Stephanie and Christopher Cieri, 2002, Resource Development for Topic Detection and Tracking Research: The TDT-4 Corpus, Proceedings of the Third International Language Resources and Evaluation Conference, Las Palmas, Spain, May-June 2002.
TIDES, 2000, DARPA Program in Translingual Information Detection, Extraction and Summarization [www.arpa.mil/ito/research/tides].
Wayne, Charles, 2000, Multilingual Topic Detection and Tracking: Successful Research Enabled by Corpora and Evaluation, In Proceedings of the Second International Language Resources and Evaluation Conference, Athens, Greece, May 2000.
Wayne, Charles, 1998, Topic Detection & Tracking: A Case Study in Corpus Creation & Evaluation Methodologies, In Proceedings of Language Resources and Evaluation Conference, Granada, Spain, May 1998. |
54,160,318 | NEW PA RSING METHOD USING G�OBAL ASSOCIATION TABLE | _ This paper presents a new parsing method using statistical information extracted fre_m corpu�, especially for I{orean . The structural ambiguities are occurred in deciding the dependency relation between words in l{orean. \i Vhile figuring out the correct dependency, the lexical associations play an important role in resolving the ambi guities. Our parser uses statistical cooccurrence data to compute the lexical associations. In addition, it can be shown that sentences are parsed deterministically by the global management of the association. In this paper, the global association table(GAT ) is defined and the association between words is recorded in the GAT. The system is the hybrid semi-deterministic parser and is controlled not by the condition-action rule. but by the association value between phrases. \Vhenever the expectation of the parser fails, it chooses the alternatives using a chart to remove the backtracking.The Characteristics of KoreanStructures of Korean sentencesKorean is an agglutinative language and has different features from an inflectional language such as English. A sentence consists of a sequence of eojeols composed of a content word and functional words. A content ,vord | [
608
] | NEW PARSING METHOD USING GLOBAL ASSOCIATION TABLE
Juntae Yoon
Department of Computer Science
Yonsei University
Seoul, Korea
Seonho Kim
Department of Computer Science
Yonsei University
Seoul, Korea
Mansuk Song
Department of Computer Science
Yonsei University
Seoul, Korea
NEW PARSING METHOD USING GLOBAL ASSOCIATION TABLE
This paper presents a new parsing method using statistical information extracted from corpus, especially for Korean. Structural ambiguities occur in deciding the dependency relation between words in Korean. While figuring out the correct dependency, the lexical associations play an important role in resolving the ambiguities. Our parser uses statistical cooccurrence data to compute the lexical associations. In addition, it can be shown that sentences are parsed deterministically by the global management of the association. In this paper, the global association table (GAT) is defined and the association between words is recorded in the GAT. The system is a hybrid semi-deterministic parser and is controlled not by condition-action rules but by the association value between phrases. Whenever the expectation of the parser fails, it chooses the alternatives using a chart to remove backtracking.
The Characteristics of Korean
Structures of Korean sentences
Korean is an agglutinative language and has different features from an inflectional language such as English. A sentence consists of a sequence of eojeols composed of a content word and functional words. A content word
Introduction
The association of words takes an important role in finding out the dependency relation among them. The associations among words or phrases are indicators of lexical preference. Many works have shown that the association value computed with statistical information gives good results in resolving structural ambiguities (Hindle and Rooth, 1993; Magerman, 1995; Collins, 1996). Statistical information has led recent research on syntactic analysis not to the problem of recognizing sentences by a given grammar but to that of finding the correct one among multiple parse trees.
A chart parser has been used to produce all possibilities when a sentence is analysed. However, it generates too many structures while trying to find the correct one. While reading a sentence, in many cases, a reader can make decisions without examining the entire sentence. Deterministic parsers have been built on the determinism hypothesis for natural language parsing (Marcus, 1980; Faisal and Kwasny, 1990). A deterministic parser produces erroneous results because of its limited lookahead, however. This paper presents a new parsing method that uses lexical association for parsing sentences semi-deterministically. First, a global association table (GAT) is defined to record and manage the association. As all the associations can be globally observed through the GAT, the parser can obviate the error caused by limited lookahead. The associations among words are estimated on the basis of lexical association calculated using data acquired from a corpus. Next, a parsing algorithm is described using the GAT. The parser selects its action according to the association among the nodes given by the GAT. That is, the parser is controlled not by condition-action rules, but by the associations between phrases. It merges one phrase with another phrase that has the highest association value, or waits until it meets the most probable candidate indicated by the GAT. To recapitulate, our system is a parser with a lookahead buffer of sentence length. Experiments show that it does not lose accuracy while being as efficient as a deterministic parser.
ex 1) Na-neun(I) chaek-eul(book) sa-t-da(bought). -- I bought a book. 'Na-neun' is the subject of the above sentence, and 'chaek-eul', the object.
Sa-t-da
Second, Korean is an SOV(' Subject Object Ve rb' order) language, where the head always follows its comple ments. In Korean, a head eojeol follows its complement eojeols. A new phrase is generated when one or more eoj,eols are merged, and the head of the phrase is always the last eojeol of the phrase.
g f( e u-ga norae-reul boore'll-myu hakkyo-e ga-t-da.
He
a. song singing to school went � He went to school singing a. song. Both 'booreu-my· u' and 'ga-t-da' have verbs as their content words. Predicative eojeols a. re the heads for nominal eojeols and follow the eoj eols, 'keu-ga ', 'norae-re ul', and 'hakk y o-e' , respectively. Third, the grammatical dependency relation is determined decisively by functional words. For example, 'in the box' in English can modify both a. noun and a. verb. In contrast, in order that 'sangja', which means box, modifies a. noun, it has to have the post.position, '1ti' in Korean. · whenever it has another postpositions, it is the complement of a. verb. There is syntactic levels in Korean, and the dependency relation of an eojeol is fi xed according to the level.
Syntactic analysis
The dependency tree of a. sentence is built up through parsing. For the operation of the deterministic parser such as CREATE, the dependecy tree is represented by the binary tree for phrase structure that has two children nodes for a. complement and a head. The complement node is the dependent node of the dependency tree and the head node is the head of the dependent node. Consequently, the parser uses binary grammar described with the feature structure about the morphemes that constitute an eoj eol. The parent node inherits the feature of its head node. Since the head follows its co111plement in Korean, the root node of the parse tree inherits the feature of the last eojeol in the sentence.
( Figure 1) shows the dependency tree and our parse tree of the sentence given in example 1. 'Na-neun(I)' is a nominal eojeol and dependent on the predicative eojeol, 'sa-t-da(bought)'. The feature of 'sa-t-da' is placed in the root node of the parse tree because it is the head of the sentence. Table T\1e global associaJ,ion ta�le(CA'r) has the association between words or _eoj eols that have dependency relation. The assoc�a.tion can be estii:na.ted in various ways; for example, if it is likely that a word depends on the word nearest to it, the estimation fu nction would be given as follows.
Assoc( ei , e_; ) = 1/ d Let the, row and the column of the CAT represent eojeols occurring to the left-hand side and to the right-hand side, respectively, in the parsing process. The left-hand side eojeol is a complement, and the right-hand side, the candidate for its head. G' AT( i, j) indicates the degree of association in case the ith eojeol is dependent on the jth eojeol. Because the head follows its complement in I�orean, and the table is a. triangular matrix.
Estimation funct ion for association
Two kinds of co-occurrence dc\ta were extr;:t.cted from 30 million eojeol corpus. OQ.e i� fo. r compound noun an�lysis, the oth.er i$ for dependency analysis of verb and noun. Ti1e assoc_ iations of mod.ifier-head relati <? ns su�h as an adverb and a. verb, or a . pre-noun and a no\m, are estimated by dista:µce. Distance measure is also used for the case. there is no co-occurrence dat;1., which is. ea. used by .data sparseness. The distance has be�n shown to be the most plausible estimation method without any linguistic knowledge (Collins, 1996;I�urohashi et al ., 1994). First of all, co-occurrence pairs of tv, 1 0 nouns were collected by the method presented in (Pustejovksy et al .,199: 3). Let iV be the set of eojeols consisting of only a. noun and ]VP the set of eoj eols composed of a. noun with a post.position . From e1 ,e3 ,e3 ( e 1 (/:. N, 1:3 E JV ,e3 E NP), we can obtain complete noun compounds, ( n3 , n3) such that 113 and 113 a. re the nouns that belong to the eojeols, e3 and e3, respectively. The parser analyzes compound nouns, based on the complete noun compounds. (Figure 2) shows an example of compound noun pa.us.
The association between nouns is computed using the co-occurrence data. extracted by the above method. Let 1V = { n 1 , ... , n 111 } N be the set of nouns. Given n1, n3 E N, association score , Assoc, between n 1 and n3 is defined to be
AssocN N ( n1, n3 ) P(n1, n3) (1) .freq ( n1 , n3 ) L i L _; f1·eq ( ni , n_; )
As mentioned above, the distance measure is suggested without any cooccurrence data. Therefore, these estimators are sequentially applied for two eojeols in the fo llowing way. Here, ei and e_; are the ith and the jth nominal eojeols, and ni and n_; the nouns that belongs to the nominal eojeols.
I.f AssocN N ( ni , n. i ) =f. 0
Assoc( ei , e_; ) = AssocN N ( ni , n.i ) else Assoc(ei, e_; ) = l/d Because the associations a. re calculated and compared for all e.i on which ei have the possiblity to be dependent, the compound noun analysis is based on the dependency model rather than the adj acency model( Koba. yasi et a. I., 1994;Lauer, 1995). Because the two estimate functions a. re used, the extra-comparison routine is required. It will be explained in the next section.
Second, the co-occurrence pairs of nominal eojeols and predicative eojeols were extracted by the partial parser from a. corpus. (Figure 3 ) sho"vs an example of the triples generated from the text. In (Figure 3), the triple, Figure 3: Examples of triples extracted from the text ( masida/VB, mool/NN, reul/OBJ ) indicates that the verb, 'masida', and the noun, 'moo/' which mean 'drink' and '"vat.er' respectively, ·co-occur under the grammatical circumstance, 'reul' which is the post.position tha. t makes a noun an object. The association between a verb and a noun is evaluated based on the triples obtained by the above method. Let re· ul, e, . . . } V, N, S be the sets of predicates, nouns and syntactic relations respectively. Given v E V, s E S, n E N, association score, Assoc, between v and n with syntactic relation s is defined to be
V = { v 1 , ... , v 1}, 1V = { n 1, ... , nm } S = {ga,AssocvN(n, s, v) >-.1P (n, sjv) + >-.2P(slv) (>-.1 � >-.2 ) ( 2 )
The conditional probability, P( n, sjv) measures the strength of the statistical association between the given verb, v, and the noun, n, with the given syntactic relation , s. That is, it fa vors those that have more co occurrences of nouns and syntactic relations for verbs. However, the above formula, including the maximum liklihood estimator, suffers from the problem of data sparseness. To back off the estimation, it is introduced the probability, P( sjv) that means how much the verb requires the given syntactic relation.
The association measure based on the distance between two eojeols is used without any co-occurrence data. These estimators are applied sequentially in the following way. Let us suppose that ei be the ith eojeol and e j the jth eojeol. In addition, ni and Si are the noun and the post.position in the nominal eojeol, ei, and v . i the verb in the predicative eojeol, e_; , respectively.
If Assoc" N ( ni , Si, Vj ) =J=. 0 Assoc(ei , e . i) = AssocvN(ni, Si , V_j ) else Assoc( ei , e_; ) = 1/ d
Making GAT
The association value of two eojeols is recorded in the GAT only when the eojeols have dependency relation. Above a.ll, the dependency relation of two eojeols is checked, therefore. For two eojeols to have dependency relation indicates that they have a possibility to be combined in parsing process. For example, a nominal eojeol with the post.position for case mark depends on a predicative eojeol that follows them . Second, if a dependency relation can be assigned to tvw eojeols, the association value is calculated using the estimators described in the previous section.
The association is represented by a pair, (rnethod, association-value) . If a sentence consists of n eojeols, the GAT used is the n x n triangular ma.tri. x. As mentioned in the previous section, each eojeol has its own syntactic level in Korean , and an eojeol can be combined "vith either a predicate or a noun. This follows that an eojeol doesn't have dependency relation to the nominal eojeol, whenever it is dependent on the predicative eojeol, 'Vice ·versa. Because the different esti�nators a. re applied for the analysis of compound noun and predicate-argument, any collision doesn't take place in the comparison of the association. A.ssoCNN is used as the estimator for compound noun and A.ssocv N, for predicate-argument. The GAT is sorted by the association to look up the most probable phrase in the parsing process. Thus, the global associa.tion table is implemented by the global association list. The algorithm to generate the CAT is represented in (Figure 4). for each eojeol ei 0 <= i <= n -2 1. for each eojeol e_ 7 i + 1 <= j <= n -l if ( depend_on ( ei ,e_ 7 )) compute [J i (j) = < method, Assoc( ei, ej ) > 2. sort gi(i + 1), .. . ,g. i(n -1) and refer it to G(i) The following example is represented by (Table 1 ) In (Table 1), '-' mark means that two eojeols have no dependency relation. The first element of the pair is the method of the measurement and the second is the association value. The pair, (1, 1/2) in GAT(0,2), indicates that the measure by distance is 1/2. The pair (2, 0.11) in GAT(2,3) means that the association value is 0.11 and estimated with co-occurrence relation . The method has the priority for the comparsion of the association. Therefore, (2,0.02) is greater than (1,0.5) because met hod of the first is greater than that of the second. Since the row of the table is sorted for parsing, GAT[2] can be represented in the form of a list of eojeols as follows.
: 3 - - - - (2,0.15) - - 4 - - - - - ( 1,1) (2,0.52) 5 - - - - - - (1,1) 6 - - - - - - -
GAT[2] -----(3,(2,0.11)) -----(5,(2,0.02 )) -----(6,(1, 1/2)) The association list in the above lets the parser know that the eojeol e'.! has the possibility to merge with the eojeol, e3 , es or eG, and the most probable one is e 3 . The function, rnax(G(i)) is defined to return the most probable candidate for the head of the ith eojeol, e;, in the GAT.
Parsi6g.A.7�m
PZng algorit 1
The pars� eel here consists of a stack and a buffer. A two-item lookahead buffer is enough to make decisions in regard to Korean. The grammatical structures lie in the parsing stack and a set of actions are operated on the buffer. Unlike the deterministic parser where the set of rules directs the operation, this system parses by the association value of the GAT.
Since a head follows its complement in Korean, the head of a phrase is the last eojeol of the phrase. A phrase is generated when hvo eojeols or two phrases are merged. In this case, Head Feature Inheritance takes care of the assignment of the same value as the head feature. Suppose an eojeol, e 1 , and an eojeol, e2, merge and a new phrase Pi be generated, as shown in ( Figure 5). As the head of Pi is e2, the parser uses the subscription of f'.! as the index to the GAT, that is, 2.
Basic operations are CREATE, ATTACH, and DROP. However, its operation is conditioned not by rule matching but by the value of the GAT as shown in the following description. The function, position(max(G(i))) returns the sentential position of the most probable candidate for the head of the ith eojeol, ei. ATTACH If the phrase where the e.i is the last eojeol is not the most probable candidate for the head of the eojeol ei, that is, e.i =/=-rnax( G( i)) then wait until ei meets the most probable candidate indicated by the GAT.
DROP DROP operation is accompanied with CREATE operation in our system because the complement precedes the head and thus the top node of the stack must be dropped and checked for dependency immediately after a new node is generated.
The GAT provides the parser with the prediction of the best candidate for the head of the ith eojeol, ei . This is easy because the GAT is already sorted; however, the expectation is not always correct because the value of the GAT is calculated whenever there is a possible dependency relation between one eojeol and another. That is, the parser constructs the GAT as preparsing and it may happen that the two eojeols or phrases which have the possiblity to have dependency relations cannot be merged in parsing. The violation of the 'one case per clause' principle, is the case. ex 4) b, ubun-eul(part/OB.J ) byeonhyeongsiki-myeonseo( change) progmm-rnl(program/OB. J) i y ongha-n da( use). The part being changed, the program is used.
In (Figure 6 ), e 1 and e3 are nominal eojeols, and e2, e4 are predicative eojeols apiece. The phrase P 1 consists of e1, and the phrase P2 consists of three eojeols, e2, e3, e4. Let the most probable candidate, suggested by the GAT, for both e1 and e3 be e4. However, e1 and e3 have the same grammatical case because they contain The phrase Pi and the phrase P 2 cannot be merged because of the violation of 'one case per c/a, use' principle. This means the prediction of the GAT is incorrect, and consequently an analysis with the alternatives is required. If the next candidate is e 2 , the grammatical structure in the buffer must be erroneous. The chart presents the phrase suitable for the alternative execution into the buffer. The chart aHo,'Vs the parser to store the partial structure to remove the backtracking. ALTER operation occurs in this case.
ALTER is required if an eojeol ei cannot be merged with the eojeol e_ 7 which is the prediction of the candidate for the head of ei.
The operation being executed, the structure in the lookahead buffer is backed up into the chart . When ALTER operation is needed, another candidate taken from the chart , has to be put in the buffer. The next candidate, C( ei) is chosen in the following way. Let i be the left-hand position and k the right-hand position of the errorneous prediction in the GAT. C( ei) = the phrase that the left-hand position is i and the right-hand position is ma:r(i + l s; j s; k)
in nodes in the chart. Then, the phrase A including e 2 is the next candidate in (Figure 7). The parsing algorithm with the GAT is described in (Figure 8).
Parsing
The complexity of making the GAT is O(N 3 log 2 (N)), where N is the number of eojeols. This is due to the sorted n x n Figure 9) represents the analysis steps of the sentence in (ex : 3 ). The head on the stack top is the complement, and the candidate for the head of it lies in the head part of the buffer. In the seventh row of the figure, the ATTACH operation is executed by the GAT in (Table 1 ), because the lookahead is not the best candidate for the head of the complement on the stack top. The eojeol, 'sutja-ga', has to wait until it meets its best candidate. A new phrase a. re created in the row (9). The eojeol, 'olaga. -t-da' is the best candidate for the eojeol, 'sutj a. -ga', which was estimated by the GAT. (Figure 10) represents the parse tree of ex 3). The sentence is written in Korean.
Experimental Results
For testing purposes, 400 sentences were randomly extracted from 3 million corpus. First, our parser is compared to the chart parser to show the efficiency of our algorithm. The number of the nodes generated by each parser is represented in ( Table 2). Because of ths size of the searching space, the results from the cha.rt parser a. re calculated whenever the first S is found. The average number of the prediction failure is 0.26 per sentence. That is, The parser has to search for the alternative in the chart once in four sentences. This makes the complexity of the parser a. constant. (Figure 11) shows the occurrence of ALTER operation over the number of words. The average number of ALTER is a.bout 0.36 for the sentences with more than 20 words, which means our parser is efficient .
Second, the precision is given in (Table 3). The precision is defined as the ratio of the precise dependency relation between eojeols in parse trees. No label is attached because the final output is the tree that represents the dependency relation among words. Thus, the number of erroneous and correct relations is considered, which can be estimated by the number of crossing brackets (Table ; 3 ) .
Crossing Brackets number of constituents which violate constituent boundaries with a. constituent in the correct parse.
The cause of the incorrect analysis can be largely classified by two reasons. One of the failures is ea.used by statistical information. We collected the data from 30 million eojeol corpus. The total number of the data is 2 Table 3: The precision of the parser ( a) the number of crossing brackets (b) the average number of crossing brackets per sentence ( c) the number of sentences ( <l ) the percentage million and the average frequency of the co-occurrence data. is 2.5. The triples to have frequencies greater than 2 are 400,000. The frequency of most data. is 1, which was the ea. use of the errorneous results. In addition, the association value of adjuncts and subordinate clauses is estimated by distance. The distance estimator was good but not the best . Semantic information such as thesaurus will help reduce the space of parameters.
Second, linguistic information is needed, e.g., about light verbs or the lexical characteristics of individual words. Our parser is a hybrid system which uses both rules and statistical information. Linguistic research is a prerequisite for this, even if these issues can be partially resolved by statistical methods. However, the parser is satisfactory in spite of some erroneous results, in that the association value can be computed in various ways, and the parser can be extended using this.
Conclusion
We have shown that it is possible to make decisions semi-deterministically using the global association table. The GAT is a very effective structure in that it is a triangular matrix, because Korean is an SOV language and the dependency relations between words are important. It would have to be transformed for parsing English, because phrase structure grammar is needed for parsing English.
There are many possibilities for improvement. The method described for calculating the lexical association in the GAT can be modified in various ways. The GAT and the parser can be extended if the distance measure and coordinate conjunctive structures are considered.
Figure 2: Examples of co-occurrence pairs of two nouns.
Figure 4: The algorithm to generate the GAT.
ex 3) (0)computer (1)hwamyon-ui (2)gusuk-e (3)natana-n (4)sutja-ga (5)paru-ge (6)olaga-t-da. -- (0)computer (1)of screen (2)in the corner (3)appeared (4)the number (5)fast (6)scrolled up -- The number that appeared in the corner of the computer screen scrolled up fast.
Figure 6: An example of failure of the prediction (bubun-eul byeonhyeongsiki-myeonseo program-eul iyongha-nda; the cases are the same).
Figure 7: The content of the chart and selection of the next candidate.
Figure 8: The parsing algorithm using the GAT. (Fragment: For phrases Pi and Pj, let their heads be ei and ej respectively. if (lookahead = nil and there is one parse tree in the stack) return SUCCESS else if (GAT(0) = NULL) ... else return FAIL if (position(G(i)...) ...)
Figure 9: An example of analyzing the sentence in (ex 3). (A) Create & Drop operation (
Figure 10: The parse binary tree for ex 3) (in Korean).
Figure 11: The number of ALTER operations over the number of words.
Table 1: The global association table (GAT) for the example sentence, ex 3.
OP | Stack Top (Constituents / Head) | First Lookahead (Constituents / Head)
1 A | (computer) / computer | (hwamyon-ui) / hwamyon-ui
2 B | (computer hwamyon-ui) / hwamyon-ui | (han) / han
3 A | (han) / han | (gusuk-e) / gusuk-e
4 A | (computer hwamyon-ui) / hwamyon-ui | (han gusuk-e) / gusuk-e
5 A | ((computer hwamyon-ui) (han gusuk-e)) / gusuk-e | (natana-n) / natana-n
6 A | (((computer hwamyon-ui) (han gusuk-e)) natana-n) / natana-n | (sutja-ga) / sutja-ga
7 B | ((((computer hwamyon-ui) (han gusuk-e)) natana-n) sutja-ga) / sutja-ga | (paru-ge) / paru-ge
8 A | (paru-ge) / paru-ge | (olaga-t-da) / olaga-t-da
Table 2: The number of nodes generated by test parsers.
Allen, J. (1995). Natural Language Understanding. Benjamin Cummings.
Brill, E. (1993). A Corpus-Based Approach to Language Learning. Department of Computer and Information Science, University of Pennsylvania.
Briscoe, T. and Waegner, N. (1992). Robust Stochastic Parsing Using the Inside-Outside Algorithm. In Workshop Notes from the AAAI Statistically-Based NLP Techniques Workshop.
Charniak, E. (1993). Statistical Language Learning. MIT Press.
Collins, M. J. (1996). A New Statistical Parser Based on Bigram Lexical Dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.
Faisal, K. A. and Kwasny, S. C. (1990). Design of a Hybrid Deterministic Parser. In Proceedings of COLING-90.
Framis, F. R. (1994). An Experiment on Learning Appropriate Selectional Restrictions from a Parsed Corpus. In Proceedings of COLING-94.
Gazdar, G. and Mellish, C. (1993). Natural Language Processing in LISP. Addison Wesley.
Hindle, D. and Rooth, M. (1993). Structural Ambiguity and Lexical Relations. Computational Linguistics.
Kobayasi, Y., Tokunaga, T. and Tanaka, H. (1994). Analysis of Japanese Compound Nouns using Collocational Information. In Proceedings of COLING-94.
Kurohashi, S. and Nagao, M. (1994). A Syntactic Analysis Method of Long Japanese Sentences Based on the Detection of Conjunctive Structures. Computational Linguistics.
Lauer, M. (1995). Corpus Statistics Meet the Noun Compound: Some Empirical Results. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.
Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.
Magerman, D. M. (1995). Statistical Decision-Tree Models for Parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.
Pustejovsky, J., Bergler, S. and Anick, P. (1993). Lexical Semantic Techniques for Corpus Analysis. Computational Linguistics.
Resnik, P. (1992). WordNet and Distributional Analysis: A Class-Based Approach to Lexical Discovery. In Proceedings of the AAAI Workshop on Statistical Methods in NLP.
Tomita, M. (1986). Efficient Parsing for Natural Language. Boston: Kluwer Academic Publishers. |
2,313,543 | Japanese Pronunciation Prediction as Phrasal Statistical Machine Translation | This paper addresses the problem of predicting the pronunciation of Japanese text. The difficulty of this task lies in the high degree of ambiguity in the pronunciation of Japanese characters and words. Previous approaches have either considered the task as a word-level classification problem based on a dictionary, which does not fare well in handling out-of-vocabulary (OOV) words; or solely focused on the pronunciation prediction of OOV words without considering the contextual disambiguation of word pronunciations in text. In this paper, we propose a unified approach within the framework of phrasal statistical machine translation (SMT) that combines the strengths of the dictionary-based and substring-based approaches. Our approach is novel in that we combine wordand character-based pronunciations from a dictionary within an SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words. We show that based on an extensive evaluation on various test sets, our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most domains. | [
13538306,
178267,
18936141,
430897,
528246,
7418935,
1435098,
11985819
] | Japanese Pronunciation Prediction as Phrasal Statistical Machine Translation
AFNLP. Copyright AFNLP. November 8-13, 2011.
Jun Hatori [email protected]
Department of Computer Science
University of Tokyo
7-3-1 Hongo113-0033BunkyoTokyoJapan
Hisami Suzuki [email protected]
Microsoft Research / One Microsoft Way
98052RedmondWAUSA
Japanese Pronunciation Prediction as Phrasal Statistical Machine Translation
Proceedings of the 5th International Joint Conference on Natural Language Processing
the 5th International Joint Conference on Natural Language Processing, Chiang Mai, Thailand. AFNLP, November 8-13, 2011.
This paper addresses the problem of predicting the pronunciation of Japanese text. The difficulty of this task lies in the high degree of ambiguity in the pronunciation of Japanese characters and words. Previous approaches have either considered the task as a word-level classification problem based on a dictionary, which does not fare well in handling out-of-vocabulary (OOV) words; or solely focused on the pronunciation prediction of OOV words without considering the contextual disambiguation of word pronunciations in text. In this paper, we propose a unified approach within the framework of phrasal statistical machine translation (SMT) that combines the strengths of the dictionary-based and substring-based approaches. Our approach is novel in that we combine wordand character-based pronunciations from a dictionary within an SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words. We show that based on an extensive evaluation on various test sets, our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most domains.
Introduction
This paper 1 explores the problem of assigning pronunciation to Japanese text, which consists of a mixture of ideographic and phonetic characters. The task is naturally important for the text-to-speech application (Schroeter et al., 2002), and has been researched in that context as letter-to-phoneme conversion, which converts an orthographic character sequence into phonemes. In addition to speech applications, the task is also crucial for those languages such as Chinese and Japanese, where users generally type in the pronunciations of words, which are then converted into the desired character string via the software application called input methods (e.g. Gao et al. (2002a); Gao et al. (2002b)).
Predicting the pronunciation of Japanese text is particularly challenging because the word and character pronunciations are highly ambiguous. Japanese orthography employs four sets of characters: hiragana and katakana (called generally as kana), which are syllabary systems and thus phonemic; kanji, which is ideographic and consists of several thousand characters; and Roman alphabet. Out of these, kanji characters typically have multiple possible pronunciations 2 ; especially those in frequent use tend to have many -between 5 and 10, sometimes as many as 20. This yields an exponential number of pronunciation possibilities when multiple kanji characters are combined in a word. Also, the pronunciation of a word is frequently idiosyncratic.
This idiosyncratic property of the word pronunciation naturally motivates us to take a dictionary-based approach. Traditionally, most approaches to Japanese pronunciation prediction have regarded the problem as a word pronunciation disambiguation task. Since there are no white spaces between words in Japanese text, these approaches first segment an input sentence/phrase into words, and then select a word-level pronunciation among those defined in a dictionary (Nagano et al., 2006; Neubig and Mori, 2010). For example, given a word "人気", these methods try to select the most appropriate pronunciation out of the three dictionary entries: ninki (popularity), hitoke (sign of life) and jinki (people's atmosphere), depending on the context. However, in these approaches, segmentation errors tend to result in the failure of the following step of pronunciation prediction. Moreover, since the dictionary-based approach is inapplicable to those words that are not in the dictionary, there needs to be a separate mechanism for handling out-of-vocabulary (OOV) words.
Nonetheless, the problem of OOV words has received little attention to date. Traditional systems either bypass this problem completely and assign no pronunciation to OOV words, as Mecab (Kudo et al., 2004), a Japanese morphological analyzer, does; or use a simple model to cover them (e.g. Neubig and Mori (2010) uses a noisy-channel model with a character bigram language model). Our previous work (Hatori and Suzuki, 2011) explicitly addresses the problem of predicting the pronunciation of OOV words, but focuses solely on predicting the pronunciation of nouns that are found in Wikipedia in isolation, and does not address the contextual disambiguation of pronunciation at the sentence level.
In this paper, we propose a unified approach based on the framework of phrasal statistical machine translation (SMT), addressing the whole sentence pronunciation assignment while integrating the OOV pronunciation prediction as part of the whole task. The novelty of our approach lies in using word and single-character pronunciations from a dictionary within the SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words based on the sequence of pronunciations at the substring level.
In addressing the pronunciation disambiguation problem within the framework of phrasal SMT, we extend the use of composed operations, which were applied in a limited manner in Hatori and Suzuki (2011). Within our dictionary-based model, the composed operations are able to incorporate the composition of dictionary words (i.e. phrases) as well as substrings of the character sequence (i.e. (partial) words). In this sense, our approach is more like a standard monotone phrasal SMT, rather than the substring-based string transduction. We also propose to use the joint n-gram model as a feature function, which has been proven to be effective in the letter-to-phoneme conversion task (Bisani and Ney, 2008; Jiampojamarn et al., 2010). In the context of our current task, this feature not only incorporates smoothed contextual information for the purpose of pronunciation disambiguation, but also captures the dependency between single-kanji pronunciations, which is effective for predicting the pronunciation of OOV words.
We collected an extensive evaluation set for the task, including newswire articles, search query logs, person names, and Wikipedia-derived instances. Using these test sets, we show that our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most test domains, which is the best known result on the task of Japanese pronunciation prediction to date. We also give a detailed analysis of the comparison of the proposed model with an SVM-based model, KyTea (Neubig and Mori, 2010), through which we hope to shed light on the remaining issues in solving this task.
Background
Pronunciation Prediction: Task Setting
We define the task of pronunciation prediction as converting a string of orthographic characters representing a sentence (or a word or phrase) into a sequence of hiragana, which corresponds to how the string is pronounced. For example, given a Japanese sentence " " ("I went to the Exhibition of Tanyu Kano at the Tokyo Metropolitan Art Museum."), the system is expected to output a sequence of hiragana, "
", pronounced as tookyoo to bijutsukan no kanoo tanyuu ten ni itta. The task involves two sub-problems: (a) contextual disambiguation of a word pronunciation, e.g., can be pronounced either as itta "went" or okonatta "did" depending on the context; (b) pronunciation prediction of OOV words, e.g., in the above example, ("the Exhibition of Tanyu Kano") is not likely to be in the dictionary, so the pronunciation must be reasonably guessed based on the possible pronunciations of individual characters.
Related Work
Our research on pronunciation prediction is inspired by previous research on string transduction. The most directly relevant is the work on letter-to-phoneme conversion. Previous approaches to this task include joint n-gram models (e.g., Bisani and Ney (2002); Chen (2003); Bisani and Ney (2008)) and discriminatively trained substring-based models (e.g., Jiampojamarn et al. (2007); Jiampojamarn et al. (2008)). This task is typically evaluated at the word level, and therefore does not include contextual disambiguation.
Similar techniques to the letter-to-phoneme task have also been widely applied to the transliteration task (Knight and Graehl (1998)). The most relevant to the current task include an approach based on substring operations in the SMT framework (e.g., Sherif and Kondrak (2007), Cherry and Suzuki (2009)), and those that use the joint n-gram estimation method for the task of transliteration (e.g., Li et al. (2004); Jiampojamarn et al. (2010)). However, similarly to the letter-to-phoneme task, the contextual disambiguation of the words has not received much attention. The task of Japanese pronunciation prediction itself has been a topic of investigation. Sumita and Sugaya (2006) proposed a method to use the web for assigning word pronunciation, but their focus is limited to the pronunciation disambiguation of known proper nouns. Kurata et al. (2007) and Sasada et al. (2009) discuss the methods of disambiguating new word pronunciation candidates using speech data. Nagano et al. (2006) and Mori et al. (2010b) investigated the use of the joint n-gram estimation to this task.
More recently, Neubig and Mori (2010) proposed a classifier-based system called KyTea, which is one of the current state-of-the-art systems for the task of Japanese pronunciation prediction. As we use this system as one of our baseline systems, we describe this work in some detail here. KyTea exploits an SVM-based two-step approach, which performs a word segmentation step, followed by a pronunciation disambiguation step for each word segment. In the pronunciation prediction step, if the word in question exists in the dictionary, KyTea uses character and character-type n-grams within a window as features for the SVM classifier. For OOV words, a simple OOV model based on a noisy channel model with a character bigram language model is used. While KyTea uses the discriminative indicator features, our model instead uses character/joint n-gram language models and composed operations (to be explained in Section 3.3.2) to capture the context for the purpose of pronunciation disambiguation. The use of the indicator features essentially requires probabilistic optimization of a large number of weights, making the training less scalable than our model, which only requires frequencies of operations and phrases in the training data.
In our previous work (Hatori and Suzuki, 2011), we addressed the pronunciation prediction of Japanese words in a semi-supervised, substring-based framework, using word-pronunciation pairs automatically extracted from Wikipedia. Though we obtained more than 70% accuracy on Wikipedia data, the model is quite specific to handling the noun phrases in Wikipedia, and it is not clear if the approach can handle the pronunciation assignment of a general text, which includes the pronunciation prediction and disambiguation of the words of all types at the sentence level. Since our current work is an extension of this approach, we also adopt our previous work as one of our baseline models in Section 4.4.

Figure 1: Overview of the model.
Pronunciation Prediction Model
This section describes our phrasal SMT-based approach to pronunciation prediction, which is an extension of our previous work (Hatori and Suzuki, 2011). We assume that the task of translating a Japanese orthography string to a hiragana string is basically monotone and without insertion or deletion. The overview of our model is given in Figure 1. The components of the model will be explained below.
Training and Decoding
As is widely used in SMT research (Och, 2003), we adopt a discriminative learning framework that uses component generative models as real-valued features (Cherry and Suzuki, 2009). Given the source sequence s and the target character sequence t, we define real-valued features over s and t, f_i(s, t) for i ∈ {1, ..., n}. The score of a sequence pair (s, t) is given by the inner product of the weight vector λ = (λ_1, ..., λ_n) and the feature vector f(s, t).
For the training of model parameters, we use the averaged perceptron (Collins and Roark, 2004): given a training corpus of transduction derivations, each of which describes a word/substring operation sequence converting s into t, the perceptron iteratively updates the weight vector every time it encounters an instance for which the model outputs a wrong sequence. For decoding, we use a stack decoder (Zens and Ney, 2004).
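To make this concrete, the following is a minimal Python sketch of the averaged perceptron update over such real-valued features; it is an illustration only (the decoder and feature extractor are assumed to be supplied), not the authors' implementation.

```python
def averaged_perceptron(train, feats, decode, n_feats, epochs=10):
    """Tune feature weights with the averaged perceptron.

    train  : list of (source, gold_target) pairs
    feats  : feats(s, t) -> list of n_feats real-valued feature scores
    decode : decode(s, weights) -> best target under the current weights
    """
    w = [0.0] * n_feats          # current weights
    w_sum = [0.0] * n_feats      # running sum for averaging
    steps = 0
    for _ in range(epochs):
        for s, t_gold in train:
            t_pred = decode(s, w)
            if t_pred != t_gold:
                f_gold = feats(s, t_gold)
                f_pred = feats(s, t_pred)
                # move the weights towards the features of the gold derivation
                for i in range(n_feats):
                    w[i] += f_gold[i] - f_pred[i]
            for i in range(n_feats):
                w_sum[i] += w[i]
            steps += 1
    return [x / steps for x in w_sum]   # averaged weights
```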
Features
For our baseline model features, we first use those from Hatori and Suzuki (2011): the bidirectional translation probabilities, P (t|s) and P (s|t), the target character n-gram probability, P (t), the target character count, and the phrase count. In addition, we incorporate the joint n-gram probability, P (s, t), as a feature (described in Section 3.2.1). The estimation of the translation and joint/character n-gram probabilities requires a set of training corpus with source and target alignment at the word/substring level. Once these probabilities have been estimated by using the frequency of (the sequences of) operations in the training set, we only need a small tuning set to adjust the feature weights of the model. This makes online training and domain adaptation easy, and makes our model more scalable compared to fully discriminative systems with indicator features, such as KyTea.
Joint n-gram Language Model Feature
Motivated by the success in the transliteration task (Jiampojamarn et al., 2010), we incorporate the joint n-gram language model into our SMT-based framework. The joint n-gram sequence is the sequence of operations used in the transduction: for example, when a paired sentence " " is decomposed into three operations " , , ", the corresponding joint n-gram sequence is " , , ,
". The effectiveness of this feature is confirmed in our experiments in Section 5.2.
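As an illustration of how an operation-level n-gram can be computed, the sketch below treats each source/target operation as a single token and scores an operation sequence with a toy add-alpha smoothed model; the smoothing, the romanised placeholder operations, and the hyperparameters are all assumptions made for the example, not the model actually used in the paper.

```python
import math
from collections import defaultdict

BOS, EOS = ("<s>", "<s>"), ("</s>", "</s>")

def train_joint_ngram(op_sequences, n=3):
    """Count n-grams over operation tokens, where each operation is a
    (source_substring, target_substring) pair."""
    counts, context_counts = defaultdict(int), defaultdict(int)
    for ops in op_sequences:
        seq = [BOS] * (n - 1) + list(ops) + [EOS]
        for i in range(n - 1, len(seq)):
            ngram = tuple(seq[i - n + 1:i + 1])
            counts[ngram] += 1
            context_counts[ngram[:-1]] += 1
    return counts, context_counts

def joint_ngram_logprob(ops, counts, context_counts, n=3, alpha=0.1, vocab=10000):
    """Add-alpha smoothed log probability of an operation sequence."""
    seq = [BOS] * (n - 1) + list(ops) + [EOS]
    logp = 0.0
    for i in range(n - 1, len(seq)):
        ngram = tuple(seq[i - n + 1:i + 1])
        num = counts[ngram] + alpha
        den = context_counts[ngram[:-1]] + alpha * vocab
        logp += math.log(num / den)
    return logp

# toy usage with romanised placeholders for the kanji/kana operations
ops = [("toukyou", "tookyoo"), ("bijutsukan", "bijutsukan"), ("ni", "ni"), ("itta", "itta")]
c, cc = train_joint_ngram([ops], n=3)
print(joint_ngram_logprob(ops, c, cc, n=3))
```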
Translation Table
The corpora we use are a collection of pairs of a Japanese sentence and its hiragana sequence, as described as "paired corpus" in Figure 2. These are just like bilingual corpora if we regard the hiragana sequence as monotonically translated from Japanese text. Since the original corpora do not have any word segmentation or word/substring alignments, we first need to obtain them to construct the translation table for the decoder. In previous work, KyTea used a corpus that is manually aligned using words as a unit of alignment, while Hatori and Suzuki (2011) used an unsupervised substring-based alignment. The former is not scalable easily, while the latter cannot take advantage of existing dictionaries. In this work, we use a novel application of dictionary-based phrasal decoder in order to create an aligned corpus, which allows us to use dictionary information while learning substring-based alignments for handling OOV pronunciation prediction.
Dictionary-based model
In the dictionary-based model we propose, alignments are obtained using a phrasal decoder which is based on a dictionary. This essentially treats the dictionary entries as the minimal unit of substring operations, instead of using single-kanji pronunciations estimated from training corpora as in the case of the substring-based model (Hatori and Suzuki, 2011). We first build a simple dictionary-based decoder with only two features: the forward translation probability and the phrase count; and then use it to decode a paired corpus to obtain the alignments between the source and target strings. In this process, instances including any operation that is not defined in the dictionary are discarded; this is a major difference with the substring-based model of Hatori and Suzuki (2011), which uses all instances of training data.
Since Japanese dictionaries typically include single-kanji entries as well as word entries 3 , dictionary-based substring operations actually consist of both single-kanji (that is not a word per se) and word pronunciations. This is why our dictionary-based model is still able to handle OOV words. We show in Section 5 that the benefit of removing noisy training samples by this process outweighs the risk of discarding infrequent or nonstandard pronunciations that do not exist in the dictionary.
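The filtering step can be pictured with the following simplified sketch, which assumes a dictionary mapping surface substrings to sets of possible kana pronunciations and keeps a training pair only if it can be fully decomposed into dictionary operations; this is an illustrative approximation of the decoding-based procedure described above, not the actual decoder.

```python
from functools import lru_cache

def coverable(source, target, dictionary, max_src_len=8):
    """Return True if (source, target) can be fully decomposed into
    dictionary operations (surface substring -> kana substring)."""

    @lru_cache(maxsize=None)
    def match(i, j):
        # i: position in the source string, j: position in the target string
        if i == len(source) and j == len(target):
            return True
        for k in range(i + 1, min(len(source), i + max_src_len) + 1):
            src = source[i:k]
            for kana in dictionary.get(src, ()):
                if target.startswith(kana, j) and match(k, j + len(kana)):
                    return True
        return False

    return match(0, 0)

def filter_corpus(pairs, dictionary):
    """Discard instances containing any operation not defined in the dictionary."""
    return [(s, t) for s, t in pairs if coverable(s, t, dictionary)]

# toy example with romanised placeholders
toy_dict = {"shokki": {"shokki"}, "tana": {"tana", "dana"}}
print(filter_corpus([("shokkitana", "shokkidana")], toy_dict))
```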
Composed operations
Our previous work (Hatori and Suzuki, 2011) exploits composed operations in order to include local contextual information in the substring-based model. Given a paired corpus, they use an aligner to obtain single-character alignments, which maps one kanji to one or more kana characters, which are then composed into larger operations. This procedure makes it possible to obtain longer alignments with limited memory, rather than using the source phrase length larger than one. In the current work, we extend the use of composed operations so that they work properly with the joint n-gram estimation.
The composed operations are beneficial for capturing contextual information. For example, the phrase " " can be pronounced in two ways: itta "went" and okonatta "did", which cannot be distinguished without any context. However, if this phrase is preceded by a hiragana particle ni "to", we can assume that the correct pronunciation is most likely itta, because the pronunciation ni okonatta is unusual ( okonatta is seldom preceded by ni). The composed operations are also useful in capturing the pronunciation of compound nouns: for example, due to the phonological process called rendaku (sequential voicing) (Vance, 1987), -"plate rack" is pronounced as shokki-dana, while the components of this word are individually pronounced as shokki ("plate") and tana ("rack"). By considering the compositions of operations, we can capture the pronunciation in the context of a compound word. Our phrasal decoder considers all (i.e. composed and non-composed) operations during the decoding, but longer (composed) operations are generally preferred when available because the phrase count feature usually receives a negative weight.
However, the simultaneous use of these operations of different size may cause a problem when the joint n-gram estimation is applied: because composed operations include multiple noncomposed operations, they break the independence assumption of n-gram occurrences in the language model. For example, given a parallel phrase " " (went to an exhibition), which is decomposed into " , , " by dictionary-based alignments, the joint n-gram language model expects that the occurrence of " " (non-composed operation) is independent of that of " --" (composed operation), but this is not the case. To avoid this, we let the model retain the original operations even after they are composed. As shown in Figure 1, even after the two operations " " and " " are merged into a composed operation " --", the joint n-gram probability is still estimated based on the original (non-composed) operations. For efficiency purposes, we only retain the decomposition of the first appearance of each composed operation even if multiple different decompositions are possible.
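A simplified illustration of this bookkeeping is sketched below: adjacent base operations are composed up to a maximum size, and the first-seen decomposition of every composed operation is remembered so that joint n-gram events can still be generated over the original operations. The function names and the composition limit are assumptions made for the example.

```python
def compose_operations(base_ops, max_compose=3):
    """base_ops: list of (source_substring, target_substring) operations
    for one training instance.  Returns (all_ops, decompositions)."""
    all_ops = list(base_ops)
    decompositions = {}           # composed op -> tuple of base ops (first seen)
    for size in range(2, max_compose + 1):
        for i in range(len(base_ops) - size + 1):
            span = tuple(base_ops[i:i + size])
            composed = ("".join(s for s, _ in span), "".join(t for _, t in span))
            all_ops.append(composed)
            # keep only the first decomposition of each composed operation
            decompositions.setdefault(composed, span)
    return all_ops, decompositions

def joint_ngram_events(op, decompositions):
    """Expand a (possibly composed) operation back into the base operations
    over which the joint n-gram model is estimated."""
    return list(decompositions.get(op, (op,)))

base = [("keiyaku", "keiyaku"), ("gire", "gire")]
ops, dec = compose_operations(base, max_compose=2)
print(ops)
print(joint_ngram_events(ops[-1], dec))
```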
Experiments
Dictionary
In the dictionary-based framework, we need a dictionary based on which we obtain the alignments. We use a combination of three dictionaries: Uni-Dic (Den et al., 2007), Iwanami Dictionary, and an in-house dictionary that was available to us of unknown origin. UniDic is a dictionary resource available for research purposes, which is updated on a regular basis and includes 625k word forms as of the version 1.3.12 release (July 2009). Iwanami Dictionary consists of 107k words, which expands into 325k surface forms after considering okurigana (verb inflectional ending) variants. The inhouse dictionary consists of a total of 226k words and single-kanji pronunciations. After removing duplicates, the combined dictionary consists of 770k entries. Note that these dictionaries are also used as part of training data.
Training and Test Data
As described in Section 3, we need word/substring-aligned parallel corpora to train the models. We used three different sources of training data in our experiments. First, following Hatori and Suzuki (2011), we used Wikipedia: following the heuristics described in the paper, we extracted about 460k noisy word-pronunciation pairs from Japanese Wikipedia articles as of January 24, 2010. Of these pairs, we set aside 3k instances for use in development and evaluation, and used the rest for training (referred to as "Wiki-Train"). Secondly, since word-pronunciation pairs extracted from Wikipedia are noisy 4 and mostly consist of noun phrases, we also used a newspaper corpus, which is comprised of 1.4m sentence pairs, referred to as "News-Train". Finally, for the comparison with KyTea, we use a publicly available corpus, the Balanced Corpus of Contemporary Written Japanese (Maekawa (2008)). Specifically, we use the 2009 Core Data of this corpus, which consists of 37k sentences annotated with pronunciations (referred to as "BCCWJ").
Our test data consist of six datasets from various domains. Table 1 shows the statistics of these corpora, with the OOV rate estimated using KyTea 5.

Table 1: Statistics of test sets, where "Avg. len." is the average length of an instance in the number of characters.
• News-1(N1) and News-2(N2): collections of newswire articles available as Microsoft Research IME Corpus (Suzuki and Gao, 2005). These articles are from different newspapers from the news corpus we used in training. In preparing these test sets, instances including Arabic and kanji numerals (0, 1, ..., 9, and their kanji counterparts), or Roman alphabets are excluded 6.
• Query-1(Q1) and Query-2(Q2): query logs from a search engine (source undisclosed for blind reviewing). These sets consist of various instances ranging from general noun phrases to relatively new proper nouns.
• Name(PN): a collection of difficult-to-pronounce words, mostly consisting of person names.
• Wiki(WP): manually-cleaned word-pronunciation pairs from Wikipedia, which consists mostly of proper nouns including names of people and locations as well as terms that are difficult to pronounce.
For the tuning of the weights of the model, we used 200 held-out instances for each test domain, except that the development set of Query-1 is also used for the tuning for Query-2, and the set of Wiki is used for the tuning for Name.
Experimental settings
We use our original implementation of the phrasal aligner and decoder, which is also used as our implementation of the substring-based model of Hatori and Suzuki (2011). An ITG-based aligner with EM algorithm (Zhang et al., 2008) is used with monotonic setting; we set the source (kanji) and target (kana) phrase length limits to 1 and 4, and prohibit alignments to a null symbol in either source or target side. The decoder runs with the beam size of 20. The maximum number of composed operations is 4 for the substringbased model of Hatori and Suzuki (2011), and 3 for the proposed dictionary-based model. In the substring-based model, character 5-gram and joint 4-gram language models with Kneser-Ney smoothing and the BoS (beginning-of-string) and EoS (end-of-string) symbols are used; in the dictionary-based model, character 5-gram and joint 3-gram models with the same settings are used. We did not use the infrequent operation cutoff. All of these parameters and settings are set based on the preliminary experiments. As the evaluation measure, we use instance-level accuracy, which is calculated based on the percentage of the outputs that exactly match the gold standard: instances correspond to sentences in News-1/2, and to words or phrases in all other test domains. The statistical significance of the results is given using McNemar's test.
Baseline Models
We describe three baseline models that we use as reference in our experiment.
• Mecab: Mecab version 0.98 7, which is the state-of-the-art morphological analyzer for Japanese that also outputs pronunciations of words (Kudo et al., 2004), with the off-the-shelf IPA Dictionary containing 392k word entries provided at the author's page.
• KyTea: KyTea version 0.13 8, which is described in Section 2.2. In our comparison experiment, we run KyTea version 0.13 both as is (using their pre-trained model), and as trained by us to allow the comparison of the framework using the same publicly available training data.
• HS11: HS11 is our reimplementation of the substring-based model by Hatori and Suzuki (2011), which was shown to outperform the substring-based joint trigram model on a Wikipedia test set.

Table 2: Instance-level accuracy (in %) of pronunciation prediction models. The upper two models use the off-the-shelf models; the lower three models are trained using the same resources: Wiki-Train, News-Train, and the combined dictionary.
Results and Discussion
Main Results
Table 2 shows the performance of the proposed model along with various baseline models. The first two lines are the result of the off-the-shelf, pre-trained systems. Mecab achieves around or above 80% accuracy on five out of six test sets, although the result on Wiki is below 60% because the system does not have a mechanism to handle OOV words. The second row shows the result of KyTea using the off-the-shelf "full SVM model" 9, which is trained on several resources including BCCWJ and UniDic. It generally does better than Mecab, but the accuracies on the high OOV rate domains (i.e. Name and Wiki) are still quite low.
The bottom three models are all trained with the same resources: Wiki-Train and News-Train with all the three dictionaries. "HS11" is the substring-based model proposed by Hatori and Suzuki (2011), while "HS11+" is the model enhanced with two additional features: the joint n-gram feature (as described in Section 3.2), and the dictionary feature, whose value is the total length (in source characters) of words matching any dictionary entry. 10 By comparing these two models, the effectiveness of these features over the model "HS11" is quite clear. However, the accuracy is below 40% on newswire test sets, where each instance is a full sentence. We assume that this is because the substring-based model cannot capture the contextual information that is broad enough, and also is easily affected by noise in the training data. Our proposed model, corresponding to the last line in the table, overcomes this problem and achieves the best accuracy in all but one test domain (Wiki), showing the effectiveness and robustness of the dictionary-based approach. We lag behind "HS11+" on Wiki, probably because the dictionary-based model discards many operations that are uncommon, but are still useful for the pronunciation of OOV words in Wikipedia. Table 3 shows the direct comparison between KyTea and the proposed model trained 11 with exactly the same datasets: BCCWJ, Wiki-Train,
and UniDic, all of which are from publicly available resources. Whereas "KyTea (w/noise)" uses all the instances for training, "KyTea (wo/noise)" uses only the instances that are filtered using dictionary-based operations 12 . Note that this cleaning process is also a novel contribution of our work. As is observed from Table 3, this cleaning process resulted in a large improvement in accuracy, with the exception of the Name and Wiki sets. After inspecting the errors manually, we have found that this is because the UniDicbased operations do not include many single-kanji pronunciations that are commonly used in person's names, such as " mi" and " to". However, this problems seems negligible when a larger dictionary including common pronunciations for person's names is available. In the comparison in Table 2, where the models use a combination of three dictionaries, the dictionary-based model "Proposed" performs better than the substringbased model "HS11+" even on the Name set.
Overall, the proposed model outperforms "KyTea (wo/noise)" in four out of six test sets, and the differences in the remaining two sets (News-1/2) are not statistically significant. Considering also that the training data is relatively small in this comparison experiment 13 , we can conclude that our model has at least a comparable performance to KyTea for the task of pronunciation disambiguation, while achieving a superior performance on the task of pronunciation prediction for OOV words. A manual analysis of the results also showed that our model indeed has an advantage in outputting phonetically natural pronunciation sequences, partially resolving problems related to on/kun 14 and rendaku, as in keiyaku-12 27.6% of the instances in Wiki-Train is filtered out. This percentage is larger than the noise rate of 10% in this corpus, which Hatori and Suzuki (2011) reported, because the sole use of UniDic does not cover many single-kanji pronunciations, as mentioned later in this paragraph. 13 Since the translation probabilities in our model are based on unregularized frequency, our model is less powerful with small training data, while it is more scalable.
14 Pronunciations of kanji are classified into on and kun pronunciations (corresponding to their origin, Chinese and Model N1 N2 Q1 Q2 PN WP Proposed (D) 89.7 88.6 95.5 87.8 92.9 70.2 -wo/joint n-gram -5.5 -3.3 -1.5 -3.8 -4.4 -4.2 -wo/composed op. -3.9 -4.0 -2.6 -1.2 -1.8 -2.9 Table 4: Feature ablation results for the dictionarybased model trained with Wiki-Train, News-Train and the combined dictionary. All the losses in accuracy were statistically significant (p < 0.01).
gire (individually pronounced as keiyaku and kire; "contract expiration"). Although KyTea wrongly output keiyaku-kire to this instance, the proposed model was able to output the correct pronunciation by learning that the pronunciation of tends to be gire after the pronunciation ku, from other instances such asku-gire (segments in haiku). On the other hand, KyTea is better at capturing generalized context by using a charactertype feature, resolving instances such as " -" (katakana + mai; "brand rice"), while the proposed model wrongly output the most frequent pronunciation bei for . Table 4 shows the results of the feature ablation experiment of the proposed model. As we mentioned in Section 3.2.1, the advantage of the joint n-gram language model is twofold: incorporating smoothed context into word pronunciation disambiguation (which is the dominant problem in News-1/2), as well as incorporating singlekanji pronunciation dependencies into pronunciation prediction for OOV words (considered to be common in Name and Wiki). The improvement observed in these domains suggests that the joint n-gram probability successfully captured these two aspects. The use of composed operations showed large improvement particularly on News-1/2, proving its utility for the pronunciation disambiguation aspect of this task. Figure 3 shows the performance of the proposed model with respect to the number of News-Train sentences used for training. In this experiment, the model is first trained only with Wiki-Train; then, sentences from News-Train are incrementally added. This can be seen as a process for adapting a word-based model to a fully sentential, disambiguation-capable model. As expected, the accuracy is consistently improved in the news domain as more sentences are added, while the accuracy remains almost unchanged in the rest of the Japanese), each of which tends to be used consecutively. domains, without showing any negative effect by the additional out-of-domain training data. These results suggest that our model is robust and can adapt to new domains with a simple addition of training data.
Feature Ablation Experiments
Data Ablation Experiments
Conclusion
We have presented a unified approach to the task of Japanese pronunciation prediction. Based on the framework of phrasal SMT, our model seamlessly and robustly integrates the task of word pronunciation disambiguation and pronunciation prediction for OOV words. Its basic components are trained in an unsupervised manner, and work in the presence of noise in training data. The model also has potential to adapt to a new domain when additional training data is available. We have performed an extensive evaluation on various test sets, and showed that our model achieves the new state-of-the-art accuracy on the task of Japanese pronunciation prediction. Looking into the future, we would like to see if the proposed model is effective in a general task of transliteration within a sentential context, which is conceivable as an application of phonetic input (e.g., inputting Arabic using Roman text and converting it automatically into Arabic scripts). On the task of Japanese pronunciation prediction, we are also interested in incorporating class-based features, such as character type information and on/kun dependencies, by using both existing resources and clustering methods.
Figure 2: Overview of the training.
Figure 3: Performance (accuracy in %) of the proposed model with respect to the log of the number of additional training sentences from News-Train.
1 This work was conducted during the first author's internship at Microsoft Research.
2 In UniDic (Den et al., 2007), the average number of pronunciations per kanji character is 2.3.
3 This is because each kanji character is a morpheme representing a meaning, and is worth an entry in dictionaries.
4 We have found that roughly 10% of these instances are invalid word-pronunciation pairs.
5 We ran KyTea 0.13 with the built-in default model. For News-1/2, the OOV rate in the table is the OOV word rate based on the KyTea's output. For the other test sets, the figures show the rate of the instances (words or phrases) that contain any OOV word, again based on the KyTea's output.
6 This is because there exist different standards in how to pronounce them. For example, the literal pronunciation is preferred for text-to-speech applications, whereas just outputting numerals as such suits better for the training of Japanese input methods.
7 http://mecab.sourceforge.net/
8 http://www.phontron.com/kytea/
Acknowledgement
We are grateful to Graham Neubig for providing us with detailed information on KyTea, and to anonymous reviewers for useful comments.
Maximilian Bisani and Hermann Ney. 2002. Investigations on joint-multigram models for grapheme-to-phoneme conversion. In Proceedings of the International Conference on Spoken Language Processing.
Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50:434-451.
Stanley F. Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Proceedings of the European Conference on Speech Communication and Technology.
Colin Cherry and Hisami Suzuki. 2009. Discriminative substring decoding for transliteration. In EMNLP.
Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL.
Yasuharu Den, Toshinobu Ogiso, Hideki Ogura, Atsushi Yamada, Nobuaki Minematsu, Kiyotaka Uchimoto, and Hanae Koiso. 2007. The development of an electronic dictionary for morphological analysis and its application to Japanese corpus linguistics (in Japanese). Japanese linguistics, 22:101-122.
Jianfeng Gao, Mingjing Li, Joshua T. Goodman, and Kai-Fu Lee. 2002a. Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing, 1:3-33.
Jianfeng Gao, Hisami Suzuki, and Yang Wen. 2002b. Exploiting headword dependency and predictive clustering for language modeling. In EMNLP.
Jun Hatori and Hisami Suzuki. 2011. Predicting word pronunciation in Japanese. In CICLing 2011, Lecture Notes in Computer Science (6609), pages 477-492. Springer.
Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden Markov models to letter-to-phoneme conversion. In HLT-NAACL.
Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In ACL.
Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training framework. In NAACL.
Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24.
Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In EMNLP.
Gakuto Kurata, Shinsuke Mori, Nobuyasu Itoh, and Masafumi Nishimura. 2007. Unsupervised lexicon acquisition from speech and text. In Proceedings of ICASSP-2007.
Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In ACL.
Kikuo Maekawa. 2008. Compilation of the KOTONOHA-BCCWJ corpus (in Japanese). Nihongo no kenkyu (Studies in Japanese), 4:82-95.
Shinsuke Mori, Tetsuro Sasada, and Graham Neubig. 2010b. Language model estimation from a stochastically tagged corpus (in Japanese). Technical Report, SIG, Information Processing Society of Japan.
Tohru Nagano, Shinsuke Mori, and Masafumi Nishimura. 2006. An n-gram-based approach to phoneme and accent estimation for TTS (in Japanese). Transactions of Information Processing Society of Japan, 47:1793-1801.
Graham Neubig and Shinsuke Mori. 2010. Word-based partial annotation for efficient corpus construction. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010).
Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In ACL.
Tetsuro Sasada, Shinsuke Mori, and Tatsuya Kawahara. 2009. Domain adaptation of statistical kana-kanji conversion system by automatic acquisition of contextual information with unknown words (in Japanese). In Proceedings of the 15th Annual Meeting of the Association for Natural Language Processing.
Juergen Schroeter, Alistair Conkie, Ann Syrdal, Mark Beutnagel, Matthias Jilka, Volker Strom, Yeon-Jun Kim, Hong-Goo Kang, and David Kapilow. 2002. A perspective on the next challenges for TTS research. In Proceedings of the IEEE 2002 Workshop on Speech Synthesis.
Tarek Sherif and Grzegorz Kondrak. 2007. Substring-based transliteration. In ACL.
Eiichiro Sumita and Fumiaki Sugaya. 2006. Word pronunciation disambiguation using the web. In NAACL.
Hisami Suzuki and Jianfeng Gao. 2005. Microsoft Research IME Corpus. MSR Technical Report No. 2005-168.
Timothy J. Vance. 1987. An Introduction to Japanese Phonology. State University of New York Press.
Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In HLT-NAACL.
Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of non-compositional phrases with synchronous parsing. In ACL.
14,547,438 | The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015 | This paper describes the submission of the University of Edinburgh and the Johns Hopkins University for the shared translation task of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation (WMT 2015). We set up phrase-based statistical machine translation systems for all ten language pairs of this year's evaluation campaign, which are English paired with Czech, Finnish, French, German, and Russian in both translation directions.Novel research directions we investigated include: neural network language models and bilingual neural network language models, a comprehensive use of word classes, and sparse lexicalized reordering features. | [
8227591,
14259080,
11142668,
2561041,
16429074,
11533588,
3065236,
10807721,
5984042,
11706155,
2479536,
1675316,
4895939,
5989701,
7271623,
7417943,
10766958,
3510512,
8476273,
8170227,
237558774
] | The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015
Association for Computational Linguistics. Copyright Association for Computational Linguistics. September 2015.
Barry Haddow
School of Informatics
University of Edinburgh
Matthias Huck [email protected]@jhu.edu
School of Informatics
University of Edinburgh
Alexandra Birch [email protected]
School of Informatics
University of Edinburgh
Nikolay Bogoychev
School of Informatics
University of Edinburgh
Philipp Koehn
School of Informatics
University of Edinburgh
Center for Speech and Language Processing
The Johns Hopkins University
The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015
Proceedings of the Tenth Workshop on Statistical Machine Translation
the Tenth Workshop on Statistical Machine Translation, Lisboa, Portugal. Association for Computational Linguistics, September 2015.
This paper describes the submission of the University of Edinburgh and the Johns Hopkins University for the shared translation task of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation (WMT 2015). We set up phrase-based statistical machine translation systems for all ten language pairs of this year's evaluation campaign, which are English paired with Czech, Finnish, French, German, and Russian in both translation directions.Novel research directions we investigated include: neural network language models and bilingual neural network language models, a comprehensive use of word classes, and sparse lexicalized reordering features.
Introduction
The Edinburgh/JHU phrase-based translation systems for our participation in the WMT 2015 shared translation task 1 are based on the open source Moses toolkit (Koehn et al., 2007). We built upon Edinburgh's strong baselines from WMT submissions in previous years as well as our recent research within the framework of other evaluation campaigns and projects such as IWSLT 2 and EU-BRIDGE 3 (Birch et al., 2014;Freitag et al., 2014a;Freitag et al., 2014b).
We first discuss novel features that we integrated into our systems for the 2015 Edinburgh/JHU submission. Next we give a general system overview with details on our training pipeline and decoder configuration. We finally present empirical results for the individual language pairs and translation directions.
Novel Methods
Neural Network LM with NPLM
For some language pairs (notably French↔English and Finnish↔English) we experimented with feed-forward neural network language models using the NPLM toolkit (Vaswani et al., 2013). This toolkit enables such language models to be trained efficiently on large datasets, and provides a querying API which is fast enough to be used during decoding. NPLM is fully integrated into Moses, including appropriate wrapper scripts for training the language models within the Moses experiment management system.
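For readers unfamiliar with this model class, the following numpy sketch shows how a feed-forward n-gram language model scores a word given its history; it is a simplified illustration only and does not reflect the NPLM API or its training procedure (e.g. noise-contrastive estimation and self-normalization are omitted).

```python
import numpy as np

class FeedForwardLM:
    """Toy feed-forward n-gram LM: embed the (n-1)-word history,
    pass it through one hidden layer, and softmax over the vocabulary."""

    def __init__(self, vocab_size, order=5, embed_dim=150, hidden_dim=750, seed=0):
        rng = np.random.default_rng(seed)
        ctx = order - 1
        self.order = order
        self.E = rng.normal(0, 0.1, (vocab_size, embed_dim))         # word embeddings
        self.W1 = rng.normal(0, 0.1, (ctx * embed_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0, 0.1, (hidden_dim, vocab_size))
        self.b2 = np.zeros(vocab_size)

    def logprob(self, context_ids, word_id):
        """log P(word | previous order-1 words), with words given as integer ids."""
        x = np.concatenate([self.E[i] for i in context_ids])         # concatenated history
        h = np.tanh(x @ self.W1 + self.b1)
        logits = h @ self.W2 + self.b2
        logits -= logits.max()                                       # numerical stability
        logZ = np.log(np.exp(logits).sum())
        return float(logits[word_id] - logZ)

lm = FeedForwardLM(vocab_size=1000, order=5)
print(lm.logprob([1, 2, 3, 4], 5))
```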
Bilingual Neural Network LM
We also experimented with our re-implementation of the "joint" model by Devlin et al. (2014). Referred to as bilingual LM in Moses, this was previously employed in the Edinburgh IWSLT system submissions, although with limited success (Birch et al., 2014).
The idea of the bilingual LM is quite straightforward. We define a language model where each target token is conditioned on the previous (n − 1) target tokens (as in a standard n-gram language model) as well as its aligned source token, and a window of m tokens on either side of the aligned source token. At training time, the aligned source token is found from the automatic alignment, and at test time the alignment is supplied by the decoder. The bilingual LM is trained using a feedforward neural network and we use the NPLM toolkit for this.
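The conditioning context can be made concrete with the small sketch below, which extracts, for each target position, the previous target tokens plus a window of source tokens around the aligned source word. The handling of unaligned and multiply-aligned target words here is a simple placeholder heuristic, not necessarily what the Moses implementation does.

```python
def bilm_contexts(src, tgt, alignment, n=5, m=2, pad="<null>"):
    """Yield (context, target_word) training events for a bilingual LM.

    src, tgt  : lists of tokens
    alignment : set of (src_index, tgt_index) word-alignment links
    n         : target n-gram order; m : source window on each side
    """
    links = {}
    for i, j in alignment:
        links.setdefault(j, []).append(i)
    events = []
    for j, word in enumerate(tgt):
        # pick the middle link if the target word has several; no source context if none
        if j in links:
            a = sorted(links[j])[len(links[j]) // 2]
            window = [src[k] if 0 <= k < len(src) else pad
                      for k in range(a - m, a + m + 1)]
        else:
            window = [pad] * (2 * m + 1)
        history = [pad] * max(0, n - 1 - j) + tgt[max(0, j - n + 1):j]
        events.append((tuple(history + window), word))
    return events

src = ["ich", "gehe", "nach", "hause"]
tgt = ["i", "go", "home"]
align = {(0, 0), (1, 1), (3, 2)}
for ctx, w in bilm_contexts(src, tgt, align, n=3, m=1):
    print(ctx, "->", w)
```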
Prior to submission we tested bilingual LMs on the French↔English tasks and on English→Russian task. For French↔English, we had resource issues 4 in training such large models so we randomly subsampled 10% of the data for training. Since we did not observe gains in translation quality, the bilingual LM was not integrated into our primary system submissions. In post-submission experiments, we tried training bilingual LM on a 10% domain-specific portion of the training data selected using modified Moore-Lewis (Moore and Lewis, 2010; Axelrod et al., 2011), but only observed a small improvement in translation performance.
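For reference, the selection criterion can be sketched as follows: modified Moore-Lewis ranks each sentence by the difference between its per-word cross-entropy under an in-domain language model and under a general-domain one, and keeps the lowest-scoring fraction. The unigram scorers in the usage example are dummies; real n-gram models would be plugged in.

```python
import math

def cross_entropy_per_word(sentence, lm_logprob):
    """lm_logprob(tokens) -> total natural-log probability of the sentence."""
    tokens = sentence.split()
    return -lm_logprob(tokens) / max(len(tokens), 1)

def moore_lewis_select(sentences, in_domain_lm, general_lm, fraction=0.1):
    """Keep the `fraction` of sentences that look most in-domain:
    a low H_in(s) - H_gen(s) means the sentence is closer to the in-domain LM."""
    scored = []
    for s in sentences:
        score = (cross_entropy_per_word(s, in_domain_lm)
                 - cross_entropy_per_word(s, general_lm))
        scored.append((score, s))
    scored.sort(key=lambda x: x[0])
    keep = int(len(scored) * fraction)
    return [s for _, s in scored[:keep]]

# usage with dummy unigram scorers
def make_unigram_lm(probs, floor=1e-6):
    return lambda toks: sum(math.log(probs.get(t, floor)) for t in toks)

in_lm = make_unigram_lm({"translation": 0.05, "model": 0.05})
gen_lm = make_unigram_lm({"the": 0.06, "cat": 0.01})
print(moore_lewis_select(["the translation model", "the cat sat"], in_lm, gen_lm, 0.5))
```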
Comprehensive Use of Word Classes
In Edinburgh's submission from the previous year, we used automatically generated word classes in additional language models and in additional operation sequence models . This year, we pushed the use of word classes into the remaining feature functions: the reordering model and the sparse word features.
We generated Och clusters (Och, 1999) -a variant of Brown clusters -using mkcls. We have to choose a hyper parameter: the number of clusters. Our experiments and also prior work (Stewart et al., 2014) suggest that instead of committing to a single value, it is beneficial to use multiple numbers and use them in multiple feature functions concurrently. We used 50, 200, 600, and 2000 clusters, hence having 4 additional interpolated language models, 4 additional operation sequence models, 4 additional lexicalized reordering models, and 4 additional sets of sparse features.
The feature functions for word classes were trained exactly the same way as the corresponding feature functions for words. For instance, this means that the word class language model required training of individual models on the subcorpora, and then interpolation.
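A small sketch of the data preparation this implies is given below: each corpus is rewritten as cluster-ID sequences at every chosen granularity, and class-level LMs are then trained and interpolated on these rewritten corpora exactly like their word-level counterparts. The cluster file format and the file names are assumptions for the example.

```python
def load_clusters(path):
    """Read a word-to-class mapping, assumed here to be one 'word class_id'
    pair per line (an assumption about the cluster file format)."""
    word2class = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, cls = line.split()
            word2class[word] = cls
    return word2class

def corpus_to_classes(corpus_path, word2class, out_path, unk="UNK"):
    """Rewrite a tokenized corpus as cluster-ID sequences."""
    with open(corpus_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            classes = [word2class.get(tok, unk) for tok in line.split()]
            fout.write(" ".join(classes) + "\n")

# one class corpus per granularity; the LM toolkit is then run on each output
for k in (50, 200, 600, 2000):
    clusters = load_clusters("clusters.%d.classes" % k)       # hypothetical file names
    corpus_to_classes("corpus.tok.en", clusters, "corpus.classes%d.en" % k)
```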
We carried out a study to assess the contribution of the use of such word class feature functions. Table 1 summarizes the results. Use of word classes in each of the models yields small gains, except for the reordering model, where there is no observable difference. The biggest gains were observed in the language model. Note that the English-German baseline already included additional feature functions based on POS and morphological tags, and basically no additional gains were observed due to the class based feature functions.
Sparse Lexicalized Reordering
We implemented sparse lexicalized reordering features (Cherry, 2013) in Moses and evaluated them in English↔German setups. The experiments were conducted on top of the standard hierarchical lexicalized reordering model (Galley and Manning, 2008). We applied features based on Och clusters with 200 classes on both source and target side. Active feature groups are between, phrase, and stack.
In addition to optimizing the feature weights directly with k-best MIRA (Cherry and Foster, 2012), we also examined maximum expected BLEU training of the sparse lexicalized reordering features via stochastic gradient descent (Auli et al., 2014).
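To give a flavour of the feature space, the sketch below builds sparse indicator features for a single phrase application from its orientation, the clusters of the words at the phrase edges, and the clusters of the source words jumped over; it is an approximation of the feature templates (the stack group is omitted), not the Moses implementation.

```python
def sparse_reordering_features(orientation, src_phrase, tgt_phrase,
                               between_src_words, src_clusters, tgt_clusters,
                               unk="UNK"):
    """Return indicator features (name -> 1.0) for one phrase pair applied
    with a given orientation in {'M', 'S', 'DL', 'DR'}."""
    c = lambda table, w: table.get(w, unk)
    feats = {}
    # 'phrase' group: clusters of the first/last words of the phrase pair
    feats["phr_src_first_%s_%s" % (c(src_clusters, src_phrase[0]), orientation)] = 1.0
    feats["phr_src_last_%s_%s" % (c(src_clusters, src_phrase[-1]), orientation)] = 1.0
    feats["phr_tgt_first_%s_%s" % (c(tgt_clusters, tgt_phrase[0]), orientation)] = 1.0
    feats["phr_tgt_last_%s_%s" % (c(tgt_clusters, tgt_phrase[-1]), orientation)] = 1.0
    # 'between' group: clusters of source words skipped over by the jump
    for w in between_src_words:
        feats["btw_src_%s_%s" % (c(src_clusters, w), orientation)] = 1.0
    return feats

src_cl = {"haus": "17", "das": "3"}
tgt_cl = {"house": "21", "the": "4"}
print(sparse_reordering_features("S", ["haus"], ["house"], ["das"], src_cl, tgt_cl))
```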
System Overview
Preprocessing
The training data was preprocessed using scripts from the Moses toolkit. We first normalized the data using the normalize-punctuation.perl script, then performed tokenization (using the -a option), and then truecasing. We did not perform any corpus filtering other than the standard Moses method, which removes sentence pairs with extreme length ratios.
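That last filtering step can be illustrated as follows; the length limits and the ratio threshold below are assumptions in the spirit of the standard Moses cleaning script, not its exact defaults.

```python
def clean_parallel_corpus(pairs, min_len=1, max_len=80, max_ratio=9.0):
    """pairs: iterable of (src_sentence, tgt_sentence) strings (already tokenized).
    Keeps only pairs within the length limits and length-ratio bound."""
    kept = []
    for src, tgt in pairs:
        ls, lt = len(src.split()), len(tgt.split())
        if not (min_len <= ls <= max_len and min_len <= lt <= max_len):
            continue
        if max(ls, lt) / max(min(ls, lt), 1) > max_ratio:
            continue
        kept.append((src, tgt))
    return kept

print(clean_parallel_corpus([("a b c", "x y"), ("a", "x " * 50)]))
```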
Word Alignment
For word alignment we used either fast_align (Dyer et al., 2013) or MGIZA++ (Gao and Vogel, 2008), followed by the standard grow-diag-final-and symmetrization heuristic.
An empirical comparison of fast_align and MGIZA++ on the Finnish-English and English-Russian language pairs using the constrained data sets did not reveal any significant difference.
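For completeness, a compact sketch of the grow-diag-final-and heuristic is given below, written from the standard description of the algorithm; the two directional alignments are sets of (source index, target index) links.

```python
def grow_diag_final_and(e2f, f2e):
    """Symmetrize two directional word alignments, each a set of (src, tgt) links."""
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    alignment = set(e2f & f2e)
    union = e2f | f2e

    def src_aligned(i):
        return any(a[0] == i for a in alignment)

    def tgt_aligned(j):
        return any(a[1] == j for a in alignment)

    # grow-diag: keep adding union points adjacent (incl. diagonally) to current links
    added = True
    while added:
        added = False
        for i, j in sorted(alignment):
            for di, dj in neighbors:
                p = (i + di, j + dj)
                if p in union and p not in alignment and \
                        (not src_aligned(p[0]) or not tgt_aligned(p[1])):
                    alignment.add(p)
                    added = True
    # final-and: add remaining directional links whose words are both still unaligned
    for links in (e2f, f2e):
        for p in sorted(links):
            if p not in alignment and not src_aligned(p[0]) and not tgt_aligned(p[1]):
                alignment.add(p)
    return alignment

e2f = {(0, 0), (1, 1), (2, 2)}
f2e = {(0, 0), (1, 1), (2, 3)}
print(sorted(grow_diag_final_and(e2f, f2e)))
```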
Language Model
We used all available monolingual data to train 5-gram language models with modified Kneser-Ney smoothing (Chen and Goodman, 1998). Typically, language models for each monolingual corpus were first trained using either KenLM (Heafield et al., 2013) or the SRILM toolkit (Stolcke, 2002) and then linearly interpolated using weights tuned to minimize perplexity on the development set.
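The interpolation can be carried out with a simple EM procedure, sketched below: for every development-set token we take the probabilities assigned by each component LM and iteratively re-estimate the mixture weights so as to maximize development likelihood (equivalently, minimize perplexity).

```python
def em_interpolation_weights(component_probs, iterations=50):
    """component_probs: list over dev tokens, each entry a list of
    per-component probabilities p_k(token | context).
    Returns interpolation weights that (locally) maximize dev likelihood."""
    k = len(component_probs[0])
    w = [1.0 / k] * k
    for _ in range(iterations):
        expected = [0.0] * k
        for probs in component_probs:
            mix = sum(wi * pi for wi, pi in zip(w, probs))
            for i in range(k):
                expected[i] += w[i] * probs[i] / mix   # posterior responsibility
        total = sum(expected)
        w = [e / total for e in expected]
    return w

# toy example: two component LMs scoring three dev tokens
dev = [[0.2, 0.05], [0.01, 0.10], [0.30, 0.20]]
print(em_interpolation_weights(dev))
```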
Baseline Features
We follow the standard approach to SMT of scoring translation hypotheses using a weighted linear combination of features. The core features of our model are a 5-gram LM score, phrase translation and lexical translation scores, word and phrase penalties, and a linear distortion score. The phrase translation probabilities are smoothed with Good-Turing smoothing (Foster et al., 2006). We used the hierarchical lexicalized reordering model (Galley and Manning, 2008) with 4 possible orientations (monotone, swap, discontinuous left and discontinuous right) in both left-to-right and right-to-left direction. We also used the operation sequence model (OSM) (Durrani et al., 2013) with 4 count based supportive features. We further employed domain indicator features (marking which training corpus each phrase pair was found in), binary phrase count indicator features, sparse phrase length features, and sparse source word deletion, target word insertion, and word translation features (limited to the top K words in each language, typically with K = 50).
Tuning
Since our feature set (generally around 500 to 1000 features) was too large for MERT, we used k-best batch MIRA for tuning (Cherry and Foster, 2012). To speed up tuning we applied threshold pruning to the phrase table, based on the direct translation model probability.
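A minimal sketch of the threshold-pruning step is given below, assuming the usual Moses phrase-table text format; the score field index and the cutoff value are illustrative assumptions, not the values used for the submitted systems.

```python
# Drop phrase-table entries whose direct translation probability p(e|f) falls
# below a cutoff. Field order follows the common Moses layout
# (source ||| target ||| scores ...); the index of the direct probability
# within the score field is an assumption here.

def prune_phrase_table(in_path, out_path, threshold=1e-4, direct_prob_field=2):
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fields = line.rstrip("\n").split(" ||| ")
            scores = fields[2].split()
            if float(scores[direct_prob_field]) >= threshold:
                fout.write(line)
```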
Decoding
In decoding we applied cube pruning (Huang and Chiang, 2007) with a stack size of 5000 (reduced to 1000 for tuning), Minimum Bayes Risk decoding (Kumar and Byrne, 2004), a maximum phrase length of 5, a distortion limit of 6, 100best translation options and the no-reorderingover-punctuation heuristic (Koehn and Haddow, 2009).
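The MBR step can be pictured as re-ranking an n-best list by expected similarity to the other hypotheses. The following is a hedged sketch of sentence-level MBR in that spirit; the actual decoder implementation differs in details such as the similarity function and posterior scaling.

```python
# Pick the hypothesis with the highest expected similarity to the other
# hypotheses, weighting each "pseudo-reference" by its posterior derived from
# the model scores (assumed to be log-domain scores). In practice `similarity`
# would be a sentence-level BLEU approximation.

import math

def mbr_select(nbest, similarity, scale=1.0):
    # nbest: list of (hypothesis, model_score) pairs
    max_score = max(s for _, s in nbest)
    posteriors = [math.exp(scale * (s - max_score)) for _, s in nbest]
    z = sum(posteriors)
    posteriors = [p / z for p in posteriors]
    best, best_gain = None, float("-inf")
    for hyp, _ in nbest:
        gain = sum(p * similarity(hyp, ref) for (ref, _), p in zip(nbest, posteriors))
        if gain > best_gain:
            best, best_gain = hyp, gain
    return best
```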
Experimental Results
In this section we describe peculiarities of individual systems and present experimental results.
French↔English
Our submitted systems for the French-English language pair are quite similar for the two translation directions. We used all the constrained parallel data to build a phrase-based translation model, and the language model was built from the target side of this data, the monolingual news data and the LDC Gigaword corpora. During system development we used newsdiscussdev2015 for tuning and development testing, using 2-fold cross-validation. For tuning the submitted system, and for the post-submission experiments, we tuned on the whole of newsdiscussdev2015, and report cased BLEU on newsdiscusstest2015.
Prior to submission we experimented with a bilingual LM and an NPLM-based neural network language model (Sections 2.2 and 2.1) but did not obtain positive results. These were trained on a randomly selected 10% portion of the parallel training data. We also experimented with class-based language models (using Och clusters from mkcls), including the 50-class language model in the English→French submission but not in the French→English one, since it helped in our development setup in the former but not in the latter.
In the post-submission experiments (Table 2), we show the comparison of the baseline system (as described in Section 3) with systems enhanced with bilingual LM, NPLM and class-based language models. For the class-based language models, we tested with 50 Och clusters, 200 Och clusters, and with both class-based LMs. For the bilingual LM, we created both "combined" (a 5-gram on the target and a 9-gram on the source) and "source" (1-gram on the target and 15-gram on the source) variants. We can see from Table 2 that the bilingual LM has a minimal effect on BLEU, only showing an increase for one language pair and one configuration, and the margin of improvement is probably within the margin of tuning variation. We do not have a good explanation for the lack of success with the bilingual LM, in contrast to Devlin et al. (2014); however, we note that all reports of improvements with this type of model are for distantly related language pairs. We also did not observe any improvement with the class-based language models for French→English, although we did observe small gains for English→French. Building an NPLM model on all data gives a reasonable improvement (+0.7) for the French target, but not the English. In fact, French→English was the only language pair where NPLM did not improve BLEU after building the LM on all data. It is possible that the limited morphology of English means that the improved generalisation of the NPLM is not as helpful, and also that the conventional n-gram LM is already strong for this language pair.
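To make the "combined" and "source" configurations concrete, the sketch below shows one plausible way of building bilingual-LM training contexts in the spirit of Devlin et al. (2014): each target word is predicted from its target history plus a window of source words around its aligned position. The function, its fallback for unaligned words, and the padding symbols are assumptions for illustration, not the exact setup used here.

```python
# Construct (target history, source window, predicted word) training examples.
# The defaults mirror the "combined" setting mentioned above (5-gram target
# history, 9 source words around the aligned position).

def bilingual_contexts(source, target, alignment, tgt_order=5, src_window=9):
    """alignment: dict mapping target index -> aligned source index."""
    half = src_window // 2
    padded = ["<s>"] * (tgt_order - 1) + target
    examples = []
    for j, word in enumerate(target):
        tgt_hist = padded[j : j + tgt_order - 1]
        i = alignment.get(j, min(j, len(source) - 1))   # crude fallback for unaligned words
        src_ctx = [source[k] if 0 <= k < len(source) else "<pad>"
                   for k in range(i - half, i + half + 1)]
        examples.append((tgt_hist, src_ctx, word))
    return examples
```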
Finnish↔English
For the Finnish-English language pair we built systems using only the constrained data, and systems using all the OPUS (Tiedemann, 2012) parallel data. Our baselines include this extra data, but we also show results just using the constrained parallel data. We did not employ the morphological splitting as in Edinburgh's syntax-based system (Williams et al., 2015), and consequently the English→Finnish systems performed poorly in development; we did not submit a phrase-based system for this pair.

Table 3: Comparison of baseline with post-submission experiments on class-based language models, bilingual LM and NPLM. Note that the submitted system for Finnish→English was the same as the baseline (but retuned).
Our development setup was similar to that for French↔English; we used newsdev2015 for tuning and testing during system development (in 2-fold cross-validation), and then for the submission and subsequent experiments we used the whole of newsdev2015 for tuning. Also in common with our work on French↔English, we performed several post-submission experiments to examine the effect of class-based language models, bilingual LM and NPLM. We show the results in Table 3. For training the bilingual LM and NPLM models we encountered some numerical issues, probably due to the large vocabulary size of Finnish. These were partially addressed by employing dropout to prevent overfitting (Srivastava et al., 2014), enabling us to train the models for at least 2 epochs.
We note that, as with French↔English, our application of bilingual LM did not result in significant improvement. Finnish and English are quite distantly related, but we can speculate that using words as a representation for Finnish is not appropriate. The NPLM, however, offers modest (+0.4) improvements over the baseline in both directions.
Czech↔English
The development of the Czech↔English systems followed the ideas in Section 2.3, i.e., with a focus on word classes (50, 200, 600 classes) for all component models. We combined the test sets from 2008 to 2012 for tuning. No neural language model or bilingual language model was used.
Russian↔English
To Russian. For the English→Russian system, we used all the parallel data specified in the task. The Wiki Headlines data was appended to the combined parallel corpus. For the monolingual corpora, we used all the constrained-track corpora except for Newscrawl 2008-2010, which were left out as they were much smaller than the other resources. We trained word classes with three different settings (50, 200, and 600 clusters) on both the source and target languages. For the clusters, we trained 6-gram language models on the target side. We used all four factors (words and clusters) in both source and target languages for the translation model and the OSM, but we used only the word factor for the alignment and the reordering models. We performed transliteration (Durrani et al., 2014c) after decoding for all three experimental conditions. We used newstest2012 for LM interpolation and batch MIRA model tuning. In Table 4, the only difference between the baseline system and the official submission is that the baseline has no cluster factors. The final model (BiLM source & combined & NPLM) is the same as the submitted system, apart from the fact that we applied two bilingual neural network models, one over the source and one over the source and target, and an NPLM language model over the target. This did not improve over the factored model and so was not submitted for the evaluation.
From Russian. The Russian→English system used the same settings as the Czech system, except for the addition of a factor over 2000 word classes and a smaller tuning set (just newstest2012).
German↔English
Our German-English training corpus comprises all permissible parallel data of the constrained track for this language pair. A concatenation of newssyscomb2009 and newstest2008-2012 served as tuning set.

From German. For translation from German, we applied syntactic pre-reordering (Collins et al., 2005) and compound splitting (Koehn and Knight, 2003) in a preprocessing step on the source side. A rich set of translation factors was exploited in addition to word surface forms: Och clusters (50 classes), morphological tags, part-of-speech tags, and word stems on the German side (Schmid, 2000), as well as Och clusters (50 classes), part-of-speech tags (Ratnaparkhi, 1996), and word stems (Porter, 1980) on the English side. The factors were utilized in the translation model and in OSMs. The lexicalized reordering model was trained on stems. Individual 7-gram Och cluster LMs were trained with KenLM's --discount_fallback --prune '0 0 1' parameters, then interpolated with the SRILM toolkit and added to the log-linear model as a second LM feature. Our 5-gram word LM was trained on all English data at once, also with pruning of singleton n-grams of order 3 and higher. We included the English LDC Gigaword Fifth Edition. Sparse lexical features (source word deletion, target word insertion, word translation) were limited to the top K = 200 words for German→English.
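The compound splitting mentioned above follows Koehn and Knight (2003); a rough sketch of the frequency-based criterion (geometric mean of part frequencies versus the frequency of the unsplit word) is given below. The filler letters and minimum part length are simplifications for illustration, not the exact configuration used here.

```python
# Split a German word into known parts if the geometric mean of the parts'
# corpus frequencies beats the frequency of the unsplit word. Filler letters
# ("s", "es") between parts are allowed.

def best_split(word, freq, fillers=("", "s", "es"), min_part=4):
    def geo_mean(parts):
        prod = 1.0
        for p in parts:
            prod *= freq.get(p, 0)
        return prod ** (1.0 / len(parts))

    best = ([word], freq.get(word, 0))          # no-split baseline
    for i in range(min_part, len(word) - min_part + 1):
        left, rest = word[:i], word[i:]
        for fill in fillers:
            if rest.startswith(fill):
                sub_parts, _ = best_split(rest[len(fill):], freq, fillers, min_part)
                candidate = [left] + sub_parts
                score = geo_mean(candidate)
                if score > best[1]:
                    best = (candidate, score)
    return best
```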
To German. Translation factors for the English→German translation direction are word surface forms, Och clusters (50 classes), morphological tags, and part-of-speech tags. Morphological tags were employed on the target side only, all other factors on both source and target side. The lexicalized reordering model was trained on word surface forms. We added an interpolated 7-gram Och cluster LM and a 7-gram LM over morphological tags. LMs were trained in a similar way to the ones for translation from German. Sparse phrase length features and sparse lexical features were not used for English→German.

Sparse lexicalized reordering. We investigated sparse lexicalized reordering features (Section 2.4) on the German-English language pair in both translation directions. Two methods for learning the weights of the sparse lexicalized reordering feature set have been compared: (1) direct tuning with MIRA along with all other features in the model combination (sparse LR (MIRA)), and (2) separate optimization with stochastic gradient descent (SGD) with a maximum expected BLEU objective (sparse LR (SGD)). For the latter variant, we used the MT tuning set for training (13 573 sentence pairs) and otherwise followed the approach outlined by Auli et al. (2014). We tuned the baseline feature weights with MIRA before SGD training and ran two final MIRA iterations after it. SGD training was stopped after 80 epochs.
Empirical results for the German-English language pair are presented in Table 5. We observe minor gains of up to +0.2 points BLEU. The results are not consistent in the two translation directions: The MIRA-trained variant seems to perform better when translating from German, the SGD-trained variant when translating to German. However, in both cases the baseline score is almost identical to the best results with sparse lexicalized reordering features.
In future work we plan to adopt hypergraph MIRA, as well as larger training sets for maximum expected BLEU training. We also consider scaling the method to word surface forms in addition to Och clusters, and trying RPROP instead of SGD.
Conclusion
The Edinburgh/JHU team built phrase-based translation systems using the open source Moses toolkit for all language pairs of the WMT 2015 shared translation task. Our submitted system outputs ranked first according to cased BLEU on the newstest2015 evaluation set on six out of ten language pairs: Czech→English, German→English, Finnish→English, Russian→English, English→French, and English→Russian.
Table 1: Use of additional feature functions based on Och clusters (see Section 2.3). The last four lines refer to ablation studies where one of the sets of clustered feature functions is removed from the comprehensive setup. Note that the word-based feature functions are used in all cases. BLEU scores on newstest2014 are reported.
System               de-en 2013   de-en 2014   en-de 2013   en-de 2014
Baseline                  27.3         28.6         20.6         20.9
+ sparse LR (MIRA)        27.2         28.8         20.7         20.8
+ sparse LR (SGD)         27.2         28.5         20.8         21.1
Table 5: Experimental results for German→English and English→German. We report cased BLEU scores on the newstest2013 and newstest2014 sets. Primary submission results are highlighted in bold.
1 http://www.statmt.org/wmt15/
2 http://workshop2014.iwslt.org
3 http://www.eu-bridge.eu
These can now be addressed using the -mmap option to create a binarized version of the corpus which is then memory-mapped.
http://www.statmt.org/mtm14/uploads/Projects/KenLMFunWithLanguageModel_MTM2014p9.pdf
http://matrix.statmt.org/?mode=all&test_set[id]=21
Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), 645487 (MMT), 644333 (TraMOOC) and 644402 (HimL).
Michael Auli, Michel Galley, and Jianfeng Gao. 2014. Large-scale Expected BLEU Training of Phrase-based Reordering Models. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1250-1260, Doha, Qatar, October.

Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain Adaptation via Pseudo In-Domain Data Selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK.

Alexandra Birch, Matthias Huck, Nadir Durrani, Nikolay Bogoychev, and Philipp Koehn. 2014. Edinburgh SLT and MT System Description for the IWSLT 2014 Evaluation. In Proc. of the International Workshop on Spoken Language Translation (IWSLT), pages 49-56, Lake Tahoe, CA, USA, December.

Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Harvard University.

Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427-436, Montréal, Canada, June.

Colin Cherry. 2013. Improved Reordering for Phrase-Based Translation using Sparse Features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 22-31, Atlanta, GA, USA, June.

Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause Restructuring for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 531-540, Ann Arbor, MI, USA, June.

Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and Robust Neural Network Joint Models for Statistical Machine Translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370-1380, Baltimore, MD, USA, June.

Nadir Durrani, Alexander Fraser, Helmut Schmid, Hieu Hoang, and Philipp Koehn. 2013. Can Markov Models Over Minimal Translation Units Help Phrase-Based SMT? In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 399-405, Sofia, Bulgaria, August.

Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014a. Edinburgh's Phrase-based Machine Translation Systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 97-104, Baltimore, MD, USA, June.

Nadir Durrani, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014b. Investigating the Usefulness of Generalized Word Representations in SMT. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 421-432, Dublin, Ireland, August.

Nadir Durrani, Hassan Sajjad, Hieu Hoang, and Philipp Koehn. 2014c. Integrating an unsupervised transliteration model into statistical machine translation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 148-153, Gothenburg, Sweden, April.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648, Atlanta, GA, USA, June.

George Foster, Roland Kuhn, and Howard Johnson. 2006. Phrasetable Smoothing for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP), pages 53-61, Sydney, Australia, July.

Markus Freitag, Stephan Peitz, Joern Wuebker, Hermann Ney, Matthias Huck, Rico Sennrich, Nadir Durrani, Maria Nadejde, Philip Williams, Philipp Koehn, Teresa Herrmann, Eunah Cho, and Alex Waibel. 2014a. EU-BRIDGE MT: Combined Machine Translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 105-113, Baltimore, MD, USA, June.

Markus Freitag, Joern Wuebker, Stephan Peitz, Hermann Ney, Matthias Huck, Alexandra Birch, Nadir Durrani, Philipp Koehn, Mohammed Mediani, Isabel Slawik, Jan Niehues, Eunah Cho, Alex Waibel, Nicola Bertoldi, Mauro Cettolo, and Marcello Federico. 2014b. Combined Spoken Language Translation. In International Workshop on Spoken Language Translation, pages 57-64, Lake Tahoe, CA, USA, December.

Michel Galley and Christopher D. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848-856, Honolulu, HI, USA.

Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing, SETQA-NLP '08, pages 49-57, Stroudsburg, PA, USA.

Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690-696, Sofia, Bulgaria, August.

Liang Huang and David Chiang. 2007. Forest Rescoring: Faster Decoding with Integrated Language Models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144-151, Prague, Czech Republic.

Philipp Koehn and Barry Haddow. 2009. Edinburgh's Submission to all Tracks of the WMT 2009 Shared Task with Reordering and Speed Improvements to Moses. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 160-164, Athens, Greece.

Philipp Koehn and Kevin Knight. 2003. Empirical Methods for Compound Splitting. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 187-194, Budapest, Hungary, April.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 177-180, Prague, Czech Republic, June.

Shankar Kumar and William Byrne. 2004. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In HLT-NAACL 2004: Main Proceedings, pages 169-176, Boston, MA, USA.

Robert C. Moore and William Lewis. 2010. Intelligent Selection of Language Model Training Data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220-224, Uppsala, Sweden.

Franz Josef Och. 1999. An Efficient Method for Determining Bilingual Word Classes. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 71-76.

Martin Porter. 1980. An algorithm for suffix stripping. Program: electronic library and information systems, 14(3):130-137.

Adwait Ratnaparkhi. 1996. A Maximum Entropy Part-Of-Speech Tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA, USA, May.

Helmut Schmid. 2000. LoPar: Design and Implementation. Bericht des Sonderforschungsbereiches "Sprachtheoretische Grundlagen für die Computerlinguistik" 149, Institute for Computational Linguistics, University of Stuttgart.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929-1958.

Darlene Stewart, Roland Kuhn, Eric Joanis, and George Foster. 2014. Coarse split and lump bilingual language models for richer source information in SMT. In Proceedings of the Eleventh Conference of the Association for Machine Translation in the Americas (AMTA), volume 1, pages 28-41.

Andreas Stolcke. 2002. SRILM - an Extensible Language Modeling Toolkit. In Proc. of the Int. Conf. on Spoken Language Processing (ICSLP), volume 3, Denver, CO, USA, September.

Jörg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In Proc. of the Int. Conf. on Language Resources and Evaluation (LREC), pages 2214-2218, Istanbul, Turkey, May.

Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with Large-Scale Neural Language Models Improves Translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387-1392, Seattle, WA, USA.

Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, and Philipp Koehn. 2015. Edinburgh's Syntax-Based Systems at WMT 2015. In Proceedings of the EMNLP 2015 Tenth Workshop on Statistical Machine Translation, Lisbon, Portugal, September.
17,558,767 | Question Prediction Language Model | This paper proposes the use of a language representation that specifies the relationship between terms of a sentence using question words. The proposed representation is tailored to help the search for documents containing an answer for a natural language question. This study presents the construction of this language model, the framework where it is used, and its evaluation. | [
2486369,
10197382
] | Question Prediction Language Model
Luiz Augusto Pizzato [email protected]
Centre for Language Technology
Macquarie University Sydney
Australia
Diego Mollá
Centre for Language Technology
Macquarie University Sydney
Australia
Question Prediction Language Model
This paper proposes the use of a language representation that specifies the relationship between terms of a sentence using question words. The proposed representation is tailored to help the search for documents containing an answer for a natural language question. This study presents the construction of this language model, the framework where it is used, and its evaluation.
Introduction
Although Information Retrieval (IR) can be helped by NLP techniques such as named entity (NE) recognition, phrase extraction and syntax parsing (Strzalkowski, 1999), they are not generally used due to their high complexity. One such task that people can perform somewhat easily whilst still being hard for computers is the answering of factoid questions based on textual content. The Question Answering (QA) Track of TREC (Voorhees, 2005) focuses on answering questions using the AQUAINT corpus (Graff, 2002), which contains 375 million words from three different sources of newswire data: Xinhua News Service (XIE) from People's Republic of China, the New York Times News Service (NYT), and the Associated Press Worldstream News Service (APW).
For the QA task, it is important not only to find an answer in a document, but also to find the documents that might contain the answer in the first place. Most QA systems take the approach of using off-the-shelf IR systems to return a list of documents that may contain an answer, and then processing that list of documents to look for the required answer. The processing time for every question in these systems is normally long because of the sheer amount of work that is required after the list of documents is returned.
Many QA systems focus on the input and output of IR systems. For example, Dumais et al. (2002) perform a passive-to-active voice transformation of the question, in an attempt to bring the IR query closer to the document it is expected to retrieve. Some IR work focuses on improving QA by passage retrieval re-ranking using word overlap measures. For instance, Tellex et al. (2003) compare a group of passage retrieval techniques and conclude that those that apply density-based metrics (i.e., ranking passages by the number of query words they contain and the proximity between them) are the most suitable to be used for QA. Some work has been done on IR models that specifically aid the QA task.
The work of Monz (2004) defines a weighting scheme that takes into consideration the distance between the query terms. Murdock and Croft (2004) propose a translation language model that defines the likelihood of the question being a translation of a certain document. Tiedemann (2005) uses a multi-layer index containing more linguistically oriented information and a genetic learning algorithm to determine the best parameters for querying those indexes for the QA task. Tiedemann argues that since question answering is an all-natural-language task, linguistically oriented IR will help find better documents for QA.
In this paper we propose a language representation that, when used in the IR stage of a question answering system, improves its results. As a consequence it helps to reduce processing time, both because of the better retrieval set and because it has the capacity of giving answer cues. This paper is divided into five sections. The next section presents the Question Prediction Language Model and some of its features. Section 3 introduces how the model is used and how the necessary resources for its usage were built. Section 4 describes some experiments and presents preliminary results. Section 5 presents the concluding remarks and future work.
Question Prediction Language Model
We describe a language model that focuses on extracting a simple semantic representation of an English text, one that can be easily stored in digital databases and processed by Information Retrieval (IR) tools. We focus on extracting a particular kind of semantics that helps us to find the location of a text that has some likelihood of answering a question. The model and its semantics are referred to as Question Prediction (QP).
The Question Prediction Language Model (QPLM) represents sentences by specifying the semantic relationship among their components using question words. In this way, we divide the problem of representing a large sentence into small questions that could be asked about its components. In other words, we represent the relationships among the key words of a sentence as short questions. For instance, the sentence "Jack eats ham" could be represented by the following two triples: Who(eat, Jack) and What(eat, ham). Using this model it is possible to answer short questions that focus on relations existing inside a sentence context, such as "Who eats ham?" and "What does Jack eat?".
The QPLM represents sentences as semantic relations expressed by triples q(w, a), where q is a question word, w is the word that the question word q concerns, and a is the word that answers the relation q about w. For instance, the relation Who(eat, Jack) tells us that the person who eats is Jack. The representation of our semantic relations as triples q(w, a) is important because it allows the representation of sentences as directed graphs of semantic relations; this representation has the capacity of generating questions about the sentence being analysed. Figure 1 shows such a representation of the sentence "John asked that a flag be placed in every school". Given the sentence of Figure 1, removing a possible answer a from any relation triple makes it possible to formulate a complete question about this sentence that would require a as an answer. For instance, by removing the node John we obtain the question "Who asked for a flag to be placed in every school?", where Who was extracted from the triple Who(ask, John). The same is valid for other relations, such as removing the word school to obtain the question "Where did John ask for a flag to be placed?". The name Question Prediction for this model is due to its capability of generating questions regarding the sentence that has been modeled.
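A toy illustration of the idea (not the authors' code): once a sentence is stored as QPLM triples, removing a node and collecting the triples whose answer slot matched it yields the question cues described above. The exact triple inventory for the Figure 1 sentence is assumed here.

```python
# Hold QPLM triples and recover question cues for a removed answer node.

from collections import namedtuple

Triple = namedtuple("Triple", ["q", "w", "a"])   # e.g. Triple("Who", "eat", "Jack")

sentence = [Triple("Who", "ask", "John"),
            Triple("What", "ask", "place"),
            Triple("What", "place", "flag"),
            Triple("Where", "place", "school")]

def question_cues(triples, answer):
    """Return (question word, governing word) pairs for every triple whose
    answer slot matches the removed node."""
    return [(t.q, t.w) for t in triples if t.a == answer]

print(question_cues(sentence, "school"))   # [('Where', 'place')]
```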
In this section, we have shown how our model represents the semantic information. In the next section we focus on the implementation of QPLM and its usage.
Building and using QPLM
As observed in Figure 2, a training set of QPLM triples was created using mapping rules from a corpus of semantic role labels. Using a syntactic parser and a NE recognizer with our training set, we were able to learn pattern rules that we further applied in the processing of the AQUAINT corpus.
PropBank (Palmer et al., 2005) is a corpus with annotated predicate-argument relations drawn from the same newswire source of information as the Penn Treebank (http://www.cis.upenn.edu/~treebank). We used PropBank as our starting point because it comprises the same textual style, and the predicate-argument relations (also referred to as semantic roles) can be mapped to QPLM triples. We studied the possibility of using semantic role labeling tools to perform the semantic annotation; however, our experiments with these tools showed that they have not yet achieved a reasonable speed. For instance, the SwiRL semantic role labeling system (http://swirl-parser.sourceforge.net/) would take a couple of years to fully process the AQUAINT corpus. In contrast, our system takes a couple of days if all the necessary information is already at hand; adding the time required for syntactic parsing and NE recognition, the total processing period is no longer than two weeks.
Training corpus
PropBank is processed through a set of mapping rules from the predicate-argument relations to QPLM. Using a PropBank mapping as our training data gives us the benefit of a large training set, but at the same time it will only create relations that are present in PropBank, therefore excluding some relations that we wish to include: for instance, relations that do not involve any action, such as the ownership relation in Whose(car, Maria) and the quantity relation in HowMany(country, twenty), among others.
PropBank defines relations between predicates and arguments without properly defining their meaning. On the other hand, it does keep a format where argument number 0 represents the agent acting upon something and argument number 1 represents patients or themes. PropBank was manually annotated according to the PropBank Marking Guidelines (Babko-Malaya, 2006). The guidelines represent an effort to build a consistent set of relations; however, a closer look at the corpus shows that consistency is a hard task to achieve, particularly with the vaguely defined arguments numbered 3 onwards. For those cases the inclusion of a function tag (information attached to the arguments representing relations such as negation, location, time and direction) proved to be useful.
Observing how arguments and predicates relate to each other, we created a set of rules mapping from predicate-argument relations to the QPLM. The basic differences between the two models are that a QPLM triple contains a label representing a more specific semantic relation, and that it associates only the heads of the linked phrases. For instance, the sentence "The retired professor received a lifetime achievement award" is represented as:
(1) Semantic Roles:
[The retired professor] ARG0 [received] pred [a lifetime achievement award] ARG1 .
(2) QPLM: Who(receive, professor), What(receive, award)

As can be observed in (1), semantic role labeling does not provide information about which is the main term (normally the head of a phrase) of each argument, while in (2), QPLM represents relations between the phrase heads. In order to find the phrase heads, we applied a syntactic parser (Connexor, http://www.connexor.com) to the PropBank sentences. However, the phrase heads are not always clearly defined (particularly when the syntactic parse tree is broken due to problems in the parser), which creates an extra difficulty for the mapping process. When a syntactic path cannot be found between a predicate and any of the words of an argument, we then try to find the head of the phrase by syntactically parsing the phrase by itself. If this also fails to provide us with a head, we simply use the first available non-stopword if possible.
The stage of finding the heads of the related phrases proved to be quite important, not only because it defines which words relate to each other, but also because if a broken parse tree is found, no rules can be learnt from the resulting QPLM triple. An analysis of the data showed us that 68% of the QPLM triples derived from PropBank were generated from an unbroken parse, while the rest used one of the other methods.
We understand that even though our model has similarities with Semantic Role Labeling, we are taking a step further in terms of semantic representation. QPLM has a finer semantic representation, meaning that a single predicate-argument relation in PropBank might have different representations in QPLM. Our mapping rules take into consideration not only the number of the argument but also the predicate involved and the POS or NE of the related words.
Even though we cover different aspects of PropBank in our mapping, we observed that many predicates hold different meanings for the same arguments, which creates a problem for our mapping strategy. This problem was not fixed because of the prohibitive amount of work needed to manually mark all the different meanings of the same predicate in different sentences. In these cases, where the same predicate and the same argument represent different semantics according to the QPLM, we chose the label most representative of the set of sentences using that predicate and argument. For instance, argument number 3 of the predicate spend in the majority of cases represents a quantity of money that was spent (a HowMuch label); however, we have one case where the argument is cash (a What label). This type of mapping compromises the accuracy of our conversion; however, a manual evaluation of a randomly selected set of 40 documents showed that nearly 90% of the QPLM triples were correctly converted.
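The mapping just described can be pictured as a cascade of lookups from more specific to more general keys. The sketch below is only illustrative; apart from the spend/argument-3 case mentioned above, the table entries are invented, and the real rule set is derived from PropBank rather than written by hand.

```python
# Choose a QPLM label for a PropBank argument, falling back from
# (predicate, argument) rules to (argument, named-entity) rules to defaults.

SPECIFIC = {("spend", "ARG3"): "HowMuch"}                       # predicate-specific rules
BY_NE    = {("ARG0", "PERSON"): "Who", ("ARG0", None): "What"}  # NE-sensitive rules
DEFAULTS = {"ARG1": "What", "ARGM-LOC": "Where", "ARGM-TMP": "When"}

def qplm_label(predicate, arg, head_ne=None):
    if (predicate, arg) in SPECIFIC:
        return SPECIFIC[(predicate, arg)]
    if (arg, head_ne) in BY_NE:
        return BY_NE[(arg, head_ne)]
    return DEFAULTS.get(arg, "What")
```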
After the mapping was finalized we obtained a training set of 60,636 rules covering 39 types of semantic relations (Table 1).
Rule learning
The PropBank corpus, after being automatically converted to QPLM triples, is used to learn the rules that find the QPLM information in plain text. The QPLM annotation relies on the output of a syntactic parser and of a named-entity recognizer, both for the annotation itself and for the rule learning process. We are currently using Connexor for syntactic parsing and LingPipe (http://www.alias-i.com/lingpipe/) to recognize NEs. Our semantic model uses pattern rules (PRules) created from the representation of the same sentence as a syntactic parse tree, MUC-style named entities, and a list of QPLM triples. Table 2 presents the different information that we use for training.
Having these representations at hand, a set of rules is learned using the following process (see Figure 3 for an example):
1. replace the part of speech information with the respective named entity category in the syntactic parse tree;
2. identify leaf-to-root links along the combined syntactic and named-entity (S+NE) path between w and a for every triple Q(w, a);
3. for the existing S+NE paths, replace w and a by a marker in both the triples and the paths, registering those as pattern rules (PRule);
repeat steps 2 to 3 for all triples and documents;
4. combine all PRules found, calculate their frequency of occurrence and group them by common triples (a schematic sketch of this counting step is given below).

It is important to note that if we have a sentence such as "Jack eats", we would have a frequency of two (2×) for the pattern a/person -subj-> w/va. After computing all the training files we have a resulting PRule file containing all possible S+NE paths that can generate the manually defined triples. If an S+NE path cannot be found, then a PRule cannot be generated and the current training triple is skipped.
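The counting step can be summarised as follows. This is a schematic sketch only; the marker notation and data layout are assumptions rather than the authors' implementation.

```python
# Count every (question word, S+NE path) pair seen in the training data and
# group the pattern rules by the triple label they generate.

from collections import defaultdict

def collect_prules(training_examples):
    """training_examples: iterable of (question_word, sne_path) pairs, where
    the S+NE path already has w and a replaced by markers,
    e.g. ("Who", "a/person -subj-> w/va")."""
    counts = defaultdict(lambda: defaultdict(int))
    for q_word, path in training_examples:
        counts[q_word][path] += 1
    return counts

rules = collect_prules([("Who", "a/person -subj-> w/va"),
                        ("Who", "a/person -subj-> w/va"),
                        ("Where", "a/location -loc-> w/va")])
# rules["Who"]["a/person -subj-> w/va"] == 2
```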
Applying QPLM
Using the training corpus described above, we found all the PRules needed in order to generate the semantic triples when given an S+NE representation. The rules are grouped by QPLM triples, with their S+NE paths attached with a frequency value. This frequency value represents how many times an S+NE path was used to generate a PRule in the training corpus.
To convert S+NE files into QPLM, we start by applying those PRules that have the highest frequency values, since these are believed to be the most significant ones. It is also important to observe that if an S+NE path can generate different QPLM triples, we only apply the one with the highest frequency. For instance, if the pattern a/person -subj-> w/va is associated with the triple Who(w, a) with a frequency of 8 and with the triple Where(w, a) with a frequency of 2, the S+NE path will only generate the Who triple. Because frequency is the decisive factor, in the previous example we have a 20% chance of assigning an incorrect semantic label.
We observed that more precise PRules could be created by taking into account that some verbs consistently generate a different QPLM triple for the same S+NE path. These new PRules (which we refer to as FW) are defined with a fixed w, making them less frequent but at the same time more precise. The precision of FW rules combined with the generality of the previous ones (which we refer to as GN) assures us of a correct analysis of a known verb as well as a fair guess for an unseen one. To ensure that known verbs are evaluated first by the more precise FW rules, we assign a much higher weight to those rules than to GN ones. An evaluation using the combination of both types of rules showed that assigning a weight 800 times higher to FW than to GN gives us the best results.
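A hedged sketch of this selection step: generic (GN) and verb-specific (FW) rule frequencies vote for a label, with FW votes multiplied by the 800:1 weight reported above. The data structures and function names are our own, not part of the described system.

```python
# Pick the question label for an S+NE path, preferring verb-specific rules.

FW_WEIGHT = 800

def choose_label(path, verb, fw_rules, gn_rules):
    """fw_rules[(verb, path)] and gn_rules[path] map to {label: frequency}."""
    scores = {}
    for label, freq in gn_rules.get(path, {}).items():
        scores[label] = scores.get(label, 0) + freq
    for label, freq in fw_rules.get((verb, path), {}).items():
        scores[label] = scores.get(label, 0) + FW_WEIGHT * freq
    return max(scores, key=scores.get) if scores else None
```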
We also observed that, due to the large number of learnt PRules, the process of creating the QPLM annotation was slow. In order to improve the speed of the process, we decided to trade off some precision and recall by removing the least important rules, i.e. those with a frequency equal to one. The lower number of PRules caused a decrease in recall, which is more noticeable for the FW rules. Although we experienced a decrease in precision, removing low-frequency PRules also removes abnormal PRules that were generated by parsing errors.
In the next section we describe the environment where QPLM was applied, followed by some experimental results.
Evaluation
It is possible to evaluate our model implementation in terms of how well it performs the task of assigning the correct semantic labels to a given text. However, because the model was designed to improve the IR stages of a QA system, we believe that the most significant evaluation at this point is in terms of how well it helps us solve this specific problem.
Since we have not yet implemented lexical semantic substitution or any other IR techniques such as stemming or lemmatization, a comparison with a full-fledged state-of-the-art IR system is not relevant. The lack of these techniques makes it somewhat harder to boost the confidence on single sentences or windows of text if the proper context is not recorded. However, we have confidence that the model can help to provide cues of possible answers by substituting the partial match between a question word and document representation. For instance, if the question "Where will a flag be placed?" is presented and the sentence in Figure 1 is considered, a partial match school can be considered as a possible answer.
The IR model comparison
We have compared our QPLM triples with bag-of-words (unigram) and syntactic dependency triples. In all three cases the indexing and retrieval methods were the same; only the indexed units were different. We have implemented an IR framework that can hold relational information such as n-grams, syntactic relations, semantic role labels and QPLM triples. Our framework supports fast indexing (hundreds of documents per second on a low-end desktop machine) and retrieves using a vector space model. The framework allows distributed indexing and retrieval under other models, but these are not yet implemented.
During the development and test phases of our framework, we have implemented different vector space models of retrieval. We have implemented the unigram model, along with syntactic relations, semantic role labeling and QPLM.
The unigram model associates words with documents and also records their positions within the document. The inclusion of position information allows the use of a ranking function based on proximity. We also implemented a syntactic model using the output of Connexor. This model associates words with documents, as well as words with their respective heads and/or modifiers. Since we are using a vector space model, computing TF and IDF over this type of relation has a different meaning. TF for fully matching triples is the number of triples found with the same syntactic relation, head and modifier, while TF can also count partial matches, where one or two of the elements are not required to match. IDF in this setup is computed in the same way as TF but over the scope of the whole document set.
In the semantic role labeling model, a complete match of predicate and argument is not expected; because of this we only consider partial matches (if a word appears as the correct argument or predicate). However we expect a larger weight for a document when all the words properly match. In this model IDF and TF are calculated by taking into account the words found in the right context.
In the QPLM model we have a very similar setup to the syntactic relation one. However, in QPLM not all words relate to each other, causing a large number of words to be missing from the final representation. To overcome this problem, we compute IDF and TF as for the syntactic relations, and we also add the TF/IDF weights from a unigram model. Because QPLM relations are much less frequent, when a match is found they will have higher weights than unigrams. Unigrams and QPLM are combined so that we do not discard relevant documents when they contain important keywords that are missing from the QPLM representation.
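A simplified sketch of the combined scoring just described is given below, with QPLM triple matches and unigram matches contributing TF-IDF weights to the same document score; the exact weighting scheme of the framework is not reproduced here, and all names are illustrative.

```python
# Combine triple-level and word-level TF-IDF contributions for one document.

import math

def tfidf(term_freq, doc_freq, n_docs):
    return term_freq * math.log(n_docs / (1 + doc_freq))

def score_document(query_triples, query_words, doc_triples, doc_words,
                   triple_df, word_df, n_docs):
    score = 0.0
    for t in query_triples:                      # t is a (q, w, a) tuple
        if t in doc_triples:
            score += tfidf(doc_triples[t], triple_df.get(t, 0), n_docs)
    for w in query_words:
        if w in doc_words:
            score += tfidf(doc_words[w], word_df.get(w, 0), n_docs)
    return score
```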
Evaluation over IR and QA
We have shown in the previous sections that the QPLM analysis relies on a syntactic parser and a named-entity recognizer in order to build a training set, which is used to learn pattern rules that are then applied to analyse text files into this model. We have not analysed the correlation between our results and the performance of the syntactic parser or the named entity recognizer; however, we observed that our model has problems learning rules when these components produce poor results. As explained in Section 3.1, if we cannot find a rule connecting two nodes in the same parse tree, we cannot learn a conversion rule. These cases account for 42,609 out of 135,537 rules, in practical terms reducing our training set to only 68% of its original size. Many of these cases are due to broken parse trees returned by Connexor. We have not yet experimented with different parsers; however, a possible outcome of such an experiment might be that having a broken structure, and therefore losing the training instance, is more desirable than having a full parse with the wrong dependencies.
We have also filtered out the pattern rules that have the same S+NE path but are not the most frequent one for their QPLM triple. By doing this we discard 12% of all the rules (20% of GN and 4% of FW). We also do not use rules whose frequency value is equal to one, which causes a further drop in the number of rules used, down to 44%. As expected, the removal of low-frequency rules has a stronger impact on FW rules than on GN rules (54% and 33% respectively).
This information is important because we can then predict what the upper limit of our performance is when measuring the recall using the set of rules we built as a training and testing set. According to the values presented, using all the rules with a frequency of 2 or more we have an upper limit of recall of 38%. A ten-fold evaluation over the training set has given us a value of 24% for recall.
When comparing the PropBank-mapped files with the files analysed by our technique, it is possible to observe that the number of QPLM triples in our semantic analysis is much larger than the number mapped from PropBank. The reason is that PropBank only marks certain predicates in a sentence, while QPLM also provides relations among other verbs and other sentence words. Because of this, we performed a manual evaluation of the precision of our technique over a set of 20 randomly selected documents and found that 50% of the relations can be seen as correct. We also observed that many of the relations that were wrongly generated were due to some erroneous S+NE path. Filtering out these wrong patterns from the rule file would improve our precision. The important fact is that even though our performance on the QPLM analysis itself does not appear to be very high, the generated rules prove to be very useful when applied to IR and the QA task.
We have retrieved one hundred documents for each of the 1449 questions of the TREC 2004 QA track (Voorhees, 2005; Voorhees and Dang, 2006) and verified the existence of the answer string in each of these documents. We have performed this retrieval process for the unigram, syntactic relation and QPLM models. Due to data storage constraints, at this moment we only have results for the XIE and APW newswire corpora.
The comparison between the three models shows that we can obtain better documents to answer natural language questions if we take into consideration the same linguistic relations in the question and in the terms present in the documents. We measure our results using the coverage and redundancy measures (Roberts and Gaizauskas, 2004). Coverage tells us how much of the question set we can answer using the top-N documents, while redundancy tells us how many documents per question contain an answer. These results are presented in Table 3.

Table 3: Results for APW and XIE.
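For concreteness, the two measures can be computed as below, assuming we already know how many of the top-N documents per question contain the answer string; this is a restatement of the definitions above rather than the authors' evaluation code.

```python
# Coverage: fraction of questions with at least one answer-bearing document in
# the top-N. Redundancy: average number of answer-bearing documents per question.

def coverage_and_redundancy(answer_bearing_counts, n_questions):
    """answer_bearing_counts[i] = number of top-N documents retrieved for
    question i that contain the answer string."""
    covered = sum(1 for c in answer_bearing_counts if c > 0)
    coverage = covered / n_questions
    redundancy = sum(answer_bearing_counts) / n_questions
    return coverage, redundancy
```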
As we observe in Table 3, for a large collection of documents and questions our system performs consistently better than unigrams and syntactic relations. We performed a paired t-test for statistical significance using the results of the individual questions for QPLM and unigrams, showing with 99% confidence that the improvement is not random for the results on the APW corpus. However, a paired t-test did not reject the null hypothesis in the test performed on the XIE corpus. This may be an indication that the XIE Newswire Corpus is written in a linguistic style that our system has not been able to take advantage of. Perhaps it strongly differs from the style present in our training set (PropBank), preventing our rules from being successfully applied. Further work is needed to understand the main differences among these corpora. By understanding this we might find ways to adjust our system towards different textual styles.
Even though coverage and redundancy are good measures for evaluating a retrieval set for QA, we have observed that these measurements do not always relate to each other. For this reason we have applied the retrieval set to a QA system in order to observe whether it helps to improve its results. Using the retrieval sets generated by the different models in the AnswerFinder QA system (van Zaanen et al., 2007) showed us that QPLM performed 25% better than the unigram model and 9.3% better than the syntactic model. Even though AnswerFinder is not among the best performing QA systems, it does give us some insight into what a retrieval set should contain.
Concluding Remarks
In this paper we have presented a semantic model that represents the relations among sentence components by labeling them with question words. This model was built to assist the task of question answering, particularly at the IR stage. We believe that by providing documents that are better suited to finding an answer for a natural language question, QA systems would not only return better answers but also become faster.
The work presented here shows that the QPLM can be used effectively in IR frameworks, and a comparison with the unigram and syntactic model demonstrates that we are able to improve the overall IR results. We have already implemented the predicate-argument model in our IR framework and we plan to compare it with QPLM. Because the current semantic role labeling systems are impractically slow when applied to large corpora, the comparison will be done using a reduced number of documents.
In this work, we focused on the impact of our technique on the retrieval stage of a QA system in isolation. In future work we will include different retrieval methods in our IR framework to enable a valid comparison with state-of-the-art IR systems. We also plan to manually study the PRules in order to identify those causing drops in the precision and recall of our model, and to construct an automatic method to support this process.
As explained, we had data storage constraints which made the evaluation more difficult. As future work we plan to distribute the retrieval process and to perform evaluations with the whole AQUAINT corpus and with the NYT documents. We also intend to evaluate the impact that different retrieval sets have on a broader range of QA systems.
Figure 1: Graph representation of sentences as directed graphs of semantic relations. This representation has the capacity of generating questions about the sentence being analysed; the figure shows such a representation of the sentence "John asked that a flag be placed in every school".
Figure 2: Creation and usage of pattern rules.
Figure 3: Process example.
Table 1: QPLM semantic relations.

Original: John kicked the ball bought by Susan.
Named Entities: <ENAMEX Type=NAME> John </ENAMEX> kicked the ball bought by <ENAMEX Type=NAME> Susan </ENAMEX>.
Parse Tree: John np subj → kick va ← obj ball nn ← det the det; ball nn ← mod buy vp ← agt by prep ← pcomp Susan np
QPLM: Who(kick, John), What(kick, ball), What(buy, ball), Who(buy, Susan)
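For illustration, the QPLM triples from the example above could be represented and queried as follows; this is an editorial sketch, not the paper's implementation.

```python
# QPLM triples as (question_word, predicate, argument) tuples, and a naive check
# of whether a question triple is supported by the document's triples.
doc_triples = {("Who", "kick", "John"), ("What", "kick", "ball"),
               ("What", "buy", "ball"), ("Who", "buy", "Susan")}

def answers(question_triple, triples):
    """question_triple uses None for the unknown slot, e.g. Who(buy, ?)."""
    q, pred, arg = question_triple
    return [t[2] for t in triples
            if t[0] == q and t[1] == pred and (arg is None or t[2] == arg)]

print(answers(("Who", "buy", None), doc_triples))  # ['Susan']
```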
Table 2: Training files.
Footnotes:
- http://www.cis.upenn.edu/treebank
- A function tag is information attached to the arguments representing relations such as negation, location, time and direction.
- http://www.connexor.com
- http://www.alias-i.com/lingpipe/
O. Babko-Malaya. 2006. PropBank annotation guidelines. October 2006.
S. Dumais, M. Banko, E. Brill, J. Lin, and A. Ng. 2002. Web question answering: is more always better? In SIGIR '02: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 291-298, Tampere, Finland. ACM Press.
D. Graff. 2002. The AQUAINT Corpus of English News Text. Linguistic Data Consortium, Philadelphia.
C. Monz. 2004. Minimal span weighting retrieval for question answering. In Proceedings of the SIGIR-2004 Workshop on Information Retrieval For Question Answering (IR4QA), Sheffield, UK, July.
V. Murdock and W. B. Croft. 2004. Simple translation models for sentence retrieval in factoid question answering. In Proceedings of the SIGIR-2004 Workshop on Information Retrieval For Question Answering (IR4QA), Sheffield, UK, July.
M. Palmer, D. Gildea, and P. Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
L. A. Pizzato, D. Molla, and C. Paris. 2006. Pseudo relevance feedback using named entities for question answering. In Proceedings of the Australasian Language Technology Workshop 2006, Sydney.
I. Roberts and R. J. Gaizauskas. 2004. Evaluating passage retrieval approaches for question answering. In ECIR, volume 2997 of Lecture Notes in Computer Science, pages 72-84. Springer.
T. Strzalkowski. 1999. Natural Language Information Retrieval. Kluwer Academic Publishers, Norwell, MA, USA.
S. Tellex, B. Katz, J. Lin, A. Fernandes, and G. Marton. 2003. Quantitative evaluation of passage retrieval algorithms for question answering. In SIGIR '03: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 41-47, Toronto. ACM Press.
J. Tiedemann. 2005. Optimizing information retrieval in question answering using syntactic annotation. In Proceedings of RANLP 2005, pages 540-546, Borovets, Bulgaria.
M. van Zaanen, D. Molla, and L. A. Pizzato. 2007. AnswerFinder at TREC 2006. In The Fifteenth Text REtrieval Conference (TREC 2006).
E. M. Voorhees and H. T. Dang. 2006. Overview of the TREC 2005 question answering track. In Text REtrieval Conference.
E. M. Voorhees. 2005. Overview of the TREC 2004 question answering track. In Text REtrieval Conference. |
13,033,267 | Nordic and Baltic wordnets aligned and compared through "WordTies" | During the last few years, extensive wordnets have been built locally for the Nordic and Baltic languages applying very different compilation strategies. The aim of the present investigation is to consolidate and examine these wordnets through an alignment via Princeton Core WordNet and thereby compare them along the measures of taxonomical structure, synonym structure, and assigned relations to approximate to a best practice. A common web interface and visualizer "WordTies" is developed to facilitate this purpose. Four bilingual wordnets are automatically processed and evaluated exposing interesting differences between the wordnets. Even if the alignments are judged to be of a good quality, the precision of the translations vary due to considerable differences in hyponymy depth and interpretation of the synset. All seven monolingual and four bilingual wordnets as well as WordTies have been made available via META-SHARE through the META-NORD project. | [
9748257,
5687975,
456,
17893832,
12778684
] | Nordic and Baltic wordnets aligned and compared through "WordTies"
NODALIDA 2013
Bolette S Pedersen [email protected]
University of Copenhagen
Njalsgade 140, 2300 Copenhagen S
Lars Borin [email protected]
University of Gothenburg
Box 200, 405 30 Gothenburg, Sweden
Markus Forsberg [email protected]
University of Gothenburg
Box 200, 405 30 Gothenburg, Sweden
Neeme Kahusk [email protected]
Krister Lindén [email protected]
Jyrki Niemi [email protected]
Niklas Nisbeth [email protected]
University of Copenhagen
Njalsgade 140, 2300 Copenhagen S
Lars Nygaard
Heili Orav [email protected]
Eirikur Rögnvaldsson [email protected]
Mitchel Seaton [email protected]
University of Copenhagen
Njalsgade 140, 2300 Copenhagen S
Kadri Vider [email protected]
Kaarlo Voionmaa
University of Gothenburg
Box 200, 405 30 Gothenburg, Sweden
Nordic and Baltic wordnets aligned and compared through "WordTies"
Kaldera Language Technology
Tartu, Estonia; Oslo. NODALIDA 2013.
Keywords: wordnets, multilingual links, wordnet web interface, Nordic and Baltic languages, META-NORD
During the last few years, extensive wordnets have been built locally for the Nordic and Baltic languages applying very different compilation strategies. The aim of the present investigation is to consolidate and examine these wordnets through an alignment via Princeton Core WordNet and thereby compare them along the measures of taxonomical structure, synonym structure, and assigned relations to approximate to a best practice. A common web interface and visualizer "WordTies" is developed to facilitate this purpose. Four bilingual wordnets are automatically processed and evaluated exposing interesting differences between the wordnets. Even if the alignments are judged to be of a good quality, the precision of the translations vary due to considerable differences in hyponymy depth and interpretation of the synset. All seven monolingual and four bilingual wordnets as well as WordTies have been made available via META-SHARE through the META-NORD project.
1 Wordnets as a multilingual action in the Nordic and Baltic countries

Wordnets (cf. Fellbaum 1998, Vossen 1998) have emerged as one of the basic standard lexical resources in the language technology (LT) field. They encode fundamental semantic relations among words, relations that in many cases also have counterparts in relations among concepts in formal ontologies. According to the BLARK (Basic Language Resource Kit) scheme, wordnets, along with treebanks, are central resources when building language-enabled applications. The semantic proximity metrics among words and concepts defined by a wordnet are considered useful in applications such as information access systems and authoring tools because, in addition to identical words, the occurrence of words with similar (more general or more specific) meanings contributes to measuring the similarity of content or context, or to recognizing meaning. Based on this impact, there is a crucial need for continuously comparing and improving such lexical-semantic resources in order to approximate to a best practice. This is particularly relevant if we foresee an integration of them in cross-lingual LT.
For the Nordic and Baltic languages, an extensive development of wordnets has taken place during the last five years, excluding here the Estonian wordnet, which has existed for more than a decade as part of the EuroWordNet project. During the META-NORD project (2011-2013), these wordnets have been further consolidated via extensions, validations and documentation. Thus, the relevance of cross-lingual alignment and comparison of wordnets in this region has emerged only recently. Wordnets or wordnet-like resources of a considerable size now exist for Finnish, Danish, Estonian, Swedish, Icelandic and Norwegian. They have not, like several previous wordnet projects such as EuroWordNet, BalkaNet, Asian WordNet, and IndoWordNet (cf. Section 2), been built as part of collaborative projects, but have rather emerged locally through national projects and initiatives. The META-NORD project thus poses a unique opportunity for actually coordinating a Nordic-Baltic action on wordnets and investigating the results of the very different compilation strategies that have been applied.
The aim of this investigation can be expressed as threefold:
1. To facilitate browsing, alignment and comparison of the Nordic and Baltic wordnets through the development of an intuitive and easy-to-extend web interface.
2. To estimate the perspective of alignment via Princeton Core WordNet by generating four bilingual wordnets and evaluating them.
3. Via this alignment, to perform a comparison of the involved wordnets along the measures of taxonomical structure, synset structure, and relational structure.
The web interface, WordTies, is documented in Section 4. Four pilot bilingual wordnets have been produced semi-automatically via established links to Princeton Core Wordnet: Danish-Swedish, Danish-Finnish, Estonian-Finnish, Finnish-Swedish. An evaluation of these linked resources is included in Section 5 and a further comparison of a selected set of the wordnets is given in Section 6. All seven monolingual and four bilingual wordnets as well as WordTies have been made available via META-SHARE: www.meta-share.org under a variety of open source licenses.
2 Related work
Broadly speaking, wordnets can be compiled using two different approaches: the merge vs. the expand method (cf. Rigau & Agirre 2002). The expand method means that translations are made from Princeton WordNet and further customised to the target language; the merge approach means that the wordnet is built monolingually and then (eventually) merged with Princeton WordNet.
The EuroWordNet project, which was concerned with the compilation of wordnets for a series of European languages (Vossen 1998), launched the idea of compiling and expanding wordnets via a so-called Inter-Lingual Index (ILI) constituted by Princeton WordNet 1.5, cf. Peters et al. 1998. A successor of EuroWordNet was the BalkaNet project, where several wordnets were built in the Balkan area and aligned simultaneously. These projects all provide valuable reference points for a best practice within the expand approach; for example, BalkaNet uses a validation system based on word sense disambiguation for pinpointing wrong interlingual alignments and incomplete or missing synsets in one or another of the wordnets (Tufis, Ion & Ide 2004). Other work includes mapping algorithms for aligning, tuning and validating wordnets, as presented in Daudé, Padró & Rigau 1999 and Daudé, Padró & Rigau 2003, among others. More recent collaborative wordnet projects include MultiWordNet (http://multiwordnet.itc.it), which relates the Italian and Princeton wordnets, Asian WordNet, which also applies the expand method for several Asian languages through a common management interface (Robkop et al. 2010), and IndoWordNet, which includes a series of Indian languages (Bhattacharyya 2010). Last but not least, a recent initiative, Open Multilingual WordNet (http://casta-net.jp/~kuribayashi/multi/), aligns wordnets available through the Global WordNet Association's WordNet Grid (http://www.globalwordnet.org/gwa/gwa_grid.html).
In contrast, several recent European wordnets, which have typically been compiled on a more local basis, apply the merge technique (cf. Derwojedowa 2008, Pedersen et al. 2009), using monolingual language resources such as existing dictionaries and corpora as the initial source.
There are obvious risks related to both approaches. An expand approach based on Princeton WordNet runs the high risk of being biased towards the conceptual structure of the English language. However, with thorough customizations to the target language these risks can be reduced. A merge approach may reflect the target language better since it is based on more linguistic grounds (corpora and existing lexica) for that particular language. On the other hand, such wordnets typically differ so much from Princeton WordNet in structure that a merge becomes indeed very hard and extremely complex. These differences originate partly from different language cultures, partly from different levels of specialization depending on the source material used. For instance, a typical feature of wordnets based on monolingual lexica is that they adopt a perspective which is more geared towards the layman and therefore typically not so deep in taxonomical structure (cf. Pedersen et al. 2010).
3 Status of wordnets in the Nordic and Baltic countries

3.1 About META-NORD

During the last decade, linguistic resources have grown rapidly for all EU languages, including lesser-resourced languages such as the Nordic and Baltic ones. However, they have typically been located in different places, have been developed according to different standards, and in many cases were not well documented. The META-NORD project has aimed to establish an open linguistic infrastructure in the Baltic and Nordic countries to serve the needs of the industry and research communities. The project, which was completed in January 2013, has focused on 8 European languages - Danish, Estonian, Finnish, Icelandic, Latvian, Lithuanian, Norwegian and Swedish - each with less than 10 million speakers. The project has provided descriptions of the national landscapes in these countries via the META-NET White Paper Series "Europe's Languages in the Digital Age" and has assembled, linked across languages, and made widely available close to 500 language resources and tools of different types via the common network META-SHARE (http://www.meta-share.org/). META-SHARE is a network of repositories of language data, tools and related web services documented with metadata, aggregated in central inventories allowing for uniform search and access to resources. The horizontal action on wordnets constitutes one of several cross-language initiatives in the project. In the following, a brief status of each of the involved wordnets is given.
Estonian wordnet
The Estonian wordnet was built as part of the EuroWordNet project and thus used the expand method as a starting point. Base concepts from English were translated into Estonian as a first basis for a monolingual extension. The extensions have been compiled manually from Estonian monolingual dictionaries and other monolingual resources. In this sense, EstWN applies a hybrid method including both expand and monolingual techniques. EstWN includes nouns, verbs, adjectives and adverbs, as well as a set of multiword units, cf. Kahusk et al. 2012. The database currently (January 2013) contains approx. 59,000 concepts interlinked by 175,000 relations, and work is still in progress. The database is available under a CC-BY-NC license. It can be accessed partly via WordTies, partly at http://www.cl.ut.ee/teksaurus and www.keeleveeb.ee.
Finnish wordnet
FinnWordNet is compiled using the expand method and supplemented with monolingual localisations (see http://www.ling.helsinki.fi/cgi-bin/fiwn/search). FinnWordNet contains nouns, verbs, adjectives and adverbs grouped by meaning into synonym sets representing concepts. Version 1.0 of FinnWordNet was created by translating the word senses in the Princeton WordNet 3.0 (Lindén & Carlson 2010). To ensure quality, the word senses were translated by professional translators. This approach allowed a very rapid and cost-efficient creation of an extensive Finnish wordnet directly aligned with the Princeton WordNet providing a translation relation between English and Finnish.
It is often claimed that translating a wordnet from English is somehow problematic, so to dispel such doubts several rounds of evaluations were performed, only to discover very few translation or concept problems (cf. Lindén et al. 2012). During the evaluation some missing common Finnish words and concepts were added to FinnWordNet from a large corpus of Finnish as well as from Wiktionary and Wikipedia. The resulting FinnWordNet 2.0 has 120,449 concepts containing 208,645 word senses and linked to each other with 265,690 relations. It thus surpasses Princeton WordNet in the number of concepts and word senses. FinnWordNet is licensed under the Creative Commons Attribution (CC-BY) 3.0 licence. As a derivative of the Princeton WordNet, FinnWordNet is also subject to the Princeton WordNet licence.
Danish Wordnet
DanNet has been constructed using the merge approach where the wordnet is built on monolingual grounds and thereafter merged with Princeton WordNet. It currently contains 66,308 concepts which are interlinked by 326,564 relations (see also Pedersen et al. 2009). The wordnet has been compiled as a collaboration between the University of Copenhagen and the Danish Society for Language and Literature and is based on Den Danske Ordbog (Hjorth & Kristensen 2003). Furthermore, the Danish version of the SIMPLE lexicons (cf. Lenci et al. 2001) has influenced the construction of DanNet in the sense that it includes also qualia information such as the telic and the agentive role (purpose and origin). Qualia roles are encoded in DanNet in terms of relations such as used_for and made_by as well as by means of features such as SEX and CONNOTATION. DanNet is licensed under the Princeton WordNet licence.
Swedish wordnet (Swesaurus)
Swesaurus (Borin & Forsberg 2011) is a Swedish wordnet developed at Språkbanken, University of Gothenburg. It is being built by reusing lexical-semantic relations collected from a number of pre-existing, freely available lexical resources: SALDO (Borin & Forsberg 2009), SDB (Järborg 2001), Synlex (Kann & Rosell 2006), and Swedish Wiktionary. A novel feature of Swesaurus is its fuzzy synsets derived from the graded synonymy relations of Synlex. Swesaurus and several other lexical resources are available for download and inspection at http://spraakbanken.gu.se/karp. Swesaurus is an integral part of a large and diverse lexical macroresource compiled in the Swedish FrameNet++ project. It includes 13,724 senses and is licensed under a CC-BY license. Due to its slightly different structure, Swesaurus is currently only partly visible through WordTies.
Norwegian Wordnet
A Norwegian Wordnet (NWN) has been developed as a part of The Norwegian Language Bank (Språkbanken). It consists of around 50,000 synsets, for both Norwegian Nynorsk and Norwegian Bokmål and covers more than 90 per cent of the senses of open word classes in running newspaper text. Both wordnets are available via META-SHARE (http://www.nb.no/clarin/repository/search/?q=ordnett) under the Princeton WordNet License. The compilation is based on the Danish wordnet (DanNet), and thus NWN contains the same lexical relations and much of the same semantic analysis as DanNet. The data format and licence are also identical.
Semantically, Danish and Norwegian are very closely related, and word senses are mostly equivalent (though the frequency with which the senses are used often varies). Some synsets are dropped: they are only relevant for Danish society and do not have a natural equivalent in Norwegian. These synsets are almost exclusively infrequent and "peripheral" in DanNet (i.e. they are leaf nodes in the synset graph). A partial semantic annotation of a Norwegian corpus has been developed to ensure that the most frequent senses in Norwegian text are covered. Using this method, it has been possible to create a very extensive wordnet for a fraction of the cost of development from scratch, and without the quality problems associated with translation from, for example, English.
Icelandic wordnet (MerkOr)
The semantic database MerkOr, which constitutes the Icelandic wordnet, has been developed using a monolingual approach with automatic methods for the extraction of semantic information from texts. Both pattern-based and statistical methods are used, as well as a hybrid methodology.
The structure of the database is not based on hierarchies, like the Princeton WordNet, but rather on clusters of strongly related words and on semantic relations often describing common-sense knowledge and associations. The database contains about 110,000 words, primarily nouns, but also a number of verbs and adjectives. About 2.93 million relations between these words are listed in the database, which also contains 305 semantic clusters, i.e. lists of words that belong to the same semantic field. The database is distributed under the GNU Lesser General Public License and can be queried online at http://merkor.skerpa.com. This wordnet is not yet made available via WordTies.
WordTies: A common web interface for viewing aligned wordnets
WordTies (wordties.cst.dk) is a web interface developed to visualize monolingual wordnets as well as their alignments with the other wordnets, cf. Figure 1. In this browser the user can choose either of the (currently four) relevant wordnets as a source language and see how a concept is linked to its sister wordnets.
Figure 1: Introductory screen of WordTies
WordTies builds on a monolingual browser, AndreOrd, which was built to browse DanNet, cf. Johannsen & Pedersen (2011). In this browser, the semantic relations are made available in a more graphical fashion compared to what is found in most other wordnet browsers, which tend to focus primarily on visualizing the hyponymy structure of the wordnet. The particular choice of graph very compactly encodes large numbers of relations - each represented by its own colour - and thus gives a good overview of the general structure of the wordnet. In order to make room for all relations in the graph - also the inherited ones - only one representative sense is visualized per synset. However, all senses are presented below the graph. By clicking on a related synset in the graph the user can dynamically move around in the wordnet. For illustration, see Figure 2, where Danish has been chosen as the source and the Danish concept håb ('hope') has been looked up and aligned with the Estonian, Swedish and Finnish wordnets.
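A rough sketch of the underlying idea is shown below: a synset's outgoing relations are grouped by type so that each relation type can be rendered in its own colour. The relation names, the neighbouring synsets of håb and the colours are invented examples, not AndreOrd or WordTies internals.

```python
# Group a synset's outgoing relations by type and attach a display colour.
relations = [
    ("håb", "hyperonym", "mental tilstand"),
    ("håb", "has_hyponym", "fremtidshåb"),
    ("håb", "involved", "ønske"),
]
colours = {"hyperonym": "red", "has_hyponym": "blue", "involved": "green"}

edges_by_type = {}
for source, relation, target in relations:
    edges_by_type.setdefault(relation, []).append((source, target, colours[relation]))

for relation, edges in edges_by_type.items():
    print(relation, edges)
```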
A click on either of these links will bring the users into these particular wordnets and enable them to browse the wordnet and view the established relations as well as its taxonomical structure, as seen in Figure 3, where we see that the Finnish wordnet has a much deeper taxonomical structure (expert perspective) than the Danish one (layman perspective) for the concept tree. In this way, the web interface eases comparison and evaluation of the wordnets. Major changes to the AndreOrd source code were model changes, including the addition of instance and source model classes and modifications to the alignment model and its relations. These three model classes handle the relational structure and data used to enable the multilingual relations (connections), facilitating a link between application instances via a wordnet's imported Princeton Core WordNet relations. WordTies can be dynamically extended to include more wordnets. There are two compulsory steps for import: firstly, to calculate and update the hyponym count for each synset record, and secondly, to import alignments to Princeton Core WordNet. Optionally, the import alignments script can be used to import multilingual alignments, i.e. alignments to other wordnets' synsets via Princeton Core WordNet. The application can have a customised locale and language files. Currently, Danish and English are supported languages. The index page is customisable based on the locale, and new language support can easily be added with a valid translation of the labels. Multi-locale support is currently not included; a single locale is set for the application instance to operate in. Other available customisations include filter values, path names (routes), and custom colour mappings for the relations graph.
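The following hypothetical sketch illustrates the two compulsory import steps described above; all names and ids are invented, and it conveys the logic only, not the actual WordTies implementation.

```python
# (1) Compute a hyponym count per synset; (2) keep only alignments that point
# to a known Princeton Core WordNet id.
def update_hyponym_counts(synset_ids, hyponymy_pairs):
    """hyponymy_pairs: iterable of (hyperonym_id, hyponym_id) tuples."""
    counts = {sid: 0 for sid in synset_ids}
    for hyperonym_id, _hyponym_id in hyponymy_pairs:
        counts[hyperonym_id] += 1
    return counts

def import_core_alignments(synset_to_core, known_core_ids):
    return {sid: cid for sid, cid in synset_to_core.items() if cid in known_core_ids}

print(update_hyponym_counts(["s1", "s2"], [("s1", "s2")]))             # {'s1': 1, 's2': 0}
print(import_core_alignments({"s1": "core-0001-n"}, {"core-0001-n"}))  # {'s1': 'core-0001-n'}
```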
Alignment and evaluation of bilingual wordnets
Four bilingual wordnets have been automatically processed on the basis of each wordnet's links to Princeton WordNet. In other words, English has functioned as an interlingua in a triangulation method, and a central aim has been to examine to which extent this strategy influenced the quality of the bilingual translations.
However, since the Nordic and Baltic wordnets were built locally using both the expand and merge techniques, as we have seen, they differ in the extent to which they were bilingually linked before the META-NORD project was initiated. Therefore, a first task was to ensure a common linked coverage of all the involved wordnets. To this end, all wordnets were manually linked to Princeton Core WordNet, containing 5,000 core synsets. 1 Princeton Core WordNet is a recent, semi-automatically compiled list of 5,000 "core" word senses in WordNet, corresponding approximately to the 5,000 most frequently used word senses, followed by some manual filtering and adjustment. This set of basic concepts is considered to be deduced on better statistical grounds than the previously applied "base concepts" used in the EuroWordNet and SIMPLE projects. Further, Princeton Core WordNet is characterized by being relatively coarse-grained compared to the full Princeton WordNet and is thus much better suited for alignment tasks.
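A minimal sketch of the triangulation idea follows: two monolingual wordnets, each linked to Princeton Core WordNet, are joined on the shared core identifier to derive bilingual synset pairs. The synset and core ids are placeholders, not real data.

```python
# Derive bilingual synset pairs via a shared Princeton Core WordNet id.
def triangulate(src_links, tgt_links):
    """src_links / tgt_links: dicts mapping monolingual synset ids to
    (core_wordnet_id, relation) pairs, e.g. relation in
    {'eq_synonym', 'eq_has_hyponym', 'eq_has_hyperonym'}."""
    by_core = {}
    for tgt_id, (core_id, rel) in tgt_links.items():
        by_core.setdefault(core_id, []).append((tgt_id, rel))
    pairs = []
    for src_id, (core_id, src_rel) in src_links.items():
        for tgt_id, tgt_rel in by_core.get(core_id, []):
            pairs.append((src_id, tgt_id, src_rel, tgt_rel))
    return pairs

danish  = {"da-haab": ("core-0001-n", "eq_synonym")}
finnish = {"fi-toivo": ("core-0001-n", "eq_synonym")}
print(triangulate(danish, finnish))  # [('da-haab', 'fi-toivo', 'eq_synonym', 'eq_synonym')]
```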
For the evaluation, a top-1000 set of this 5,000-synset intersection, with a POS ratio of 6:2:2 for nouns, verbs, and adjectives, respectively, was generated. The extract was also based on frequency data provided for Swedish and Finnish. Even if one-to-one synset alignments are by far the most frequent ones, one-to-many and many-to-one synset alignments occur as well. Valid relations to Princeton Core WordNet include eq_synonym, which is by far the most frequent one, eq_has_hyponym, as well as eq_has_hyperonym, thus allowing in some cases for alignments to more or less specific synsets. All in all, four linked wordnets were processed and evaluated. As can be read, the semi-automatic alignments are judged to be of a relatively good quality, even if the translations are in several cases not 100% precise. An average of 2.2% errors and 7.0% slight mismatches is reported. However, there are some clear divergences to be commented on in Table 1: the evaluation of Estonian-Finnish reports 53 errors and 226 slight mismatches, whereas no errors and only 7 mismatches are reported for Finnish-Danish. Since the evaluations were made by different partners with the necessary bilingual language skills, part of this divergence is due to somewhat different interpretations of the concept of 'slight mismatch'. Furthermore, the different nature of the wordnets seems to have influenced the evaluations to a certain extent. Thus, some evaluators have focused on a good definition-to-definition match between two synsets, whereas others have applied a somewhat stricter criterion by evaluating the exact sense-to-sense correspondence. For instance, the Estonian-Finnish evaluator has registered differences in synonyms or differences in specificity as slight mismatches, influenced by the fact that not all senses in two aligned synsets represent fully precise translations of each other. See Table 2 for comments on particular alignments between Estonian and Finnish.
An additional explanation for the divergences is that since Finnish and Estonian are very close languages, the Estonian evaluator expected to see exact sense-to-sense matches in the translations between Finnish and Estonian, irrespective of the fact that the wordnets were built from more or less different starting points. The phenomenon of very close words (and mismatching translations) could presumably also have been observed between other close language pairs such as Swedish and Danish, but since these two wordnets are characterized by having fewer senses per synset, synonym mismatch is not observed to the same extent. The evaluator of the two "extremes" with respect to senses per synset, the Danish and Finnish wordnets, reports few mismatches, influenced by the fact that the focus here has been directed more towards the definition-to-definition alignment.
Not surprisingly, wordnets that have been compiled via translations from Princeton WordNet have many senses per synset (just as Princeton WordNet), whereas wordnets that are monolingually compiled and rather based on synonymy registrations in conventional dictionaries have much fewer (see also Section 5). As can be seen, it has proven difficult to 'neutralize' such differences prior to the evaluations.
With regard to alignment errors, there does not appear to be a systematic bias: some are due to false friends, while others seem to be just random errors introduced during the linking to Princeton Core WordNet, as in the example shown in Table 3, where the English synset has been linked to a more specific sense of 'waste' in DanNet than what was actually indicated by the English gloss.

FinnWordNet has the highest average hyponymy depth, relating well to our intuition that this wordnet is more expert-oriented, at least in the fields of botany and zoology (see also Figure 3). In contrast, EstWN and DanNet, which rely more on monolingual dictionaries and the genus proximum given in their definitions, have less depth. This fact can also be illustrated by extracting the path to the top from a botanical concept like tree in Danish and Finnish, respectively:

trae (tree) has 4 super-concepts: plante → organisme → fysisk genstand → entitet (plant → organism → physical entity → entity)
puu (tree) has 9 super-concepts: puumainen kasvi → putkilokasvi → kasvi → eliö → elävä olio → kokonaisuus → esine → fyysinen entiteetti → entiteetti (woody plant → vascular plant → plant → organism → animate thing → whole → object → physical entity → entity)

Figure 5: Differences in the number of relations in Finnish and Danish, respectively, attached to the concept candle (kynttilä, stearinlys). For example, Danish includes relations such as used_for=light and is_made_of=stearin, whereas the Finnish wordnet includes only hyponyms, parts and a hyperonym.
The number and selection of relations in the wordnets also differ; some have included only Princeton relations, others include EuroWordNet relations (i.e. Estonian, Danish, Norwegian), and others again have adapted qualia-inspired (Pustejovsky 1995) relations also from the SIMPLE project (Lenci et al. 2001), such as the used_for and made_by relations in the Danish and Norwegian wordnets. This extension of the relation set to include also purpose and origin is again influenced by sense definitions in conventional dictionaries, which typically express for which purpose a given artifact is made and possibly how it is made (i.e. baked, grown, cooked, produced); see Figure 5 for such differences in the number of relations between the Danish and Finnish wordnets.
Conclusion and further steps
Apart from consolidating, extending and providing richer documentation for the Nordic and Baltic wordnets, the META-NORD multilingual wordnet initiative has ensured an alignment and comparison of the most mature of these wordnets and has made them all easily accessible through META-SHARE. Central aims have been to understand better the different nature of the lexical-semantic resources in order to approximate to a best practice, to test the perspective of linking them, and to make them visible in an intuitive way in a common web interface. Four core bilingual wordnets have been compiled, made visible and evaluated with diverging, but still promising, results. The evaluations and comparisons have exposed a considerable variety among the wordnets with respect to taxonomical structure, structure of the synset (many or few senses per synset) and number of relations attached to each synset, a variety which proves to originate from the different compilation strategies used for the different Nordic and Baltic languages. As we have shown, the two compilation strategies (expand versus merge) have considerable impact on how the lexical-semantic information is represented and on the depth of the lexical hierarchies. In spite of these differences, an alignment through Princeton Core WordNet has proven feasible.
Three wordnets were not fully mature when the META-NORD project started and have therefore not yet been aligned or made visible in WordTies, namely the Icelandic and Norwegian wordnets; the plan is to include them during 2013.
Figure 2: A Danish synset look-up (håb, 'hope') with multilingual alignments to the English, Finnish, Estonian and Swedish wordnets.
Figure 3: Graphical views of the taxonomical differences for the concept tree in the Danish and Finnish wordnets, respectively. DanNet includes three major (layman) subtypes of trees: deciduous trees, coniferous trees, and fruit trees, whereas the Finnish wordnet includes further subtypes based on a specialist, botanical structuring.
Figure 4: Overview and scripts used in WordTies.
Table 1 sums up the evaluation results.

Table 1: Percentage of errors and mismatches in bilingual wordnets

                    Slight mismatches   Linking errors
Danish-Swedish      2.3                 2.4
Finnish-Swedish     2.6                 1.1
Finnish-Danish      0.7                 0
Estonian-Finnish    22.6                5.3
Table 2: Estonian-Finnish translations that are considered to be slight mismatches.
WN synset: waste%1:27:00::
DA: {spildprodukt_0}
SV: avfall..1
WN gloss: any materials unused and rejected as worthless or unwanted
Comment: DA refers to byproducts of production; should be the more general 'affald'.
Table 3: Example of Danish-Swedish link which is considered an error

6 Further comparison of selected wordnets

Via the evaluations presented in Section 5 and by browsing the wordnets in WordTies, further insights have been achieved with respect to the very diverse characteristics of the selected wordnets in terms of taxonomical differences, different understandings of the synset, and differences in compiling semantic relations. First of all, we can observe some differences in average hyponym depth, number of senses per synset and average number of relations connected to a synset, as shown in Table 4.
                      DanNet   FinnWordNet   EstWN
Hyponym depth/SynSet   4.38     7.49          5.93
Word Senses/SynSet     1.09     1.74          1.65
Relations/SynSet       4.97     2.21          2.91

Table 4: Hyponym depth, word senses per synset and relations per synset for the Danish, Finnish and Estonian wordnets
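As an illustration of the hyponym-depth figures in Table 4, the path length to the top node can be computed over a hypernym map; the toy hierarchy below mirrors the Danish trae example given earlier and is not real DanNet data.

```python
# Count the super-concepts on the path from a synset to the top node.
hyperonym_of = {
    "trae": "plante",
    "plante": "organisme",
    "organisme": "fysisk genstand",
    "fysisk genstand": "entitet",
}

def depth(synset, hyperonym_of):
    d = 0
    while synset in hyperonym_of:
        synset = hyperonym_of[synset]
        d += 1
    return d

print(depth("trae", hyperonym_of))  # 4 super-concepts, as in the running text
```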
http://wordnetcode.princeton.edu/standoff-files/core-wordnet.txt
Rigau, G. and Agirre, E. (2002). Semi-automatic Methods for WordNet Construction. Tutorial at the 2002 International WordNet Conference, Mysore, India.
Bhattacharyya, P. (2010). IndoWordNet. In Proceedings of LREC 2010. Valletta: ELRA.
Borin, L., Danélls, D., Forsberg, M., Kokkinakis, D. and Gronostaj, M.T. (2010). The past meets the present in Swedish FrameNet++. In Proceedings of the 14th EURALEX International Congress, pp. 269-281. Leeuwarden: EURALEX.
Borin, L. and Forsberg, M. (2009). All in the family: A comparison of SALDO and WordNet. In Proceedings of the Nodalida 2009 Workshop on WordNets and other Lexical Semantic Resources - between Lexical Semantics, Lexicography, Terminology and Formal Ontologies, pp. 7-12. Odense: NEALT.
Borin, L. and Forsberg, M. (2010). Beyond the synset: Swesaurus - a fuzzy Swedish wordnet. In Workshop on Re-thinking Synonymy: Semantic Sameness and Similarity in Languages and their Description. Helsinki.
Borin, L. and Forsberg, M. (2011). Swesaurus - ett svenskt ordnät med fria tyglar. LexicoNordica, vol. 18, pp. 17-39.
Borin, L., Forsberg, M. and Lönngren, L. (2008). The hunting of the BLARK - SALDO, a freely available lexical database for Swedish language technology. In Joakim Nivre, Mats Dahllöf and Beáta Megyesi (eds.), Resourceful Language Technology. Festschrift in Honor of Anna Sågvall Hein, pp. 21-32. Acta Universitatis Upsaliensis: Studia Linguistica Upsaliensia 7. Uppsala: Uppsala University.
Derwojedowa, M., Piasecki, M., Szpakowicz, S., Zawislawska, M. and Broda, B. (2008). Words, concepts and relations in the construction of the Polish WordNet. In Global WordNet Conference 2008, pp. 162-177. Szeged, Hungary.
Daudé, J., Padró, L. and Rigau, G. (2003). Validation and Tuning of Wordnet Mapping Techniques. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP'03). Borovets, Bulgaria.
Daudé, J., Padró, L. and Rigau, G. (1999). Mapping Multilingual Hierarchies Using Relaxation Labeling. In Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC'99). Maryland, US.
Fellbaum, C. (ed.) (1998). WordNet - An Electronic Lexical Database. Cambridge, Massachusetts: The MIT Press.
Hjorth, E. and Kristensen, K. (2003). Den Danske Ordbog. Gyldendal, Denmark.
Järborg, J. (2001). Roller i Semantisk databas. Research Reports from the Department of Swedish, No. GU-ISS-01-3. University of Gothenburg: Dept. of Swedish.
Johannsen, A. and Pedersen, B.S. (2011). "Andre ord" - a wordnet browser for the Danish wordnet, DanNet. In Proceedings of the 18th Nordic Conference of Computational Linguistics, NODALIDA 2011, Riga, Latvia, pp. 295-298. Northern Association for Language Technology, Vol. 11, University of Tartu.
Kann, V. and Rosell, M. (2006). Free construction of a free Swedish dictionary of synonyms. In Proceedings of the 15th NODALIDA Conference, pp. 105-110. Joensuu: University of Eastern Finland.
Martola, N. (2011). FinnWordNet och det finska samhället. In Symposium om onomasiologiske ordbøker i Norden. Schaeffergården, Copenhagen.
Kahusk, N., Orav, H. and Vare, K. (2012). Cross-linking Experience of Estonian WordNet. In Arvi Tavast, Kadri Muischnek and Mare Koit (eds.), Human Language Technologies - The Baltic Perspective: The Fifth International Conference on Human Language Technologies, Tartu, Estonia, October 4-5, 2012, pp. 96-102. IOS Press. doi:10.3233/978-1-61499-133-5-96.
Lenci, A., Bel, N., Busa, F., Calzolari, N., Gola, E., Monachini, M., Ogonowski, A., Peters, I., Peters, W., Ruimy, N., Villegas, M. and Zampolli, A. (2000). SIMPLE: A general framework for the development of multilingual lexicons. International Journal of Lexicography, vol. 13, pp. 249-263.
Lindén, K. and Carlson, L. (2010). FinnWordNet - WordNet på finska via översättning. LexicoNordica - Nordic Journal of Lexicography, vol. 17, pp. 119-140.
Lindén, K., Niemi, J. and Hyvärinen, M. (2012). Extending and Updating the Finnish Wordnet. In Diana Santos, Krister Lindén and Wanjiku Ng'ang'a (eds.), Shall We Play the Festschrift Game? Essays on the Occasion of Lauri Carlson's 60th Birthday, pp. 67-98. Springer: Berlin, Heidelberg. ISBN 978-3-642-30773-7.
Pedersen, B.S., Nimb, S., Asmussen, J., Sørensen, N., Trap-Jensen, L. and Lorentzen, H. (2009). DanNet - the challenge of compiling a WordNet for Danish by reusing a monolingual dictionary. Language Resources and Evaluation, Computational Linguistics Series, pp. 269-299.
Pedersen, B.S., Nimb, S. and Braasch, A. (2010). Merging specialist taxonomies and folk taxonomies in wordnets - a case study of plants, animals and foods in the Danish wordnet. In Proceedings of the Seventh International Conference on Language Resources and Evaluation, pp. 3181-3186. Malta.
Peters, W., Vossen, P., Díes-Orzas, P. and Adriaens, G. (1998). Cross-lingual Alignment of Wordnets with an Inter-Lingual-Index. In EuroWordNet - A Multilingual Database with Lexical Semantic Networks, pp. 149-179. Kluwer Academic Publishers.
Pustejovsky, J. (1995). The Generative Lexicon. Cambridge, Massachusetts: MIT Press.
Robkop, K., Thoongsup, S., Charoenpron, T., Sornlertlamvanich, V. and Isahara, H. (2010). WNMS: Connecting Distributed Wordnet in the Case of Asian WordNet. In Proceedings of the 5th International Conference of the Global WordNet Association (GWC 2010), Mumbai, India.
Tufiş, D., Ion, R. and Ide, N. (2004). Word Sense Disambiguation as a Wordnets Validation Method in BalkaNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), pp. 1071-1074. Lisbon: ELRA.
Vossen, P. (ed.) (1998). EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Dordrecht: Kluwer Academic Publishers. |
226,262,286 | Unsupervised Stance Detection for Arguments from Consequences | Social media platforms have become an essential venue for online deliberation where users discuss arguments, debate, and form opinions. In this paper, we propose an unsupervised method to detect the stance of argumentative claims with respect to a topic. Most related work focuses on topic-specific supervised models that need to be trained for every emergent debate topic. To address this limitation, we propose a topic independent approach that focuses on a frequently encountered class of arguments, specifically, on arguments from consequences. We do this by extracting the effects that claims refer to, and proposing a means for inferring if the effect is a good or bad consequence. Our experiments provide promising results that are comparable to, and in particular regards even outperform BERT. Furthermore, we publish a novel dataset of arguments relating to consequences, annotated with Amazon Mechanical Turk. | [
286464,
15908763,
18151048,
14068874,
1013580,
1918254,
14254034,
52009139,
11902548,
53083029,
13964436,
1587,
2845337,
2300698,
10432955
] | Unsupervised Stance Detection for Arguments from Consequences
Association for Computational Linguistics. Copyright 2020 Association for Computational Linguistics. November 16-20, 2020.
Jonathan Kobbe
University of Mannheim
Ioana Hulpuş
University of Mannheim
Heiner Stuckenschmidt
University of Mannheim
Unsupervised Stance Detection for Arguments from Consequences
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
Social media platforms have become an essential venue for online deliberation where users discuss arguments, debate, and form opinions. In this paper, we propose an unsupervised method to detect the stance of argumentative claims with respect to a topic. Most related work focuses on topic-specific supervised models that need to be trained for every emergent debate topic. To address this limitation, we propose a topic independent approach that focuses on a frequently encountered class of arguments, specifically, on arguments from consequences. We do this by extracting the effects that claims refer to, and proposing a means for inferring if the effect is a good or bad consequence. Our experiments provide promising results that are comparable to, and in particular regards even outperform BERT. Furthermore, we publish a novel dataset of arguments relating to consequences, annotated with Amazon Mechanical Turk.
Introduction
In the context of decision making, it is crucial to compare the positive and negative effects that result from a potential decision. Indeed, arguing for or against something because of its possible consequences is a frequent form of argumentation (Reisert et al., 2018; Al-Khatib et al., 2020). In this paper, we address the classical stance detection problem, paying special attention to such arguments.
Stance detection, also called stance classification, is the task of deciding whether a text is in favor of, against, or unrelated to a given topic. This problem is related to opinion mining, but while opinion mining focuses on the sentiment polarity explicitly expressed by a text, stance detection aims to determine the position that the text holds with respect to a topic that is generally more abstract and might not be mentioned in the text. As such, in stance detection, texts can transmit a negative sentiment or opinion, but be in favor of the targeted topic. For example, the text Holocaust denial psychologically harms Holocaust survivors expresses a negative opinion, but its stance towards Criminalization of Holocaust denial is positive. 1 The problem of stance detection has recently received growing attention from the scientific community, as shown by the survey of Küçük and Can (2020). Most approaches tackle this problem by learning stance classification models for each topic. While this can achieve good results, new models need to be trained for each new topic of interest, generally entailing large annotation studies.
While we admit that a one-size-fits-all approach to stance detection is currently unfeasible, we take a different perspective. Rather than targeting topic-dependent models, we target a subclass of arguments. Specifically, we focus on arguments that have been classified by Walton et al. (2008) under the argument from consequences scheme. They contain a premise of the form If A is brought about, then good (bad) consequences will (may plausibly) occur, and a conclusion A should (not) be brought about. In most real-life arguments of this type, the consequences are expressed, but the interpretation that they are good or bad, as well as the conclusion, are most often implicit. The task of stance detection is then to determine if the argument is against or in favor of A. Our solution for finding the stance of such arguments revolves around extracting and analyzing cause-effect relations in order to infer whether the consequences are good or bad.
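A highly simplified sketch of this idea is given below, assuming a toy effect lexicon and toy verb lists; the actual system described in the paper uses richer cause-effect extraction, so this only conveys the intuition.

```python
# Combine the polarity of a mentioned effect with whether the claim says the
# topic promotes or suppresses that effect.
import re

GOOD = {"health", "safety", "jobs"}
BAD = {"harm", "crime", "pollution"}
PROMOTE = {"causes", "increases", "leads to", "brings"}
SUPPRESS = {"prevents", "reduces", "eliminates"}

def stance(claim):
    for verb in PROMOTE | SUPPRESS:
        m = re.search(rf"\b{verb}\b\s+(\w+)", claim.lower())
        if m:
            effect = m.group(1)
            good, bad = effect in GOOD, effect in BAD
            if not (good or bad):
                return "unknown"
            promotes = verb in PROMOTE
            return "pro" if (good and promotes) or (bad and not promotes) else "con"
    return "unknown"

print(stance("Uranium mining causes pollution."))  # con
print(stance("A ban on smoking reduces harm."))    # pro
```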
We conducted an Amazon Mechanical Turk (AMT) study, in which we crowdsourced annotations for 1894 arguments extracted from Debatepedia. We compared our system's performance to a sentiment analysis baseline and a fine-tuned BERT model. The results show that our results are comparable to, and in some settings even better than, BERT's. 2 Aside from not needing annotated training data, we stress the advantage of our approach in providing human-understandable explanations for the results and in providing, as a byproduct, cause-effect relations between concepts brought up in arguments.
The paper is structured as follows. Section 2 positions our contributions with respect to related literature. Section 3 presents our proposed approach. Section 4 describes our crowdsourced dataset, which we use in Section 5 to evaluate our approach. Lastly, Section 6 concludes the paper.
Related Work
Stance detection has been studied on various types of formal texts such as congressional debates (Thomas et al., 2006) and company-internal discussions (Murakami and Raymond, 2010). However, like most recent related work on the topic, we are particularly interested in informal texts from online social media.
The vast majority of previous approaches proposes supervised methods, using traditional machine learning algorithms (Somasundaran and Wiebe, 2010;Anand et al., 2011;Hasan and Ng, 2013;Faulkner, 2014;Addawood et al., 2017) and more recently, various deep neural networks architectures (Sun et al., 2018;Du et al., 2017;Dey et al., 2018;Ghosh et al., 2019). These approaches, most of which have been triggered by a recent SemEval shared task 3 , learn topic-specific models. Thus, new topics require new models whose training entails large user annotation studies. In contrast, we propose a fully unsupervised, topic-independent method, and rather target a particular but frequent class of claims, those that refer to consequences.
Among the unsupervised approaches, the most prominent one is that of Somasundaran and Wiebe (2009), which was extended by Ghosh et al. (2018) and Konjengbam et al. (2018). However, they focus on non-ideological topics (usually products, e.g., iPhone vs. Galaxy). In contrast, we target ideological topics (e.g., Gay Marriage, Abortion) whose stance is harder to detect due to less frequent use of sentiment words and a wider variety of brought-up issues and arguments (Rajendran et al., 2016; Wang et al., 2019). On the one hand, these works extract topic aspects (e.g., screen resolution, battery) and polarities towards these aspects, a step that is unfeasible for ideological topics. On the other hand, like these works, we also use syntactic rules, not for pairing aspects to opinions, but for extracting triples that correspond to statements about effects over opinion words.
Another class of stance detection approaches uses the context of the post, such as its relations to other posts in the debate, the network of authors, or the author's identity (Hasan and Ng, 2013;Sridhar et al., 2014;Addawood et al., 2017;Bar-Haim et al., 2017b). By contrast, we target claim-topic pairs in isolation.
Another aspect that sets our work apart from most related work is that, except for the approaches that target tweets, most focus on longer texts while we consider short, one-sentence claims. In this regard, but not only, the stance detection work that is closest to ours is the partly supervised system of Bar-Haim et al. (2017a). They also propose a topic-independent solution to stance detection for short claims without considering context, but they do not specifically address arguments from consequences. While they follow a similar sequence of steps as we do, they propose different approaches for each step. For instance, they propose a supervised approach to detect the target of a claim's opinion, while we do it in an unsupervised manner. They focus primarily on detecting contrastive relations between phrases, while our focus is on detecting effects. In this last regard, the works can be considered complementary.
Regarding the analysis of arguments from consequences, Reisert et al. (2018) provide and use scheme dependent templates to analyze the structure of arguments. Their work is rather conceptual and focuses on annotations. Very recently, Al-Khatib et al. (2020) built, on similar intuitions as ours, an approach for creating argumentation knowledge graphs based on cause-effect relations. Their work comes to reinforce the usefulness of addressing arguments from consequences.
To sum up, our contribution is three-fold: (i) we propose a fully unsupervised approach for stance detection, focusing on arguments that refer to consequences; (ii) we define rules over grammatical dependencies that exploit sentiment as well as effect words in order to determine good and bad consequences; (iii) we publish a new stance detection dataset that labels claims that refer to consequences, and which was crowdsourced on AMT.
Our Approach
Given an argumentative claim and a topic, our task is to detect the stance that the claim has with respect to the topic. Statements such as the claim or topic usually express a positive (favorable) or negative (unfavorable) position to a concept that we call the target. As such, the target is a phrase that belongs to the statement. In the example shown in Table 1 (Topic: Medical marijuana dispensaries; Claim: Legalizing medical marijuana does not increase use and abuse), the target of both topic and claim is medical marijuana. Our solution starts by first determining the stance of the claim and of the topic towards their respective targets T_c and T_t. We then use these stances and the semantic relation between the targets to determine the claim's stance towards the topic. The overarching intuition behind our approach is that when the stance of a statement towards its target is favorable, the text either highlights the desirable consequences of the target being brought about (e.g., Electing an EU president directly will increase accountability), or it highlights the negative consequences if the target is not brought about (e.g., Sinking organic blooms can render the deep sea anoxic).
At the core of our approach resides what we call the effect triple. The effect triple is a triple of the form < (T, dir ), (P, eff ), (O, sent) >. The (T, dir ) pair represents the target T of the statement and if the statement refers to a magnification (dir = 1) (e.g. legalizing medical marijuana), or a reduction (dir = −1) of the target (e.g. banning medical marijuana). The (P, eff ) pair represents the predicate P that has T as the subject, together with the effect eff that it has over the object O. The effect can be positive (eff = +1) or negative (eff = −1). Lastly, the (O, sent) pair represents the object over which T has the effect P . We expect the sentiment of an object to reflect whether it is generally regarded as a good thing (sent = +1) or a bad thing (sent = −1).
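As a concrete illustration, the sketch below encodes an effect triple and the Table 1 example as a small Python data structure; the class and field names are our own and do not come from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EffectTriple:
    target: str     # T: the target phrase shared with the topic
    dir: int        # +1 amplification (e.g. "legalizing"), -1 reduction (e.g. "banning")
    predicate: str  # P: the effect-bearing predicate that has T as its subject
    eff: int        # +1 positive effect, -1 negative effect on the object
    obj: str        # O: the object over which T has the effect
    sent: int       # +1 object is generally something good, -1 something bad

# Worked example from Table 1:
# "Legalizing medical marijuana does not increase use and abuse"
triple = EffectTriple(target="medical marijuana", dir=+1,
                      predicate="increase", eff=-1,   # negation flips the effect of "increase"
                      obj="abuse", sent=-1)
```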
Our approach's core idea is to distill such an effect triple from the claim and use it to infer the claim's stance towards T c . We further determine (T t , dir) to infer the topic's stance towards T t . Using these stances, together with the relation between the claim's and the topic's target, we finally decide the claim's stance with respect to the topic. We now describe the lexicons we use as well as each of these steps in more detail.
Lexicons
For determining dir , eff , and sent, we use an effect verb lexicon and a sentiment lexicon that we describe in the following.
The ECF Effect Lexicon To identify verbs and nominalized verbs that indicate effects on their direct objects, we extend the connotation frames (Rashkin et al., 2016). The connotation frames lexicon consists of a list of 947 verbs, manually annotated with values in the [−1, 1] range, indicating if the verb implies a positive or negative effect over its object. We consider the entries with scores in the range [−0.1, 0.1] as a neutral effect (e.g., use, say, seem), and we filter them out. We call the 845 remaining words in the lexicon effect words. We extend the list of effect words by adding all words in the same WordNet (Fellbaum, 2010) synset as the effect words, as long as there is no contradiction. A contradiction occurs when a new candidate effect word shares a synset with both a negative and a positive effect word. This way, we obtain 2508 effect words. We call this lexicon the extended connotation frames lexicon (ECF). As ECF only contains verbs, we use it via the stems of the words, mainly to also get the effects of nominalized verbs. In our experiments, we compare the performance of this lexicon with +/-EffectWordNet (Choi and Wiebe, 2014)(EWN).
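The sketch below illustrates this kind of synset-based expansion with NLTK's WordNet interface. It assumes a dictionary connotation_frames mapping verbs to scores in [-1, 1]; it only mirrors the neutral-band filtering and contradiction check described above, and aggregates polarities slightly more coarsely than a per-synset check, so it is not the authors' code.

```python
from nltk.corpus import wordnet as wn  # requires nltk and the downloaded WordNet data

def build_ecf(connotation_frames):
    """Expand effect verbs via WordNet synsets, dropping near-neutral and contradictory entries."""
    # Keep only clearly positive / negative effect verbs (scores in [-0.1, 0.1] are dropped).
    seed = {w: (1 if s > 0 else -1)
            for w, s in connotation_frames.items() if abs(s) > 0.1}

    # Collect candidate synonyms together with the polarities they inherit from seed words.
    candidates = {}
    for word, polarity in seed.items():
        for synset in wn.synsets(word, pos=wn.VERB):
            for lemma in synset.lemma_names():
                candidates.setdefault(lemma, set()).add(polarity)

    ecf = dict(seed)
    for lemma, polarities in candidates.items():
        if lemma not in ecf and len(polarities) == 1:  # skip contradictory candidates
            ecf[lemma] = polarities.pop()
    return ecf
```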
The Sentiment Lexicon
In order to determine if the object of the effect is something good or bad, we combine several commonly used sentiment lexicons: (i) the MPQA lexicon 4 (Wilson et al., 2005), (ii) the opinion lexicon of Hu and Liu (2004), and (iii) the sentiment lexicon of Toledo-Ronen et al. (2018) (uni-and bigrams, using a threshold of ±0.2). The composed lexicon contains sentiment values in the range [−1, 1].
For many words, the polarities of their sentiment and of their effect are the same (e.g., kill, love). Still, there are important exceptions, such as reduce, which has neutral sentiment but indicates a negative effect, or conquer, which has a slightly positive sentiment but indicates a negative effect.
Effect Triple Extraction
Target Identification To detect the targets of the claim (T_c) and topic (T_t), we assume that T_c is semantically related to the topic, or more specifically, to T_t. Thus, we identify T_c and T_t simultaneously by following three strategies. The second and third strategies are only used if the previous strategies have failed to identify a pair of targets. First, we look for a pair of nouns that are identical or have the same lemma. We use Stanford CoreNLP (Manning et al., 2014) for POS tagging and lemmatizing. Second, we look for a pair consisting of an acronym (e.g., ICC) and a word sequence whose first letters form the acronym (e.g., International Criminal Court). Third, we look for pairs of nouns that are synonyms or antonyms according to Thesaurus.plus 5 .
Besides returning T_c and T_t, we also return a value r = +1 if the two targets have been found to be synonyms and r = −1 if they are antonyms. Thus, the first and second strategies only return r = +1, while the third strategy returns +1 or −1.
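A minimal sketch of the three-strategy cascade is given below; the helper callables lemma and syn_ant (a lookup returning +1 for synonyms, -1 for antonyms, 0 otherwise) are assumed to be provided, e.g. by Stanford CoreNLP and a thesaurus, and the code is only meant to make the control flow explicit.

```python
def matches_acronym(acro, words):
    """True if the first letters of `words` spell out `acro` (e.g. ICC / International Criminal Court)."""
    return acro.isupper() and len(acro) > 1 and len(words) == len(acro) and \
           all(w[0].upper() == a for w, a in zip(words, acro))

def find_targets(claim_nouns, topic_nouns, claim_words, topic_words, lemma, syn_ant):
    """Return (T_c, T_t, r), or None if all three strategies fail."""
    # Strategy 1: identical nouns or shared lemma.
    for c in claim_nouns:
        for t in topic_nouns:
            if c.lower() == t.lower() or lemma(c) == lemma(t):
                return c, t, +1
    # Strategy 2: an acronym on one side and its expansion on the other.
    for c in claim_nouns:
        if matches_acronym(c, topic_words):
            return c, " ".join(topic_words), +1
    for t in topic_nouns:
        if matches_acronym(t, claim_words):
            return " ".join(claim_words), t, +1
    # Strategy 3: synonym (r = +1) or antonym (r = -1) pairs.
    for c in claim_nouns:
        for t in topic_nouns:
            r = syn_ant(c, t)   # +1, -1, or 0 if unrelated
            if r != 0:
                return c, t, r
    return None
```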
Target Direction Determination As described earlier, each target is accompanied by a dir value which indicates if the statement refers to a phenomenon of amplification or reduction of the target. We detect this by searching for a word whose object is the target by using Patterns 1 and 2 shown in Table 2. The word is then looked-up in the effect lexicon. If a negative effect is found, then dir = −1, otherwise dir = 1. We call the word the target effector, or just effector. In the claim in Table 1, the effector is legalizing and expresses an amplification of the target (dir = 1).
Detecting Predicates and Their Effects
Effect words are commonly used in arguments from consequences to express a (potential) effect that the target has or might have over another object. For example, in the claim in Table 1, the effect word increase expresses a positive effect that the (amplified) target has over the objects use, abuse.
We detect this effect of the target by using Pattern 3 to find a predicate whose subject is either the target or its effector, and by looking up this predicate in the effect lexicon. We thereby set eff to 1 or −1, depending on if the effect is positive or negative. In our running example, the (P, eff ) pair becomes (increase, −1) because of the negation, as we explain below.
Telling good from bad The last effect triple component we detect is (O, sent). To this end, we search the dependency graph for instantiations of Patterns 1 or 2, where P is the predicate that has been detected to express the target's effect. If such an object is found, we use the sentiment lexicon by first searching for the exact word and, if not available, for the word's lemma. We set sent to −1 if the word bears a negative sentiment or to 1 otherwise. In our example, the (O, sent) pair becomes (abuse, −1) because the word use is neutral per se.
The sentiment of a word is overwritten by the sentiment of its modifiers, as shown in Pattern 4 in Table 2. In the provided example in the table, one can see that the modifier terrorist dominates the sentiment of the positive word haven. Consequently, both terrorist haven and terrorist attack are considered generally bad.
Negation We deal with negations for each effect triple component. We identify negations by looking for Patterns 5, 6, and 7, as shown in Table 2. Patterns 5 and 6 make use of a manually created list of all negative English prepositions 6 . The existence of a negation affecting the target, predicate, or object toggles the sign of the corresponding value (dir, eff, or sent, respectively).
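The following sketch approximates a small subset of Patterns 1-7 over a dependency parse. The paper uses Stanford CoreNLP dependencies; spaCy is used here purely for brevity, the pattern set is simplified, and effect_lex / sent_lex stand for the ECF and sentiment lexicons, assumed here to map lemmas to ±1 values.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in parser; the paper itself uses Stanford CoreNLP

NEG_PREPS = frozenset({"without", "except", "lack"})  # abridged list of negative prepositions

def is_negated(token):
    """Rough analogue of Patterns 5-7: a 'neg' dependent or a governing negative preposition."""
    if any(child.dep_ == "neg" for child in token.children):
        return True
    return token.head.lemma_.lower() in NEG_PREPS

def extract_effect(sentence, effect_lex, sent_lex):
    """Find (P, eff) and (O, sent) for the first effect verb with a direct object."""
    doc = nlp(sentence)
    for tok in doc:
        if tok.pos_ == "VERB" and tok.lemma_ in effect_lex:
            eff = effect_lex[tok.lemma_]
            if is_negated(tok):
                eff = -eff                              # negation toggles the sign
            for child in tok.children:
                if child.dep_ == "dobj":
                    sent = sent_lex.get(child.lemma_, +1)
                    # Pattern 4 analogue: a polar modifier overrides the object's own sentiment.
                    for mod in child.children:
                        if mod.dep_ in ("amod", "compound") and mod.lemma_ in sent_lex:
                            sent = sent_lex[mod.lemma_]
                    return (tok.lemma_, eff), (child.lemma_, sent)
    return None
```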
Inferring the Stance Towards the Target
To infer the stance that a statement expresses towards its target, we use the intuition that the stance is unfavorable when the text expresses negative consequences of the target, and positive otherwise. Thus, we define that the stance towards the target is positive in exactly the following four cases: (i) the target's amplification implies a positive effect over something good (dir = eff = sent = +1); (ii) the target's amplification implies a negative effect over something bad (dir = +1, eff = sent = −1); (iii) the target's reduction implies a negative effect over something good (dir = eff = −1, sent = +1); (iv) the target's reduction implies a positive effect over something bad (dir = −1, eff = +1, sent = −1). Hence, the stance is favorable towards the target if the multiplication of the three components' values is +1. Consequently, we define the stance of a statement towards the target as s = dir · eff · sent and interpret s = 1 as In favor and s = −1 as Against.
Inferring the Stance of the Claim Towards the Topic
The steps above can be executed analogously for the claim and the topic. However, due to the nature of the text expressing the topic, we only aim to extract an effect triple from the claim. For the topic, we detect its target and set the stance to its corresponding dir value. We denote the stances of the claim and topic towards their respective targets as s_c and s_t. To infer the claim's stance towards the topic, we need to consider the relation between T_c and T_t, i.e., the value of r as described in Section 3.2. We then define the final result of the analysis as Π = s_c · s_t · r. Table 3 presents further examples of how our approach detects the stance of the claim towards the topic. As illustrated in the examples, the straightforward interpretability of the stance detection process can be easily used for producing human-readable explanations for the returned results. This is particularly relevant for helping users get more control over the process, particularly in light of subsequent applications on top of stance detection.
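The two combination steps reduce to sign products, as in this small sketch; the numbers reproduce the Table 1 example, with the topic's dir assumed to be +1 since no reduction is detected.

```python
def stance_towards_target(dir_, eff, sent):
    """Favorable (+1) iff the product of the three components is +1."""
    return dir_ * eff * sent

def claim_stance_towards_topic(s_c, s_t, r):
    """Combine claim stance, topic stance and target relation: Pi = s_c * s_t * r."""
    return s_c * s_t * r

# Table 1 example: legalizing (+1), negated "increase" (-1), "abuse" (-1)
s_c = stance_towards_target(+1, -1, -1)            # +1: claim favorable to "medical marijuana"
s_t = +1                                           # topic target, assuming dir = +1
print(claim_stance_towards_topic(s_c, s_t, r=+1))  # +1 -> the claim is In favor of the topic
```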
Alternative Strategies
We denote the process in which all the previous steps are fulfilled and an effect triple is extracted as TPO. However, due to a variety of reasons that we analyze in Section 5.4, we might fail to extract a complete effect triple. One such case is when an adjective expresses an effect, for instance, Holocaust denial is discriminatory. For that reason, if we identify T and P , but not O, we set eff to the sentiment polarity of P , and sent to +1 by default. We refer to this strategy as TP.
Another potential situation is that the system detects (P, eff ) and (O, sent), but it can not relate them to T . One cause can be that we fail to identify T . If so, dir = +1 by default. Another cause can be that T is found, but we can not infer its relation to P . In this case, we consider that the identified target is the subject of P and set (T, dir ) accordingly. We refer to this strategy as PO.
Lastly, if all above strategies fail to create an effect triple, we use a heuristic: if T was found, dir is set accordingly. Otherwise dir = 1 by default. For the remaining words in the statement, we check their sentiment score, still using Pattern 4, toggling the sign if it is negated. The sum of the sentiment scores is then multiplied with dir. The stance is considered favorable or not depending on the sign of the result. We refer to this strategy as Heuristic.
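A possible rendering of the fallback cascade is sketched below; extract_triple and sentiment_score stand in for the extraction steps and the sentiment lexicon described above, so the code only fixes the order and defaults of the strategies.

```python
def sign(x):
    return +1 if x >= 0 else -1

def classify(claim, topic, extract_triple, sentiment_score):
    """Fallback cascade TPO -> TP -> PO -> Heuristic.
    extract_triple returns a (T, P, O) tuple whose elements are (text, value) pairs or None;
    sentiment_score maps a word to a score in [-1, 1]."""
    t, p, o = extract_triple(claim, topic)
    if t and p and o:                               # TPO: complete effect triple
        return t[1] * p[1] * o[1]
    if t and p:                                     # TP: e.g. adjectival effects ("... is discriminatory")
        return t[1] * sign(sentiment_score(p[0]))   # sent defaults to +1
    if p and o:                                     # PO: target missing or unlinked; dir defaults to +1
        return (t[1] if t else +1) * p[1] * o[1]
    # Heuristic: signed sum of word sentiments, multiplied by dir.
    d = t[1] if t else +1
    return sign(d * sum(sentiment_score(w) for w in claim.split()))
```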
Dataset Generation
To evaluate our approach, we need stance annotated topic-claim pairs, as well as annotations if the topicclaim pair refers to a consequence or not.
Data Collection
To create such a corpus, we run an AMT crowdsourcing study, where we annotate claims and topics extracted from Debatepedia 7 . We only use the 236 Featured Debate Digest articles as they are of higher quality. They contain more than 10,000 arguments labeled by their author as either pro or con the debate's topic. Usually, the arguments start with a bolded, one-sentence summary, which serves as the argument's claim. We exclusively use these claims and pair them to the debate's topic. We exclude 16 debates whose topics contain vs or or (e.g. Democrats vs. Republicans), and 30 debates without a title question. To create a balanced dataset that covers a large variety of topics, we randomly selected 5 pro and 5 con arguments of each debate. If a debate contains less than 5 pro and 5 con arguments, we select the maximum equal number of pro and con arguments. We obtain 190 different topics and 1894 arguments.
Crowdsourcing Study
The annotation task consisted of the debate's topic, one of its claims, and two questions. The first question was to select the stance of the claim towards the topic, out of the following choices: in favor, against, neither, and I don't know. Although we have the original arguments' stances, this question helps us check how clear the claim is when taken out of the debate's context. The second question was whether the claim refers to a consequence related to the topic, with possible answers yes, no, and I don't know. Each topic-claim pair was annotated by 10 annotators living in the US with a HIT approval rate greater than 98% and more than 10,000 approved HITs in total. Overall, 277 annotators worked on the task.

Figure 1: Reliability of annotators according to MACE: the higher the score, the more reliable the annotator is.

Agreement and Reliability

Table 4 shows the inter-annotator agreement per number of valid annotations, i.e., annotations that are not I don't know. Since we have many annotators, Fleiss κ is particularly low on consequence annotation, but still indicates higher agreement than random. To give an agreement estimate less sensitive to individual outliers, we also compute κ as the Fleiss kappa between two "experts", where each expert brings together half of the number of annotators and its annotation is decided with MACE (Hovy et al., 2013). Figure 1 shows the reliability of individual annotators. Although there is a weak correlation among the reliability of the two tasks (Pearson .41), some annotators are quite reliable in annotating stances, but highly unreliable in annotating consequences. This indicates that the latter task was unclear to some of the annotators. To understand why the annotators usually disagree, we investigated such instances and identified several possible reasons:
Complexity In the topic-claim pair Criminalization of Holocaust denial -Danger of public accepting holocaust denial should be fought by logic, both topic and claim have a negative stance towards holocaust denial, which suggests the label in favor. Still, by proposing a different solution than criminalization, the claim is against the topic.
Missing Background Knowledge Many arguments involve non-trivial background knowledge: Israeli military assault in Gaza -Hamas was first to escalate conflict following end of ceasefire.
Ambiguity According to the pair 2009 US economic stimulus -Stimulus risks being too small not too large, a small stimulus is bad while an appropriate stimulus is good.
Ethical Judgement Different judgments on what is good and bad can lead to different stance labels: Ban on human reproductive cloning -Cloning will involve the creation of children for predetermined roles.
Lack of Conceptual Clarity Especially deciding whether the claim refers to a consequence related to the topic can be a matter of judgment. For example, in Health insurance mandates -Insurance mandates violate the rights of employers, the violation of rights can be seen as a consequence or as a property of insurance mandates.
Final Dataset
To account for unreliable annotators, we compute the annotation result with MACE. As such, we find that for 81.36% of the annotated arguments, the stance label obtained via MACE is the same as the original stance label. By comparison, the majority vote matches 79.30% of the original stance labels. Since disagreements between the MACE annotation and the original stance might indicate that the claim's stance is unclear outside the debate's context, we exclude from the dataset all such pairs. For example, the original label of the pair Is Wikipedia valuable? - Wikipedia is online and interactive, unlike other encyclopedias is con, because, in its context, it was discussed whether Wikipedia is an encyclopedia or not. In contrast, the result of our annotation is pro. Since the original labels are only pro or con, all pairs that our study determined as neither are removed. This filter resulted in a total of 1502 pairs, out of which 822 have been annotated to relate to consequences.

Table 5: Class distributions.

        conseq        other         debate        wiki
        pro    con    pro    con    pro    con    pro     con
        376    446    370    310    746    756    1195    1199

We report results both on the 822 pairs that relate to consequences, denoted by conseq, and on the rest of the pairs, denoted by other, as well as on their union, denoted by debate.
For checking the performance of the systems on an independent dataset, we also use the claim stance dataset 8 published by Bar-Haim et al. (2017a). This dataset contains 55 topics of idebate 9 and 2394 manually collected claims from Wikipedia. We denote this dataset by wiki. As Bar-Haim et al. (2017a,b) do, when working with this dataset, we use only the topic's target and not the entire topic to ensure comparability. Table 5 shows the class distribution of the datasets.
Compared systems
We evaluate our system with the effect lexicon that we describe in Section 3.1 (ECF), as well as with +/-EffectWordNet (EWN). For comparison, we implement two other approaches:
sent As a baseline, we use a system that simply sums up all the sentiment scores in the claim. For the wiki dataset, the sign is switched if the topic sentiment is negative.
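A minimal version of this baseline, assuming a sentiment lexicon sent_lex with scores in [-1, 1], could look as follows:

```python
def sent_baseline(claim, sent_lex, topic_target_sentiment=+1.0):
    """Sum the sentiment scores of the claim's words; for wiki, flip the sign if the topic target is negative."""
    score = sum(sent_lex.get(w.lower(), 0.0) for w in claim.split())
    if topic_target_sentiment < 0:   # only applied for the wiki dataset
        score = -score
    return "pro" if score >= 0 else "con"
```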
BERT As state of the art, we use BERT (Devlin et al., 2019), which was recently shown to outperform a series of alternative stance detection systems (Ghosh et al., 2019). We fine-tune BERT using the large, uncased pre-trained weights. 10 Following Schiller et al. (2020), we set the number of epochs to 5 and the batch size to 16. The inputs are topic-claim pairs. We perform 10-fold cross-validation with a train-dev-test ratio of (70/20/10), ensuring that each topic exclusively occurs in one set.
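The authors fine-tuned the original Google BERT release; the sketch below shows an equivalent setup with the Hugging Face transformers library under the stated hyperparameters (5 epochs, batch size 16, topic-claim sentence pairs). Dataset preparation is assumed to happen elsewhere, so this is an illustration of the configuration rather than the authors' code.

```python
from transformers import BertTokenizerFast, BertForSequenceClassification, Trainer, TrainingArguments

tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")

def encode(topic, claim):
    # Topic-claim pairs are fed to BERT as a sentence pair.
    return tokenizer(topic, claim, truncation=True, padding="max_length", max_length=128)

def fine_tune(train_ds, dev_ds):
    """train_ds / dev_ds: tokenized datasets with input_ids, attention_mask, token_type_ids, labels."""
    model = BertForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=2)
    args = TrainingArguments(output_dir="stance-bert",
                             num_train_epochs=5,              # as in Schiller et al. (2020)
                             per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=dev_ds)
    trainer.train()
    return trainer
```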
Results and Discussion
The results that compare our system to BERT and the sentiment detection baseline are presented in Table 6.

Table 6: Experimental results. F1 scores per stance class (pro and con), macro-F1 (mac), and Accuracy (acc) on the conseq, other, debate, and wiki datasets. For BERT, we show the mean of the respective cross-validation results and their standard deviation.

First, as expected, our system performs better on arguments related to consequences than on other arguments, with a macro-F1 difference of 10pp between conseq and other. Further, our system with both lexicon settings consistently outperforms the sent baseline, but its macro-F1 score is outperformed by BERT on conseq and wiki, and its accuracy is outperformed by BERT on all datasets. This is not surprising, given that we use BERT pre-trained and then fine-tuned to our data. Interestingly, our system with ECF achieves better results than BERT in terms of macro F1 score on the arguments that are not related to consequences (other), and on the complete debate dataset. This indicates that our method can deal reasonably well with arguments that are not from consequences.
Concerning the two stance classes, with both lexicon settings, our system is better than BERT at predicting the pro class in arguments from consequences, but is outperformed on the con class. Another interesting result is that on conseq, our system has a quite similar performance on the pro and con classes with both lexicon settings . In contrast, BERT's performance varies drastically, with a difference of approximately 17pp in favor of the con class. BERT's high variability is also indicated by the high standard deviation on the 10 folds. For comparison, we also computed the F1 macro standard deviation of our system with ECF when run on the same 10 folds, and the values lie between .03 on debate and .07 on conseq. This indicates that our unsupervised approach is more robust with more predictable performance.
Concerning the two effect lexicons, our system performs consistently better when using ECF than when using EWN. Our analysis indicates that the high coverage of the EWN lexicon comes at the expense of accuracy. Therefore, in the following, we will only refer to our system using ECF.
Regarding the two datasets debate and wiki, BERT outperforms our system, with quite a high margin particularly on the wiki data. The accuracy that Bar-Haim et al. (2017a,b) report on the wiki data, when no context features are used, is .68 which is lower than BERT's (.70) but higher than ours (.65 for evaluating on the dedicated test set). This is not surprising given that the data contains general arguments. Nevertheless, as our approach only targets a subclass of these arguments, the results are quite promising. Unfortunately, Bar-Haim et al. (2017a,b)'s system is proprietary and we could not evaluate it on our conseq data. Table 7 provides further insights into our solution. First, on all Debatepedia based datasets, we find a target in more than .75 of the data instances, and overall, the results are slightly better when a target is found. Most of the targets are found by word similarity and the fewest by the acronym. The results obtained on the instances where the target was found by synonym/antonym relations are significantly lower than those obtained when the target was found with the other two strategies. This indicates that the approach is sensitive to semantic drift in target identification.
Overall, we identify a potential consequence (TPO/TP/PO) for .6 of the arguments in conseq.
While the results are quite good on all datasets when we detect a complete effect triple (TPO), they are overtaken by the results of the TP cases. Together, the instances solved with the TPO and TP strategies amount to .44 of the conseq dataset but to a much lower share on the other datasets (e.g., only .17 on wiki). The performance on the PO cases is comparable to the performance on the Heuristic cases, and significantly lower than when TPO or TP could be applied. Depending on the dataset, the system needed to apply the Heuristic strategy on .4 to .61 of the instances. Our efforts for future work are directed towards helping the system make sense of more of the claims so that the number of times it needs to fall back to PO and Heuristic is reduced.
Error Analysis
To better understand the limitations of our approach, we analyzed the errors on the conseq data and found several reasons for wrong predictions:
Incomplete list of patterns Some arguments cannot be meaningfully analyzed with our current list of patterns. We plan to extend this list with more complex patterns, while we are also working on automatically learning such patterns from data.
Conceptual errors We assume that positive effects on something negative result in something negative (e.g., War in Iraq has helped terrorist recruitment.). However, this is not always the case (e.g., Privatizing social security helps the poor.).
Finding the targets As shown in Table 7, we often fail to detect targets. For example, our target detection strategies fail on the claim-topic pair Standardized tests ensure students learn essential information. -No Child Left Behind Act. In this specific case, there is a hypernym relation between the topic and Standardized tests. Further, we found that our straightforward approach to identifying targets and the relations between them is one of the core reasons for our approach's poorer performance on the wiki data compared to the debate data. Improving the target finding strategy by leveraging additional semantic knowledge is one of the core directions for our future work.
Missing / wrong lexicon entries For many words, we are missing an entry in our lexicons, or the entry exists but is questionable. For instance, in the sentiment lexicon, Palestinian is annotated with a negative sentiment. Also, sometimes the effect on the object seems to be mixed up with the word's overall effect. For example, solve has a positive effect on the object in both ECF and EWN lexicons, but arguably when a problem is solved, it undergoes a reduction (e.g. Reforestation, [...] can help solve global warming).
Ambiguity Some words have a positive or negative effect depending on the sense with which they are used (e.g., push vs. push for). In the effect lexicon, we have only one entry per word. In the EWN, there are multiple senses, but we always use the most probable effect. Word sense disambiguation is required for these cases, which is known to be very challenging for verbs. However, a potential solution could be to annotate VerbNet frames with effects, but this is outside the scope of this work.
Text parsing errors As our method relies on the output of the dependency parser, the Lemmatizer, the POS tagger, and the Stemmer, their errors naturally propagate.
Conclusion and Future Work
We propose a fully unsupervised method to detect the stance of arguments from consequences in online debates. The method exploits grammatical dependencies and lexicons to identify effect words and their impact. For our evaluation, we annotated arguments from Debatepedia regarding their stance and whether they involve consequences or not. The results we obtained are motivating. Our method is comparable to BERT while being more robust.
Besides the future extensions of this approach that we mentioned in our results discussion and error analysis, this work opens several interesting research paths. Mainly, its good performance on the claims that refer to consequences reinforces our intuition that designing systems tailored for particular argumentation schemes might be a good alternative to topic-specific models. Therefore, we plan to complement this work with approaches for other frequently applied schemes such as arguments by expert opinion and arguments by example.
Table 1: Example of topic-claim pair.
Table 2: Dependency graph patterns. * ∈ {dobj, nsubjpass, cobj, csubjpass, nmod, xcomp}; ∈ {nsubj, csubj}; † ∈ {amod, nn, advmod}; NegP stands for negative preposition.
Table 3: Worked out examples.
Table 4: Fleiss' Kappa dependent on the number of valid annotations.
Table 5: Class distributions.

Evaluation

Data
Table 7: Evaluation of the target identification and stance detection strategies; r denotes the rate of data instances.
All arguments presented in this paper are from http://www.debatepedia.org.
Our data and source code are publicly available at https://github.com/dwslab/StArCon.
http://alt.qcri.org/semeval2016/task6
We used an American English dictionary to correct orthographic mistakes and to add American English versions of British English words.
We use only the synonyms and antonyms shown at https://thesaurus.plus/thesaurus/xxx where xxx is a placeholder for concrete words
Those are except, less, minus, opposite, sans, unlike, versus, without, w/o, vice, instead (of), lack.
http://www.debatepedia.org
Available at https://www.research.ibm.com/haifa/dept/vst/debating_data.shtml
https://idebate.org/
We worked with the original release: https://github.com/google-research/bert
Acknowledgments

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project ExpLAIN, Grant Number STU 266/14-1, as part of the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999).
Stance classification of twitter debates: The encryption debate as a use case. Aseel Addawood, Jodi Schneider, Masooda Bashir, 10.1145/3097286.3097288Proceedings of the 8th International Conference on Social Media & Society, #SMSociety17. the 8th International Conference on Social Media & Society, #SMSociety17New York, NY, USAAssociation for Computing MachineryAseel Addawood, Jodi Schneider, and Masooda Bashir. 2017. Stance classification of twitter debates: The encryption debate as a use case. In Proceedings of the 8th International Conference on Social Media & Society, #SMSociety17, New York, NY, USA. Asso- ciation for Computing Machinery.
End-to-end argumentation knowledge graph construction. Khalid Al-Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, Benno Stein, 10.1609/aaai.v34i05.6231Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020)Khalid Al-Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, and Benno Stein. 2020. End-to-end argumentation knowledge graph construction. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020).
Cats rule and dogs drool!: Classifying stance in online debate. Pranav Anand, Marilyn Walker, Rob Abbott, Jean E Fox Tree, Robeson Bowmani, Michael Minor, Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011). the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011)Portland, OregonAssociation for Computational LinguisticsPranav Anand, Marilyn Walker, Rob Abbott, Jean E. Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in online debate. In Proceedings of the 2nd Work- shop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011), pages 1- 9, Portland, Oregon. Association for Computational Linguistics.
Stance classification of context-dependent claims. Roy Bar-Haim, Indrajit Bhattacharya, Francesco Dinuzzo, Amrita Saha, Noam Slonim, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European ChapterValencia, Spain1Long Papers. Association for Computational LinguisticsRoy Bar-Haim, Indrajit Bhattacharya, Francesco Din- uzzo, Amrita Saha, and Noam Slonim. 2017a. Stance classification of context-dependent claims. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 251-261, Valencia, Spain. Association for Computational Lin- guistics.
Improving claim stance classification with lexical knowledge expansion and context utilization. Roy Bar-Haim, Lilach Edelstein, Charles Jochim, Noam Slonim, 10.18653/v1/w17-5104Proceedings of the 4th Workshop on Argument Mining. Association for Computational Linguistics. the 4th Workshop on Argument Mining. Association for Computational LinguisticsRoy Bar-Haim, Lilach Edelstein, Charles Jochim, and Noam Slonim. 2017b. Improving claim stance clas- sification with lexical knowledge expansion and con- text utilization. In Proceedings of the 4th Work- shop on Argument Mining. Association for Compu- tational Linguistics.
+/-EffectWordNet: Sense-level lexicon acquisition for opinion inference. Yoonjung Choi, Janyce Wiebe, 10.3115/v1/D14-1125Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsYoonjung Choi and Janyce Wiebe. 2014. +/- EffectWordNet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181-1191, Doha, Qatar. Association for Computational Linguistics.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.
Topical stance detection for twitter: A twophase lstm model using attention. Kuntal Dey, Ritvik Shrivastava, Saroj Kaushik, 10.1007/978-3-319-76941-7_40Advances in Information Retrieval. ChamSpringer International PublishingKuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical stance detection for twitter: A two- phase lstm model using attention. In Advances in Information Retrieval, pages 529-536, Cham. Springer International Publishing.
Stance classification with target-specific neural attention. Jiachen Du, Ruifeng Xu, Yulan He, Lin Gui, 10.24963/ijcai.2017/557Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17. the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural at- tention. In Proceedings of the Twenty-Sixth Inter- national Joint Conference on Artificial Intelligence, IJCAI-17, pages 3988-3994.
Automated classification of stance in student essays: An approach using stance target information and the wikipedia link-based measure. Adam Faulkner, Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014. the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014Adam Faulkner. 2014. Automated classification of stance in student essays: An approach using stance target information and the wikipedia link-based mea- sure. Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2014, pages 174-179.
Princeton university: About wordnet. Christiane Fellbaum, Christiane Fellbaum. 2010. Princeton university: About wordnet.
Stance detection in web and social media: A comparative study. Shalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra, Saptarshi Ghosh, 10.1007/978-3-030-28577-7_4Experimental IR Meets Multilinguality, Multimodality, and Interaction. ChamSpringer International PublishingShalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra, and Saptarshi Ghosh. 2019. Stance detection in web and social media: A comparative study. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 75-87, Cham. Springer International Publishing.
Unsupervised Stance Classification in Online Debates. Subrata Ghosh, Konjengbam Anand, Sailaja Rajanala, Manish Bharath Reddy, Singh, 10.1145/3152494.3152497Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, CoDS-COMAD '18. the ACM India Joint International Conference on Data Science and Management of Data, CoDS-COMAD '18New York, NY, USA; Goa, IndiaACMSubrata Ghosh, Konjengbam Anand, Sailaja Rajanala, A Bharath Reddy, and Manish Singh. 2018. Unsu- pervised Stance Classification in Online Debates. In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, CoDS-COMAD '18, pages 30-36, New York, NY, USA. ACM. Event-place: Goa, India.
Extralinguistic constraints on stance recognition in ideological debates. Saidul Kazi, Vincent Hasan, Ng, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. the 51st Annual Meeting of the Association for Computational LinguisticsSofia, BulgariaShort Papers2Association for Computational LinguisticsKazi Saidul Hasan and Vincent Ng. 2013. Extra- linguistic constraints on stance recognition in ideo- logical debates. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 816-821, Sofia, Bulgaria. Association for Computational Lin- guistics.
Learning whom to trust with MACE. Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, Eduard Hovy, Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAtlanta, GeorgiaAssociation for Computational LinguisticsDirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.
Mining and summarizing customer reviews. Minqing Hu, Bing Liu, 10.1145/1014052.1014073Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04. the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04New York, NY, USAAssociation for Computing MachineryMinqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, page 168-177, New York, NY, USA. Association for Computing Machinery.
Debate stance classification using word embeddings. Anand Konjengbam, Subrata Ghosh, Nagendra Kumar, Manish Singh, 10.1007/978-3-319-98539-8_29Big Data Analytics and Knowledge Discovery. ChamSpringer International PublishingAnand Konjengbam, Subrata Ghosh, Nagendra Kumar, and Manish Singh. 2018. Debate stance classifica- tion using word embeddings. In Big Data Analytics and Knowledge Discovery, pages 382-395, Cham. Springer International Publishing.
Stance detection: A survey. Dilek Küçük, Fazli Can, 10.1145/3369026ACM Comput. Surv. 531Dilek Küçük and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1).
The Stanford CoreNLP Natural Language Processing Toolkit. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, David Mcclosky, 10.3115/v1/P14-5010Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 52nd Annual Meeting of the Association for Computational Linguistics: System DemonstrationsMarylandAssociation for Computational LinguisticsBaltimoreChristopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.
SemEval-2016 task 6: Detecting stance in tweets. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, Colin Cherry, 10.18653/v1/S16-1003Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). the 10th International Workshop on Semantic Evaluation (SemEval-2016)San Diego, CaliforniaAssociation for Computational LinguisticsSaif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.
Support or oppose? classifying positions in online debates from reply activities and opinion expressions. Akiko Murakami, Rudy Raymond, Coling 2010: Posters. Beijing, ChinaAkiko Murakami and Rudy Raymond. 2010. Sup- port or oppose? classifying positions in online de- bates from reply activities and opinion expressions. In Coling 2010: Posters, pages 869-875, Beijing, China. Coling 2010 Organizing Committee.
Contextual stance classification of opinions: A step towards enthymeme reconstruction in online reviews. Danushka Pavithra Rajendran, Simon Bollegala, Parsons, 10.18653/v1/W16-2804Proceedings of the Third Workshop on Argument Mining (ArgMining2016). the Third Workshop on Argument Mining (ArgMining2016)Berlin, GermanyAssociation for Computational LinguisticsPavithra Rajendran, Danushka Bollegala, and Simon Parsons. 2016. Contextual stance classification of opinions: A step towards enthymeme reconstruction in online reviews. In Proceedings of the Third Work- shop on Argument Mining (ArgMining2016), pages 31-39, Berlin, Germany. Association for Computa- tional Linguistics.
Connotation Frames: A Data-Driven Investigation. Sameer Hannah Rashkin, Yejin Singh, Choi, 10.18653/v1/P16-1030Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, Germany1Association for Computational LinguisticsHannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation Frames: A Data-Driven Investigation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 311-321. Association for Com- putational Linguistics. Event-place: Berlin, Ger- many.
Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Argument Templates. Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui, Proceedings of the 5th Workshop on Argument Mining. the 5th Workshop on Argument MiningBrussels, BelgiumAssociation for Computational LinguisticsPaul Reisert, Naoya Inoue, Tatsuki Kuribayashi, and Kentaro Inui. 2018. Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Ar- gument Templates. In Proceedings of the 5th Work- shop on Argument Mining, pages 79-89, Brussels, Belgium. Association for Computational Linguis- tics.
Stance detection benchmark: How robust is your stance detection?. Benjamin Schiller, Johannes Daxenberger, Iryna Gurevych, Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2020. Stance detection benchmark: How robust is your stance detection?
Detecting stance in tweets and analyzing its interaction with sentiment. Parinaz Sobhani, Saif Mohammad, Svetlana Kiritchenko, 10.18653/v1/S16-2021Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics. the Fifth Joint Conference on Lexical and Computational SemanticsBerlin, GermanyAssociation for Computational LinguisticsParinaz Sobhani, Saif Mohammad, and Svetlana Kir- itchenko. 2016. Detecting stance in tweets and ana- lyzing its interaction with sentiment. In Proceedings of the Fifth Joint Conference on Lexical and Com- putational Semantics, pages 159-169, Berlin, Ger- many. Association for Computational Linguistics.
Recognizing Stances in Online Debates. Swapna Somasundaran, Janyce Wiebe, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLPStroudsburg, PA, USA; Singapore1Association for Computational Linguistics. Event-place: SuntecSwapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing Stances in Online Debates. In Proceed- ings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 -Volume 1, ACL '09, pages 226- 234, Stroudsburg, PA, USA. Association for Com- putational Linguistics. Event-place: Suntec, Singa- pore.
Recognizing stances in ideological on-line debates. Swapna Somasundaran, Janyce Wiebe, Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in TextLos Angeles, CAAssociation for Computational LinguisticsSwapna Somasundaran and Janyce Wiebe. 2010. Rec- ognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Genera- tion of Emotion in Text, pages 116-124, Los Ange- les, CA. Association for Computational Linguistics.
Collective stance classification of posts in online debate forums. Dhanya Sridhar, Lise Getoor, Marilyn Walker, 10.3115/v1/W14-2715Proceedings of the Joint Workshop on Social Dynamics and Personal Attributes in Social Media. the Joint Workshop on Social Dynamics and Personal Attributes in Social MediaBaltimore, MarylandAssociation for Computational LinguisticsDhanya Sridhar, Lise Getoor, and Marilyn Walker. 2014. Collective stance classification of posts in online debate forums. In Proceedings of the Joint Workshop on Social Dynamics and Personal At- tributes in Social Media, pages 109-117, Baltimore, Maryland. Association for Computational Linguis- tics.
Stance detection with hierarchical attention network. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, Guodong Zhou, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsQingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierar- chical attention network. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 2399-2409, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Get out the vote: Determining support or opposition from congressional floor-debate transcripts. Matt Thomas, Bo Pang, Lillian Lee, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. the 2006 Conference on Empirical Methods in Natural Language ProcessingSydney, AustraliaAssociation for Computational LinguisticsMatt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceed- ings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327-335, Sydney, Australia. Association for Computational Linguistics.
Learning sentiment composition from sentiment lexicons. Orith Toledo-Ronen, Roy Bar-Haim, Alon Halfon, Charles Jochim, Amir Menczel, Ranit Aharonov, Noam Slonim, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsOrith Toledo-Ronen, Roy Bar-Haim, Alon Halfon, Charles Jochim, Amir Menczel, Ranit Aharonov, and Noam Slonim. 2018. Learning sentiment com- position from sentiment lexicons. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 2230-2241, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.
. Douglas Walton, Christopher Reed, Fabrizio Macagno, 10.1017/cbo9780511802034Argumentation Schemes. Cambridge University PressDouglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cam- bridge University Press.
A survey on opinion mining: from stance to product aspect. Rui Wang, Deyu Zhou, Mingmin Jiang, Si Jiasheng, Yang Yang, 10.1109/ACCESS.2019.2906754IEEE Access. Rui Wang, Deyu Zhou, Mingmin Jiang, Si Jiasheng, and Yang Yang. 2019. A survey on opinion mining: from stance to product aspect. IEEE Access, PP:1- 1.
Recognizing Contextual Polarity in Phraselevel Sentiment Analysis. Theresa Wilson, Janyce Wiebe, Paul Hoffmann, 10.3115/1220575.1220619Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05. the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05Stroudsburg, PA, USA; British Columbia, CanadaVancouverTheresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase- level Sentiment Analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 347-354, Stroudsburg, PA, USA. Association for Computational Linguistics. Event- place: Vancouver, British Columbia, Canada. |
3,242,607 | Extending corpus-based identification of light verb constructions using a supervised learning framework | Light verb constructions (LVCs), such as "make a call" in English, can be said to be complex predicates in which the verb plays only a functional role. LVCs pose challenges for natural language understanding, as their semantics differ from usual predicate structures. We extend the existing corpus-based measures for identifying LVCs between verb-object pairs in English, by proposing using new features that use mutual information and assess other syntactic properties. Our work also incorporates both existing and new LVC features into a machine learning approach. We experimentally show that using the proposed framework incorporating all features outperforms previous work by 17%. As machine learning techniques model the trends found in training data, we believe the proposed LVC detection framework and statistical features is easily extendable to other languages. | [
332033,
2858590
] | Extending corpus-based identification of light verb constructions using a supervised learning framework
Yee Fan Tan [email protected]
Department of Computer Science
National University of Singapore
3 Science Drive 2, Singapore 117543
Min-Yen Kan
Department of Computer Science
National University of Singapore
3 Science Drive 2, Singapore 117543
Hang Cui [email protected]
Department of Computer Science
National University of Singapore
3 Science Drive 2, Singapore 117543
Extending corpus-based identification of light verb constructions using a supervised learning framework
Light verb constructions (LVCs), such as "make a call" in English, can be said to be complex predicates in which the verb plays only a functional role. LVCs pose challenges for natural language understanding, as their semantics differ from usual predicate structures. We extend the existing corpus-based measures for identifying LVCs between verb-object pairs in English by proposing new features that use mutual information and assess other syntactic properties. Our work also incorporates both existing and new LVC features into a machine learning approach. We experimentally show that the proposed framework incorporating all features outperforms previous work by 17%. As machine learning techniques model the trends found in training data, we believe the proposed LVC detection framework and statistical features are easily extendable to other languages.
Introduction
Many applications in natural language processing rely on the relationships between words in a document. Verbs play a central role in many such tasks; for example, the assignment of semantic roles to noun phrases in a sentence heavily depends on the verb that link the noun phrases together (as in "Pierre Vinken/SUBJ, will join/PRED, the board/OBJ").
However, verb processing is difficult because of many phenomena, such as normalization of actions, verb particle constructions and light verb constructions. Applications that process verbs must handle these cases effectively. We focus on the identification of light verb constructions (also known as support verb constructions) in English, as such constructions play a prominent and productive role in many other languages (Butt and Geuder, 2001;Miyamoto, 2000). Although the exact definition of a LVC varies in the literature, we use the following operational definition:
A light verb construction (LVC) is a verb-complement pair in which the verb has little lexical meaning (is "light") and much of the semantic content of the construction is obtained from the complement.
Examples of LVCs in English include "give a speech", "make good (on)" and "take (NP) into account". In the case in which the complement is a noun, it is often a deverbal noun and, as such, can usually be paraphrased using the object's root verb form without (much) loss in its meaning (e.g., take a walk → walk, make a decision → decide, give a speech → speak).
We propose a corpus-based approach to determine whether a verb-object pair is a LVC. Note that we limit the scope of LVC detection to LVCs consisting of verbs with noun complements. Specifically, we extend previous work done by others by examining how the local context of the candidate construction and the corpus-wide frequency of related words to the construction play an influence on the lightness of the verb.
A second contribution is to integrate our new features with previously reported ones under a machine learning framework. This framework optimizes the weights for these measures automatically against a training corpus in supervised learning, and attests to the significant modeling improvements of our features on our corpus. Our corpus-based evaluation shows that the combination of previous work and our new features improves LVC detection significantly over previous work.
An advantage gained by adopting a machine learning framework is that it can be easily adapted to other languages that also exhibit light verbs. While we perform evaluations on English, light verbs exist in most other languages. In some of these languages, such as Persian, most actions are expressed as LVCs rather than single-word verbs (Butt, 2003). As such, there is currently an unmet demand for developing an adaptable framework for LVC detection that applies across languages. We believe the features proposed in this paper would also be effective in identifying light verbs in other languages.
We first review previous corpus-based approaches to LVC detection in Section 2. In Section 3, we show how we extend the use of mutual information and employ context modeling as features for improved LVC detection. We next describe our corpus processing and how we compiled our gold standard judgments used for supervised machine learning. In Section 4, we evaluate several feature combinations before concluding the paper.
Related Work
With the recent availability of large corpora, statistical methods that leverage syntactic features are a current trend. This is the case for LVC detection as well. Grefenstette and Teufel (1995) considered a similar task of identifying the most probable light verb for a given deverbal noun. Their approach focused on the deverbal noun and occurrences of the noun's verbal form, arguing that the deverbal noun retains much of the verbal characteristics in the LVCs. To distinguish the LVC from other verb-object pairs, the deverbal noun must share similar argument/adjunct structures with its verbal counterpart. Verbs that appear often with these characteristic deverbal noun forms are deemed light verbs. They approximate the identification of argument/adjunct structures by using the preposition head of prepositional phrases that occur after the verb or object of interest.
Let n be a deverbal noun whose most likely light verb is to be found. Denote its verbal form by v′, and let P be the set containing the three most frequently occurring prepositions that occur after v′. The verb-object pairs that are not followed by a preposition in P are filtered out. For any verb v, let g(v, n) be the count of verb-object pairs v-n that remain after the filtering step above. Grefenstette and Teufel proposed that the light verb for n be returned by the following equation:
GT95(n) = arg max_v g(v, n)    (1)
Interestingly, Grefenstette and Teufel indicated that their subsequent experiments suggested that the filtering step may not be necessary.
Whereas the GT95 measure centers on the deverbal object, Dras and Johnson (1996) also consider the verb's corpus frequency. The use of this complementary information improves LVC identification, as it models the inherent bias of some verbs to be used more often as light verbs than others. Let f(v, n) be the count of verb-object pairs occurring in the corpus, such that v is the verb and n is a deverbal noun. Then, the most probable light verb for n is given by:

DJ96(n) = arg max_v [ f(v, n) / Σ_{n'} f(v, n') ]    (2)

Stevenson et al. (2004)'s research examines evidence from constructions featuring determiners. They focused on expressions of the form v-a-n and v-det-n, where v is a light verb, n is a deverbal noun, a is an indefinite determiner (namely, "a" or "an"), and det is any determiner other than the indefinite. Examples of such constructions are "give a speech" and "take a walk". They employ mutual information, which measures the frequency of co-occurrences of two variables, corrected for random agreement. Let I(x, y) be the mutual information between x and y. Then the following measure can be used:

SFN04(v, n) = 2 × I(v, a-n) − I(v, det-n),    (3)
where higher values indicate a higher likelihood of v-a-n being a light verb construction. Also, they suggested that the determiner "the" be excluded from the development data since it frequently occurred in their data.
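For concreteness, the following is a minimal sketch (not the authors' original implementation) of how the GT95 and DJ96 rankings of Equations 1 and 2 can be computed from verb-object pair counts. The counts below are invented for illustration, and the preposition-based filtering step of GT95 is omitted.

```python
from collections import defaultdict

# Toy verb-object pair counts f(v, n); invented for illustration only.
pair_counts = {("make", "decision"): 120, ("take", "decision"): 15,
               ("make", "profit"): 80, ("take", "walk"): 60, ("buy", "share"): 300}

def gt95(n, counts):
    """GT95 (Eq. 1): return the verb with the highest raw co-occurrence count with n.
    The preposition filtering step is omitted here, i.e. f(v, n) stands in for g(v, n)."""
    candidates = {v: c for (v, obj), c in counts.items() if obj == n}
    return max(candidates, key=candidates.get)

def dj96(n, counts):
    """DJ96 (Eq. 2): normalise f(v, n) by the verb's total frequency over all objects."""
    verb_totals = defaultdict(int)
    for (v, _), c in counts.items():
        verb_totals[v] += c
    candidates = {v: c / verb_totals[v] for (v, obj), c in counts.items() if obj == n}
    return max(candidates, key=candidates.get)

print(gt95("decision", pair_counts), dj96("decision", pair_counts))
```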
Recently, Fazly et al. (2005) have proposed a statistical measure for the detection of LVCs. The probability that a verb-object pair v-n (where v is a light verb) is a LVC can be expressed as a product of three probabilities: (1) the probability of the object n occurring in the corpus, (2) the probability that n is part of any LVC given n, and (3) the probability of v occurring given n and that v-n is a LVC. Each of these three probabilities can then be estimated by the frequency of occurrence in the corpus, using the assumption that all instances of v′-a-n are LVCs, where v′ is any light verb and a is an indefinite determiner.
To summarize, research in LVC detection started by developing single measures that utilized simple frequency counts of verbs and their complements. From this starting point, research has developed in two different directions: using more informed measures for word association (specifically, mutual information) and modeling the context of the verb-complement pair.
Both the GT95 and DJ96 measures suffer from using frequency counts directly. Verbs that are not light but occur very frequently (such as "buy" and "sell" in the Wall Street Journal) will be marked as light by these measures. As such, given a deverbal noun, they sometimes suggest verbs that are not light. We hypothesize that substituting MI for frequency count can alleviate this problem.
The SFN04 metric adds in the context provided by determiners to augment LVC detection. This measure may work well for LVCs that are marked by determiners, but excludes a large portion of LVCs that are composed without determiners. To design a robust LVC detector requires integrating such specific contextual evidence with other general evidence.
Building on this, Fazly et al. (2005) incorporate an estimation of the probability that a certain noun is part of a LVC. However, like SFN04, LVCs without determiners are excluded.
Framework and Features
Previous work has shown that different measures based on corpus statistics can assist in LVC detection. However, it is not clear to what degree these different measures overlap and can be used to reinforce each other's results. We solve this problem by viewing LVC detection as a supervised classification problem. Such a framework can integrate the various measures and enable us to test their combinations in a generic manner. Specifically, each verb-object pair constitutes an individual classification instance, which possesses a set of features f_1, ..., f_n and is assigned a class label from the binary classification of {LVC, ¬LVC}.
In such a machine learning framework, each of the aforementioned metrics are separate features.
In our work, we have examined three different sets of features for LVC classification: (1) base, (2) extended and (3) new features. We start by deriving three base features from key LVC detection measures as described by previous work -GT95, DJ96 and SFN04. As suggested in the previous section, we can make alternate formulations of the past work, such as to discard a pre-filtering step (i.e. filtering of constructions that do not include the top three most frequent prepositions). These measures make up the extended feature set. The third set of features are new and have not been used for LVC identification before. These include features that further model the influence of context (e.g. prepositions after the object) in LVC detection.
Base Features
These features are based on the original previous work discussed in Section 2, but have been adapted to give a numeric score. We use the initials of the original authors without year of publication to denote our derived base features.
Recall that the aim of the original GT95 and DJ96 formulae is to rank the possible support verbs given a deverbal noun. As each of these formulae contains a function which returns a numeric score inside the arg max_v, we use these functions as two of our base features:

GT(v, n) = g(v, n)    (4)

DJ(v, n) = f(v, n) / Σ_{n'} f(v, n')    (5)
The SFN04 measure can be used without modification as our third base feature, and it will be referred to as SFN for the remainder of this paper.
Extended Features
Since Grefenstette and Teufel indicated that the filtering step might not be necessary, i.e., f (v, n) may be used instead of g(v, n), we also have the following extended feature:
FREQ(v, n) = f(v, n)    (6)
In addition, we experiment with the reverse process for the DJ feature, i.e., to replace f (v, n) in the function for DJ with g(v, n), yielding the following extended feature:
DJ-FILTER(v, n) = g(v, n) / Σ_{n'} g(v, n')    (7)
In Grefenstette and Teufel's experiments, they used the top three prepositions for filtering. We further experiment with using all possible prepositions.
New Features
In our new feature set, we introduce features that we feel better model the v and n components as well as their joint occurrences v-n. We also introduce features that model the v-n pair's context, in terms of deverbal counts, derived from our understanding of LVCs.
Most of these new features we propose are not good measures for LVC detection by themselves. However, the additional evidence that they give can be combined with the base features to create a better composite classification system.
Mutual information: We observe that a verb v and a deverbal noun n are more likely to appear in verb-object pairs if they can form a LVC. To capture this evidence, we employ mutual information to measure the co-occurrences of a verb and a noun in verb-object pairs. Formally, the mutual information between a verb v and a deverbal noun n is defined as
I(v, n) = log_2 [ P(v, n) / (P(v) P(n)) ],    (8)
where P (v, n) denotes the probability of v and n constructing verb-object pairs. P (v) is the probability of occurrence of v and P (n) represents the probability of occurrence of n. Let f (v, n) be the frequency of occurrence of the verb-object pair v-n and N be the number of all verb-object pairs in the corpus. We can estimate the above probabilities using their maximum likelihood estimates:
P(v, n) = f(v, n) / N, P(v) = Σ_{n'} f(v, n') / N, and P(n) = Σ_{v'} f(v', n) / N.
However, I(v, n) only measures the local information of co-occurrences between v and n. It does not capture the global frequency of verb-object pair v-n, which is demonstrated as effective by Dras and Johnson (1996). As such, we need to combine the local mutual information with the global frequency of the verb-object pair. We thus create the following feature, where the log function is used to smooth frequencies:

MI-LOGFREQ = I(v, n) × log_2 f(v, n)    (9)
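A minimal sketch of Equations 8-9, computed from raw verb-object pair counts with maximum likelihood estimates, is given below. The toy counts are invented and this is not the authors' code.

```python
import math

def mi_logfreq(v, n, pair_counts):
    """MI-LOGFREQ (Eq. 9): mutual information between v and n over verb-object
    pairs (Eq. 8), estimated by maximum likelihood, weighted by log2 f(v, n)."""
    N = sum(pair_counts.values())
    f_vn = pair_counts.get((v, n), 0)
    if f_vn == 0:
        return 0.0
    f_v = sum(c for (verb, _), c in pair_counts.items() if verb == v)
    f_n = sum(c for (_, noun), c in pair_counts.items() if noun == n)
    mi = math.log2((f_vn / N) / ((f_v / N) * (f_n / N)))
    return mi * math.log2(f_vn)

# Toy counts, invented for illustration only.
pairs = {("make", "decision"): 120, ("make", "profit"): 80, ("take", "decision"): 15}
print(mi_logfreq("make", "decision", pairs))
```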
Deverbal counts: Suppose a verb-object pair v-n is a LVC; then the object n should be a deverbal noun. We denote v′ to be the verbalized form of n. We thus expect that v-n should express the same semantic meaning as that of v′. However, verb-object pairs such as "have time" and "have right" in English scored high by the DJ and MI-LOGFREQ measures, even though the verbalized forms of their objects, i.e., "time" and "right", do not express the same meaning as the verb-object pairs do. This is corroborated by Grefenstette and Teufel's claim that if a verb-object pair v-n is a LVC, then n should share similar properties with v′. Based on our empirical analysis on the corpus using a small subset of LVCs, we believe that:
1. The frequencies of n and v′ should not differ very much, and 2. Both frequencies are high, given the fact that LVCs occur frequently in the text.
The first observation is true in our corpus, where light verb and verbalized forms are freely interchangeable in contexts. Then, let us denote the frequencies of n and v′ to be f(n) and f(v′) respectively. We devise a novel feature based on the hypotheses:
[ min(f(n), f(v′)) / max(f(n), f(v′)) ] × min(f(n), f(v′))    (10)
where the two terms correspond to the above two hypotheses respectively. A higher score from this metric indicates a higher likelihood of the compound being a LVC. Light verb classes: Linguistic studies of light verbs have indicated that verbs of specific semantic character are much more likely to participate in LVCs (Wang, 2004; Miyamoto, 2000; Butt, 2003; Bjerre, 1999). Such characteristics have been shown to be cross-language and include verbs that indicate (change of) possession (Danish give, 'to give'), direction (Chinese guan diao, 'to switch off'), aspect and causation, or are thematically incomplete (Japanese suru, 'to do'). As such, it makes sense to have a list of verbs that are often used lightly. In our work, we have predefined a light verb list for our English experiment as exactly the following seven verbs: "do", "get", "give", "have", "make", "put" and "take", all of which have been studied as light verbs in the literature. We thus define a feature that considers the verb in the verb-object pair: if the verb is in the predefined light verb list, the feature value is the verb itself; otherwise, the feature value is another default value.
One may ask whether this feature is necessary, given the various features used to measure the frequency of the verb. As all of the other metrics are corpus-based, they rely on the corpus to be a representative sample of the source language. Since we extract the verb-object pairs from the Wall Street Journal section of the Penn Treebank, terms like "buy", "sell", "buy share" and "sell share" occur so frequently in the corpus that verb-object pairs such as "buy share" and "sell share" are ranked high by most of the measures. However, "buy" and "sell" are not considered as light verbs. In addition, the various light verbs have different behaviors. Despite their lightness, different light verbs combined with the same noun complement often give different semantics, and hence affect the lightness of the verb-object pair. For example, one may say that "make copy" is lighter than "put copy". Incorporating this small amount of linguistic knowledge into our corpus-based framework can enhance performance.
Other features: In addition to the above features, we also used the following features: the determiner before the object, the adjective before the object, the identity of any preposition immediately following the object, the length of the noun object (if a phrase) and the number of words between the verb and its object. These features did not improve performance significantly, so we have omitted a detailed description of these features.
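As an illustration, the sketch below assembles two of the new features for a single verb-object pair: the deverbal-count feature of Equation 10 and the light verb class feature. All frequencies and values are invented, and this is not the authors' implementation.

```python
LIGHT_VERBS = {"do", "get", "give", "have", "make", "put", "take"}

def deverbal_count_feature(f_n, f_v_prime):
    """Eq. 10: high when the noun n and its verbal form v' have similar, high frequencies."""
    lo, hi = min(f_n, f_v_prime), max(f_n, f_v_prime)
    return (lo / hi) * lo if hi > 0 else 0.0

def light_verb_class_feature(verb):
    """The verb itself if it is in the predefined light verb list, a default value otherwise."""
    return verb if verb in LIGHT_VERBS else "OTHER"

# Invented frequencies for the noun "decision" (f(n)) and its verbal form "decide" (f(v')).
print(deverbal_count_feature(f_n=135, f_v_prime=150))   # similar, high frequencies -> high score
print(deverbal_count_feature(f_n=500, f_v_prime=10))    # very different frequencies -> low score
print(light_verb_class_feature("make"), light_verb_class_feature("buy"))
```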
Evaluation
In this section, we report the details of our experimental settings and results. First, we show how we constructed our labeled LVC corpus, used as the gold standard in both training and testing under cross validation. Second, we describe the evaluation setup and discuss the experimental results obtained based on the labeled data.
Data Preparation
Some of the features rely on a correct sentence parse. In order to minimize this source of error, we employ the Wall Street Journal section in the Penn Treebank, which has been manually parsed by linguists. We extract verb-object pairs from the Penn Treebank corpus and lemmatize them using WordNet's morphology module. As a filter, we require that a pair's object be a deverbal noun to be considered as a LVC. Specifically, we use WordNet to check whether a noun has a verb as one of its derivationally-related forms. A total of 24,647 candidate verb-object pairs are extracted, of which 15,707 are unique.
As the resulting dataset is too large for complete manual annotation given our resources, we sample the verb-object pairs from the extracted set. As most verb-object pairs are not LVCs, random sampling would provide very few positive LVC instances, and thus would adversely affect the training of the classifier due to sparse data. Our aim in the sampling is to have balanced numbers of potential positive and negative instances. Based on the 24,647 verb-object pairs, we count the corpus frequencies of each verb v and each object n, denoted as f (v) and f (n). We also calculate the DJ score of the verb-object pair DJ(v, n) by counting the pair frequencies. The data set is divided into 5 bins using f (v) on a linear scale, 5 bins using f (n) on a linear scale and 4 bins using DJ(v, n) on a logarithmic scale. 1 We cross-multiply these three factors to generate 5 × 5 × 4 = 100 bins. Finally, we uniformly sampled 2,840 verb-object pairs from all the bins to construct the data set for labeling.
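The bin-and-sample procedure above can be sketched as follows. The statistics are invented, and the exact bin boundaries used by the authors are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_bins(values, n_bins, log_scale=False):
    """Assign each value to one of n_bins on a linear or logarithmic scale."""
    v = np.asarray(values, dtype=float)
    if log_scale:
        v = np.log10(v + 1e-9)
    edges = np.linspace(v.min(), v.max(), n_bins + 1)[1:-1]   # inner bin edges
    return np.digitize(v, edges)

# Invented statistics for a handful of verb-object pairs.
pairs = [("make", "decision"), ("take", "walk"), ("buy", "share"), ("give", "speech")]
f_v = [200, 75, 300, 50]        # verb frequencies
f_n = [135, 60, 310, 40]        # object frequencies
dj  = [0.60, 0.20, 0.90, 0.40]  # DJ(v, n) scores

# 5 x 5 x 4 = 100 possible bins: group pairs by bin, then sample uniformly from each bin.
keys = zip(assign_bins(f_v, 5), assign_bins(f_n, 5), assign_bins(dj, 4, log_scale=True))
buckets = {}
for pair, key in zip(pairs, keys):
    buckets.setdefault(key, []).append(pair)
sampled = [bucket[rng.integers(len(bucket))] for bucket in buckets.values()]
print(sampled)
```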
Annotation
As noted by many linguistic studies, the verb in a LVC is often not completely vacuous, as it can serve to emphasize the proposition's aspect, its argument's semantics (cf., θ roles) (Miyamoto, 2000), or other functions (Butt and Geuder, 2001). As such, previous computational research had proposed that the "lightness" of a LVC might be best modeled as a continuum as opposed to a binary class (Stevenson et al., 2004). We have thus annotated for two levels of lightness in our annotation of the verb-object pairs. Since the purpose of the work reported here is to flag all such constructions, we have simplified our task to a binary decision, similar to most other previous corpus-based work.
A website was set up for the annotation task, so that annotators can participate interactively. For each selected verb-object pair, a question is constructed by displaying the sentence where the verb-object pair is extracted, as well as the verbobject pair itself. The annotator is then asked whether the presented verb-object pair is a LVC given the context of the sentence, and he or she will choose from the following options: (1) Yes,
(2) Not sure, (3) No. The following three sentences illustrate the options.
(1) Yes -A Compaq Computer Corp. spokeswoman said that the company hasn't made a decision yet, although "it isn't under active consideration."
(2) Not Sure -Besides money, criminals have also used computers to steal secrets and intelligence, the newspaper said, but it gave no more details.
(3) No -But most companies are too afraid to take that chance.
The three authors, all natural language processing researchers, took part in the annotation task, and we asked all three of them to annotate on the same data. In total, we collected annotations for 741 questions. The average correlation coefficient between the three annotators is r = 0.654, which indicates fairly strong agreement between the annotators. We constructed the gold standard data by considering the median of the three annotations for each question. Two gold standard data sets are created:
• Strict -In the strict data set, a verb-object pair is considered to be a LVC if the median annotation is 1.
• Lenient -In the lenient data set, a verbobject pair is considered to be a LVC if the median annotation is either 1 or 2.
Each of the strict and lenient data sets have 741 verb-object pairs.
Experiments
We have two aims for the experiments: (1) to compare between the various base features and the extended features, and (2) to evaluate the effectiveness of our new features. Using the Weka data mining toolkit (Witten and Frank, 2000), we have run a series of experiments with different machine learning algorithms. However, since the focus of the experiments is to determine which features are useful and not to evaluate the machine learners, we report the results achieved by the best single classifier without additional tuning, the random forest classifier (Breiman, 2001). Stratified ten-fold cross-validation is performed. The evaluation criterion used is the F1-measure on the LVC class, which is defined as
F_1 = 2PR / (P + R),    (11)

where P and R are the precision and recall for the LVC class respectively.
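The experiments were run in Weka; as a rough, non-equivalent sketch, the same evaluation protocol (random forest, stratified ten-fold cross-validation, F1 on the LVC class) could look as follows with scikit-learn. The feature matrix and labels below are random stand-ins, not the actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

# X: one row of feature values per verb-object pair, y: 1 for LVC, 0 otherwise.
# Both are assumed to have been built beforehand; random data stands in here.
rng = np.random.default_rng(0)
X = rng.random((741, 6))
y = rng.integers(0, 2, size=741)

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    # F1 on the positive (LVC) class, as in Eq. 11.
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), pos_label=1))
print(np.mean(scores))
```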
Base and Extended Features
Regarding the first aim, we compare the base features against their extended counterparts. We first present the results for the base features and the extended features in Table 1. From these results, we make the following observations:
• Overall, DJ and DJ-FILTER perform better than GT and FREQ. This is consistent with the results by Dras and Johnson (1996).
• The results for both GT/FREQ and DJ show that filtering using preposition does not impact performance significantly. We believe that the main reason for this is that the filtering process causes information to be lost. 163 of the 741 verb-object pairs in the corpus do not have a preposition following the object and hence cannot be properly classified using the features with filtering.
• The SFN metric does not appear to work with our corpus. We suspect that it requires a far larger corpus than our corpus of 24,647 verbobject pairs to work. Stevenson et al. (2004) have used a corpus whose estimated size is at least 15.7 billion, the number of hits returned in a Google search for the query "the" as of February 2006. The large corpus requirement is thus a main weakness of the SFN metric.
New Features
We now evaluate the effectiveness of our class of new features. Here, we do not report results of classification using only the new features, because these features alone are not intended to constitute a stand-alone measure of the lightness. As such, we evaluate these new features by adding them on top of the base features. We first construct a full feature set by utilizing the base features (GT, DJ and SFN) and all the new features. We chose not to add the extended features to the full feature set because these extended features are not independent of the base features. Next, to show the effectiveness of each new feature individually, we remove it from the full feature set and show the performance of the classifier without it.

Table 2: F1-measures of the various feature combinations for our evaluation.

Table 2 shows the resulting F1-measures when using various sets of features in our experiments. 2 We make the following observations:
• The combinations of features outperform the individual features. We observe that using individual base features alone can achieve the highest F1-measure of 0.491 on the strict data set and 0.616 on the lenient data set respectively. When applying the combination of all base features, the F1-measures on both data sets increased to 0.537 and 0.676 respectively.
Previous work has mainly studied individual statistics in identifying LVCs while ignoring the integration of various statistics. The results demonstrate that integrating different statistics (i.e. features) boosts the performance of LVC identification. More importantly, we employ an off-the-shelf classifier without special parameter tuning. This shows that generic machine learning methods can be applied to the problem of LVC detection. It provides a sound way to integrate various features to improve the overall performance.
• Our new features boost the overall performance. Applying the newly proposed features on top of the base feature set, i.e., using the full feature set, gives F1-measures of 0.576 and 0.689 respectively (shown in bold) in our experiments. These yield a significant increase (p < 0.1) over using the base features only. Further, when we remove each of the new features individually from the full feature set, we see a corresponding drop in the F1-measures, of 0.011 (deverbal counts) to 0.044 (light verb classes) for the strict data set, and 0.013 (deverbal counts) to 0.049 (light verb classes) for the lenient data set. It shows that these new features boost the overall performance of the classifier. We think that these new features are more task-specific and examine intrinsic features of LVCs. As such, integrated with the statistical base features, these features can be used to identify LVCs more accurately. It is worth noting that light verb class is a simple but important feature, providing the highest F1-measure improvement compared to other new features. This is in accordance with the observation that different light verbs have different properties (Stevenson et al., 2004).
Conclusions
Multiword expressions (MWEs) are a major obstacle that hinder precise natural language processing (Sag et al., 2002). As part of MWEs, LVCs remain least explored in the literature of computational linguistics. Past work addressed the problem of automatically detecting LVCs by employing single statistical measures. In this paper, we experiment with identifying LVCs using a machine learning framework that integrates the use of various statistics. Moreover, we have extended the existing statistical measures and established new features to detect LVCs. Our experimental results show that the integrated use of different features in a machine learning framework performs much better than using any of the features individually. In addition, we experimentally show that our newly-proposed features greatly boost the performance of classifiers that use base statistical features. Thus, our system achieves state-of-the-art performance over previous approaches for identifying LVCs. As such, we suggest that future work on automatic detection of LVCs employ a machine learning framework that combines complementary features, and examine intrinsic features that characterize the local context of LVCs to achieve better performance.
While we have experimentally shown the effectiveness of the proposed framework incorporating existing and new features for LVC detection on an English corpus, we believe that the features we have introduced are generic and apply to LVC detection in other languages. The reason is threefold:
1. Mutual information is a generic metric for measuring co-occurrences of light verbs and their complements. Such co-occurrences are often an obvious indicator for determining light verbs because light verbs are often coupled with a limited set of complements. For instance, in Chinese, directional verbs, such as xia (descend) and dao (reach), which are often used lightly, are often co-located with a certain class of verbs that are related to people's behaviors.
2. For LVCs with noun complements, most of the semantic meaning of a LVC is expressed by the object. This also holds for other languages, such as Chinese. For example, in Chinese, zuo xuanze (make a choice) and zuo jueding (make a decision) have the word zuo (make) acting as a light verb and xuanze (choice) or jueding (decision) acting as a deverbal noun (Wang, 2004). Therefore, the feature of deverbal count should also be applicable for other languages.
3. It has been observed that in many languages, light verbs tend to be a set of closed class verbs. This allows us to use a list of predefined verbs that are often used lightly as a feature which helps distinguish between light and non-light verbs when used with the same noun complement. The identity of such verbs has been shown to be largely independent of language, and corresponds to verbs that transmit information about possession, direction, aspect and causation.
Table 1: F1-measures of base features and extended features.
Binning is the process of grouping measured data into data classes or histogram bins.
For the strict data set, the base feature set has a precision and recall of 0.674 and 0.446 respectively, while the full feature set has a precision and recall of 0.642 and 0.523 respectively. For the lenient data set, the base feature set has a precision and recall of 0.778 and 0.598 respectively, while the full feature set has a precision and recall of 0.768 and 0.624 respectively.
T. Bjerre. 1999. Event structure and support verb constructions. In 4th Student Session of European Summer School on Logic, Language and Information 1999. Universiteit Utrecht Press, Aug, 1999.
L. Breiman. 2001. Random forests. Machine Learning, 45(1):5-32, Oct, 2001.
M. Butt and W. Geuder. 2001. On the (semi)lexical status of light verbs. In Semi-lexical Categories, pages 323-370. Mouton de Gruyter.
M. Butt. 2003. The light verb jungle. In Workshop on Multi-Verb Constructions.
M. Dras and M. Johnson. 1996. Death and lightness: Using a demographic model to find support verbs. In 5th International Conference on the Cognitive Science of Natural Language Processing.
A. Fazly, R. North, and S. Stevenson. 2005. Automatically distinguishing literal and figurative usages of highly polysemous verbs. In ACL 2005 Workshop on Deep Lexical Acquisition, pages 38-47.
G. Grefenstette and S. Teufel. 1995. A corpus-based method for automatic identification of support verbs for nominalizations. In EACL '95.
T. Miyamoto. 2000. The Light Verb Construction in Japanese. The role of the verbal noun. John Benjamins.
I. Sag, T. Baldwin, F. Bond, A. Copestake, and D. Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Lecture Notes in Computer Science, volume 2276, Jan, 2002.
S. Stevenson, A. Fazly, and R. North. 2004. Statistical measures of the semi-productivity of light verb constructions. In 2nd ACL Workshop on Multiword Expressions: Integrating Processing, pages 1-8.
L. Wang. 2004. A corpus-based study of mandarin verbs of doing. Concentric: Studies in Linguistics, 30(1):65-85, Jun, 2004.
I. Witten and E. Frank. 2000. Data Mining: Practical machine learning tools with Java implementations. Morgan Kaufmann.
258,378,311 | An In-depth Analysis of Implicit and Subtle Hate Speech Messages | The research carried out so far in detecting abusive content in social media has primarily focused on overt forms of hate speech. While explicit hate speech (HS) is more easily identifiable by recognizing hateful words, messages containing linguistically subtle and implicit forms of HS (as circumlocution, metaphors and sarcasm) constitute a real challenge for automatic systems. While the sneaky and tricky nature of subtle messages might be perceived as less hurtful with respect to the same content expressed clearly, such abuse is at least as harmful as overt abuse. In this paper, we first provide an in-depth and systematic analysis of 7 standard benchmarks for HS detection, relying on a fine-grained and linguistically-grounded definition of implicit and subtle messages. Then, we experiment with state-of-the-art neural network architectures on two supervised tasks, namely implicit HS and subtle HS message classification. We show that while such models perform satisfactory on explicit messages, they fail to detect implicit and subtle content, highlighting the fact that HS detection is not a solved problem and deserves further investigation.Jane Frank. 1990. You call that a rhetorical question?:Forms and functions of rhetorical questions in conversation. Journal of Pragmatics, 14(5):723-738. | [
233189561,
235097499,
67856299,
12719479
] | An In-depth Analysis of Implicit and Subtle Hate Speech Messages
1989-2005 May 2-6, 2023
Nicolas Ocampo [email protected]
Universite Côte d'Azur
CNRS
I3SInriaFrance
Ekaterina Sviridova [email protected]@univ-cotedazur.fr
Universite Côte d'Azur
CNRS
I3SInriaFrance
Elena Cabrio
Universite Côte d'Azur
CNRS
I3SInriaFrance
Serena Villata [email protected]
Universite Côte d'Azur
CNRS
I3SInriaFrance
An In-depth Analysis of Implicit and Subtle Hate Speech Messages
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
the 17th Conference of the European Chapter of the Association for Computational Linguistics1989-2005 May 2-6, 2023
The research carried out so far in detecting abusive content in social media has primarily focused on overt forms of hate speech. While explicit hate speech (HS) is more easily identifiable by recognizing hateful words, messages containing linguistically subtle and implicit forms of HS (as circumlocution, metaphors and sarcasm) constitute a real challenge for automatic systems. While the sneaky and tricky nature of subtle messages might be perceived as less hurtful with respect to the same content expressed clearly, such abuse is at least as harmful as overt abuse. In this paper, we first provide an in-depth and systematic analysis of 7 standard benchmarks for HS detection, relying on a fine-grained and linguistically-grounded definition of implicit and subtle messages. Then, we experiment with state-of-the-art neural network architectures on two supervised tasks, namely implicit HS and subtle HS message classification. We show that while such models perform satisfactory on explicit messages, they fail to detect implicit and subtle content, highlighting the fact that HS detection is not a solved problem and deserves further investigation.
Introduction
The rising mass of communication through social media further exacerbates harmful consequences of online hate speech. As a result, social media have faced mounting pressure from civil rights groups demanding that they ramp up their enforcement of anti-hate speech policies, so as to monitor and limit this kind of content. In the latest years, numerous methods have been developed to automatically identify this type of utterances expressing hateful or abusive content on social media using Natural Language Processing methods. A variety of datasets have also been built, exemplifying various manifestations of this harmful content (Poletto et al., 2021). However, most of the research carried out so far on this topic has focused on overt forms of hate speech. Explicit hate speech is more easily identifiable by recognizing a clearly hateful word or phrase. Only recently, a few works (Hartvigsen et al., 2022; Wiegand et al., 2021a, 2022; ElSherief et al., 2021; Jurgens et al., 2019; Waseem et al., 2017) have started to focus on implicitness, where circumlocution, metaphor, or stereotypes are used to intentionally convey hatred towards a particular group. In those messages, hatefulness can be captured only by understanding their global meaning, as well as contextual information.
In this paper, we carry out an in-depth analysis of implicit HS in standard benchmarks for HS detection. Additionally, we define the notion of Subtle HS, which puts forward hateful meanings elusively, relying on human perception and on the use of complex syntactic structures. In our study, we collect messages from 7 available datasets for HS detection that cover different topics and are extracted from different social media platforms, and we enrich them with the following three-layer annotation: HS/non HS, Explicit/Implicit and Subtle/Non Subtle. We also provide a fine-grained annotation for implicit HS messages with 18 implicit properties such as irony, exaggeration, metaphor, and rhetorical question, among others. The newly created resource named ISHate (Implicit and Subtle Hate speech) provides a rich and variegated benchmark for pushing forward research on implicit and subtle hateful messages, and constitutes a challenging test-bed to evaluate computational approaches. 1 Additionally, we evaluate SOTA and competitive baseline classifiers to detect both implicit and subtle HS in ISHate, showing that current methods fail to effectively detect implicit and subtle HS messages due to their peculiar nature. NOTE: This paper contains examples of language which may be offensive to some readers. They do not represent the views of the authors.
Related Work
In the latest years, there has been significant research on abusive language and hate speech detection using Natural Language Processing (NLP) methods (e.g., Xu et al. (2012); Dadvar et al. (2013); Poletto et al. (2021); Bohra et al. (2018); Corazza et al. (2020); Zampieri et al. (2019a); Caselli et al. (2020, 2021)). A few works focus on subtypes of HS, such as Warner and Hirschberg (2012) that tackles the recognition of antisemitism, or Waseem and Hovy (2016); Badjatiya et al. (2017); Gambäck and Sikdar (2017) that investigate predictive features to identify HS in the form of racism and sexism. In this context, several challenges and shared tasks have also been organized over the years, that made datasets and resources for multiple languages available (for a survey, see Poletto et al. (2021)). Research studies carried out so far have mostly focused on overt forms of hate speech, while very few works address the issue of implicit and subtle HS (ElSherief et al., 2021). However, several works show awareness of the problem. For instance, Warner and Hirschberg (2012) and Xu et al. (2012) discuss systems' limitations in identifying HS messages which are ambiguous, have patterns of emotional speech or lack context. Zhang and Luo (2018) and Corazza et al. (2020) highlight the complexity of recognizing hateful messages when the meaning is conveyed through sarcasm, stereotypes, complex syntactic structure, or non-explicit lexical patterns.
Among the few studies that attempted to address the issues of implicit and subtle detection, Caselli et al. (2020) defines a shared task to detect implicit and explicit abusive messages from AbusEval, a reannotated dataset based on OLID/OffensEval (Zampieri et al., 2019a). Benikova et al. (2018) paraphrases German HS tweets obtaining implicit and explicit messages to study classification methods. Dadvar et al. (2013) shows how taking user context improves cyberbullying detection with neither explicit profanities nor apparent neutral emotions. Jurgens et al. (2019) and Waseem et al. (2017) explain why explicitness, implicitness, and subtlety are typologies of abusiveness and encourage researchers to develop proactive technologies in this area. ElSherief et al. (2021) introduces a taxonomy of implicit hate speech and a benchmark corpus with fine-grained labels for each message. Hartvigsen et al. (2022) proposes a large-scale approach to automatically generate benign and implicit HS statements through the language model GPT3. Wiegand et al. (2021a, 2022) propose resources to tackle implicitly abusive comparisons and abusive remarks, which are two subtypes of implicitness. Wiegand et al. (2021b) explains the key issues in the implicit abuse detection, as well as possible future directions to explore. Even though these studies set the basis to explore and model the issue of implicit HS, there is still large room for improvement, both in terms of creating adequate resources to investigate these aspects and in terms of computational approaches to address them.
HS, Implicitness and Subtlety
Hate Speech is defined as a direct attack against people - rather than concepts or institutions - based on protected characteristics (PC): race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and severe disease (Meta, 2022). We also encompass in the concept refugees, migrants, immigrants and asylum seekers with respect to the most severe attacks, as well as occupations when they are referenced along with PC, though commentary and criticism of immigration policies are excluded from HS. Following this definition, HS differs from related concepts mainly since it is specifically oriented to groups of people with PC as the main target (Poletto et al., 2021). In the following, we provide clear and operational definitions of explicit, implicit and subtle HS. Reported examples are extracted from the White Supremacy Forum (WSF) (de Gibert et al., 2018) and the CONAN (Chung et al., 2019) datasets, described in Section 4.
Explicit Hate Speech
Explicit HS is unambiguous in its potential to be abusive/hateful, such as language containing racial or homophobic slurs. Explicit HS uses words whose literal definition (taken from the dictionary) is hateful (ElSherief et al., 2021;Waseem et al., 2017;Caselli et al., 2020), as in Example 1.
1. Negros are so dumb . (WSF)
Implicit Hate Speech
According to the definitions of ElSherief et al. (2021), implicit HS does not immediately denote abuse/hate. Implicitness goes beyond word-related meaning, implying figurative language use such as irony, sarcasm, etc. (ElSherief et al., 2021;Benikova et al., 2018), generally hiding the real meaning, making it more difficult to grasp and undermining the collection of hateful messages (Hartvigsen et al., 2022;Waseem et al., 2017). Therefore, we deal with figurative meaning intended by the author rather than literal meaning of words that may be extracted from dictionaries.
In our extensive study on hate speech datasets, we identified 18 typical properties of implicitness relying on linguistic features (listed in Table 2). Among the most significant ones we have irony (Frenda et al., 2022), sarcasm (Potamias et al., 2020), black humor, metaphor (Choi et al., 2021; Gao et al., 2018), exaggeration (Troiano et al., 2018), rhetorical question, sentiment (Li et al., 2021), inference, lack of context (Dadvar et al., 2013), and lack of extralinguistic knowledge. Most of the implicit HS messages contain combinations of several properties, as in:
2. Global Jewry did not want Brexit and are doing everything they can to sabotage it. (CO-NAN)
Example 2 illustrates an interlacement of irony and exaggeration. Global Jewry represents a wordplay mixing Global Jury and Jews, adding humorous content to the perception of Jewish people as a nation, while doing everything is hyperbolic, pointing out that Jewish people are extremely occupied by Brexit. The whole message is ironical as it emphasizes that Israel has its own interest in another country's affairs.
3. I also , unfortuantely , spent 7-8th grade in a 95 % monkey house (WSF)
In Example 3 there is an overlap of irony, exaggeration, and metaphor (underlined). 95 % looks more like an exaggeration than a fact, while monkey house refers to the group of students who, in the metaphor, are compared with monkeys because of their skin color.
4. Are you sure that Islam is a peaceful religion?
Example 4 interlaces rhetorical question, inference, and sentiment. The rhetorical question casts doubt on the peacefulness of Islam, expressing at the same time the author's take on it. As well as that, we may read negative inference between the lines.
Subtle Hate Speech
Subtle Hate Speech concerns hateful messages that are so delicate or elusive as to be difficult to analyze or describe, and that depend on an indirect method to deliver the meaning (Merriam-Webster, 2022). However, literal meanings (Roland et al., 1968; Bertrand, 1905) are of prime importance in subtle messages, by contrast to implicit messages where we go beyond literal meanings (Waseem et al., 2017). Although implicitness and subtlety differ a lot at this point, we still rely on language users' perception (Grice, 1967, 1989; Korta and Perry, 2020) to understand implicit and subtle messages (Benikova et al., 2018). Despite the challenges in characterizing human perception schematically, in our study, the use of elements such as negations with positive clauses, conditionals, connectors, unrelated constructions, word order, and circumlocution can greatly affect the subtlety of a message. Cases of HS that convey the meaning directly and straightforwardly are non-subtle. As subtlety is even less studied than implicitness in the domain of HS (Breitfeller et al., 2019), and only considered as a related concept of implicitness in previous works (Jurgens et al., 2019; ElSherief et al., 2021; Hartvigsen et al., 2022), we analyze their interrelation, particularly whether one of these concepts is an umbrella term for the other.
First, we study whether explicit HS can convey the meaning in a subtle way, to accept or reject the idea that implicitness is a general term for subtlety. As can be seen from the example below, a message can be explicit and subtle at the same time:
5. As a brit my knowledge of american law is somewhat lacking but even i know that this holder groid has committed treason (WSF)
Example 5 is explicit as it literally conveys hateful meanings with the expression holder groid, but it is subtle as the intended hateful meaning is disguised by the first and second underlined constructions, drawing attention away from the explicit hateful expression.
On the other side, we investigate whether subtlety is an umbrella concept for implicitness (whether all implicit messages are subtle by default). The following example shows an implicit non-subtle HS message:
6. I 'm either in North Florida or Nigeria sometimes i can't tell the difference. (WSF)
According to the definitions of explicitness and implicitness, this message is not explicit since it does not contain any hateful words, thus, it does not directly convey hateful meaning. However, it is hateful despite being more elaborated, as it includes certain figures of speech that trigger hateful meaning. The underlined expression in Example 6 represents antithesis, while the message itself is ironical. This message also lacks contextual information and extralinguistic knowledge. Although being implicit, this message is not subtle as the meaning is delivered plainly without any delusion or delicacy. Based on these considerations, we consider implicitness and subtlety as two separate concepts. To illustrate the difference between implicit non-subtle messages presented above and implicit subtle cases, consider the following example: 7. I think it is a bit late to think to look after the saety and the future of white people in Southfrica . (WSF)
We label it as implicit HS as it does not express hateful meaning explicitly via offensive words and it is ironic. It is also subtle as it starts with an introductory construction that makes the message more sophisticated and adds extra weight to it. As can be expected, it is also possible to find non-hateful subtle or implicit sentences (such as ironic or sentimental texts). However, our work focuses on exploring implicitness and subtlety in the context of hate speech only, therefore those more general cases are not investigated.
The ISHate Dataset
Relying on the fine-grained definitions of HS provided in the previous section (explicit, implicit and subtle HS), we collect and enrich 7 available standard datasets for HS detection. As a result, we create the first benchmark for implicit and subtle HS detection on social media messages extracted from different sources.
Data Collection
Nearly all available resources of user-generated HS content are retrieved with a keyword-based approach, and mainly relying on a list of words with negative polarity (Poletto et al., 2021). However, with this strategy it is possible to extract mainly explicit HS expressions (as in the AbusEval dataset, Caselli et al. (2020)). Given that our study focuses on implicit and subtle HS, we prefer to explore resources collected from communities of users that are potentially prone to hate speech, or resources manually created using a systematic approach. In the following, we list the considered resources: White Supremacy Forum Dataset (WSF) (de Gibert et al., 2018), that contains HS messages from Stormfront, scraped from the most influential white supremacist forum on the Web. The database is arranged in sub-forums and conversation threads. HatEval (Basile et al., 2019), which is among the most well-known benchmark for HS detection. A combined approach is applied to collect hateful and misogynous tweets by monitoring potential victims of hate accounts, downloading the history of identified haters, and filtering Twitter streams with both neutral and derogatory keywords. Implicit Hate Corpus (IHC) (ElSherief et al., 2021), annotated with explicit HS, implicit HS, and non-HS labels obtained from online hate groups on Twitter. The authors focused on eight ideological clusters of U.S., as Black Separatists, White Nationalist and Neo-Nazi. From this dataset we only extracted messages labeled as implicit HS, as it is one of our target categories. ToxiGen (Hartvigsen et al., 2022), a dataset with benign and implicit toxic messages against minority groups. ToxiGen is machine-generated through the GPT3 language model and prompt programming. Similarly to IHC, we only extracted messages which were automatically labeled as implicit HS and human-validated as toxic by the authors. We did not consider unfinished generated sentences which make a part of implicit messages. YouTube Video Comments Dataset (YouTube) (Hammer, 2017), that consists of YouTube comments posted under videos related to religion and politics. Differently from the other resources, the messages are annotated as "violent" or "clean". CONAN (Chung et al., 2019), a dataset of HS messages and counter-narratives (CN) pairs for CN generation. Two native English speakers were asked to write 50 prototypical short texts, which NGO could later use to write their hate texts and counternarratives. We believe that messages for which a CN can be provided might be richer in implicit content since a slur-based explicit HS message might produce very poor argumentative CN. Multi-Target CONAN (MCONAN) (Fanton et al., 2021), a dataset of English HS/CN pairs comprising several hate targets. It is collected using a Human-in-the-Loop approach. A generative lan-guage model is refined iteratively by using data from the previous loops to generate new samples that NGOs experts review.
Before starting the annotation process with the fine-grained annotations (Explicit, Implicit and Subtle HS), we had to make sure that the definition of HS originally used to annotate such resources is consistent with ours. In the first annotation round, we checked the messages originally annotated as HS, and discarded the few ones that did not correspond to the definition of HS reported in Section 3. For the YouTube dataset, we also added the HS labels. While all the messages annotated as HS are directed to PC, it should be noted that the topic distribution and the writing quality might be different, given the heterogeneity of the selected resources. HS messages mostly target Islamism, Judaism, misogyny, multi-culturalism, racism, immigration, and refugees. Regarding creation time, WSF is made from threads posted between 2002 and 2017, ToxiGen's LM was trained with messages from 2016 to 2019, HatEval consists of messages from 2018, the YouTube comments were collected in 2017, while the IHC contains tweets from U.S. ideological clusters from 2015 to 2017.
Annotation Procedure
Following the annotation scheme described in Section 3, four graduate-level annotators with linguistics and computational linguistics competences carried out a pilot annotation study on a sample of 100 messages extracted from each of the above mentioned resources to converge to non-ambiguous annotation strategies. We calculate the Inter Annotator Agreement (IAA) on this sample, resulting in Cohen's κ = 0.793 (Cohen, 1960) for the implicit layer (binary annotation Explicit/Implicit) and 0.730 for the subtlety layer (binary annotation Subtle/Non-Subtle). We also compute the IAA considering both layers simultaneously, that is, considering one layer of 4 classes (Implicit, Explicit, Subtle, Non-Subtle), obtaining a Cohen's κ of 0.734. In the reconciliation phase, we notice that most of the disagreements are due to the interlacement of subtlety and implicitness. For that reason, we also calculate an ordered weighted disagreement using Krippendorff's α to penalize less when the annotators agree at least on one of the layers (Artstein and Poesio, 2008). The Krippendorff's α is 0.757. Despite the complexity of the annotation task, the obtained results are considered as strong agreement in a two-annotator setting. The rest of the annotations have then been carried out by two of the annotators mentioned above, who were provided with the final version of the annotation guidelines (containing the definitions of the target classes, i.e., subtlety and implicitness, and a discussion about borderline cases), together with a set of labeled examples.
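For reference, pairwise agreement values such as Cohen's κ can be computed with scikit-learn, as in the small sketch below; the labels are invented and do not reproduce the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Invented annotations from two annotators for the Explicit/Implicit layer.
annotator_a = ["implicit", "explicit", "implicit", "explicit", "implicit", "implicit"]
annotator_b = ["implicit", "explicit", "explicit", "explicit", "implicit", "implicit"]
print(cohen_kappa_score(annotator_a, annotator_b))
```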
Finally, the implicit properties annotations are added on top of the messages labeled as implicit as an additional annotation layer to highlight 18 linguistic features that implicitly convey hateful meaning. For this layer, annotations are carried out by one expert linguist. Table 1 shows statistics of the final dataset, reporting on the number of annotated HS messages for each resource and for the three annotation layers.
Data Statistics
The ISHate collection consists of a total of 29116 messages, where 11247 are HS (further annotated with the Explicit/Implicit and Subtle/Non-subtle labels). For computational purposes, we provide a dataset split in three subsets, i.e., train (70%), validation (15%), and test (15%) sets. Each of the partition respects the distribution of all the annotation layers using stratified splitting. As can be seen, classes are unbalanced, each resource providing only a reduced number of implicit and subtle messages -as expected. Note that CONAN and MCONAN do not contain Non-HS messages, because their main objective is CN generation. As for IHC and ToxiGen, we only look through previously annotated implicit HS messages disregarding non hateful ones. Note also that ToxiGen claimed to contain only implicit adversarial messages, but according to our definitions and annotation guidelines many messages are considered as explicit and non-subtle by our annotators. Table 2 shows the full distribution of the implicit properties relative to the implicit messages in ISHate. As it can be seen, Inference (58%), Context (48%), Sentiment (45%), Exaggeration (28%) and Irony (22%), are the most frequent properties of implicit HS messages, whereas Euphemism (4%), Circumlocution (3%), Metonymy (0.4%) and Synecdoche (0.08%) are the least recurrent. Note that one implicit message can be labeled with more than one property.
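A minimal sketch of the stratified 70/15/15 split with scikit-learn follows; the messages and labels are invented stand-ins, and in practice the stratification key combines all the annotation layers.

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for ISHate: each message carries a label used for stratification
# (a single string combining the annotation layers; data invented for illustration).
messages = [f"message {i}" for i in range(100)]
labels = ["non-hs"] * 60 + ["explicit-hs"] * 20 + ["implicit-hs"] * 10 + ["subtle-hs"] * 10

# 70% train, then split the remaining 30% evenly into validation and test,
# preserving the label distribution at every step.
x_train, x_rest, y_train, y_rest = train_test_split(
    messages, labels, test_size=0.30, stratify=labels, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)
print(len(x_train), len(x_val), len(x_test))  # 70 15 15
```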
Data Augmentation
To overcome the problem of the unbalanced dataset, we propose oversampling and data augmentation (DA) methods for the minority classes using adversarial methods and generative models, following previous work. Replace Scalar Adverbs (RSA). It replaces emphasising adverbs like considerably or largely with another scalar adverb that might increase or decrease the emphasis of an adjective/verb.
9.
Original: I do like the outfit they dressed the simian in ; he looks so regal (WSF) Augmented: I do like the outfit they dressed the simian in ; he looks incredibly regal
Example 9 is another implicit racist HS message that uses irony by combining simian, dressed, and regal. RSA replaces the scalar adverb, slightly increasing the emphasis of regal in the sentence. Add Adverbs to Verbs (AAV). It adds modifiers to verbs to accentuate them. In this case, we use speculative adverbs like certainly, likely, clearly.
10.
Original: I wish we lived in a world in which only men could be doctors, lawyers, firemen, and teachers. (MCONAN) Augmented: I wish we lived in a world in which only men could plausibly be doctors, lawyers, firemen, and teachers.
In Example 10, the meaning is made elusive by the expression I wish we lived; the addition of plausibly alters the modality of the verb "to be" but keeps the misogynistic meaning. Replace Adjectives (RA). It replaces adjectives with their synonyms, as in:
11. Original: If you look at an Alaskan malamute and a timber wolf that are two different species , they look a lot more alike than say a Congolese and a Swede . (WSF) Augmented: If you look at an Alaskan malamute and a timber wolf that are two unlike species , they look a lot more alike than say a Congolese and a Swede .
Replace In-Domain Expressions (RI). It replaces a list of manually-crafted expressions often used in HS messages (not captured by the RNE) with other semantically similar expressions, as in Example 12.
Here the word "migrants" is replaced by "foreigners" without adding hateful connotations. It would have been different to replace it with aliens, which is a derogatory term to refer to migrants.
12.
Original: migrants are entering the u.s. just to obtain welfare and other benefits . (MCO-NAN) Augmented: foreigners are entering the u.s. just to obtain welfare and other benefits .
Easy Data Augmentation (EDA). Given an input sentence, EDA randomly: i) replaces a non-stop word expression with a synonym using Wordnet; ii) inserts a synonym of a non-stop word in a random position; iii) chooses two words of the sentence and swaps their positions; iv) removes each word in the sentence with a certain probability. Only one of the four operations at a time is applied to a sentence.
13.
Original: A lot of white women are trying to create danger by drawing these other people into our countries . (WSF) Augmented: A lot of our women are trying to create danger by drawing these other people into white countries .
A parameter α expressing the percentage of words to be changed in a sentence is specified (in our setting, α = 0.1, as in Wei and Zou (2019)). Back Translation (BT). It translates an input message into a different language and then translates it back into the original language. We translate from English to Russian, following ElSherief et al. (2021).
14. Original: As a nation , we are in desperate need for an immigration time out and a zero tolerance policy for those here illegally . (WSF)
Augmented: As a nation, we desperately need an immigration timeout and a zero-tolerance policy for those here illegally.
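The paper does not name the translation system it uses for back translation; the following sketch uses the publicly available MarianMT checkpoints as one possible English-Russian-English implementation, so the model identifiers are an assumption rather than the authors' exact setup.

```python
# A sketch of English -> Russian -> English back translation using MarianMT.
from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

def translate(texts, tokenizer, model):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

def back_translate(texts):
    en_ru_tok, en_ru = load("Helsinki-NLP/opus-mt-en-ru")  # assumed checkpoints
    ru_en_tok, ru_en = load("Helsinki-NLP/opus-mt-ru-en")
    russian = translate(texts, en_ru_tok, en_ru)
    return translate(russian, ru_en_tok, ru_en)

# print(back_translate(["As a nation, we are in desperate need for an immigration time out."]))
```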
Generative Models (GM). We fine-tune autoregressive generative language models with instances from our minority classes, i.e., explicit subtle, implicit non-subtle, and implicit subtle messages. To do so, we prefix the class label to the text as a prompt.
Then, the language models are asked to generate messages starting with one of our fine-tuned prompts, as in Example 15. We use GPT2 (Radford et al., 2019) as the language model, fine-tuned for 4 epochs with a learning rate of 3e-5 and a batch size of 32. Additionally, we implement a human-in-the-loop approach, revising the generated examples and re-annotating them in case the original label is no longer appropriate for the message.
15. Input: Explicit Subtle HS:
Augmented: Explicit Subtle HS: In the end, it comes down to what women want from a man... If they want to play with whores, they can stay at home and have babies...
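For illustration, label-conditioned generation of this kind can be sketched as below; the checkpoint path is a placeholder for a GPT-2 model already fine-tuned on label-prefixed minority-class messages, and the sampling settings are illustrative rather than the ones used in the paper.

```python
# A sketch of label-conditioned generation with a fine-tuned GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("path/to/finetuned-gpt2")  # placeholder path

prompt = "Explicit Subtle HS:"  # the label prefix used as a prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,           # sample to obtain diverse candidate messages
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
candidate = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Each candidate is then revised and re-annotated by a human (human-in-the-loop).
print(candidate)
```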
Except for GM and BT, all augmentation methods follow the same strategy to produce new messages. Preprocessing (e.g., Part-of-Speech tagging and Named Entity Recognition) is carried out using Flair (Akbik et al., 2019) and NLTK (Bird and Loper, 2004) models, and allows us to recognize candidate phrases on which to perform a replacement/addition. A candidate phrase is then replaced by another one according to a list of adverbs, NEs, or adjectives based on domain data. We rely on FastText and WordNet synsets to preserve the semantics of the augmented sentences with respect to the original ones. The number of candidates on which to perform a replacement/addition, and the number of replacements/additions per candidate, are provided as parameters to these methods.
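A simplified sketch of this replacement strategy is given below, assuming NLTK for tagging and WordNet for synonym candidates; the FastText-based ranking of candidates used in the paper is only noted in a comment, and the function and parameter names are illustrative.

```python
# A simplified sketch of the replacement-based augmenters (RSA/RA/RI-style):
# POS-tag the sentence, pick a candidate adjective or adverb, and swap in a
# WordNet synonym. Ranking candidates with FastText similarity is omitted here.
import random
import nltk
from nltk.corpus import wordnet as wn

# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
#           nltk.download("wordnet")

POS_MAP = {"JJ": wn.ADJ, "RB": wn.ADV}  # adjectives and adverbs only

def replace_one(sentence: str, seed: int = 0) -> str:
    random.seed(seed)
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    candidates = [i for i, (_, tag) in enumerate(tagged) if tag[:2] in POS_MAP]
    random.shuffle(candidates)
    for i in candidates:
        word, tag = tagged[i]
        synonyms = {l.name().replace("_", " ")
                    for s in wn.synsets(word, pos=POS_MAP[tag[:2]])
                    for l in s.lemmas() if l.name().lower() != word.lower()}
        if synonyms:
            tokens[i] = random.choice(sorted(synonyms))
            break
    return " ".join(tokens)

# print(replace_one("he looks so regal"))
```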
Evaluation
To show that implicit and subtle HS detection is still a very challenging task, we evaluate a set of state-of-the-art models for HS detection on the ISHate dataset. We propose two 3-label classification tasks:
• Task A (Non-HS / Explicit HS / Implicit HS)
• Task B (Non-HS / Non-Subtle HS / Subtle HS)
To this goal, we consider the following models:
Universal Sentence Encoder (USE) + SVM (Indurthi et al., 2019). First-ranked model on the HatEval benchmark (Basile et al., 2019). The USE (Cer et al., 2018) is a sentence encoder that maps text into 512-dimensional vectors, trained on large data sources to provide an encoding that works for various NLP tasks. An SVM classifier with RBF kernel and default parameters is then used for classification.
DeBERTa V3 (hate_speech18). SOTA model on the WSF dataset (de Gibert et al., 2018). For classification, a default HuggingFace implementation of a one-layer feed-forward network is used on top of DeBERTa (He et al., 2021a,b), a transformer-based model. The model is fine-tuned for 4 epochs (learning rate of 2e-5, batch size of 32).
BERT (Devlin et al., 2018). We use this language model to encode text sequences and classify them by adding a feed-forward neural network on top.
HateBERT. A re-trained BERT model using over 1 million posts from banned communities on Reddit (Caselli et al., 2021), then fine-tuned on our dataset. HateBERT obtained very promising results on the HatEval, OffensEval (Zampieri et al., 2019b), and AbusEval (Caselli et al., 2020) benchmarks.
As for preprocessing, we replace long non-space character chains with a single occurrence, and delete digits, special symbols, and URLs.

Results

Table 3 reports the results of the different models on the two tasks. On both tasks, all models show satisfactory performance when detecting overt forms of HS (Explicit HS and Non-Subtle HS classes), with DeBERTa outperforming the other models. The results obtained by all models for the Implicit HS and Subtle HS classes are much lower, and comparable to those obtained by ElSherief et al. (2021) (F1-score=.586) on the implicit class.
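The transformer baselines above are fine-tuned as standard sequence classifiers; a minimal sketch follows. The paper does not give exact model identifiers, so "GroNLP/hateBERT" (the public HateBERT checkpoint) is an assumption, the toy rows are illustrative, and the hyperparameters mirror the ones reported in the text for DeBERTa (4 epochs, learning rate 2e-5, batch size 32).

```python
# A sketch of fine-tuning a transformer baseline on task A
# (Non-HS / Explicit HS / Implicit HS), including the light preprocessing
# described above (collapse long character chains, drop digits/symbols/URLs).
import re
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

def clean(text: str) -> str:
    text = re.sub(r"http\S+", " ", text)        # drop URLs
    text = re.sub(r"(.)\1{3,}", r"\1", text)    # collapse long character chains
    return re.sub(r"[\d_*#@^~]+", " ", text)     # drop digits and special symbols

labels = {"Non-HS": 0, "Explicit HS": 1, "Implicit HS": 2}
train = Dataset.from_dict({  # toy rows, illustrative only
    "text": [clean("an explicit insult ..."), clean("a neutral message ...")],
    "label": [labels["Explicit HS"], labels["Non-HS"]],
})

tok = AutoTokenizer.from_pretrained("GroNLP/hateBERT")  # assumed checkpoint id
model = AutoModelForSequenceClassification.from_pretrained("GroNLP/hateBERT", num_labels=3)
train = train.map(lambda b: tok(b["text"], truncation=True, padding="max_length"), batched=True)

args = TrainingArguments(output_dir="hatebert-taskA", num_train_epochs=4,
                         learning_rate=2e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=train).train()
```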
As a follow-up experiment, we apply the oversampling techniques (Section 4) to the minority classes of tasks A and B until they are balanced with respect to the Explicit HS and Non-Subtle HS categories. The oversampling is performed on the training set only. The test set is the one of the original dataset and is therefore unbalanced, in order to evaluate the systems on the real class distribution and to avoid information leakage from train to test through the augmentation methods. Tables 4a and 4b show the number of additionally generated implicit/subtle messages and the resulting training set distribution per augmentation method, respectively.
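For reference, train-only oversampling of this kind can be sketched as follows; the DataFrame and column names are illustrative assumptions.

```python
# A minimal sketch of oversampling the minority classes on the training split
# only, assuming a pandas DataFrame with a label column.
import pandas as pd
from sklearn.utils import resample

def oversample(train_df: pd.DataFrame, label_col: str, seed: int = 42) -> pd.DataFrame:
    sizes = train_df[label_col].value_counts()
    target = sizes.max()                      # balance w.r.t. the majority class
    parts = []
    for label, count in sizes.items():
        part = train_df[train_df[label_col] == label]
        if count < target:                    # duplicate (or add augmented) examples
            part = resample(part, replace=True, n_samples=target, random_state=seed)
        parts.append(part)
    return pd.concat(parts).sample(frac=1.0, random_state=seed)  # shuffle

# The validation and test splits are left untouched to avoid leakage.
```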
Among all tested models, only HateBERT significantly improves its performance for detecting implicit messages when combining all augmented data (ALL) (see Table 3). We also highlight that back translation (BT) contributes most to the performance on the implicit hate class for BERT, DeBERTa, and USE+SVM². Performance surprisingly increases for the subtle class with USE+SVM+BT, showing that back-translated messages provide diversity by rephrasing subtle examples without altering their meaning. Data generated with simpler augmentation methods such as BERT+RNE and DeBERTa+RI also shows slight improvements for subtlety. However, performance decreases on the implicit class when applying the data augmentation strategies GM and GM+Revised, and only slightly improves on the subtle class.
² The table reporting the results obtained by all models on different types of augmented data is in the Appendix.
Error Analysis
To gain insights into the models' behaviour, we manually analyse the classification errors of the best performing approaches, i.e., HateBERT+ALL and USE+SVM+BT, for both tasks A and B. For the Non-HS/Explicit/Implicit classification, it is harder for HateBERT+ALL to differentiate implicit messages than explicit ones. Figure 1 in Appendix A shows the t-SNE (van der Maaten and Hinton, 2008) projection of the embeddings of all implicit messages of the test set, together with two samples from the other two classes: HateBERT+ALL is capable of separating the space into three different blobs for classification (Figure 1a) and embeds the human-annotated Explicit HS and Non-HS classes well (Figure 1b), but encodes the messages annotated as implicit poorly (instances spread over the explicit and non-hateful blobs). We also computed the F1-scores of the (Non-HS vs Implicit HS) and (Non-HS vs Explicit HS) settings, as if we had binary labels, obtaining an F1-score of 0.670 and 0.850 on the target categories respectively, showing that implicit cases are harder to detect than explicit ones.
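The t-SNE projection used for this qualitative analysis can be sketched as below; the checkpoint paths are placeholders for the fine-tuned models, the toy sentences are illustrative, and using the [CLS] vector as the sentence embedding is an assumption about the exact pooling.

```python
# A sketch of projecting fine-tuned sentence embeddings to 2D with t-SNE.
import torch
from sklearn.manifold import TSNE
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("path/to/hatebert-all")   # placeholder
enc = AutoModel.from_pretrained("path/to/hatebert-all")       # placeholder

def embed(sentences):
    with torch.no_grad():
        batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
        # Use the [CLS] token of the last hidden layer as the sentence embedding.
        return enc(**batch).last_hidden_state[:, 0, :].numpy()

sentences = ["example test message 1", "example test message 2"]  # illustrative
# Low perplexity only because of the tiny toy input; use the default on real data.
points = TSNE(n_components=2, random_state=0, perplexity=1).fit_transform(embed(sentences))
# `points` can then be scatter-plotted and coloured by predicted vs. gold labels.
```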
Among misclassified examples, we can find messages where the PC are not explicitly named (Example 16). Contextual knowledge is needed there to grasp that the target of the message is black people.
16. I can picture him as that baby monkey on the ' save the rainforest comercial or whatever
Additionally, from the misclassified messages we extract the most frequent implicit properties not captured by the classifier³. They concern Inference (53%), Context (41%), Sentiment (40%), Exaggeration (24%), and Extralinguistic knowledge (24%). For Subtle/Non-Subtle message classification, we also plotted the USE embedding for the best model on this task (Figure 2 in Appendix A). However, it can be seen that USE+SVM+BT could not differentiate correctly on the subtle notion despite the results reported in Table 6b. Example 17 is not predicted as subtle. It shows how word order may influence our understanding. At first glance, the part how stupid the Jews seems to have a different meaning from what the phrase actually conveys if we read it entirely. We may also notice a circumlocution in the second part of the message.
³ The full table can be found in the Appendix.
17. I am insulted by how stupid the jews think we are until i see what they see by reading the posts amongst our so called , ' ' awakened brethren .
Conclusions
In this paper, we have presented ISHate, the first benchmark dataset annotated with both implicit and subtle HS labels, which represents a challenging test-bed to evaluate computational approaches. We also provide a fine-grained annotation of implicit HS messages with 18 implicit properties, which represent the relevant features that HS classifiers should possess to improve implicit HS detection. The dataset has been created by enriching 7 existing datasets for HS detection covering different topics and different social media. We have shown that current SOTA models fail to properly detect implicit and subtle HS messages, as peculiar features connected to Sentiment, Inference, Context and Irony, as well as complex syntactic structures, cannot be properly understood. We also investigated data augmentation strategies to increase the number of instances for the minority classes. We show that, while they cannot be the ultimate solution to the lack of implicit and subtle examples, they still play a role in improving the systems' performances, in line with ElSherief et al. (2021). As for future work, we plan to propose alternative large-scale methods to collect implicit and subtle messages by targeting "hateful" users, manual creation (Wiegand et al., 2021a, 2022), or refining human-in-the-loop generative methods as in Hartvigsen et al. (2022). We will also investigate features modeling implicit properties (Wallace et al., 2014; Troiano et al., 2018; Frenda and Patti, 2019) and new model architectures for HS detection (Nejadgholi et al., 2022).
Limitations
The main limitation of this paper lies in the intrinsic difficulty of providing a clear definition of the notions of Implicit HS and Subtle HS (given the limited number of definitions available in the literature for these notions), and, as a consequence, of building annotated resources. Enhancing the ISHate dataset with new instances requires future annotators to be experts in computational linguistics, trained on our annotation guidelines through pilot annotations, in order to keep the same level of agreement. This restricts crowdsourcing-like options, making the resource-building process more expensive. Moreover, the complexity of the messages and of the considered categories makes the process time-consuming (on average, a trained annotator requires 30 sec. for explicit messages and 1.30 min. for implicit/subtle messages). Even when opting for generative and synthetic data augmentation approaches, these still require human-in-the-loop intervention and high computational resources to generate Implicit/Subtle HS messages on a large scale.
Ethics Statement
This paper contains examples of HS taken from existing linguistic resources for HS detection, which do not reflect the authors' opinions. While our purpose is to help curate social media resources and prevent HS, the release of this dataset might still pose a risk of misuse. However, we consider that effective classifiers for this task are necessary to tackle implicit and subtle online hate at scale and to prevent the spreading of this harmful content online. Our work aims at making a step towards that objective and encourages the scientific community to investigate these aspects.
A Performance Details in Data Augmentation
Inspired by the top-ranked augmentation strategy in ElSherief et al. (2021), i.e., a back-translation approach, we also test SOTA models on our dataset, ISHate, with the augmentation techniques described in Section 4.4. Each model is trained with the originally collected data described in Section 4.3 plus the additional data obtained from one augmentation strategy. In the end, we also evaluate each model using only non-augmented test data. Tables 6a and 6b show the experiments' results on tasks A and B. We further analyse the errors committed by the best performing model on task A. We took HateBERT+ALL from Table 6a and used the third annotation layer described in Sections 3 and 4 to identify the most frequent implicit properties among the misclassified messages of task A. We also analysed the embeddings of our best-performing models on tasks A and B (HateBERT+ALL and USE+SVM+BT, respectively) through t-SNE (van der Maaten and Hinton, 2008). Figures 1 and 2 show the text embeddings for sentences of the test set, labeled by both classifiers and annotators, for the implicit and subtle tasks.
Figure 1: Embedding of HateBERT+ALL in the test set of task A. (a) Embedding using predicted annotations. (b) Embedding using manual annotations.
Figure 2: Embedding of USE+SVM+BT in the test set of task B.
Table 1: Statistics on the annotated dataset (resources and label distributions for the two tasks).

Implicit Properties | # | %
Inference | 729 | 58.885
Context | 602 | 48.627
Sentiment | 569 | 45.961
Exaggeration | 359 | 28.998
Irony | 275 | 22.213
Extralinguistic knowledge | 193 | 15.590
Black humor | 144 | 11.632
Rhetorical question | 134 | 10.824
Visual signs | 122 | 9.855
Humiliation | 115 | 9.289
Antithesis | 97 | 7.835
Metaphor | 93 | 7.512
Sarcasm | 85 | 6.866
Fallacy | 74 | 5.977
Euphemism | 56 | 4.523
Circumlocution | 41 | 3.312
Metonymy | 6 | 0.485
Synecdoche | 1 | 0.081

Table 2: Statistics on implicit properties distribution.
Task A | Non-HS (P/R/F1) | Explicit HS (P/R/F1) | Implicit HS (P/R/F1)
USE+SVM | .888/.866/.877 | .766/.803/.784 | .399/.382/.390
BERT | .903/.893/.898 | .810/.833/.821 | .394/.371/.382
HateBERT | .904/.890/.897 | .811/.849/.829 | .447/.382/.412
DeBERTa | .927/.899/.913 | .825/.880/.851 | .467/.419/.442
HateBERT+ALL | .903/.896/.899 | .827/.827/.827 | .502/.559/.529
BERT+BT | .909/.887/.898 | .824/.826/.825 | .459/.608/.523
DeBERTa+BT | .919/.885/.902 | .830/.857/.844 | .428/.543/.479
USE+SVM+BT | .897/.856/.876 | .782/.787/.785 | .403/.645/.496
BERT+RNE | .897/.897/.897 | .807/.829/.818 | .455/.349/.395
DeBERTa+RI | .922/.894/.908 | .821/.878/.849 | .460/.398/.427
HateBERT+GM | .901/.898/.899 | .824/.827/.825 | .414/.425/.419
HateBERT+GM+R. | .905/.891/.898 | .816/.835/.826 | .408/.419/.414

Task B | Non-HS (P/R/F1) | Non-Subtle HS (P/R/F1) | Subtle HS (P/R/F1)
USE+SVM | .891/.868/.879 | .783/.832/.807 | .667/.103/.178
BERT | .902/.891/.897 | .819/.846/.832 | .250/.103/.145
HateBERT | .903/.890/.897 | .814/.850/.831 | .143/.026/.043
DeBERTa | .920/.893/.906 | .823/.877/.849 | .375/.077/.128
HateBERT+ALL | .903/.881/.892 | .816/.844/.830 | .391/.462/.424
BERT+BT | .898/.900/.899 | .839/.832/.835 | .304/.359/.329
DeBERTa+BT | .920/.897/.908 | .835/.876/.855 | .385/.256/.308
USE+SVM+BT | .892/.868/.880 | .789/.831/.809 | .739/.436/.548
BERT+RNE | .899/.895/.897 | .826/.839/.833 | .400/.256/.312
DeBERTa+RI | .910/.894/.902 | .828/.860/.843 | .364/.205/.262
HateBERT+GM | .899/.898/.899 | .831/.834/.832 | .250/.231/.240
HateBERT+GM+R. | .894/.898/.896 | .826/.826/.826 | .192/.128/.154

Table 3: Obtained results on tasks A and B.

Aug. method | RSA | AAV | RNE | RI | RA | EDA | BT | GM | GM+Revised | ALL
Implicit HS | 6848 | 7032 | 828 | 817 | 467 | 6935 | 748 | 200 | 82 | 23957
Subtle HS | 3192 | 3136 | 480 | 210 | 172 | 2912 | 179 | 200 | 204 | 10685

(a) Number of additional implicit/subtle messages generated by each augmentation method.

Aug. method | ORIG | RSA | AAV | RNE | RI | RA | EDA | BT | GM | GM+Revised | ALL
Non-HS | .614 | .459 | .456 | .59 | .590 | .600 | .458 | .592 | .608 | .611 | .282
Explicit HS | .344 | .257 | .256 | .33 | .331 | .336 | .257 | .332 | .340 | .342 | .158
Implicit HS | .042 | .283 | .288 | .08 | .079 | .064 | .286 | .076 | .052 | .046 | .560
Non-HS | .614 | .531 | .532 | .600 | .607 | .609 | .537 | .608 | .608 | .608 | .403
Non-Subtle HS | .377 | .326 | .327 | .369 | .374 | .374 | .330 | .374 | .374 | .374 | .248
Subtle HS | .009 | .143 | .141 | .032 | .019 | .017 | .133 | .018 | .019 | .019 | .350

(b) Train set distribution (%) per augmentation method (ORIG corresponds to the original train distribution).

Table 4: Statistics on the train set with data augmentation.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Roland Barthes. 1968. Elements of semiology. Translated from the French by Annette Lavers and Colin Smith, 1st American ed. Hill and Wang, New York.
Enrica Troiano, Carlo Strapparava, Gözde Özbal, and Serra Sinem Tekiroglu. 2018. A computational exploration of exaggeration. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3296-3304, Brussels, Belgium. Association for Computational Linguistics.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605.
Byron C. Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512-516, Baltimore, Maryland. Association for Computational Linguistics.
William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Media, pages 19-26, Montréal, Canada. Association for Computational Linguistics.
Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computational Linguistics.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv:1901.11196.
Michael Wiegand, Elisabeth Eder, and Josef Ruppenhofer. 2022. Identifying implicitly abusive remarks about identity groups using a linguistically informed approach. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5600-5612, Seattle, United States. Association for Computational Linguistics.
Pinkesh Badjatiya, Shashank Gupta, Manish Gupta,
and Vasudeva Varma. 2017. Deep learning for hate
speech detection in tweets. CoRR, abs/1706.00188.
Valerio Basile, Cristina Bosco, Elisabetta Fersini,
Debora Nozza, Viviana Patti, Francisco Manuel
Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti.
2019. SemEval-2019 Task 5: Multilingual Detection
of Hate Speech Against Immigrants and Women in
Twitter. In Proceedings of the 13th International
Workshop on Semantic Evaluation, pages 54-63, Min-
neapolis, Minnesota, USA. Association for Compu-
tational Linguistics.
Darina Benikova, Michael Wojatzki, and Torsten Zesch.
2018. What Does This Imply? Examining the Im-
pact of Implicitness on the Perception of Hate Speech.
In Language Technologies for the Challenges of the
Digital Age, pages 171-179, Cham. Springer Interna-
tional Publishing.
Russell Bertrand. 1905.
On denoting.
Mind,
56(14):479-493.
Steven Bird and Edward Loper. 2004. NLTK: The natu-
ral language toolkit. In Proceedings of the ACL In-
teractive Poster and Demonstration Sessions, pages
214-217, Barcelona, Spain. Association for Compu-
tational Linguistics.
Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sar-
faraz Akhtar, and Manish Shrivastava. 2018. A
dataset of Hindi-English code-mixed social media
text for hate speech detection. In Proceedings of
the Second Workshop on Computational Modeling
of People's Opinions, Personality, and Emotions in
Social Media, pages 36-41, New Orleans, Louisiana,
USA. Association for Computational Linguistics.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and
Tomás Mikolov. 2016. Enriching word vectors with
subword information. CoRR, abs/1607.04606.
Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia
Tsvetkov. 2019. Finding Microaggressions in the
Wild: A Case for Locating Elusive Phenomena in So-
cial Media Posts. In Proceedings of the 2019 Confer-
ence on Empirical Methods in Natural Language Pro-
cessing and the 9th International Joint Conference
on Natural Language Processing (EMNLP-IJCNLP),
pages 1664-1674, Hong Kong, China. Association
for Computational Linguistics.
Miguel Casas Gómez. 2009. Towards a new approach
to the linguistic definition of euphemism. Language
Sciences, 31(6):725-739.
Tommaso Caselli, Valerio Basile, Jelena Mitrović, and
Michael Granitzer. 2021. HateBERT: Retraining
BERT for abusive language detection in English. In
Proceedings of the 5th Workshop on Online Abuse
and Harms (WOAH 2021), pages 17-25, Online. As-
sociation for Computational Linguistics.
Tommaso Caselli, Valerio Basile, Jelena Mitrović, Inga
Kartoziya, and Michael Granitzer. 2020. I Feel Of-
fended, Don't Be Abusive! Implicit/Explicit Mes-
sages in Offensive and Abusive Language. In Pro-
ceedings of the 12th Language Resources and Evalua-
tion Conference, pages 6193-6202, Marseille, France.
European Language Resources Association.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua,
Nicole Limtiaco, Rhomni St. John, Noah Con-
stant, Mario Guajardo-Cespedes, Steve Yuan, Chris
Tar, Yun-Hsuan Sung, Brian Strope, and Ray
Kurzweil. 2018. Universal sentence encoder. CoRR,
abs/1803.11175.
Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo
Park, Junhyuk Lee, Dongwon Lee, and Jongwuk Lee.
2021. MelBERT: Metaphor Detection via Contex-
tualized Late Interaction using Metaphorical Iden-
tification Theories. Number: arXiv:2104.13615
arXiv:2104.13615 [cs].
Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem
Tekiroglu, and Marco Guerini. 2019. CONAN -
COunter NArratives through nichesourcing: a mul-
tilingual dataset of responses to fight online hate
speech. In Proceedings of the 57th Annual Meet-
ing of the Association for Computational Linguistics,
pages 2819-2829, Florence, Italy. Association for
Computational Linguistics.
Jacob Cohen. 1960. A coefficient of agreement for
nominal scales. Educational and Psychological Mea-
surement, 20(1):37-46.
Michele Corazza, Stefano Menini, Elena Cabrio, Sara
Tonelli, and Serena Villata. 2020. A multilingual
evaluation for online hate speech detection. ACM
Trans. Internet Technol., 20(2).
Maral Dadvar, Dolf Trieschnigg, Roeland Ordelman,
and Franciska de Jong. 2013. Improving cyberbul-
lying detection with user context. In Proceedings of
the 35th European Conference on Advances in Infor-
mation Retrieval, ECIR'13, page 693-696, Berlin,
Heidelberg. Springer-Verlag.
Ona de Gibert, Naiara Perez, Aitor García-Pablos, and
Montse Cuadros. 2018. Hate Speech Dataset from
a White Supremacy Forum. arXiv:1809.04444 [cs].
ArXiv: 1809.04444.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. BERT: pre-training of
deep bidirectional transformers for language under-
standing. CoRR, abs/1810.04805.
Collins Dictionary. 2022. Collins dictionary.
Mai ElSherief, Caleb Ziems, David Muchlinski, Vaish-
navi Anupindi, Jordyn Seybolt, Munmun De Choud-
hury, and Diyi Yang. 2021. Latent hatred: A bench-
mark for understanding implicit hate speech. In Pro-
ceedings of the 2021 Conference on Empirical Meth-
ods in Natural Language Processing, pages 345-363,
Online and Punta Cana, Dominican Republic. Asso-
ciation for Computational Linguistics.
Margherita Fanton, Helena Bonaldi, Serra Sinem
Tekiroglu, and Marco Guerini. 2021. Human-in-the-
Loop for Data Collection: a Multi-Target Counter
Narrative Dataset to Fight Online Hate Speech. In
Proceedings of the 59th Annual Meeting of the Asso-
ciation for Computational Linguistics. Association
for Computational Linguistics.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and
Weizhu Chen. 2021b. Deberta: Decoding-enhanced
bert with disentangled attention. In International
Conference on Learning Representations.
Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri-
vastava, Nikhil Chakravartula, Manish Gupta, and
Vasudeva Varma. 2019. FERMI at SemEval-2019
task 5: Using sentence embeddings to identify hate
speech against immigrants and women in Twitter.
In Proceedings of the 13th International Workshop
on Semantic Evaluation, pages 70-74, Minneapo-
lis, Minnesota, USA. Association for Computational
Linguistics.
David Jurgens, Libby Hemphill, and Eshwar Chan-
drasekharan. 2019. A just and comprehensive strat-
egy for using NLP to address online abuse. In Pro-
ceedings of the 57th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 3658-
3666, Florence, Italy. Association for Computational
Linguistics.
Kepa Korta and John Perry. 2020. Pragmatics. In Ed-
ward N. Zalta, editor, The Stanford Encyclopedia of
Philosophy, Spring 2020 edition. Metaphysics Re-
search Lab, Stanford University.
Zhengyan Li, Yicheng Zou, Chong Zhang, Qi Zhang,
and Zhongyu Wei. 2021. Learning implicit sentiment
in aspect-based sentiment analysis with supervised
contrastive pre-training. In Proceedings of the 2021
Conference on Empirical Methods in Natural Lan-
guage Processing, pages 246-256, Online and Punta
Cana, Dominican Republic. Association for Compu-
tational Linguistics.
Tobias Mayer, Santiago Marro, Elena Cabrio, and Ser-
ena Villata. 2020. Generating adversarial examples
for topic-dependent argument classification. In Com-
putational Models of Argument -Proceedings of
COMMA 2020, Perugia, Italy, September 4-11, 2020,
volume 326 of Frontiers in Artificial Intelligence and
Applications, pages 33-44. IOS Press.
Merriam-Webster. 2022. Dictionary.
Meta. 2022. Facebook: Hate speech policies.
Isar Nejadgholi, Kathleen Fraser, and Svetlana Kir-
itchenko. 2022. Improving generalizability in im-
plicitly abusive language detection with concept ac-
tivation vectors. In Proceedings of the 60th Annual
Meeting of the Association for Computational Lin-
guistics (Volume 1: Long Papers), pages 5517-5529,
Dublin, Ireland. Association for Computational Lin-
guistics.
Fabio Poletto, Valerio Basile, Manuela Sanguinetti,
Cristina Bosco, and Viviana Patti. 2021. Resources
and benchmark corpora for hate speech detection: a
systematic review. Language Resources and Evalua-
tion, 55(2):477-523.
Rolandos Alexandros Potamias, Georgios Siolas, and
Andreas Georgios Stafylopatis. 2020. A transformer-
based approach to irony and sarcasm detection.
Neural Computing and Applications, 32(23):17309-
17320.
Table 5 shows how Inference, Context, Sentiment, Exaggeration, and Extralinguistic knowledge are the most recurrent devices that are not captured.

Implicit Property | # | %
Inference | 44 | 53.659
Context | 34 | 41.463
Sentiment | 33 | 40.244
Exaggeration | 23 | 28.049
Extralinguistic knowledge | 20 | 24.390
Irony | 17 | 20.732
Black humor | 12 | 14.634
Visual signs | 11 | 13.415
Metaphor | 9 | 10.976
Rhetorical question | 8 | 9.756
Antithesis | 6 | 7.317
Humiliation | 5 | 6.098
Sarcasm | 5 | 6.098
Circumlocution | 4 | 4.878
Fallacy | 4 | 4.878
Euphemism | 3 | 3.659

Table 5: Implicit properties of the messages that are not captured by HateBERT+ALL.
(a) Results of SOTA models using data augmentation on task A.

USE+SVM | Non-HS (P/R/F1) | Explicit HS (P/R/F1) | Implicit HS (P/R/F1)
RSA | .887/.875/.881 | .768/.830/.798 | .515/.285/.367
AAV | .888/.875/.881 | .764/.825/.793 | .491/.280/.356
RNE | .887/.867/.877 | .770/.802/.786 | .386/.382/.384
RI | .888/.865/.876 | .769/.803/.786 | .371/.371/.371
RA | .888/.867/.877 | .768/.803/.785 | .398/.387/.392
EDA | .892/.862/.877 | .780/.797/.789 | .339/.441/.383
BT | .897/.856/.876 | .782/.787/.785 | .403/.645/.496
GM | .887/.862/.874 | .771/.794/.782 | .352/.409/.378
GM+Revised | .887/.863/.875 | .769/.803/.786 | .385/.403/.394
ALL | .889/.879/.884 | .796/.809/.803 | .421/.430/.426

BERT | Non-HS (P/R/F1) | Explicit HS (P/R/F1) | Implicit HS (P/R/F1)
RSA | .894/.899/.896 | .805/.835/.819 | .487/.301/.372
AAV | .896/.896/.896 | .820/.817/.819 | .387/.398/.393
RNE | .897/.897/.897 | .807/.829/.818 | .455/.349/.395
RI | .909/.897/.903 | .812/.850/.831 | .458/.376/.413
RA | .894/.905/.899 | .822/.825/.823 | .473/.382/.423
EDA | .900/.894/.897 | .807/.836/.821 | .416/.333/.370
BT | .909/.887/.898 | .824/.826/.825 | .459/.608/.523
GM | .898/.901/.900 | .824/.821/.823 | .409/.398/.403
GM+Revised | .905/.892/.899 | .811/.839/.825 | .451/.419/.435
ALL | .902/.894/.898 | .816/.817/.816 | .488/.543/.514

DeBERTaV3 | Non-HS (P/R/F1) | Explicit HS (P/R/F1) | Implicit HS (P/R/F1)
RSA | .912/.893/.902 | .803/.877/.838 | .441/.242/.312
AAV | .916/.904/.910 | .832/.858/.845 | .431/.403/.417
RNE | .922/.883/.902 | .807/.880/.842 | .430/.382/.405
RI | .922/.894/.908 | .821/.878/.849 | .460/.398/.427
RA | .909/.914/.911 | .841/.859/.850 | .482/.360/.412
EDA | .907/.899/.903 | .813/.859/.835 | .460/.312/.372
BT | .919/.885/.902 | .830/.857/.844 | .428/.543/.479
GM | .913/.899/.906 | .839/.843/.841 | .399/.468/.431
GM+Revised | .918/.893/.905 | .819/.873/.845 | .425/.366/.393
ALL | .924/.887/.905 | .814/.867/.840 | .456/.478/.467

HateBERT | Non-HS (P/R/F1) | Explicit HS (P/R/F1) | Implicit HS (P/R/F1)
RSA | .895/.895/.895 | .814/.830/.822 | .452/.382/.414
AAV | .899/.900/.899 | .819/.825/.822 | .428/.398/.412
RNE | .904/.891/.897 | .815/.850/.832 | .415/.355/.383
RI | .902/.894/.898 | .808/.845/.826 | .408/.312/.354
RA | .895/.904/.900 | .830/.823/.826 | .459/.419/.438
EDA | .890/.901/.895 | .808/.820/.814 | .454/.317/.373
BT | .910/.880/.895 | .820/.823/.822 | .378/.543/.446
GM | .901/.898/.899 | .824/.827/.825 | .414/.425/.419
GM+Revised | .905/.891/.898 | .816/.835/.826 | .408/.419/.414
ALL | .903/.896/.899 | .827/.827/.827 | .502/.559/.529

(b) Results of SOTA models using data augmentation on task B.

USE+SVM | Non-HS (P/R/F1) | Non-Subtle HS (P/R/F1) | Subtle HS (P/R/F1)
RSA | .891/.871/.881 | .787/.832/.809 | .800/.103/.182
AAV | .891/.871/.881 | .786/.831/.808 | .571/.103/.174
RNE | .891/.868/.879 | .783/.831/.806 | .571/.103/.174
RI | .891/.868/.879 | .782/.832/.806 | .750/.077/.140
RA | .891/.868/.879 | .783/.832/.806 | .800/.103/.182
EDA | .892/.870/.881 | .788/.828/.807 | .263/.128/.172
BT | .892/.868/.880 | .789/.831/.809 | .739/.436/.548
GM | .891/.867/.879 | .786/.827/.806 | .269/.179/.215
GM+Revised | .892/.866/.879 | .785/.826/.805 | .286/.205/.239
ALL | .888/.874/.881 | .797/.818/.807 | .263/.256/.260

BERT | Non-HS (P/R/F1) | Non-Subtle HS (P/R/F1) | Subtle HS (P/R/F1)
RSA | .894/.911/.902 | .840/.828/.834 | .200/.051/.082
AAV | .898/.896/.897 | .824/.836/.830 | .300/.154/.203
RNE | .899/.895/.897 | .826/.839/.833 | .400/.256/.312
RI | .899/.889/.894 | .819/.840/.830 | .240/.154/.188
RA | .906/.893/.900 | .823/.850/.836 | .190/.103/.133
EDA | .902/.885/.893 | .813/.845/.829 | .143/.077/.100
BT | .898/.900/.899 | .839/.832/.835 | .304/.359/.329
GM | .899/.899/.899 | .836/.839/.837 | .194/.154/.171
GM+Revised | .903/.893/.898 | .826/.843/.835 | .206/.179/.192
ALL | .904/.883/.893 | .813/.845/.829 | .385/.385/.385

DeBERTaV3 | Non-HS (P/R/F1) | Non-Subtle HS (P/R/F1) | Subtle HS (P/R/F1)
RSA | .922/.894/.908 | .826/.879/.852 | .333/.103/.157
AAV | .910/.907/.908 | .841/.858/.849 | .267/.103/.148
RNE | .923/.893/.907 | .829/.881/.854 | .261/.154/.194
RI | .910/.894/.902 | .828/.860/.843 | .364/.205/.262
RA | .923/.884/.903 | .815/.883/.848 | .188/.077/.109
EDA | .924/.888/.905 | .819/.882/.850 | .188/.077/.109
BT | .920/.897/.908 | .835/.876/.855 | .385/.256/.308
GM | .911/.910/.911 | .847/.860/.854 | .316/.154/.207
GM+Revised | .911/.902/.907 | .837/.856/.846 | .267/.205/.232
ALL | .926/.881/.903 | .817/.877/.846 | .306/.385/.341

HateBERT | Non-HS (P/R/F1) | Non-Subtle HS (P/R/F1) | Subtle HS (P/R/F1)
RSA | .900/.893/.896 | .823/.841/.832 | .273/.154/.197
AAV | .901/.894/.897 | .823/.842/.832 | .292/.179/.222
RNE | .897/.894/.896 | .823/.836/.829 | .167/.103/.127
RI | .906/.886/.896 | .812/.852/.832 | .176/.077/.107
RA | .897/.892/.895 | .816/.836/.826 | .077/.026/.038
EDA | .902/.890/.896 | .819/.845/.832 | .217/.128/.161
BT | .909/.883/.896 | .820/.848/.834 | .207/.308/.247
GM | .899/.898/.899 | .831/.834/.832 | .250/.231/.240
GM+Revised | .894/.898/.896 | .826/.826/.826 | .192/.128/.154
ALL | .903/.881/.892 | .816/.844/.830 | .391/.462/.424

Table 6: Obtained results on tasks A and B by all models and different types of augmented data.
The annotated corpora, and the accompanying annotation guidelines and software, can be found at https://github.com/benjaminocampo/ISHate
Acknowledgements
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.

In the following part, we provide a list of implicit properties with their definitions. All the examples illustrating implicit properties are used in implicit hateful messages and their descriptions are presented in the annotation guidelines.
Antithesis: the rhetorical contrast of ideas through parallel arrangements of words, clauses, or sentences (as in "action, not words" or "they promised freedom and provided slavery") (Merriam-Webster, 2022)
Black humor: humor marked by the use of usually morbid, ironic, grotesquely comic episodes; humor treating sinister subjects like death, disease, deformity, handicap or warfare with bitter amusement (Willinger et al., 2017)
Circumlocution: the use of an unnecessarily large number of words to express an idea (Merriam-Webster, 2022)
Context: the parts of a discourse that surround a word or passage and can throw light on its meaning (Dadvar et al., 2013)
Euphemism: the substitution of an agreeable or inoffensive expression for one that may suggest something unpleasant (Casas Gómez, 2009)
Exaggeration (hyperbole): an act or instance of exaggerating something: overstatement of the truth (Troiano et al., 2018)
Extralinguistic knowledge: any knowledge that exists outside knowledge of the language. In other words, it refers to knowledge that an author or a recipient of a message may possess about the message itself or about the world, but which is not expressed by any linguistic means.
Fallacy: a false or mistaken idea; an often plausible argument using false or invalid inference (Merriam-Webster, 2022)
Humiliation: the embarrassment and shame a person feels when someone makes them appear stupid or when they make a mistake in public (Dictionary, 2022)
Inference: something that is inferred. The premises and conclusion of a process of inferring (Merriam-Webster, 2022)
Irony: the use of words to express something other than and especially the opposite of the literal meaning; incongruity between the actual result of a sequence of events and the normal or expected result (Potamias et al., 2020)
Metaphor: a figure of speech in which a word or phrase literally denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them (Choi et al., 2021; Gao et al., 2018)
Metonymy: a figure of speech consisting of the use of the name of one thing for that of another of which it is an attribute or with which it is associated (such as "crown" in "lands belonging to the crown") (Merriam-Webster, 2022)
Rhetorical question: a question not intended to require an answer, used mainly for dramatic effect (Frank, 1990)
Sarcasm: a mode of satirical wit depending on its effect on bitter, caustic, and often ironic language usually directed against an individual. Sarcasm differs from irony with one distinct characteristic: negativity. Sarcasm is mostly witty mockery having a negative connotation whereas irony does not represent negativity (Potamias et al., 2020)
Sentiment: an attitude, thought, or judgment prompted by feeling; the emotional significance of a passage or expression as distinguished from its verbal context (Li et al., 2021)
Synecdoche: a figure of speech by which a part is put for the whole, the whole for a part, the species for the genus, the genus for the species, or the name of the material for the thing made (Merriam-Webster, 2022)
Visual signs: punctuation marks, quotes, and use of uppercase that play a role of support in hate messages.

C Annotation Tool Interface
Figure 3a demonstrates a screenshot of the annotation interface of the Label Studio tool used for the labeling process. According to the annotation scheme represented by three annotation layers (discussed in Section 3 and Subsection 4.2), Label Studio has three consecutive annotation steps. The first step consists in implicitness with three choices: Implicit HS, Explicit HS, Undecided, keeping in mind that the tool allows to filter Non-Hate out before starting the labeling process. The choice of Implicit HS or Explicit HS brings in the appearance of the second step of subtlety with three choices: Subtle, Non-Subtle, Undecided. This step does not appear with an Undecided choice at the previous step. In addition, the choice of Implicit HS triggers the appearance of the third step, which consists of the implicit properties characteristic of only implicit messages. Figure 3b shows the shape of the resultant dataset after annotation.

Figure 3: Label Studio interface to enhance the 7 HS datasets described in Section 4 with three new additional annotation layers: implicit_layer (Explicit HS/Implicit HS), subtlety_layer (Non-Subtle HS/Subtle HS), and implicit_props_layer (Antithesis/Black humor/Context/etc.). The annotation layer hateful_layer (Non-HS/HS) consists of the already provided labels of each HS corpus, with the exception of the Youtube dataset, which we re-annotated. (a) Annotation tool interface. (b) Sample of the ISHate dataset after the annotation process.
Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59.
Ron Artstein and Massimo Poesio. 2008. Survey article: Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.
Michael Wiegand, Maja Geulig, and Josef Ruppenhofer. 2021a. Implicitly abusive comparisons - a new dataset and linguistic analysis. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 358-368, Online. Association for Computational Linguistics.
Michael Wiegand, Josef Ruppenhofer, and Elisabeth Eder. 2021b. Implicitly abusive language - what does it actually look like and why are we not getting there? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 576-587, Online. Association for Computational Linguistics.
Ulrike Willinger, Andreas Hergovich, Michaela Schmoeger, Matthias Deckert, Susanne Stoettner, Iris Bunda, Andrea Witting, Melanie Seidler, Reinhilde Moser, Stefanie Kacena, David Jaeckle, Benjamin Loader, Christian Mueller, and Eduard Auff. 2017. Cognitive and emotional demands of black humour processing: the role of intelligence, aggressiveness and mood. Cognitive Processing, 18(2):159-167.
Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from bullying traces in social media. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 656-666, Montréal, Canada. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420, Minneapolis, Minnesota. Association for Computational Linguistics.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75-86.
Ziqi Zhang and Lei Luo. 2018. Hate speech detection: A solved problem? The challenging case of long tail on Twitter. CoRR, abs/1803.03662. |
245,279,384 | Improving Persian Relation Extraction Models by Data Augmentation | Relation extraction that is the task of predicting semantic relation type between entities in a sentence or document is an important task in natural language processing. Although there are many researches and datasets for English, Persian suffers from sufficient researches and comprehensive datasets. The only available Persian dataset for this task is PERLEX, which is a Persian expert-translated version of the SemEval-2010-Task-8 dataset. In this paper, we present our augmented dataset and the results and findings of our system, participated in the Persian relation Extraction shared task of NSURL 2021 workshop. We use PERLEX as the base dataset and enhance it by applying some text preprocessing steps and by increasing its size via data augmentation techniques to improve the generalization and robustness of applied models. We then employ two different models including ParsBERT and multilingual BERT for relation extraction on the augmented PERLEX dataset. Our best model obtained 64.67% of Macro-F1 on the test phase of the contest and it achieved 83.68% of Macro-F1 on the test set of PERLEX. | [] | Improving Persian Relation Extraction Models by Data Augmentation
Moein Salimi Sartakhti [email protected]
Shahid Beheshti University Tehran
Iran
Romina Etezadi [email protected]
Shahid Beheshti University Tehran
Iran
Mehrnoush Shamsfard
Shahid Beheshti University Tehran
Iran
Improving Persian Relation Extraction Models by Data Augmentation
Relation extraction, the task of predicting the semantic relation type between entities in a sentence or document, is an important task in natural language processing. Although there are many studies and datasets for English, Persian lacks sufficient research and comprehensive datasets. The only available Persian dataset for this task is PERLEX, which is a Persian expert-translated version of the SemEval-2010-Task-8 dataset. In this paper, we present our augmented dataset and the results and findings of our system, which participated in the Persian Relation Extraction shared task of the NSURL 2021 workshop. We use PERLEX as the base dataset and enhance it by applying some text preprocessing steps and by increasing its size via data augmentation techniques to improve the generalization and robustness of the applied models. We then employ two different models, including ParsBERT and multilingual BERT, for relation extraction on the augmented PERLEX dataset. Our best model obtained a Macro-F1 of 64.67% in the test phase of the contest and achieved a Macro-F1 of 83.68% on the test set of PERLEX.
Introduction
The task of detecting semantic relations between entities in a text is called Relation Extraction (RE). RE plays an important role in various natural language processing (NLP) tasks such as Information Extraction, Knowledge Extraction, Question Answering, Text Summarization, etc. According to the literature, RE tasks can be divided into two categories: sentence-level and document-level. The goal of the sentence-level RE task is to obtain the relation between two known (predefined) entities in a sentence. In contrast, the document-level RE task aims to extract the relationships among several entities in a long text which usually contains multiple sentences. Because of these differences, document-level relation extraction is more complicated than sentence-level relation extraction.
In the RE task, entities are string literals that are marked in the sentence, and the aim is to identify a limited number of predefined relationships between these entities from the input text. Different tasks can benefit from using RE. For example, suppose that the goal of an information extraction system is to extract corporations located in Iran from a text. For this purpose, the RE component may use the located-in predicate and Iran as the object of the relation to allow this information to be extracted. As another example, consider a question answering system that must answer a question about the cause of an event. It may exploit an RE task in which the relationship is Cause-Effect and the object is that specific event (Asgari-Bidhendi et al., 2021).
Another important application of RE is knowledge base creation. A knowledge base includes a set of entities and the relationships between them. Most of the available large knowledge bases, such as Yago (Suchanek et al., 2007), Freebase (Bollacker et al., 2008), DBpedia (Auer et al., 2007), and Wikidata (Vrandei and Krtzsch, 2014), are encoded in English. In Persian, there is a knowledge base (knowledge graph) called Farsbase (Asgari-Bidhendi et al., 2019). There are some standard RE datasets for the English language, such as SemEval-2010-Task-8 and TACRED. For Persian, which is a low-resource language in this field, the only RE dataset (to the authors' knowledge) is PERLEX, which is an expert-translated version of the SemEval-2010-Task-8 dataset.
PERLEX has 10717 sentences, and each sentence contains a relation and two entities. In PERLEX, the boundaries of each entity are specified by certain tokens. For example, the first entity uses the tags <e1> and </e1> for the start and end of the entity (and <e2> and </e2> are used for the second entity). Table 1 shows some examples of annotated sentences.
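For illustration, annotations in this format can be read with a few regular expressions; the helper name below is illustrative rather than part of the PERLEX release.

```python
# A minimal sketch of reading PERLEX-style annotations: extract the two marked
# entities and the sentence text with the boundary tags removed.
import re

E1 = re.compile(r"<e1>(.*?)</e1>")
E2 = re.compile(r"<e2>(.*?)</e2>")

def parse_example(sentence: str):
    e1 = E1.search(sentence)
    e2 = E2.search(sentence)
    plain = re.sub(r"</?e[12]>", "", sentence)
    return {"e1": e1.group(1) if e1 else None,
            "e2": e2.group(1) if e2 else None,
            "text": plain}

print(parse_example("The <e1>company</e1> fabricates plastic <e2>chairs</e2>."))
# {'e1': 'company', 'e2': 'chairs', 'text': 'The company fabricates plastic chairs.'}
```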
Our contributions in this work are as follows: (1) Using text augmentation techniques to increase the size of the PERLEX dataset.
(2) Preprocessing PERLEX to fix some of its issues, which improves the performance of the latest Persian relation extractor. In this paper, a relation extraction system is presented which was submitted to the Second Workshop on NLP Solutions for Under Resourced Languages (NSURL 2021). Some modifications of available models are adopted, and the effect of each modification on overall generalization and robustness is reported. The remainder of this paper is organized as follows: the methodology is described in Section 2. Section 3 shows the experimental results. Section 4 concludes the paper.
Methodology
Data Preprocessing
Although there are many datasets for English and other rich-resource languages, Persian has no comprehensive available resources for the RE task. Data annotation is a challenging, time-consuming, and costly task. Therefore, in the data preprocessing step we leverage techniques like text augmentation to increase the size of PERLEX. Some preprocessing is also applied to PERLEX. The preprocessing and text augmentation steps are shown in Figure 2.
The preprocessing and text augmentation procedures each include three sub-steps. The text preprocessing sub-steps are listed below:
• Swap the positions of wrongly placed tags
• Modify the unclear sentences
• Remove sentences which have more than one specific tag
As PERLEX is translated semi-automatically, it contains some problems, such as:
• Some of the sentences have more than one <e1>, </e1>, <e2>, or </e2> tag. As each sentence is supposed to contain one relation, such sentences are filtered out; 975 sentences have this problem and are removed from the dataset (see the 4th sentence in Table 1).
• In all of the sentences where <e2> (respectively <e1>) comes immediately before </e1> (respectively </e2>), the positions of these tags are swapped. This issue is fixed by detecting these sentences and swapping the tokens; 344 sentences have this problem (see the sketch after this list).
• Some of the unclear translated sentences in PERLEX have been modified.
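The two mechanical checks above can be sketched as follows; the function names are illustrative, and the whitespace handling in the swap is an assumption about how the misplaced tags appear.

```python
# A sketch of the preprocessing checks described above: drop sentences with
# duplicated entity tags, and swap an opening tag that appears immediately
# before the other entity's closing tag.
import re

def has_duplicate_tags(sentence: str) -> bool:
    return any(sentence.count(tag) > 1 for tag in ("<e1>", "</e1>", "<e2>", "</e2>"))

def fix_swapped_tags(sentence: str) -> str:
    # e.g. "... <e2></e1> ..." becomes "... </e1><e2> ..." (and symmetrically).
    sentence = re.sub(r"<e2>\s*</e1>", "</e1><e2>", sentence)
    return re.sub(r"<e1>\s*</e2>", "</e2><e1>", sentence)

def preprocess(sentences):
    kept = [s for s in sentences if not has_duplicate_tags(s)]   # 975 dropped in PERLEX
    return [fix_swapped_tags(s) for s in kept]                   # 344 fixed in PERLEX
```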
After the data preprocessing step, some noise is added and the text augmentation techniques are applied to increase the size of PERLEX. Some of the employed techniques are listed below:
• Deleting a token in each sentence randomly
• Swapping the positions of some tokens randomly
• Using the back-translation method (Shleifer, 2019) in order to increase the size of the PERLEX dataset
There are different ways of back-translating. For example, one way is to translate sentences to English, then to Arabic, and finally return them to Persian. However, in this paper, each sentence is translated from Persian to English and then back-translated to Persian using the Python API of the Google Translate package 1 . This method can thus increase the PERLEX size from 9381 to 18762 sentences. Reaching 18762 sentences for Persian is an important achievement for the RE task.
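The round trip can be sketched with the googletrans package mentioned above; its API differs slightly across versions, so the call pattern below follows the common Translator interface and may need adapting.

```python
# A sketch of the Persian -> English -> Persian round trip with googletrans.
from googletrans import Translator

def back_translate_fa(sentence: str) -> str:
    translator = Translator()
    english = translator.translate(sentence, src="fa", dest="en").text
    return translator.translate(english, src="en", dest="fa").text

# Note: the <e1>/<e2> boundary tags would need to be re-inserted after
# translation; that step is omitted here.
# Applying this to every PERLEX sentence roughly doubles the corpus
# (9381 -> 18762 sentences), as reported above.
```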
Applied Models
This section describes the models to which the data augmentation is applied: R-BERT (Wu and He, 2019) and RIFRE (Zhao et al., 2021). After the preprocessing and text augmentation steps, these two state-of-the-art models are employed.
R-BERT:
The main structure of R-BERT is shown in Figure 1. For a sentence with two target entities e1 and e2, $ is inserted at both the beginning and the end of the first entity, and # at both the beginning and the end of the second entity. Also, there is a [CLS] symbol at the beginning of each sentence. We fine-tune the pre-trained ParsBERT (Farahani et al., 2021) and Multilingual BERT (Libovick et al., 2019) models on the augmented PERLEX. In addition, Table 2 shows other hyperparameters of R-BERT. Furthermore, we experiment with different combinations of the embeddings produced by R-BERT to reach the best model (see embeddings A, B, and C in Figure 1). Some of the modifications on R-BERT are listed below:
• R-BERT V1: Averaging the three final embeddings in the fully connected layer rather than concatenating them (see Figure 1-C).
• R-BERT V2: Concatenating all of the embeddings of the tokens in each entity rather than averaging them (Figure 1-A).
• R-BERT V3: Using the last (first) token instead of averaging all of the embeddings of the tokens in the entities (Figure 1-B).
• Using Multilingual BERT and ParsBERT to reach the best decision.
RIFRE: This work proposes representation iterative fusion based on a heterogeneous graph neural network for joint entity and relation extraction. As shown in Figure 3, RIFRE models relations and words as nodes on a graph and updates the nodes through a message-passing mechanism. The model performs relation extraction after the nodes are updated. First, a subject tagger is used to detect all possible subjects on the word nodes. Then, RIFRE combines each word node with the candidate subject and relation, and an object tagger is used to tag the object on the new word nodes. In this paper, RIFRE is adopted with ParsBERT and Multilingual BERT.
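Returning to the R-BERT variations listed above, the entity-level pooling can be sketched as below. This is a simplified version (the original R-BERT applies extra fully connected layers and activations to each component before concatenation), the entity masks are assumed inputs, and the ParsBERT checkpoint identifier is an assumption since the paper does not list one.

```python
# A minimal sketch of an R-BERT-style classification head: the [CLS] vector and
# the averaged token embeddings of the two marked entities are concatenated and
# fed to a linear classifier (the V1/V2/V3 variants change this pooling step).
import torch
import torch.nn as nn
from transformers import AutoModel

class RBertHead(nn.Module):
    def __init__(self, encoder_name: str, num_relations: int, hidden: int = 768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(3 * hidden, num_relations)  # [CLS] + e1 + e2

    def forward(self, input_ids, attention_mask, e1_mask, e2_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state                       # (batch, seq, hidden)
        cls = h[:, 0, :]                                # [CLS] embedding
        e1 = self._avg(h, e1_mask)                      # mean over entity-1 tokens
        e2 = self._avg(h, e2_mask)                      # mean over entity-2 tokens
        return self.classifier(torch.cat([cls, e1, e2], dim=-1))

    @staticmethod
    def _avg(hidden, mask):
        mask = mask.unsqueeze(-1).float()               # (batch, seq, 1)
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1.0)

# Assumed checkpoint id for ParsBERT; 19 = 9 relations x 2 directions + Other.
# model = RBertHead("HooshvareLab/bert-fa-base-uncased", num_relations=19)
```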
Evaluation
There are three main ways to evaluate the RE classification results:
• Taking into account both variations (directions) of each class (18 classes in total).
• Using only one variation of each class (and considering directionality).
• Using only one variation of each class (and ignoring directionality).
Moreover, there are two approaches to calculating the F1-score: micro-averaging and macro-averaging. In this dataset, those pairs of entities that do not fall into any of the main nine classes are labeled as the "Other" class. The "Other" class does not participate in the evaluation phase. In this section, the official evaluation method of the SemEval-2010-Task-8 dataset is used, which is (9+1)-way classification with a macro-averaged F1-score while directionality is taken into account. This (9+1)-way setting means that the nine main classes plus Other are considered in training and testing, but "Other" is ignored when calculating the F1-scores.
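A simplified version of this scoring scheme is sketched below (the official SemEval scorer handles directionality in a more specific way); the function name and the toy labels are illustrative.

```python
# A sketch of macro-averaged F1 over the relation classes, with "Other" used in
# training/prediction but excluded from the average.
from sklearn.metrics import f1_score

def official_macro_f1(y_true, y_pred, relation_labels):
    scored = sorted(lab for lab in relation_labels if lab != "Other")
    return f1_score(y_true, y_pred, labels=scored, average="macro")

# Toy usage (illustrative labels only):
gold = ["Cause-Effect(e1,e2)", "Other", "Message-Topic(e1,e2)"]
pred = ["Cause-Effect(e1,e2)", "Message-Topic(e1,e2)", "Message-Topic(e1,e2)"]
print(official_macro_f1(gold, pred, set(gold) | set(pred)))
```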
Results
Development Phase
In the development phase, the PERLEX dataset is used and some improvements are achieved. Table 2 shows the major parameters used in the R-BERT experiments; the hyperparameters of RIFRE are shown in Table 3. Table 4 shows the performance of the various models that are used. According to Table 4, the R-BERT model produces the best results, while the RIFRE model produces the worst. Figures 4, 5 and 6 show the loss and F1-score values per epoch. According to these evaluations, simple R-BERT obtains better results than the V1, V2, and V3 variations of R-BERT. As Table 4 shows, the best model is the simple R-BERT, which achieves an F1-score of 83.86 on the test set.
Test Phase
Finally, results show that the proposed model reaches a Macro-F1 score of 64.67 on the shared task test data in the NSURL contest.
Conclusion
In this paper, the PERLEX dataset is used, a Persian expert-translated version of the SemEval-2010 Task 8 dataset. As data annotation is a challenging, time-consuming and costly task, we employ text preprocessing and text augmentation techniques such as back-translation, deleting random tokens, and swapping random tokens. The preprocessing and text augmentation increase the F1-score by about four percent compared to the previous best work on Persian. After preparing PERLEX, we apply two state-of-the-art models, namely R-BERT and RIFRE. In addition, we extend the R-BERT model by changing its structure. The pre-trained BERT models tested in this paper are ParsBERT and Multilingual BERT. Results show that ParsBERT with the simple R-BERT structure yields better results than the other variations of R-BERT and than RIFRE. The contributions of this paper are the use of text augmentation techniques to increase the size of the PERLEX dataset, and the preprocessing of the PERLEX dataset to fix some of its issues, which improves the performance of the latest Persian relation extractor.
Figure 1: R-BERT structure.
Figure 2: Text preprocessing and text augmentation procedure. The preprocessing sub-steps are: swapping the position of the wrong tag, modifying unclear sentences, and removing sentences that have more than one specific tag.
Figure 3: RIFRE structure.
Figure 4: F-score and loss per epoch on the V1 R-BERT.
Figure 5: F-score and loss per epoch on the V2 R-BERT.
Figure 6: F-score and loss per epoch on the V3 R-BERT.
Table 1: Some correct and wrong examples of the PERLEX.
Table 2: Major parameters used in the R-BERT experiments.
Parameters           Value
Batch size           16
Max sentence length  128
Adam learning rate   2e-5
Number of epochs     10
Dropout rate         0.1
Table 3: Parameter settings for the RIFRE model.
Parameters           Value
Batch size           16
Max sentence length  128
Adam learning rate   1e-1
Number of epochs     10
Dropout rate         0.1
Table 4: Performance of the models on PERLEX.
Models           F1-score
Simple R-BERT    83.86%
R-BERT V1        83.02%
R-BERT V2        83.11%
R-BERT V3        83.08%
RIFRE            79.54%
Table 5: Performance of the models on different relation types in PERLEX.
Relation Types       F1-score
Cause-Effect         61.70%
Content-Container    59.26%
Entity-Destination   76.01%
Entity-Origin        58.04%
Instrument-Agency    75.54%
Member-Collection    32.85%
Message-Topic        76.06%
Other                40.95%
https://pypi.org/project/googletrans/
Majid Asgari-Bidhendi, Ali Hadian, and Behrouz Minaei-Bidgoli. 2019. Farsbase: The Persian knowledge graph. Social Work, 10(6):1169-1196.

Majid Asgari-Bidhendi, Mehrdad Nasser, Behrooz Janfada, and Behrouz Minaei-Bidgoli. 2021. Perlex: A bilingual Persian-English gold dataset for relation extraction. Scientific Programming, 2021:1-8.

Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference (ISWC'07/ASWC'07), volume 4825, pages 722-735.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.

Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, and Mohammad Manthouri. 2021. ParsBERT: Transformer-based model for Persian language understanding. Neural Processing Letters, pages 1-17.

Jindřich Libovický, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual BERT? arXiv preprint arXiv:1911.03310.

Sam Shleifer. 2019. Low resource text classification with ULMFiT and backtranslation. arXiv preprint arXiv:1903.09244.

Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, pages 697-706.

Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.

Shanchan Wu and Yifan He. 2019. Enriching pre-trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.

Kang Zhao, Hua Xu, Yue Cheng, Xiaoteng Li, and Kai Gao. 2021. Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction. Knowledge-Based Systems, 219:106888.
227,230,312 | A guide to the dataset explosion in QA, NLI, and commonsense reasoning | Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work. Only for question answering we already have over 100 datasets, with over 40 published after 2018. However, most new datasets get "solved" soon after publication, and this is largely due not to the verbal reasoning capabilities of our models, but to annotation artifacts and shallow cues in the data that they can exploit.This tutorial aims to (1) provide an up-to-date guide to the recent datasets, (2) survey the old and new methodological issues with dataset construction, and (3) outline the existing proposals for overcoming them. The target audience is the NLP practitioners who are lost in dozens of the recent datasets, and would like to know what these datasets are actually measuring. Our overview of the problems with the current datasets and the latest tips and tricks in the dataset construction methodology will also be useful to the researchers working on future benchmarks. The tutorial slides are available online at https://www.annargrs.github.io/ dataset-explosion.Tutorial descriptionHigh-level verbal reasoning tasks are increasingly used as de-facto Turing test proxies in evaluating the language capabilities of NLP systems. In particular, question answering (QA), natural language inference (NLI) and commonsense reasoning are included in evaluation suites and featured in most papers proposing new architectures. Accordingly, these tasks are seeing an explosion of datasets: there are already over 100 datasets only for QA, with over 40 published since 2018. This makes the choice of data for a given study a research-intensive task in itself.The goals of the tutorial are as follows:• to provide an up-to-date guide to the recent datasets for training verbal reasoning systems;• to survey the old and new methodological issues; • to outline the existing proposals for overcoming them, and to highlight the biggest remaining challenges.This tutorial would be useful to NLP practitioners who simply want to pick a dataset and focus on modeling work, while being aware of potential issues that often go unnoticed. It would also be useful to the researchers working on new datasets and looking for the latest tips and tricks for overcoming the common pitfalls.Details and PrerequisitesThe tutorial will be of the cutting-edge type. The tutorial slides are available online at https://www. annargrs.github.io/dataset-explosion.Prerequisites. We assume basic familiarity with the standard machine learning evaluation workflow and the three tasks that we are covering (question answering, commonsense reasoning, natural language inference). We also assume some familiarity with the methodology of crowdsourcing NLP datasets. | [
6360322,
52054914,
13746570,
52895001,
4537113,
7228830,
52057510,
26501419,
3432876,
52019251,
204823992,
11816014,
201698258,
44156126,
19204066,
2593903,
47018994,
174800890,
196181887
] | A guide to the dataset explosion in QA, NLI, and commonsense reasoning
December 12th, 2020
Anna Rogers [email protected]
Center for Social Data Science
Dept. of Computer Science
University of Copenhagen Copenhagen
Denmark
Anna Rumshisky
Univ. of Massachusetts Lowell Lowell
USA
A guide to the dataset explosion in QA, NLI, and commonsense reasoning
Proceedings of the 28th International Conference on Computational Linguistics: Tutorial Abstracts
the 28th International Conference on Computational Linguistics: Tutorial AbstractsBarcelona, Spain27December 12th, 2020
Question answering, natural language inference and commonsense reasoning are increasingly popular as general NLP system benchmarks, driving both modeling and dataset work. Only for question answering we already have over 100 datasets, with over 40 published after 2018. However, most new datasets get "solved" soon after publication, and this is largely due not to the verbal reasoning capabilities of our models, but to annotation artifacts and shallow cues in the data that they can exploit.This tutorial aims to (1) provide an up-to-date guide to the recent datasets, (2) survey the old and new methodological issues with dataset construction, and (3) outline the existing proposals for overcoming them. The target audience is the NLP practitioners who are lost in dozens of the recent datasets, and would like to know what these datasets are actually measuring. Our overview of the problems with the current datasets and the latest tips and tricks in the dataset construction methodology will also be useful to the researchers working on future benchmarks. The tutorial slides are available online at https://www.annargrs.github.io/ dataset-explosion.Tutorial descriptionHigh-level verbal reasoning tasks are increasingly used as de-facto Turing test proxies in evaluating the language capabilities of NLP systems. In particular, question answering (QA), natural language inference (NLI) and commonsense reasoning are included in evaluation suites and featured in most papers proposing new architectures. Accordingly, these tasks are seeing an explosion of datasets: there are already over 100 datasets only for QA, with over 40 published since 2018. This makes the choice of data for a given study a research-intensive task in itself.The goals of the tutorial are as follows:• to provide an up-to-date guide to the recent datasets for training verbal reasoning systems;• to survey the old and new methodological issues; • to outline the existing proposals for overcoming them, and to highlight the biggest remaining challenges.This tutorial would be useful to NLP practitioners who simply want to pick a dataset and focus on modeling work, while being aware of potential issues that often go unnoticed. It would also be useful to the researchers working on new datasets and looking for the latest tips and tricks for overcoming the common pitfalls.Details and PrerequisitesThe tutorial will be of the cutting-edge type. The tutorial slides are available online at https://www. annargrs.github.io/dataset-explosion.Prerequisites. We assume basic familiarity with the standard machine learning evaluation workflow and the three tasks that we are covering (question answering, commonsense reasoning, natural language inference). We also assume some familiarity with the methodology of crowdsourcing NLP datasets.
Reading list
The core approaches to machine reading comprehension and several widely-used datasets are covered in the survey by Qiu et al. (2019). For NLI, we refer the reader to the surveys on resources and approaches (Storks et al., 2019b), as well as on issues with the current data (Schlegel et al., 2020). A survey of benchmarks and approaches is also available for commonsense reasoning (Storks et al., 2019a).
Tutorial outline
The tutorial will present three hours of content with a thirty minute break.
Motivation. We will start by discussing the place of high-level reasoning tasks in the current NLP system evaluation paradigm: how the focus shifted away from low-level tasks such as POS-tagging, and how low-level linguistic competences seem to be coming back (Ribeiro et al., 2020).
The dataset explosion. This first part of the tutorial will provide an overview of the main types of datasets for QA, NLI, and commonsense reasoning. For each sub-type, we will discuss representative dataset examples.
The field of question answering encompasses both open-world QA and reading comprehension (RC). Open-world QA focuses on factoid questions, with the answers typically extracted from web snippets or Wikipedia. The questions usually come from search engine queries (Kwiatkowski et al., 2019) and quiz data (Joshi et al., 2017). Bordering on open-world QA is the task of QA on structured data, such as tables and databases (Jiang et al., 2019).
Most current reading comprehension datasets are extractive (Rajpurkar et al., 2016; Dua et al., 2019), i.e. the correct answer is contained within the text itself, and the task is to find the correct span. Multiple-choice questions are harder to generate, as they need good confounds, and often come from curated test collections (Lai et al., 2017). Freeform answers remain rare (Bajaj et al., 2016), as evaluating them faces the general problem of evaluating language generation. Most RC datasets are single-domain, with a few exceptions (Reddy et al., 2018).

For NLI, we will organize the discussion in terms of domains covered by the current datasets: single-domain (Bowman et al., 2015), multi-domain (Williams et al., 2017), and specialized domains (Romanov and Shivade, 2018). In both QA and NLI there have also been attempts to recast datasets from other tasks as QA/NLI problems (McCann et al., 2018; White et al., 2017), and researchers working on NLI often rely on the datasets for the related problem of RTE (Dzikovska et al., 2013).

Commonsense reasoning datasets come in different formats: multi-choice reading comprehension (Ostermann et al., 2018), extractive reading comprehension, story completion (Mostafazadeh et al., 2017), and multi-choice questions for a single sentence input (Levesque et al., 2012). The task of commonsense reasoning is supposed to involve a combination of context-internal knowledge with context-external world knowledge, and we will briefly mention the major sources of such knowledge that are typically recommended in commonsense challenges, such as scripts (Wanzare et al., 2016), frames (Baker et al., 1998), and entity relations (Speer et al., 2017).
Reality check. One of the reasons there are so many new datasets is that most of them get "solved" very soon after publication, as it happened with CoQA (Reddy et al., 2018). However, this is not necessarily a testimony to the linguistic power of deep learning. It is becoming increasingly clear that, given the opportunity, our models exploit annotation artifacts and shallow lexical cues, achieving a high performance but not a high degree of language understanding. The second part of the tutorial will synthesize a string of papers exposing such issues (Niven and Kao, 2019;McCoy et al., 2019;Geva et al., 2019;Wallace et al., 2019).
To give a few examples, for QA it has been shown that human-level performance on SQuAD can be achieved while relying only on superficial cues (Jia and Liang, 2017), and 73% of NewsQA can be solved by simply identifying the single most relevant sentence (Chen et al., 2016). A system trained on one QA dataset does not tend to perform well on another one, even if it is in the same domain (Yatskar, 2019). Research on adversarial attacks suggests that it is possible to find dataset-specific phrases that will force a QA system to output a certain prediction when added to any input. For example, a SQuAD-trained QA system can be hacked in this way to always predict "to kill American people" as the answer to any question (Wallace et al., 2019).
In NLI, 67% of SNLI (Bowman et al., 2015) and 53% of MultiNLI (Williams et al., 2017) can be solved without looking at the premise (Gururangan et al., 2018). The HANS dataset showed that models trained on MNLI (Williams et al., 2017) actually learn to rely on shallow cues and can be fooled by syntactic heuristics (McCoy et al., 2019). Furthermore, the models trained on such datasets are unaware of lexical knowledge that would have enabled them to solve simple WordNet-based permutations of the original data (Glockner et al., 2018).
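Such hypothesis-only results are straightforward to reproduce; the sketch below trains a premise-blind baseline on SNLI, assuming the data is available through the HuggingFace datasets hub, and the simple TF-IDF classifier is purely illustrative.

```python
# Minimal sketch of a hypothesis-only NLI baseline (cf. Gururangan et al., 2018):
# the premise is discarded entirely, yet the classifier still performs far above
# chance. Assumes SNLI is available via the HuggingFace `datasets` hub.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

snli = load_dataset("snli")
train = snli["train"].filter(lambda x: x["label"] != -1)  # drop unlabeled pairs
test = snli["test"].filter(lambda x: x["label"] != -1)

vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train["hypothesis"])  # premises never used
X_test = vectorizer.transform(test["hypothesis"])

clf = LogisticRegression(max_iter=1000).fit(X_train, train["label"])
print(accuracy_score(test["label"], clf.predict(X_test)))
```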
In commonsense reasoning, by definition, the challenge is to get the system to make decisions based on both the current context and some general knowledge about the world. However, in the SemEval-2018 Task 11 challenge (Ostermann et al., 2018) most participants did not use any extra knowledge sources, and one of them still achieved 0.82 accuracy vs the 0.84 achieved by the ConceptNet-based winner. It is argued that large pre-trained language models already possess much of such knowledge: for instance, BERT (Devlin et al., 2018) achieved over 86% on SWAG (Zellers et al., 2018).
We will also mention the widespread methodological problem of under-reporting environment factors that may make as much difference as the proposed architecture changes. The effect of factors such as random seed, hardware, and library versions has been discussed for several QA datasets (Crane, 2018).
Methodology developments and challenges. For existing datasets, simply removing annotation artifacts will not solve the problem, as it creates other exploitable artifacts (Gururangan et al., 2018). Among the recent improvements in dataset collection methodology are complex queries that require aggregating information from several sources (Dua et al., 2019; Kocisky et al., 2018). Reliance on shallow patterns could be reduced by paraphrasing, including adversarial paraphrasing with a model-in-the-loop as an oracle that rejects questions that are too easy (Dua et al., 2019). Another alternative is balanced datasets with as many question types and genres as possible (Rogers et al., 2020). Diversity can also be somewhat improved with partly synthesized data (Labutov et al., 2018), but any templates or annotator examples are themselves potential sources of bias.

Questions are more difficult if they are collected independently from the text (Kwiatkowski et al., 2019), written from summaries (Trischler et al., 2016) or hints. Finally, unanswerable questions (Rajpurkar et al., 2018) in conjunction with adversarial inputs should also force the model to go beyond lexical pattern-matching.
A radically different direction is shifting to exclusively out-of-distribution evaluation (Linzen, 2020), e.g. with adversarial (McCoy et al., 2019) and multi-dataset (Fisch et al., 2019) evaluation. However, for that we still need to be aware of the training distribution, which becomes particularly challenging because with very large pre-trained models it is hard to guarantee that the test examples were not seen in pre-training (Brown et al., 2020).
Diversity efforts
The tutorial will be presented by an all-female team with a senior researcher and a post-doc as the lead organizer.
The survey will focus on English datasets, but we will provide references to the existing datasets in other languages that we are aware of.
6 Organizers
Anna Rogers, University of Copenhagen, [email protected]
Research interests: distributional and cognitive semantics, interpretability and evaluation of deep learning models, computational social science.
Organization: LREC T4 tutorial on compositionality in distributional semantics, CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations, the Third Workshop on Evaluating Vector Space Representations for NLP (NAACL 2019), the First Workshop on Insights from Negative Results in NLP (EMNLP 2020).

Anna Rumshisky, University of Massachusetts Lowell, [email protected]
Research interests: distributional semantics, biomedical and social NLP, temporal reasoning, machine learning/deep learning for NLP.
Organization: Program Chair for NAACL 2021, Organizer for LREC T4 tutorial on compositionality in distributional semantics, SemEval-2017 task 6 (#HashtagWars: Learning a Sense of Humor), Clinical Natural Language Processing Workshop at COLING 2016, NAACL 2019 and EMNLP 2020, SemEval-2019 task 11 (Normalization of Medical Concepts in Clinical Narrative), the Third Workshop on Evaluating Vector Space Representations for NLP (NAACL 2019), the First Workshop on Insights from Negative Results in NLP (EMNLP 2020).
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew Mcnamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang, arXiv:1611.09268MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. csPayal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, An- drew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv: 1611.09268 [cs].
The Berkeley Framenet project. Collin F Baker, Charles J Fillmore, John B Lowe, Proceedings of the 17th International Conference on Computational Linguistics. the 17th International Conference on Computational Linguistics1Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley Framenet project. In Proceedings of the 17th International Conference on Computational Linguistics, volume 1, pages 86-90.
A large annotated corpus for learning natural language inference. R Samuel, Gabor Bowman, Christopher Angeli, Christopher D Potts, Manning, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalSamuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal, 17-21 September 2015.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, arXiv:2005.14165Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec RadfordTom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Nee- lakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christo- pher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs], June.
A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. Danqi Chen, Jason Bolton, Christopher D Manning, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsLong Papers1Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2358-2367.
QuAC: Question Answering in Context. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-Tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumEunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question Answering in Context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174-2184, Brussels, Belgium.
Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results. Matt Crane, Transactions of the Association for Computational Linguistics. 6Matt Crane. 2018. Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results. Transactions of the Association for Computational Linguistics, 6:241-252.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. csJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. arXiv:1810.04805 [cs].
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, Matt Gardner, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. the 2019 Conference of the North American Chapter of the Association for Computational LinguisticsDheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 2368-2378.
Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Trang Dang, SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. Atlanta, Georgia, USA2Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013)Myroslava Dzikovska, Rodney Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Pe- ter Clark, Ido Dagan, and Hoa Trang Dang. 2013. SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (Se- mEval 2013), pages 263-274, Atlanta, Georgia, USA.
MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen, Proceedings of the 2nd Workshop on Machine Reading for Question Answering. the 2nd Workshop on Machine Reading for Question AnsweringHong Kong, ChinaAssociation for Computational LinguisticsAdam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 1-13, Hong Kong, China, November. Association for Computational Linguistics.
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. Mor Geva, Yoav Goldberg, Jonathan Berant, EMNLP. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. In EMNLP.
Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. Max Glockner, Vered Shwartz, Yoav Goldberg, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsShort Papers2Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655.
Annotation Artifacts in Natural Language Inference Data. Swabha Suchin Gururangan, Omer Swayamdipta, Roy Levy, Samuel Schwartz, Noah A Bowman, Smith, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies2Short PapersSuchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112.
Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, Percy Liang, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingRobin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031.
FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase. Kelvin Jiang, Dekun Wu, Hui Jiang, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. the 2019 Conference of the North American Chapter of the Association for Computational LinguisticsKelvin Jiang, Dekun Wu, and Hui Jiang. 2019. FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 318-323.
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611.
The NarrativeQA Reading Comprehension Challenge. Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, Edward Grefenstette, Transactions of the Association for Computational Linguistics. 6Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The NarrativeQA Reading Comprehension Challenge. Transactions of the Association for Computational Linguistics, 6:317-328.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association of Computational Linguistics. Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav PetrovKenton LeeTom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association of Computational Linguistics.
Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds. Igor Labutov, Bishan Yang, Anusha Prakash, Amos Azaria, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers1Igor Labutov, Bishan Yang, Anusha Prakash, and Amos Azaria. 2018. Multi-Relational Question Answering from Narratives: Machine Reading and Reasoning in Simulated Worlds. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 833-844, Melbourne, Australia.
RACE: Large-scale ReAding Comprehension Dataset From Examinations. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingGuokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794.
The Winograd Schema Challenge. J Hector, Ernest Levesque, Leora Davis, Morgenstern, Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning. the Thirteenth International Conference on Principles of Knowledge Representation and ReasoningHector J Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Proceed- ings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, pages 552-561.
How Can We Accelerate Progress Towards Human-like Linguistic Generalization?. Tal Linzen, arXiv:2005.00955Tal Linzen. 2020. How Can We Accelerate Progress Towards Human-like Linguistic Generalization? arXiv:2005.00955 [cs], May.
The Natural Language Decathlon. Bryan Mccann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher, arXiv:1806.08730Multitask Learning as Question Answering. cs, statBryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The Natural Language De- cathlon: Multitask Learning as Question Answering. arXiv:1806.08730 [cs, stat].
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. Tom Mccoy, Ellie Pavlick, Tal Linzen, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyTom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy.
Shared Task: The Story Cloze Test. Nasrin Mostafazadeh, Michael Roth, Nathanael Chambers, Annie Louis, Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-Level Semantics. the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-Level SemanticsNasrin Mostafazadeh, Michael Roth, Nathanael Chambers, and Annie Louis. 2017. LSDSem 2017 Shared Task: The Story Cloze Test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-Level Semantics, pages 46-51.
Probing Neural Network Comprehension of Natural Language Arguments. Timothy Niven, Hung-Yu Kao, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyTimothy Niven and Hung-Yu Kao. 2019. Probing Neural Network Comprehension of Natural Language Argu- ments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, Florence, Italy.
SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge. Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, Manfred Pinkal, Proceedings of The 12th International Workshop on Semantic Evaluation. The 12th International Workshop on Semantic EvaluationNew Orleans, LouisianaSimon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. SemEval-2018 Task 11: Machine Comprehension Using Commonsense Knowledge. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 747-757, New Orleans, Louisiana.
Boyu Qiu, Xu Chen, Jungang Xu, Yingfei Sun, arXiv:1906.03824A Survey on Neural Machine Reading Comprehension. Boyu Qiu, Xu Chen, Jungang Xu, and Yingfei Sun. 2019. A Survey on Neural Machine Reading Comprehension. arXiv:1906.03824 [cs], June.
SQuAD: 100,000+ Questions for Machine Comprehension of Text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
Know What You Don't Know: Unanswerable Questions for SQuAD. Pranav Rajpurkar, Robin Jia, Percy Liang, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, Australia2Short Papers)Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia.
Siva Reddy, Danqi Chen, Christopher D Manning, arXiv:1808.07042CoQA: A Conversational Question Answering Challenge. csSiva Reddy, Danqi Chen, and Christopher D. Manning. 2018. CoQA: A Conversational Question Answering Challenge. arXiv:1808.07042 [cs].
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. Tongshuang Marco Tulio Ribeiro, Carlos Wu, Sameer Guestrin, Singh, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineAssociation for Computational LinguisticsMarco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online, July. Association for Computational Linguistics.
Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks. Anna Rogers, Olga Kovaleva, Matthew Downey, Anna Rumshisky, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceAnna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8722-8731.
Lessons from Natural Language Inference in the Clinical Domain. Alexey Romanov, Chaitanya Shivade, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingAlexey Romanov and Chaitanya Shivade. 2018. Lessons from Natural Language Inference in the Clinical Domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586- 1596.
Viktor Schlegel, Goran Nenadic, Riza Batista-Navarro, arXiv:2005.14709Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models. Viktor Schlegel, Goran Nenadic, and Riza Batista-Navarro. 2020. Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models. arXiv:2005.14709 [cs], May.
Conceptnet 5.5: An open multilingual graph of general knowledge. Robert Speer, Joshua Chin, Catherine Havasi, Thirty-First AAAI Conference on Artificial Intelligence. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.
Shane Storks, Qiaozi Gao, Joyce Y Chai, arXiv:1904.01172Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches. Shane Storks, Qiaozi Gao, and Joyce Y. Chai. 2019a. Commonsense Reasoning for Natural Language Under- standing: A Survey of Benchmarks, Resources, and Approaches. arXiv:1904.01172 [cs], April.
Shane Storks, Qiaozi Gao, Joyce Y Chai, arXiv:1904.01172Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches. Shane Storks, Qiaozi Gao, and Joyce Y. Chai. 2019b. Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches. arXiv:1904.01172 [cs], November.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman, arXiv:1611.09830NewsQA: A Machine Comprehension Dataset. csAdam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Sule- man. 2016. NewsQA: A Machine Comprehension Dataset. arXiv:1611.09830 [cs].
Universal Adversarial Triggers for Attacking and Analyzing NLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh, EMNLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal Adversarial Triggers for Attacking and Analyzing NLP. EMNLP.
DeScript : A Crowdsourced Corpus for the Acquisition of High-Quality Script Knowledge. D A Lilian, Alessandra Wanzare, Stefan Zarcone, Manfred Thater, Pinkal, Language Resources and Evaluation Conference. Lilian D. A. Wanzare, Alessandra Zarcone, Stefan Thater, and Manfred Pinkal. 2016. DeScript : A Crowd- sourced Corpus for the Acquisition of High-Quality Script Knowledge. In Language Resources and Evaluation Conference, pages 3494-3501.
Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework. Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei, TaiwanLong Papers1Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996-1005, Taipei, Taiwan.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. Adina Williams, Nikita Nangia, Samuel R Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana1Long PapersAdina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana.
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, Christopher D Manning, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumZhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium.
A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC. Mark Yatskar, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. the 2019 Conference of the North American Chapter of the Association for Computational LinguisticsMark Yatskar. 2019. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 2318-2323.
SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumRowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium.
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme, arXiv:1810.12885ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension. csSheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension. arXiv:1810.12885 [cs]. |
250,390,697 | Investigating phonological theories with crowd-sourced data: The Inventory Size Hypothesis in the light of Lingua Libre | Data-driven research in phonetics and phonology relies massively on oral resources, and access thereto. We propose to explore a question in comparative linguistics using an open-source crowd-sourced corpus, Lingua Libre, Wikimedia's participatory linguistic library, to show that such corpora may offer a solution to typologists wishing to explore numerous languages at once. For the present proof of concept, we compare the realizations of Italian and Spanish vowels (sample size = 5000) to investigate whether vowel production is influenced by the size of the phonemic inventory (the Inventory Size Hypothesis), by the exact shape of the inventory (the Vowel Quality Hypothesis) or by none of the above. Results show that the size of the inventory does not seem to influence vowel production, thus supporting previous research, but also that the shape of the inventory may well be a factor determining the extent of variation in vowel production. Most of all, these results show that Lingua Libre has the potential to provide valuable data for linguistic inquiry. | [
3164985,
209515879
] | Investigating phonological theories with crowd-sourced data: The Inventory Size Hypothesis in the light of Lingua Libre
July 14, 2022
Mathilde Hutin [email protected]
Université Paris-Saclay, LISN-CNRS (UMR 9015)
Bât 507, 91405 Orsay, France
Marc Allassonnière-Tang [email protected]
Muséum national d'Histoire naturelle, Laboratoire Eco-Anthropologie (UMR 7206)
17, place du Trocadéro, 75016 Paris, France
Investigating phonological theories with crowd-sourced data: The Inventory Size Hypothesis in the light of Lingua Libre
19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
July 14, 2022
Data-driven research in phonetics and phonology relies massively on oral resources, and access thereto. We propose to explore a question in comparative linguistics using an open-source crowd-sourced corpus, Lingua Libre, Wikimedia's participatory linguistic library, to show that such corpora may offer a solution to typologists wishing to explore numerous languages at once. For the present proof of concept, we compare the realizations of Italian and Spanish vowels (sample size = 5000) to investigate whether vowel production is influenced by the size of the phonemic inventory (the Inventory Size Hypothesis), by the exact shape of the inventory (the Vowel Quality Hypothesis) or by none of the above. Results show that the size of the inventory does not seem to influence vowel production, thus supporting previous research, but also that the shape of the inventory may well be a factor determining the extent of variation in vowel production. Most of all, these results show that Lingua Libre has the potential to provide valuable data for linguistic inquiry.
Introduction
One of the main challenges in data-driven research on the phonetics-phonology interface is the access to reliable, exploitable oral resources in sufficient amounts. While linguists working on other linguistic levels such as semantics or syntax can use written data as a proxy for language production, phoneticians and phonologists are limited to oral data, thus relying on audio recordings for vocal languages or video recordings for signed languages. Accessing massive amounts of such data is difficult enough, especially for studies in language comparison, that require such amounts in not one, but at the very least two languages.
To overcome this challenge, researchers have developed two strategies. On the one hand, they can collect their own corpora, e.g., the CMU Wilderness Corpus (Black, 2019) or its emanation, the VoxClamantis corpus (Salesky et al., 2020), or other types of language-specific laboratory recordings such as the TIMIT database for English (Garofolo et al., 1993) or NCCFr for French (Torreira et al., 2010). On the other hand, they can gather audio recordings from other sources such as TV or radio shows, as was done for instance in the framework of the international project OSEO Quaero (www.quaero.org/), or from audio books, as exemplified by the LibriSpeech corpus for English (Panayotov et al., 2015, www.openslr.org/12). Both options have the disadvantage of being overly costly, both in money and in human resources, and are sometimes not freely accessible to the community. A third path has recently been explored: crowd-sourced data, recorded by volunteers and therefore much less costly in time and money, and generally open-source. The project Common Voice (Ardila et al., 2020, https://commonvoice.mozilla.org), for instance, was launched in 2017 by Mozilla for the intended purpose of creating a free database for the development of speech recognition software. As of March 2022, it contains ∼18,000 hours of speech, 14,000 of which have been validated by other speakers, in 87 languages.
In the present paper, we explore a similar project: Lingua Libre, a participatory linguistic media library developed by Wikimedia France (https://lingualibre.org). It was launched in 2015 and, as of March 2022, it counts ∼700,000 recordings in 148 languages across 775 speakers. This database is interesting to explore because it differs from Common Voice in that its aim is not primarily the development of new technologies, or even linguistic inquiry in general, but the patrimonial conservation of languages. Lingua Libre has been used only once for academic purposes, i.e., to automatically estimate the transparency of orthographies in 17 languages (Marjou, 2021). With this study, we aim to show that such data can be easily processed and useful for answering phonological questions in linguistic typology. In this proof of concept, we explore the realization of vowels by comparing two Romance languages: Italian and Spanish.
The outline of the paper is as follows. In Section 2, we describe our research question to justify our choice of languages. In Section 3, we present our corpus and methodology. In Section 4, we provide an analysis of the vowels in Italian and Spanish. Section 5 concludes and discusses the results.
The Inventory Size Hypothesis vs the Vowel Quality Hypothesis
In this paper, we offer to use Lingua Libre to tackle the question of vowel production with regards to vowel inventory. Our research question stems from various theories regarding the shape of vowel inventories in the world's languages. Our study however focuses on synchronic phonetic variation with regards to phonological systems (on the phylogeny of vowel systems in the languages of the world, see Zhang and Gong (2022) and references therein). The original Vowel Dispersion Theory (Liljencrants and Lindblom, 1972;Lindblom, 1986) and a few years later the Adaptive Dispersion Theory (Lindblom, 1990), stem from the H&H ("Hypoand Hyperspeech") model of communication, that assumes that speakers tend toward minimal and sufficient perceptual contrast, i.e., operate a trade-off between articulatory economy (hypospeech) and perceptual distinctiveness (hyperspeech). In the original works, these theories are the foundation for phylogenetic research on the distribution of vocalic categories in the languages of the world, for instance to explain why three-vowel systems usually display /a, i, u/ and not, say, /a, y, u/. Phoneticians however have particularly focused on one hypothesis that emerges from this model: The more vocalic categories the language has in its phonemic inventory, the less phonetic variation the corresponding vowel realizations will display. This is the hypothesis we ourselves focus on in the present paper, to which we will refer as the Inventory Size Hypothesis, henceforth ISH.
This hypothesis has been tested in a number of studies, with contradictory results. Jongman et al. (1989) on American English, Greek and German, Al-Tamimi and Ferragne (2005) on French and two dialects of Arabic and Larouche and Steffann (2018) on Quebec French and Inuktitut support the ISH while Bradlow (1995) on English and Spanish, Meunier et al. (2003) on English, Spanish and French, Recasens and Espinosa (2009) on 5 dialects of Catalan, Lee (2012) on 5 dialects of Chinese and Heeringa et al. (2015) on 3 German languages, do not provide evidence in favor of the ISH, which can be due, for the last three at least, to the genetic and geographical closeness of the languages and possible bilingualism of the speakers. Studies on larger sets of languages however tend to invalidate the hypothesis: Engstrand and Krull (1991) found inconclusive results on 7 languages across 6 language families; Livijn (2000) Building on these negative results, we suggest that it may not so much be the number of categories but their actual quality that influences the vowel's realizations. For instance, between two imaginary languages A and B displaying /a, e, i, o, u/ vs /a, e, i, y, o, u/ respectively, it is also possible that not all the categories in language B will display less variation than those in language A: Only [i] and possibly [u], which compete with /y/ in B but not in A, would show less variation in B than in A. We propose to refer to this restatement of the original hypothesis, as the Vowel Quality Hypothesis, henceforth VQH.
In this paper, we aim to test these competing hypotheses: either the ISH is valid, and all the vowels of the system will be affected by the size of the inventory, or the VQH is more accurate, and only some vowels or some acoustic parameters will be affected, depending on the other vowels comprised in the system. The third possible outcome is that neither the ISH nor the VQH is accurate.
Materials and Methodology
As a crowd-sourcing tool, Lingua Libre allows any speaker to log in, fill in a profile with basic metadata for themselves or for other speakers, and record themselves or their guests reading lists of words in their language. The device detects pauses, which allows the recording to end when the word has been read and the next recording to start automatically afterwards, effortlessly generating relatively short audio files for each word. Each audio file is supposed to be named following the template 'Language - Speaker - Item'. For example, for the recording 'spa.-Marreromarco-solucionar.wav', the language is Spanish ('spa'), the speaker ID is 'Marreromarco', and the recorded item is 'solucionar', 'solve'. All audio files are under a Creative Commons licence, i.e., open-source.
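As a minimal illustration, file names following this template can be split back into their metadata fields with a few lines of code. The study's own pipeline uses R; the Python sketch below is only illustrative, and the hyphenated separator in the regular expression is our assumption rather than a Lingua Libre specification.

import re
from pathlib import Path

# Assumed pattern for 'Language - Speaker - Item' file names,
# e.g. 'spa.-Marreromarco-solucionar.wav'.
FILENAME_PATTERN = re.compile(r"^(?P<lang>[a-z]{2,3})\.?-(?P<speaker>[^-]+)-(?P<item>.+)\.wav$")

def parse_recording_name(path):
    """Return (language, speaker, item) for one audio file, or None if mislabeled."""
    match = FILENAME_PATTERN.match(Path(path).name)
    if match is None:
        return None  # improperly labeled files are discarded, as noted in the conclusion
    return match.group("lang"), match.group("speaker"), match.group("item")

print(parse_recording_name("spa.-Marreromarco-solucionar.wav"))
# -> ('spa', 'Marreromarco', 'solucionar')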
First, the recordings are scraped from the Lingua Libre database. In the present study, we extract a subsample of 500 items for each of /a, e, i, o, u/ in each language, to compensate for the fact that the two languages have different numbers of data points and to control for the number of speakers (5) in each language. In total, we have 500 occurrences for each of the 5 vowels in both Italian and Spanish, which results in 5,000 tokens. To avoid a potential sampling bias, the sampling of tokens is conducted 10 times. We also took care to limit our investigation to the European variety of Spanish, to avoid any mismatch with the more limited geographical spread of Italian.
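The replicated, balanced sampling can be sketched as follows, assuming the scraped tokens sit in a pandas data frame with one row per vowel occurrence and columns named lang and vowel; the column names and seeding scheme are our assumptions, and the actual study performs this step in R.

import pandas as pd

def balanced_samples(tokens: pd.DataFrame, n_per_cell: int = 500, n_replications: int = 10):
    """Yield (replication index, sample) pairs with n_per_cell tokens per (lang, vowel) cell."""
    for rep in range(n_replications):
        sample = (tokens
                  .groupby(["lang", "vowel"], group_keys=False)
                  .sample(n=n_per_cell, random_state=rep))
        yield rep, sample

# Example: ten replications of 500 tokens per vowel and language.
# for rep, sample in balanced_samples(tokens):
#     ...compute per-replication statistics...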
Second, the recordings are segmented and aligned using WebMAUS (Kisler et al., 2017), the online open-access version of the MAUS software (Schiel, 2004). MAUS creates a pronunciation hypothesis graph based on the orthographic transcript of the recording (extracted from the name of the audio file) using a grapheme-to-phoneme converter. During this process, the orthographic transcription is converted to the Speech Assessment Methods Phonetic Alphabet (SAMPA). The signal is then aligned with the hypothesis graph and the alignment with the highest probability is chosen. Experiments have shown that the MAUS-based alignment is 95% accurate compared to human-based alignments (Kipp et al., 1997).
Third, the selected vowels are extracted from the recordings and analyzed in terms of formants. For each recording of each vowel, the mean F1 and F2 over the entire sound are calculated. Mean formants are used to attenuate the effect of co-articulation with the left and right contexts. Table 1 shows an example of the extracted and compiled data used in this study. Each occurrence of a vowel is given a unique identifier to allow tracking it within a word that has several vowels. The language ISO code is provided along with the values of F1 and F2. Finally, the recorded word and its contributor are also noted. For the whole process, the following R packages are used: emuR (Winkelmann et al., 2021), PraatR (Albin, 2014), and tidyverse (Wickham, 2017).

Results: Shape of the inventory, more than size, influences vowel production
We focus on the F1 and F2 values for the 5 vowels that Spanish and Italian have in common, /a, e, i, o, u/. Our hypothesis is that, if the ISH is valid, we will find cross-language differences in variation in both F1 and F2 for all vowels, while if the VQH is valid, we will find such differences only in F1, especially for /a/, /e/ and /o/, which are in direct competition with /E/ and /O/. As general information, Figure 1 provides the mean values for F1 (top tier) and F2 (bottom tier) in Italian (left brackets) and Spanish (right brackets) for all 5 vowels of interest. It shows that F1 is significantly lower in Spanish for all 5 vowels, while F2 is significantly higher only for back vowels.
To test our hypotheses, however, we are less interested in the F1 and F2 values in general than in their variation. Figure 2 shows the variation coefficient (standard deviation divided by the mean) of F1 (top tier) and F2 (bottom tier) for each replication of each vowel category in Italian (left brackets) and Spanish (right brackets). Each point represents the variation coefficient of one formant and one vowel for one replication. These results show that there is significantly less variation in F1 in Italian /a/, /e/, /o/ and /u/ than in Spanish, thus supporting the VQH. The difference between F2 variation coefficients is also significant but inverted for /e/, /i/, and /u/, where we observe more variation for Italian than for Spanish, thus invalidating the ISH. These results are also supported by the linear mixed models we conducted (in both Bayesian and non-Bayesian versions) based on the 500 data points from each of the 10 replications. First, Table 2 shows that the estimate for the variation of Spanish for F1 is five times larger than the one for F2. Furthermore, we also observe that the variation is generally larger for most of the vowels in F1 (except for /a/), while the direction of the effect varies for F2, with negative estimates for /e/ and /i/. The same observation is found when comparing the overall areas covered by the polygons formed by the contours of F1 and F2. We conduct a 2D kernel density estimation (Venables and Ripley, 2002) to extract the contours of the area covered by the occurrences of each vowel in the two-dimensional F1-F2 space. While there is generally more variation in Spanish than in Italian, this varies across vowels, as /e/ and /i/ tend to have a smaller formant space in general.
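A sketch of the two per-vowel statistics used above, the coefficient of variation and the area of the 2D formant cloud, is given below in Python. It assumes the mean F1/F2 values are already available as numeric arrays, and the grid-based contour-area computation is an illustrative stand-in for the kde2d-based procedure actually used (Venables and Ripley, 2002).

import numpy as np
from scipy.stats import gaussian_kde

def coefficient_of_variation(values):
    """Standard deviation divided by the mean, as plotted in Figure 2."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean()

def formant_cloud_area(f1, f2, mass=0.95, grid=200):
    """Approximate area (Hz^2) of the region holding `mass` of a 2D kernel
    density fitted to the (F1, F2) points of one vowel in one language."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    kde = gaussian_kde(np.vstack([f1, f2]))
    xs = np.linspace(f1.min(), f1.max(), grid)
    ys = np.linspace(f2.min(), f2.max(), grid)
    xx, yy = np.meshgrid(xs, ys)
    density = kde(np.vstack([xx.ravel(), yy.ravel()]))
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    ordered = np.sort(density)[::-1]
    cumulative = np.cumsum(ordered) * cell
    idx = min(np.searchsorted(cumulative, mass), ordered.size - 1)
    return float((density >= ordered[idx]).sum() * cell)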
Conclusion and discussion
We used crowd-sourced data to test two competing hypotheses in language typology: the production of vowels is influenced either by the size of the inventory or by its shape. Our proof of concept on Italian and Spanish shows that the size of the inventory does not influence the realization of vowels, but the exact quality of the vowels at hand does. Our study also points to several caveats. First, not all audio files were properly labeled, and improperly labeled files were unusable. Moreover, from a human point of view, it should be noted that crowd-sourced data heavily rely on the participants' good will and that researchers have no choice but to trust the provided metadata. One possible solution to that last problem would be for Lingua Libre to propose a verification tool, as Common Voice does, to improve the reliability of the data and metadata. Nevertheless, crowd-sourced data proved to be a promising tool for linguistic inquiry, especially for investigating language universals, and could thus be tested on more substantial sets of languages.
Figure 1: Distribution of formants for each of the 500 [a], [e], [i], [o], and [u] tokens across the Italian and Spanish data extracted from Lingua Libre. The significance labels indicate the output of a Wilcoxon test with Bonferroni correction.
Figure 2: Distribution of the variation coefficient for each of the 500 [a], [e], [i], [o], and [u] tokens across the Italian and Spanish data extracted from Lingua Libre in each of the replications. The significance labels indicate the output of a Wilcoxon test with Bonferroni correction.
To test our hypothesis, we focus on the F1 and F2 values of the vowels in two Romance languages: Spanish and Italian. Spanish has a limited vowel inventory, with only 5 categories /a, e, i, o, u/, while Italian has 7: /a, E, e, i, o, O, u/. Their inventories differ only in the number of degrees of aperture (Spanish has open, mid and closed vowels while Italian has open, mid-open, mid-closed and closed vowels), which manifests as variation in the first formant, F1. If the ISH is valid, we expect vowel productions from each language to differ in both F1 and F2, while if the VQH is valid, we expect Spanish and Italian vowels to differ only in F1.
Dep.Var   Pred   Est     t value   p value
CV F1     spa    0.05    6.97      ***
CV F1     /e/    0.06    5.79      ***
CV F1     /i/    0.07    6.36      ***
CV F1     /o/    0.04    3.41      ***
CV F1     /u/    0.12    11.19     ***
CV F2     spa    0.01    3.35      **
CV F2     /e/    -0.04   -6.87     ***
CV F2     /i/    -0.07   -11.64    ***
CV F2     /o/    0.11    16.56     ***
CV F2     /u/    0.15    23.45     ***
Area      spa    212     8.981     ***
Area      /e/    -88     -2.35     *
Area      /i/    -210    -5.63     ***
Area      /o/    230     6.16      ***
Area      /u/    196     5.25      ***
Table 2: Output of the linear mixed models based on 10 vowel samplings with 500 tokens for each vowel in Italian and Spanish. Areas are given in units of thousands. Abbreviations: Pred = predictor, Est = estimate, CV = coefficient of variation, Dep.Var = dependent variable.
Menghan Zhang and Tao Gong. 2022. Structural variability shows power-law based organization of vowel systems. Frontiers in Psychology, 13.
Acknowledgments

This research was partially supported by Institut DATAIA and the MSH Paris-Saclay in the framework of the Excellency Award for the project OTELO (OnTologies pour l'Enrichissement de l'analyse Linguistique de l'Oral; PI Ioana Vasilescu and Fabian Suchanek), and by the French National Research Agency in the framework of the grant EVOGRAM: The role of linguistic and nonlinguistic factors in the evolution of nominal classification systems, ANR-20-CE27-0021 (PI Marc Allassonnière-Tang). The authors would also like to thank the Wikimedia community for their interest in the project, and in particular Lucas Lévêque for his help on the Lingua Libre tool.
References

Jalal-Eddin Al-Tamimi and Emmanuel Ferragne. 2005. Does vowel space size depend on language vowel inventories? Evidence from two Arabic dialects and French. In INTERSPEECH EUROSPEECH 2005, pages 2465-2468, Lisbonne, Portugal.

Aaron Albin. 2014. PraatR: An architecture for controlling the phonetics software "Praat" with the R programming language. Journal of the Acoustical Society of America, 135(4):2198.

Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common Voice: A massively-multilingual speech corpus. In Proceedings of LREC.

Alan W Black. 2019. CMU Wilderness multilingual speech dataset. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971-5975.

Ann R. Bradlow. 1995. A comparative acoustic study of English and Spanish vowels. The Journal of the Acoustical Society of America, 97:1916-1924.

Olle Engstrand and D. Krull. 1991. Effects of inventory size on the distribution of vowels in the formant space: preliminary data from seven languages. PERILUS, pages 15-18.

John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue. 1993. TIMIT acoustic-phonetic continuous speech corpus. Linguistic Data Consortium.

Cédric Gendrot and Martine Adda-Decker. 2007. Impact of duration and vowel inventory on formant values of oral vowels: An automated formant analysis from eight languages. In International Conference on Phonetic Sciences, pages 1417-1420, Saarbrücken, Germany.

Wilbert Heeringa, Heike Schoormann, and Jörg Peters. 2015. Cross-linguistic vowel variation in Saterland: Saterland Frisian, Low German, and High German. The Journal of the Acoustical Society of America, pages 25-29.

Allard Jongman, Marios Fourakis, and Joan A. Sereno. 1989. The acoustic vowel space of Modern Greek and German. Language and Speech, 32(3):221-248.

Andreas Kipp, Maria-Barbara Wesenick, and Florian Schiel. 1997. MAUS goes iterative. In Proceedings of the Fifth European Conference on Speech Communication and Technology (EUROSPEECH 1997).

Thomas Kisler, Uwe Reichel, and Florian Schiel. 2017. Multilingual processing of speech via web services. Computer Speech & Language, 45:326-347.

Chloé Larouche and François Steffann. 2018. Vowel space of French and Inuktitut: An exploratory study of the effect of vowel density on vowel dispersion. In Proceedings of the Workshop on the Structure and Constituency of Languages of the Americas, volume 21.

Wai-Sum Lee. 2012. A cross-dialect comparison of vowel dispersion and vowel variability. In 2012 8th International Symposium on Chinese Spoken Language Processing, pages 25-29.

Johan Liljencrants and Björn Lindblom. 1972. Numerical simulation of vowel quality systems: The role of perceptual contrast. Language, 48(4):839-862.

Björn Lindblom. 1986. Phonetic universals in vowel systems. Experimental Phonology, pages 13-44.

Björn Lindblom. 1990. Explaining phonetic variation: A sketch of the H&H theory. In William J. Hardcastle and Alain Marchal, editors, Speech Production and Speech Modelling, pages 403-439. Springer Netherlands, Dordrecht.

Peter Livijn. 2000. Acoustic distribution of vowels in differently sized inventories - hot spots or adaptive dispersion? PERILUS, pages 93-96.

Xavier Marjou. 2021. OTEANN: Estimating the transparency of orthographies with an artificial neural network. In Proceedings of the Third Workshop on Computational Typology and Multilingual NLP, pages 1-9. Association for Computational Linguistics.

Christine Meunier, Cheryl Frenck-Mestre, Taissia Lelekov-Boissard, and Martine Le Besnerais. 2003. Production and perception of vowels: does the density of the system play a role? In HAL archives ouvertes, pages 723-726. Université Autonome de Barcelone.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210.

Daniel Recasens and Aina Espinosa. 2009. Dispersion and variability in Catalan five and six peripheral vowel systems. Speech Communication, 51:240-258.

Elizabeth Salesky, Eleanor Chodroff, Tiago Pimentel, Matthew Wiesner, Ryan Cotterell, Alan W Black, and Jason Eisner. 2020. A corpus for large-scale phonetic typology. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4526-4546, Online. Association for Computational Linguistics.

Florian Schiel. 2004. MAUS goes iterative. In Proceedings of LREC 2004, pages 1015-1018.

Francisco Torreira, Martine Adda-Decker, and Mirjam Ernestus. 2010. The Nijmegen corpus of casual French. Speech Communication, 52:201-212.

W. N. Venables and Brian D. Ripley. 2002. Modern Applied Statistics with S, 4th edition. Statistics and Computing. Springer, New York.

Hadley Wickham. 2017. tidyverse: Easily install and load the Tidyverse. R package version 1.2.1.

Raphael Winkelmann, Klaus Jaensch, Steve Cassidy, and Jonathan Harrington. 2021. emuR: Main Package of the EMU Speech Database Management System. R package version 2.3.0.
37,266,700 | Votter Corpus: A Corpus of Social Polling Language | The Votter Corpus is a new annotated corpus of social polling questions and answers. The Votter Corpus is novel in its use of the mobile application format and novel in its coverage of specific demographics. With over 26,000 polls and close to 1 millions votes, the Votter Corpus covers everyday question and answer language, primarily for users who are female and between the ages of 13-24. The corpus is annotated by topic and by popularity of particular answers. The corpus contains many unique characteristics such as emoticons, common mobile misspellings, and images associated with many of the questions. The corpus is a collection of questions and answers from The Votter App on the Android operating system. Data is created solely on this mobile platform which differs from most social media corpora. The Votter Corpus is being made available online in XML format for research and non-commercial use. The Votter android app can be downloaded for free in most android app stores. | [
7811096,
18039958
] | Votter Corpus: A Corpus of Social Polling Language
Nathan David Green [email protected]
Septina Dian Larasati [email protected]
GlobeOtter
Votter Corpus: A Corpus of Social Polling Language
Social Media, Corpora, Question and Answer, Annotation
The Votter Corpus is a new annotated corpus of social polling questions and answers. The Votter Corpus is novel in its use of the mobile application format and novel in its coverage of specific demographics. With over 26,000 polls and close to 1 millions votes, the Votter Corpus covers everyday question and answer language, primarily for users who are female and between the ages of 13-24. The corpus is annotated by topic and by popularity of particular answers. The corpus contains many unique characteristics such as emoticons, common mobile misspellings, and images associated with many of the questions. The corpus is a collection of questions and answers from The Votter App on the Android operating system. Data is created solely on this mobile platform which differs from most social media corpora. The Votter Corpus is being made available online in XML format for research and non-commercial use. The Votter android app can be downloaded for free in most android app stores.
Introduction
Social media has changed the way a generation communicates, as well as how the language is being used. From SMS text messages on phones, to Twitter (http://twitter.com) and Facebook (http://facebook.com) updates, all the way to your classroom essay, the language itself has been fundamentally changed. Addressing language changes in Natural Language Processing (NLP), under most situations, means statistically based techniques, which require a massive amount of data from a variety of data sets. Traditional data sets in NLP typically have been oriented around news, finance, and some sports. It is not news or shocking to anyone in the field that these data sets are inadequate for handling new media types. The language used in newspapers is far different from that used on mobile phones, social networks, or even in day-to-day communication. While there are some corpora for modern media, they typically fall into three categories: Wikipedia (http://wikipedia.org), Facebook, or Twitter. Votter (http://globeotter.com) differs from these in its use of the medium. The Votter Corpus comes from mobile phones only and is almost entirely question and answer based, often solely based on opinion. Votter is unique for its short question and answer format. Most questions are opinion based, so it contains different types of information compared to Yahoo Answers (http://answers.yahoo.com/) or Quora (http://quora.com). Additionally, the system allows images to be associated with the questions, opening up further research avenues for image processing as well. Demographically, Votter gives NLP researchers access to a very specific group of mostly female users ranging from age 13 to 24.
Background
The Twitter Corpus out of Edinburgh (Petrović et al., 2010) is a corpus with a goal similar to the Votter Corpus's: to capture current language being used. With approximately 100 million tweets, it covers its domain rather well. Votter differs in a few critical aspects. First, Votter is not character limited like Twitter. This means the language is not abbreviated as often. If a word is abbreviated in the Votter Corpus, there is a greater chance that the abbreviation has become a standard. Second, Twitter covers multiple genres of statements and questions, whereas Votter is a very specific domain of questions and answers. The French Social Media Bank (Seddah et al., 2012) is a similar effort to the Twitter corpus. Additionally, it includes other user-generated texts such as Facebook messages. It is currently being used as a test for parsing and part of speech (POS) accuracy on noisy data. Parsing and POS tagging are rather early-stage processes in NLP; much of the intended use of social media for NLP can be seen in higher-level applications such as sentiment analysis (Habernal et al., 2013), which has been applied to Czech social media, and opinion mining (Martínez-Cámara et al., 2013). In both of these areas we believe Votter's opinion-based question and answer data will be of use.
Votter App
The Votter app is a social polling app for the Android operating system and is currently available for all Android phones and tablets. The app allows users to post polls in a number of categories and have other Android users answer those polls. Polls come in a few varieties:
• Text Poll: A text question with up to ten text based answers e.g -Q: "What is your favorite animal?"
-A: 1) Dog 2) Cat 3) Fish ...
• One Image Poll: Contains a text question and up to ten text based answers but also includes an image e.g.
-Q: "Do you think my dog is cute?" (an image is included in the poll)
-A: 1) yes 2) no

• Two Image Poll: Contains a text question. The answers are 2 possible images and the user selects one of the images as their answer. This does not contain any text answers e.g.
-Q: "Which dress would be best for prom?"
-A: Two images are included to choose from.
These poll types can be seen in Figure 1 as actual screenshots from the app. One can easily see how the three types allow different analyses, from text to images. Errors are intentionally left in, such as the common misspelling of "which" as "witch" seen in the middle figure.
The votes of each user are tracked and tallied for a final result. Everyone who has voted can see the result. The average number of votes per user is 47 across the entire Votter user base. We have 26,000 polls and just shy of 1 million votes at this time.
Votter users can log in via Votter's registration page or via Facebook Login. The users are overwhelmingly younger and female. The demographics for the Facebook users can be seen in Figure 2. It shows that most of Votter's users are females between the ages of 13 and 24. This demographic is reflected in the category distribution seen in Table 1, with users mostly concentrating on "Fashion" and "Am I Pretty" photos of themselves. We see far less activity in the "Travel" and "Politics" categories, where users of this age might not be of age to vote or travel independently.
Data
The Votter Corpus is a collection of poll questions and the corresponding answers to those polls. The poll questions are entirely created by Votter users and are answered by other Votter users. The corpus not only consists of the polls and answers that Votter users created, but also includes other information such as the poll results, categories and timestamps.
Annotation
User Annotation: Some parts of the Votter Corpus are annotated by the users as they create the poll. The parts of the corpus that are created by the Votter users are as follows:
1. Question: The user supplies a question, which is open to all users for voting. These questions may be seeking an opinion or the correct answer to a question.

2. Set of possible answers: This is a set of possible answers to the given question for other Votter users to vote on. Interestingly, some users submit emoticons or certain expressions as their possible answers instead of standard answers, as seen in Table 3. We allow up to 10 different possible answers for each question.
3. Images: When creating a poll, Votter users can use images in two ways. First, the user may submit an image to add to any textual poll question with a set of textual possible answers. Second, we allow users to submit two images as a set of possible image answers to a poll with a textual question. In this case, the images are used as the possible answers for the question. For instance, a poll question "Which photo should I use as my profile picture?" can have two image possible answers and no textual possible answers.
Category:
Each poll submitted has a category assigned to it as selected by the user from a list of categories. These categories are predefined by the Votter development team. If a category is deemed to be incorrect, users can report polls that fall into the wrong category and they can be corrected on the backend.
System Annotation:
Additional automatic data is recorded for each question, including timestamps and an anonymous user id. The Votter Corpus also gives the voting results for the set of possible answers for a given poll.
Data Format

Figure 3 shows one poll snippet from the Votter Corpus. The Votter Corpus is stored in a simple XML format. One poll in the Votter Corpus consists of seven main XML tags. Those tags are:
• poll: One poll in Votter. The poll has an id_poll attribute that is a unique id number assigned to the poll.
• creator: A Votter user id that created the poll.
• category: A category assigned to a poll. There are 14 possible categories (see Table 1).
• date: the poll submission date.
• question: the poll question.
• answergroup: the set of possible answers.
• answer: one of the possible answers. The answer has id_answer and count as its attributes. The id_answer attribute is a unique id assigned to the answer. It is a combination of the id_poll and the sequence number of the possible answer in the poll. The count attribute is the number of votes for a particular possible answer.
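As a minimal illustration of working with this layout, the Python sketch below reads polls of the form shown in the Figure 3 sample that follows; it assumes the individual poll records are wrapped in some root element, which is not specified here, and uses only the tags and attributes listed above.

import xml.etree.ElementTree as ET

def read_polls(path):
    """Yield (question, category, answers) tuples from a Votter Corpus XML file,
    where answers is a list of (text, vote count) pairs."""
    root = ET.parse(path).getroot()
    for poll in root.iter("poll"):
        question = (poll.findtext("question") or "").strip()
        category = (poll.findtext("category") or "").strip()
        answers = [((answer.text or "").strip(), int(answer.get("count", "0")))
                   for answer in poll.iter("answer")]
        yield question, category, answers

# Example: print the most-voted answer of every Sports poll.
# for question, category, answers in read_polls("votter_corpus.xml"):
#     if category == "Sports" and answers:
#         print(question, max(answers, key=lambda a: a[1]))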
<poll id_poll="15474"> <creator>20627</creator> <category>Sports</category> <date>2013-05-10 17:32:11</date> <question>Who is going to win the NBA finals? </question> <answergroup> <answer id_answer="15474_0" count="12"> Miami heat</answer> <answer id_answer="15474_1" count="7"> San Antonio spurs</answer> <answer id_answer="15474_2" count="3"> Oklahoma City thunder</answer> <answer id_answer="15474_3" count="7"> Chicago Bulls</answer> <answer id_answer="15474_4" count="4"> Golden state warriors</answer> <answer id_answer="15474_5" count="4"> Memphis grizzlies</answer> </answergroup> </poll> Figure 5 show the top 10 unigrams that appear in poll questions and in each poll's possible answers respectively. The results show typical usage of questions and answers with a slight twist of "pretty" and "ugly" being in the top counts. Table 2 shows the total and unique n-gram counts for up to n = 3 for poll questions, poll answers, and a mix of both. It is interesting to note the use of repeated language in the counts. For each dataset the unigrams are roughly 5% unique, bigrams 24% unique, and trigrams are 48% unique. Another feature of the corpus is that the questions seems to have different language style than the answer data set. This Table 1 shows the poll categories and their corresponding number of polls. While Votter was initially intended for political discussion and debate, "Am I Pretty", "Fashion" and "Dating" clearly are the most popular categories. This is a very positive outcome, as it gives researchers a very focused corpus given the demographics. Votter users use many textual emoticons to portray their emotions on the polls questions and answers. Table 3 shows the usage of some frequent emoticons. These may be used for sentiment analysis of both text and possibly their related images.
Conclusion
We have shown and released a new data set covering a new social media format, social polling.
Future Work
We currently provide the user with a means to discuss their poll and the results of the poll. We associate a discussion board with each poll. The discussion board is powered by Disqus (http://disqus.com), a blog comment hosting service. In the future we can harvest further opinions given by the users related to each poll. We plan to apply named entity recognition to this work, to capture named entities and relate them to the sentiment that the users are expressing through their opinions in the poll and discussion. We plan to add more information about the poll creators and the voters, such as their age group, location, gender, etc., to provide a better understanding of the language style used in the questions and answers, and of the trends of the votes in the Votter Corpus.
Acknowledgments
Votter users, Android technology, selfie pictures, and trending pop culture. You guys rock!
References
Habernal, I., Ptáček, T., and Steinberger, J. (2013). Sentiment analysis in Czech social media using supervised learning. In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 65-74, Atlanta, Georgia, June. Association for Computational Linguistics.
Figure 1: Screenshots of three typical Votter polls: a text poll on the left, a two-image poll in the middle, and a one-image poll on the right.
Figure 2: Demographics of Votter's Facebook users.
Figure 3: A poll snippet from the Votter Corpus in XML format.
Figure 4: Top 10 unigrams in the poll questions.
Figure 5: Top 10 unigrams in the poll sets of possible answers.
Table 2: N-gram counts for the Votter Corpus.
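The total and unique n-gram counts summarized in Table 2 can be reproduced with a sketch like the following; whitespace tokenization and lowercasing are simplifying assumptions on our part.

from collections import Counter

def ngram_counts(texts, n):
    """Return (total, unique) n-gram counts over whitespace-tokenized texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return sum(counts.values()), len(counts)

# Example: uniqueness ratio of question trigrams
# (roughly 48% for trigrams in the Votter Corpus).
# total, unique = ngram_counts(questions, 3)
# print(unique / total)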
The contents of the corpus are opinionated questions and answers that differ greatly from other question and answer sites. Additionally, photos are available for some corresponding questions. The corpus contains over 552,946 unigrams, many differing from typical n-grams. With very unique demographics, the Votter Corpus should be useful as a new data set for NLP researchers dealing with question/answer systems and modern social language, amongst other NLP tasks.

Emoticon  Question  Answer
:)        310       568
(:        295       352
:(        38        230
:/        25        188
:D        26        89
:-)       35        41
C:        5         57
c:        13        31
:3        14        23
):        5         25
:C        0         28
:o        10        17
:*        6         20
/:        6         19
D:        2         23
:P        4         21
:p        9         13

Table 3: Several emoticon examples and their occurrence counts in the question and answer data.
Martínez-Cámara, E., Martín-Valdivia, M. T., Molina-González, M. D., and Ureña López, L. A. (2013). Bilingual experiments on an opinion comparable corpus. In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 87-93, Atlanta, Georgia, June. Association for Computational Linguistics.

Petrović, S., Osborne, M., and Lavrenko, V. (2010). The Edinburgh Twitter corpus. In Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics in a World of Social Media, pages 25-26, Los Angeles, California, USA, June. Association for Computational Linguistics.

Seddah, D., Sagot, B., Candito, M., Mouilleron, V., and Combet, V. (2012). The French Social Media Bank: a treebank of noisy user generated content. In Proceedings of COLING 2012, pages 2441-2458, Mumbai, India, December. The COLING 2012 Organizing Committee.
12,746,854 | What is Missing in User-Centric MT? | This paper describes some of the kinds of predictable errors in Machine Translation (MT). It then discusses means of alerting end-users of MT to the possible presence of such errors, including by providing training and/or by providing automated MT ratings, MT color coding and/or symbols, and footnotes and annotation. It also discusses the need for some kind of reliability measure and/or information to the MT consumer, and the likelihood of the MT user being open to using this kind of input. Some of the suggestions made for usercentric MT are also applicable to translatorcentric MT. | [
18208555,
235127291,
14900300
] | What is Missing in User-Centric MT?
Jennifer Decamp [email protected]
The MITRE Corporation
7515 Colshire Drive, McLean, VA 22001, USA
What is Missing in User-Centric MT?
This paper describes some of the kinds of predictable errors in Machine Translation (MT). It then discusses means of alerting end-users of MT to the possible presence of such errors, including by providing training and/or by providing automated MT ratings, MT color coding and/or symbols, and footnotes and annotation. It also discusses the need for some kind of reliability measure and/or information to the MT consumer, and the likelihood of the MT user being open to using this kind of input. Some of the suggestions made for usercentric MT are also applicable to translatorcentric MT.
Introduction
What is missing in MT? Some text may not be translated. Some relationships may be reversed. Some names may be wrongly translated. Some negatives may get lost. However, the text may read reasonably well, and the consumer may not realize that there are substantive errors that may affect his/her understanding and decisions. This paper addresses some of the types of consistent, meaningful errors and proposes means for communicating this variation in reliability to the consumer of Machine Translation (MT) output. Some of the suggestions made for user-centric MT may also be applicable to translator-centric MT.
What is User-Centric MT?
User-centric computing is a phenomenon that has emerged primarily in the last decade: users are searching for, deciding on, and often translating the information they need. As Van der Meer (1994) observes, "The one source of information provided by the product manufacturer, the government, the doctor, or the hospital is now being replaced by dozens-if not hundreds-of alternative and competing information sources. Tips and tricks from other users, prescriptions from multiple healthcare organizations, and analyses of government data from private sources may be much more valuable than the 'authoritative' information from the original publisher."
[Figure 1 (Publisher-Centric Translation): End Users, Data Owners, Translation]
These users then employ online MT to access the information in their language of choice: hence, "User-Centric MT".
[Figure 2 (User-Centric MT): MT, End Users]
Microsoft was one of the notable pioneers of this approach for product literature, providing the MT developed for internal use at their company to premium users as a perk. The MT enabled the users to translate and thus access larger sections of the online Microsoft website.
Government organizations are also making greater use of User-Centric MT. Bemish (2008) observed that "Using advanced tools like MT has allowed analysts and investigators to see data that would have taken years to translate and compile."
What do MT Providers Do for Users?
In the past few years, the Association for Machine Translation of the Americas (AMTA) and the MT Summit have provided an increasing focus at their conferences on providing tools for translators, primarily for post-editing, thus creating translator-centric MT. However, little has been done to support the users who are not translators and who are utilizing MT from sites such as Babelfish, Altavista, Systran's own websites, Google Translate, Free Translation (SDL International), ProMT, Gist-in-Time, PARS, Microsoft Windows Live Translator, and others, or from other resources.
There are a few exceptions. LanguageWeaver added a confidence rating to some of its systems in 2007. However, Gerber comments that "not a lot of attention was drawn to it, and I believe they have never gotten any feedback on its usefulness." Systran for many years has enabled users to add their own terms to its online MT at www.systranet.com.
However, tools for the end-user of MT seem to have received little attention and/or to have fallen off the community's radar. There are many reasons for this lack of focus. One reason provided by Gerber (2009) is that "users (and more importantly buyers) don't demand such tools." Of course, if the users are unaware of such tools, they are unlikely to ask for them.
In addition, much of the user-centric MT has been with free MT systems on the Internet, so there has been little incentive for MT companies to commit additional development resources to provide tools. Some of the MT, such as Systran's free resources, was put online not for production purposes but for education. As Gachot (2005) pointed out, users became more knowledgeable about MT by playing with it.
Gerber also comments that "MT developers are aiming at so many different user environments, it can be hard to figure out which environment/users to target." Tapling (2008) pointed out that the MT field has been segmented by technology rather than by user needs. Perhaps as this focus shifts and as the growing volume of MT makes market segmentation more feasible, tools can be better targeted.
Even so, there is a significant market segment of people other than translators and post-editors using online MT systems. Each of these users has a stake in knowing the reliability of the MT output. Moreover, the fact that these users are employing MT indicates that many may not know the foreign language or have the time or resources to otherwise assess the reliability of the translation.
It may be that MT providers do not believe users are ready to accept such tools and may even be put off using MT by being presented with too many caveats. The last decade has been characterized by considerable growth in the sophistication of users concerning computer tools. It has also been characterized by increasingly realistic views of MT. A couple of weeks ago, a translator commented that her customers used to think that footnotes decreased the readability and thus the usability of translations, but that they now like footnotes.
Another reason may be a perceived lack of appropriate tools and underlying research. The automated MT evaluation tools at the forefront of MT assessment (e.g., BLEU, METEOR, etc.) require gold-standard reference translations of the same material. Such tools are thus probably not feasible for assessing the reliability of new translations where a reference translation is not available. These tools are also oriented towards evaluating software development rather than the communicative value of a text, although new work on task-based metrics (e.g., Friedman and Strassell 2008) may in the future provide automated ratings more useful to end users.
There may also just be too many problems in MT to correct. It is significant that the Pan American Health Organization (PAHO) only color codes items that they are certain are correct (e.g., that are perfect matches in a Translation Memory or that come from an organizational terminology; Gerber 2009). To provide tools to correct all problems is not feasible. The only means of reasonably ensuring that all problems are addressed is to employ an excellent post-editor (i.e., a human) and preferably also an excellent second editor. Even so, the diminution of significant errors that may cause misunderstandings and bad decisions may still be a benefit to the users.
There are also those of us who are very concerned about unreviewed MT being used for any decision-making, due to the many problems with quality and reliability. However, despite our astute advice, people are increasingly using raw MT output.
One further reason for the lack of focus on user tools may just be the research focus that has permeated the MT community, particularly in the United States. For instance, in a presentation at the 2008 Conference of the Association for Machine Translation of the Americas (AMTA), Chang-Meadows described consistent errors with the Chinese particle "de" (的), resulting in confusion about who is doing what to whom or who reports to whom. When I raised the question of whether users could be alerted to such problems, the response from the DARPA program manager and his team was that the problem had been fixed. However, while the problem had been fixed from a research standpoint, it is still not fixed in the MT systems that are available to commercial and most Government users.
Part of this research focus and drive has been to fix MT as opposed to providing the user with explanations of what is wrong or missing or with tools for the users to fix the problem themselves. In addition, from a research and development standpoint, these problems are well known. They are mainly old news and not cutting edge research.
In any case, it may be a good time to review ways to help the users of MT.
What is Missing in MT?
There is a wealth of information in the MT research, development, and post-editing communities concerning common and predictable problems of MT, including those of specific MT systems. The following examples are a few from a study conducted by Chang-Meadows (2008) comparing the output of Google, Microsoft Translate, and SYSTRAN Chinese-to-English MT.
Change in Subordination
Chang-Meadows found predictable errors in the use of the Chinese particle "de" (的), resulting in confusion about who is doing what to whom or who reports to whom. For instance, in the following example, the Google MT version could be read as the Hua Jian Group investing in the Chinese Academy of Sciences instead of the reverse, as in the human translation.
Original:
华建集团中国科学院直接投资成立的高科技企业
Human Translation:
The Huajian Group is (a high-tech enterprise invested and established directly by the China Academy of Sciences).
Google:
Hua Jian Group is a direct investment in the establishment of the Chinese Academy of Sciences of the high-tech enterprises.
Blank Space
One high-risk practice in several MT systems is to omit text with no indication that something has been omitted. In LanguageWeaver MT, for instance, the default setting for handling unknown words is to simply omit them from the text. The Microsoft translation for the example above was "Hua -group was direct investment set up high -tech enterprises", which omitted any reference to the Chinese Academy of Sciences.
A second example is as follows, where the Google example omits the name of the enterprise:
Original:
大三通是目前中国最大的GPS连锁企业和营运成绩最好的企业
Human Translation:
Dasantong is China's (largest GPS chain enterprise in China) and (the enterprise that has the best operational results.)
Google:
At present, China is the largest chain of businesses and operating GPS the best of the enterprise
Names, Acronyms, and Abbreviations
There are fairly consistent problems with names, acronyms, and abbreviations. For instance, in the example below, the Systran MT system translated the "Lanya" in the name as "blue", changing "the Wuhai City Lanya Chemical limited liability company" to "The Wuhai blue Asia chemical industry Limited liability company". This example also shows the predictable distortion in the translation of proper nouns:
Original:
乌海市兰亚化工有限责任公司
Human Translation:
Wuhai City Lanya Chemical limited liability company
Google:
Wuhai City LAN Ya Chemical Co., Ltd.
Systran:
The Wuhai blue Asia chemical industry Limited liability company
Microsoft:
Wuhai LAN Asia chemical co., Ltd.
Convoluted Complex Text
As Chang-Meadows points out, MT predictably does less well on convoluted and complex text:
Original:
该实验室多年来一直致力于环境工程和试验技术、可靠性工程和试验技术、环境测量分析和预计技术、电磁环境效应等方面的探索和研究工作,同时为各行业提供了大量的环境与可靠性试验服务。
Google:
The lab has for many years been committed to environmental engineering and testing technology, reliability engineering and testing technology, environmental analysis and measurement is expected to technology, electromagnetic environmental effects, such as the exploration and research work, while for the industry to provide a large number of environment and reliable Test service.
Systran:
This laboratory has for many years devoted to the environment project and the experimental technology, the reliability project and the experimental technology, the environment survey analyzes and estimated that technical, aspect and so on electromagnetic environment effect explorations and the research work, simultaneously have provided the massive environment and the reliability test for various professions serve.
Microsoft:
The Laboratory efforts in environmental engineering and pilot technology, reliability engineering and pilot technical, environmental measurement analysis and estimated technology, electromagnetic environment effect aspects in the exploration and research work, at the same time for various industries provides a number of environmental and reliability testing services.
What Works Well?
As researchers and many editors point out, what works well with MT is simple structure and factual information.
Simple Structure
Bernth and McCord (2000) conducted studies showing the impact of simplified text on translation quality. Shubert and Spyridakis (1995) and Spyridakis, Homback, and Shubert (1997) showed that in many cases, the use of simplified English (as can be measured automatically) can improve MT results.
Original:
生产场地宽敞整洁, 生产设备一流, 生产技术先进
Google: Production sites spacious and clean, firstclass production equipment, advanced production technology.
Systran:
Produces the location spaciously neat, production equipment first-class, production technological advance.
Microsoft: production venues spacious clean production equipment first-class production technology, advanced.
Factual Information
Good output also occurred with simple factual information about personnel, assets, and services:
Original:
集团公司拥有研发、流通和生产企业140余家,并在全球数十个国家和地区建立了近百家海外分支机构。至2007年底,资产总额近1500亿元,主营业务收入突破1300亿元,员工30万人。
Google: Group owned research and development, production and circulation of more than 140 enterprises, and dozens of countries in the world and the establishment of nearly 100 overseas branches. To the end of 2007, with total assets of nearly 150 billion yuan, the main business income of 130 billion yuan breakthrough, employees 300,000 people.
Systran:
The Group has the research and development, the circulation and Production enterprise 140, and has established nearly hundred overseas Branch office in the entire nodule number ten countries and the area. By the end of 2007, the gross asset nearly 150,000,000,000 Yuan, the main business income tops 130,000,000,000 Yuan, the staff 300,000 people.
Microsoft: owns r&d, circulation and production enterprise 140, and in the global dozens of countries and regions have established nearly 100 overseas branch offices. to the end of 2007, the total assets of nearly 1 500 billion, the primary business income breakthrough 1,300 billion, an employee 30 000 people.
What Can We Do?
There are numerous strategies that could be tried to help users of MT manage their risk, including providing training, providing ratings, marking errors or high-risk output, providing tools for the user to evaluate the likelihood of errors given the input, and/or providing footnotes and annotations.
Provide Training
One risk mitigation strategy would be to provide training to users of Machine Translation. The poor readability of FAMT used to be at least some warning to readers to be careful about using the results. However, the improvements in readability, particularly with SMT, have now increased the risk of users over-trusting the results. Some U.S. Government MT systems provide a statement on the coversheet of the translation that the contents are machine translated and should be used with caution. At least one U.S. Government system provides online training. However, overall, there is little guidance on how to use those materials. What may be helpful for MT sites in general is a description of what to expect from the MT output and tips on how to improve the output by changing the input, in situations where changing the input is feasible.
There is still very little public training in understanding MT output. Free online MT services have enabled people to play with MT and to recognize both the potential and a few of the problems. However, limited play with a few usually short phrases is not sufficient preparation for using MT for real decision-making.
There are many efforts to provide language technology training, such as the European Commission's Multilingual E-Learning in Language Engineering (MELLANGE) project (part of the European Leonardo da Vinci program) and the Localization Industry Standards Association (LISA) Education Initiative Taskforce (LEIT). Such efforts, however, focus on the translators and language technology specialists and not on the average user of machine translation.
Teaching the general public how to better understand and use MT may be a good goal for professional organizations such as AMTA, its international counterparts, and the MT Summit to undertake during the next few years.
Provide Ratings
There have been numerous efforts to develop rating systems for machine translatability, as was discussed previously regarding LanguageWeaver and IBM. LanguageWeaver confidence ratings are shown below, where the darker the purple, the less confidence there is in the accuracy of the MT.
Figure 3: LanguageWeaver Confidence Ratings
Uchimoto, Hayashida, Ishida, and Isahara (2005) developed a system for rating MT quality without reference translations, specifically by using bidirectional translations. Many users of online MT have invented their own informal means of checking translation accuracy by using backwards MT. Of course, the use of bidirectional translations often creates new problems, since one translation pair is rarely the exact inverse of the reverse pair.
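One crude, informal version of this round-trip check could be sketched as follows; the token-overlap score is our own simplification and stands in for whatever similarity measure a real rating system would use, and the translation calls themselves are left to any available MT service.

def round_trip_agreement(source: str, back_translation: str) -> float:
    """Token-overlap (Jaccard) score between a source sentence and its
    back-translation (source -> target -> source); low scores hint that
    meaning may have been distorted somewhere in the round trip."""
    a = set(source.lower().split())
    b = set(back_translation.lower().split())
    return len(a & b) / max(len(a | b), 1)

# Usage sketch: translate `source` to the target language and back with
# any MT service of choice, then score the result.
# score = round_trip_agreement(source, back_translate(translate(source)))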
Clifford, Granoien, Jones, Shen, and Weinstein (2004) analyzed how machine translation quality was affected by the level of text difficulty, as measured by the Interagency Language Roundtable Proficiency Scale. Various pre-editing and authoring systems also provide information on whether a document will translate well, as is discussed in the next section.
In the meantime, it may be possible to construct an automated rating system to help users, based on the absence of characteristics in the source text that would be likely to create problems. Thus a source text with simple direct phrases and no known problems (such as "de" in Chinese) might get more stars or smiley faces than a convoluted sentence with some of the problems discussed earlier in this paper.
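A toy version of such a rating might look like the sketch below; the specific cues, thresholds, and the mapping to stars are purely illustrative assumptions and do not describe any deployed system.

def sentence_risk_cues(sentence: str) -> list:
    """Names of risk cues triggered by one source sentence. The cues echo the
    error patterns discussed above: long convoluted sentences, heavy clause
    stacking, and the Chinese particle 的 ('de'). For unsegmented scripts a
    character-length threshold is used instead of a word count."""
    cues = []
    if len(sentence.split()) > 30 or len(sentence) > 120:
        cues.append("long, possibly convoluted sentence")
    if sentence.count(",") + sentence.count("，") >= 4:
        cues.append("many stacked clauses")
    if "的" in sentence:
        cues.append("particle 的 (risk of reversed relations)")
    return cues

def star_rating(sentences) -> int:
    """Crude 1-5 rating: the larger the share of flagged sentences, the fewer stars."""
    flagged = sum(1 for s in sentences if sentence_risk_cues(s))
    share = flagged / max(len(sentences), 1)
    return max(1, 5 - round(share * 4))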
Providing overall confidence ratings presents significant problems, since as Egan (2008) points out, "A single error/omission/deletion can seriously compromise the utility of a particular translation even when judged 70% or 80% accurate" by some of the popular scoring methods such as BLEU.
In addition, some kind of disclaimer may need to be provided concerning the ratings, since the MT providers and raters would want to avoid legal liability for the MT (e.g., if the MT provided wrong information about product capabilities or prices).
Mark Input
Xerox in the early 1980s developed software to check source text and make recommendations to writers about improvements (e.g., shortening sentences) that would yield more reliable MT output (Ryan 1993). This type of checker, or even some of the analysis behind it, could be provided to that subset of consumers who are in a position to change the source text. Bernth and Gdaniec (2001) identified characteristics of English text that resulted in higher quality. There are also a number of authoring systems, such as Smart's MaxIT, Acrolynx, and AuthorIT, which are designed to help authors write better input for MT. Some of this work could be tailored for this community.
Mark Output
There are many forms of MT markup that could be provided to users. Xerox Corporation in the 1980s color coded the output of MT to indicate areas needing post-editing by human translators. The marking was primarily on the basis of non-matches with a rule-based system (SYSTRAN). SYSTRAN used to include markup in their Russian-to-English system used by the National Air and Space Intelligence Center (NASIC). However, the marking could be expanded to reflect a broader array of potential errors and/or to be more friendly to end users.
Provide Footnotes and Annotation
Another method of improving the reliability of MT is to follow a common practice in human translation: to provide footnotes and/or inline or linked annotation. For instance, where a term does not have a direct equivalent in the target language, human translators frequently provide a footnote explaining the term. It would be possible to not only automate this process for FAMT but also to expand the footnotes and annotations to include warnings of common problems.
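Automating this practice could start with a simple glossary lookup that attaches numbered notes to flagged terms. The sketch below is a hedged illustration; the example glossary entry and the bracketed note format are invented, and a production system would rely on real terminology databases.

# Illustrative automatic footnoting of terms known to lack direct equivalents.
# The glossary content and the bracketed note format are placeholders only.
PROBLEM_TERMS = {
    "zoning": "legal concept; scope differs between jurisdictions",
}

def annotate(translated_text):
    notes, out = [], []
    for token in translated_text.split():
        key = token.lower().strip(".,;:!?")
        if key in PROBLEM_TERMS:
            notes.append(f"[{len(notes) + 1}] {key}: {PROBLEM_TERMS[key]}")
            token = f"{token}[{len(notes)}]"
        out.append(token)
    return " ".join(out), notes

text, notes = annotate("The zoning decision was appealed.")
print(text)   # The zoning[1] decision was appealed.
print(notes)  # ['[1] zoning: legal concept; scope differs between jurisdictions']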
Conclusion
User-centric computing has changed the paradigms for at least one major segment of our MT user community. Users with little or no background in the source language or in MT are conducting a significant amount of machine translation, often to use for decision-making. As a community of MT professionals, we need to better educate these users on what they are receiving and on what they are missing. We also need to examine how we can better provide them with the kinds of tools now being used by researchers, authors, and post-editors (or better yet, more tailored tools) in order for them to at least better understand the quality of the translated information.
Figure 1: Publisher-Centric Translation.
Figure 2: User-Centric MT.
Acknowledgements
I would like to express my appreciation to Shin Chang-Meadows for her many outstanding examples of problems in Chinese-English MT. I would like to thank Shin and Laurie Gerber for their review of this paper. I would like to thank Ion Muslea for the screen capture of LanguageWeaver's confidence ratings. In addition, I would like to thank the United States Defense Intelligence Agency Foreign Language Program Office for sponsoring my participation in the MT Summit.
Julia Aymerich and Hermes Camelo. 2006. Post-Editing of MT Output in a Production Setting. Proceedings of the Association for Machine Translation in the Americas 2006 Conference (AMTA 2006) Workshop: Automated Post-Editing Techniques and Applications. Cambridge, MA.
Nicholas Bemish. 2008. Can MT Really Help the Department of Defense? Proceedings of the Association for Machine Translation in the Americas (AMTA 2008). Cambridge, MA.
Arendse Bernth and Claudia Gdaniec. 2001. MTranslatability. Machine Translation, 16(3), 175-218.
Will Burgett and Julie Chang. 2008. The Triple Advantage Factor of Machine Translation: Cost, Time-to-Market and FAUT. Proceedings of the Association for Machine Translation in the Americas (AMTA 2008). Cambridge, MA.
Shin Chang-Meadows. 2008. MT Errors in CH-to-EN MT Systems: User Feedback. Proceedings of the Association for Machine Translation in the Americas (AMTA 2008). Cambridge, MA.
Ray Clifford, Neil Granoien, Douglas Jones, Wade Shen, and Clifford Weinstein. 2004. The Effect of Text Difficulty on Machine Translation Performance: A Pilot Study with ILR-Rated Texts in Spanish, Farsi, Arabic, Russian and Korean. Proceedings of the Language Resources and Evaluation Conference (LREC 2004), Lisbon.
Kathleen Egan. 2008. User-Centered Development and Implementation. Proceedings of the Association for Machine Translation in the Americas (AMTA 2008). Cambridge, MA.
Lauren Friedman and Stephanie Strassel. 2008. Identifying Common Challenges for Human and Machine Translation: A Case Study from the GALE Program. Proceedings of the Association for Machine Translation in the Americas (AMTA 2008). Cambridge, MA.
Denis Gachot. 2005. Personal conversation.
John Hutchins. 2001. Machine translation and human translation: in competition or in complementation? International Journal of Translation, 13(1-2), 5-20.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02). Philadelphia, PA, 311-318.
Richard Ruffino. 1982. Coping with Machine Translation. Practical Experience of Machine Translation, Lawson (ed.), 57.
Serena Shubert and Jan Spyridakis. The Translatability of Simplified English Documents: Matching Information to Audience.
JoAnn Ryan. 1993. Machine Translation: Matching Reality to Expectations. Progress in Machine Translation, ed. Sergei Nirenburg. Amsterdam: IOS Press, 225-235.
Greg Sanders. 2006. Post-Editing in the GALE Program I. Proceedings of the Association for Machine Translation in the Americas 2006 Conference (AMTA 2006) Workshop: Automated Post-Editing Techniques and Applications. Cambridge, MA.
Jan Spyridakis, Heather Holmback, and Serena Shubert. 1997. Measuring the Translatability of Simplified English in Procedural Documents. IEEE Transactions on Professional Communication, 40(1), 4-12.
Stephanie Strassel. 2006. Post-Editing in the GALE Program II. Proceedings of the Association for Machine Translation in the Americas 2006 Conference (AMTA 2006) Workshop: Automated Post-Editing Techniques and Applications. Cambridge, MA.
Kiyotaka Uchimoto, Naoko Hayashida, Toru Ishida, and Hitoshi Isahara. 2005. Automatic Rating of Machine Translatability. MT Archive: http://www.mt-archive.info/MTS-2005-Uchimoto.pdf.
J. van der Meer. 2006. The Emergence of FAUT: Fully Automatic Useful Translation. Keynote at the 11th Conference of the European Association for Machine Translation. Oslo, Norway.
227,230,471 | Style Analysis of Argumentative Texts by Mining Rhetorical Devices | Using the appropriate style is key for writing a high-quality text. Reliable computational style analysis is hence essential for the automation of nearly all kinds of text synthesis tasks. Research on style analysis focuses on recognition problems such as authorship identification; the respective technology (e.g., n-gram distribution divergence quantification) showed to be effective for discrimination, but inappropriate for text synthesis since the "essence of a style" remains implicit. This paper contributes right here: it studies the automatic analysis of style at the knowledge-level based on rhetorical devices. To this end, we developed and evaluated a grammar-based approach for identifying 26 syntax-based devices. Then, we employed that approach to distinguish various patterns of style in selected sets of argumentative articles and presidential debates. The patterns reveal several insights into the style used there, while being adequate for integration in text synthesis systems. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. | [
1804771,
544132,
6202343,
15749064,
14068874,
52013393,
6248369,
7100691,
9312342,
16050554,
252796
] | Style Analysis of Argumentative Texts by Mining Rhetorical Devices
December 13, 2020
Khalid Al-Khatib
Bauhaus-Universität Weimar
Weimar, Germany
Viorel Morari [email protected]
Averbis, Freiburg, Germany
Benno Stein
Bauhaus-Universität Weimar
Weimar, Germany
Style Analysis of Argumentative Texts by Mining Rhetorical Devices
Proceedings of the 7th Workshop on Argument Mining
Barcelona, Spain, December 13, 2020.
Using the appropriate style is key for writing a high-quality text. Reliable computational style analysis is hence essential for the automation of nearly all kinds of text synthesis tasks. Research on style analysis focuses on recognition problems such as authorship identification; the respective technology (e.g., n-gram distribution divergence quantification) has proven to be effective for discrimination, but inappropriate for text synthesis since the "essence of a style" remains implicit. This paper contributes right here: it studies the automatic analysis of style at the knowledge level based on rhetorical devices. To this end, we developed and evaluated a grammar-based approach for identifying 26 syntax-based devices. Then, we employed that approach to distinguish various patterns of style in selected sets of argumentative articles and presidential debates. The patterns reveal several insights into the style used there, while being adequate for integration in text synthesis systems. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
Introduction
The decision for an adequate writing style plays a crucial role for an author who wants to achieve a particular goal, such as persuading the readers (Burton, 2007). "Style" is an elusive concept which covers a wide range of techniques an author can follow, including justifying a conclusion by anecdotal evidence, using regular repetition of the same phrase, or raising questions and then answering them. In the literature on the subject, these techniques are called rhetorical devices (Johnson, 2016).
The automatic analysis of style has been addressed mostly by developing a set of style features (aka style indicators) such as the percentage of function words (Ganjigunte Ashok et al., 2013;Bergsma et al., 2012). Those features have proven to be effective in various analysis tasks, such as genre classification and author recognition. However, they are not appropriate for typical text synthesis and writing assistance tasks, since they cannot reveal the "essence of a style" in an explicit and describable manner.
By contrast, analyzing the writing style based on rhetorical devices provides a mechanism to describe where, what, and how specific techniques are used. This kind of analysis is not only important for exploring content in social science (Niculae and Danescu-Niculescu-Mizil, 2014), but it can also serve text synthesis systems by improving the quality of automatically generated texts (Hu et al., 2017). Moreover, it can form the backbone of style suggestion tools. For example, when writing a text for which the desired specification (e.g., the genre) is given, adequate style techniques can be suggested to improve the text quality. In such a manner, new writers can learn to improve their texts and approach the quality of masterpieces written by top writers. Figure 1 illustrates the described connections.
Rhetoric has been the subject of investigation amongst scholars since the time of ancient Greece. Meanwhile, a considerable number of rhetorical devices have been developed and discussed in the literature. The most well-known collected lists of devices contain more than 500 devices (Lawrence et al., 2017). Though various of them, such as irony and sarcasm, are hard to identify computationally (Java, 2015), there is still a sufficiently large portion of popular and, for our purpose, highly useful devices whose identification can be tackled with the current state of the art. Basically, rhetorical devices can be categorized according to different principles, where an important one is the linguistic level (lexical, syntactic, semantic, and pragmatic). For the time being, we deal with syntax-based devices. Against the above background, this paper addresses three research questions.
(1) How can syntax-based rhetorical devices be identified in a text? (2) What are the most common patterns of using these devices? (3) To what degree do these patterns differ across different monological and dialogical argumentative texts? Within and across the texts' genres, topics, and authors? And across different opponent debaters?
To answer these questions, we develop a grammar-based approach for the identification of 26 rhetorical devices. The grammars are built on top of the outputs of a probabilistic context-free grammar (PCFG) parser. For evaluation purposes, we create a corpus of 1718 texts which are labelled for rhetorical devices. The evaluation results show that our approach is able to identify the devices with an average F1 of 0.70. Based on the developed approach, we quantify and discuss the usage of devices in monological texts within and across different genres, topics, and authors using a subset of the New York Times annotated corpus (Sandhaus, 2008). We also analyze the device usage patterns in dialogical texts using a set of presidential debates from the American presidency project (Woolley and Gerhard, 2017).
We consider the gained qualitative and quantitative insights about the usage of rhetorical devices as a step forward toward a new generation of semi-automated argumentative text generation and writing tools. All developed resources in this paper are made publicly available at www.webis.de
Related Work
Recently, investigating rhetorical devices for style analysis has been considered in computational linguistics. Various devices at the semantic and pragmatic levels have been addressed singly, such as irony (e.g., (C. Wallace et al., 2014)), sarcasm (e.g., (Ghosh et al., 2015)), evidence (e.g., (Rinott et al., 2015)), and means of persuasion (e.g., (Duthie et al., 2016)). In a notable work, Strommer (2011) works on identifying 'epanaphora', trying to distinguish between accidental and intentional use of this device.
Other studies target identifying a mix of syntactic and semantic devices. Gawryjołek et al. (2009) addressed four rhetorical devices: 'anaphora', 'isocolon', 'epizeuxis', and 'oxymorons'. These devices were utilized to recognize the author of a set of documents. Java (2015) identified the four devices mentioned above in addition to nine new devices belonging to parallelism, repetition, and trope. The primary purpose of that work is to use the presence of a rhetorical device as a feature in machine learning models for authorship attribution. Since the authors consider syntax-based devices, we already cover five of their devices in our study. Regarding argumentation, Lawrence et al. (2017) analyzed eight devices, six belonging to the syntax and lexical levels, and two to trope (i.e., semantic or pragmatic). Mainly, a pilot study was conducted to study the relation between argumentation structure and the identified devices.
Few resources for rhetorical devices are publicly available. To our knowledge, the code of the previous studies is not available anywhere on the web. Hence, researchers often have to write a new piece of code every time they need to analyze style based on rhetorical devices. This paper alleviates this problem considerably by providing a tool for identifying 26 different rhetorical devices. Our developed resources, including the code, will be made freely available.
PCFG outputs have been employed for different tasks including response generation in dialogue (Yuan et al., 2015), multiword expression identification (Green et al., 2011), and the task at hand: identifying rhetorical devices (Gawryjołek et al., 2009;Java, 2015). However, we develop a set of original heuristic rules that map the devices' definitions to PCFG grammars. As far as we know, many devices from the 26 we identified have not been considered in any other study.
Writing style analysis has been studied widely. Authorship recognition has been tackled in a large number of papers (e.g., (Sundararajan and Woodard, 2018)). Besides, quality assessment research has involved applying several style analysis features (e.g., (Ganjigunte Ashok et al., 2013)). In comparison, we conduct a controlled analysis using the 'matching' technique, and we cover various aspects of monological and dialogical texts such as genre, topic, author, and debate opponent.
Identification of Rhetorical Devices
Rhetorical devices are the techniques of using the language to produce an effect on the target audience or readers (McKay and McKay, 2010). For example, repeating particular phrases can produce effects such as emphasizing a certain argument, or evoking a specific emotion (Corbett, 1990).
This paper targets syntax-based rhetorical devices. Particularly, we aim to identify 26 devices belonging to two main categories: (1) figurative syntax, which is referred to as schemes in the literature, and (2) ordinary syntax, which concerns the rules of well-formed text structuring. The effect of the first is attributed to an artful deviation from the ordinary arrangement of words, while the effect of the second comes from using a specific arrangement of words among other possible arrangements.
In the next subsections, we detail the figurative and ordinary syntax devices and describe our approach for identifying them.
Figurative Syntax Devices
Figurative devices center on arranging words artfully (Burton, 2007). They are divided into four types: balance, inversion, omission and repetition.
• The balance devices involve arranging the rhythm of thoughts. Hence, they can produce a sense of equivalence among the proposed ideas, or emphasize ideas' differences. For example, we can notice the contrast between ideas in the famous quote of Neil Armstrong: "That's one small step for man, one giant leap for mankind".
• The inversion devices concern changing the order of words, either to stress some ideas or to avoid the monotonous flow of a sentence. For example, "Everybody's got troubles" could be reordered to "Troubles, everybody's got.".
• The omission devices deal with removing words that readers can reveal intuitively. They are often used to imply unfinished thoughts or to keep a fast rhythm, such as: "He came, he saw, he conquered.".
• Repetition is the most frequent and, arguably, the most powerful. According to Aristotle, repetition is the key to a persuasive speech (Fahnestock, 2003). Typically, repetition devices aim at influencing the emotional state of the reader by emphasizing or implicating a specific idea (Burton, 2007; Corbett, 1990). An example which illustrates the emotional impact of repetitions is the famous line from King Lear written by Shakespeare: "Never, never, never, never, never." (Müller, 2006). Table 1 shows an overview of the identified figurative devices in our work. The overview covers a definition, a formalization, and an example for each device belonging to balance, omission, or repetition [1]. Our formalization is grounded in the devices' definitions, which are taken from a set of reliable sources such as 'Silva Rhetoricae', a comprehensive source for rhetoric on the web (Burton, 2007).
The formalization elements are: 'Cl' for clause, 'Phr' for phrase, 'W' for word, 'N' for noun, 'Vb' for verb, 'CC' for conjunction, 'COMMA' for comma, . . . for arbitrary intervening material, [. . . ] for word boundaries, {. . . } for phrase or clause boundaries, a = b for identity, and a ≠ b for nonidentity. The elements of the formalization are adopted from earlier work.
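To make the formalization concrete, the following sketch shows how a few of the repetition patterns can be approximated on plain tokenized text. It is not the paper's implementation (which operates on parser output with Apache Ruta rules) but a hedged, simplified stand-in whose tokenization and matching choices are our own assumptions.

import re

# Simplified, token-level approximations of three repetition patterns from
# Table 1. Whitespace tokenization and the lack of phrase boundaries are
# simplifying assumptions, not the paper's actual rules.

def tokens(sentence):
    return [t.lower() for t in re.findall(r"[a-zA-Z']+", sentence)]

def has_epizeuxis(sentence):
    # (R7) Epizeuxis: the same word repeated with no other word in between.
    toks = tokens(sentence)
    return any(a == b for a, b in zip(toks, toks[1:]))

def has_epanalepsis(sentence):
    # (R1) Epanalepsis: the word that opens the line also closes it.
    toks = tokens(sentence)
    return len(toks) > 2 and toks[0] == toks[-1]

def has_diacope(sentence):
    # (R6) Diacope: a word repeated with one or more words in between.
    toks = tokens(sentence)
    return any(toks[i] == toks[j]
               for i in range(len(toks))
               for j in range(i + 2, len(toks)))

print(has_epizeuxis("Awake, awake and stand up O Jerusalem."))                     # True
print(has_epanalepsis("Believe not all you can hear, tell not all you believe."))  # True
print(has_diacope("The horror! Oh, the horror!"))                                  # True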
Notice that we essentially concentrate on identifying the devices at the sentence level, or across consecutive sentences. Besides, some rhetorical devices, according to their definitions, might overlap with other devices in some special cases. This overlap is rare and partial. Nevertheless, we minimize the possible overlaps among devices as much as possible in our formalization.
Ordinary Syntax Devices
From the ordinary syntax devices, we select conditionals, comparatives and superlatives, and passive voice. This selection is based on the impact of these devices on the readers (Martinet, 1960).
• The conditional devices entail the causality aspect of the language, and causality, in turn, could imply explaining an event. But it can also be used to argue about positive/negative consequences of a specific action such as "If we were elected him, we would not have achievements".
• The comparatives and superlatives devices might be used to emphasize the superiority of an entity or idea, e.g., "I will be the greatest jobs president that God ever created".
• The passive voice might be used to hide the subject of a negative action, or to stress the importance of an event, e.g., "many mistakes were made, but the future will be great".

Table 1 provides an overview of the conditionals, comparatives and superlatives, and passive voice devices. The overview is analogous to the one for the figurative category. The formalization is based on definitions from the same set of resources used for the figurative category. The elements of the formalization are taken from the Penn Treebank POS tag set (Marcus et al., 1993).
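As a concrete illustration of these patterns, the sketch below checks the passive-voice and comparative/superlative patterns on pre-tagged input. The paper's own pipeline uses Stanford CoreNLP output with Apache Ruta rules; this simplified version merely assumes Penn Treebank tags are already available as (token, tag) pairs, and the three-token lookahead window is an arbitrary choice.

BE_FORMS = {"is", "are", "was", "were", "be", "been", "being", "am"}

def has_passive_voice(tagged):
    # (PV): a form of "to be" followed within a few tokens by a past
    # participle (VBN), e.g. "The problem is solved."
    for i, (tok, _tag) in enumerate(tagged):
        if tok.lower() in BE_FORMS:
            for _, later_tag in tagged[i + 1:i + 4]:
                if later_tag == "VBN":
                    return True
    return False

def has_comparative_or_superlative(tagged):
    # (CS): any comparative/superlative adjective or adverb tag.
    return any(tag in {"JJR", "JJS", "RBR", "RBS"} for _, tag in tagged)

example = [("The", "DT"), ("problem", "NN"), ("is", "VBZ"), ("solved", "VBN"), (".", ".")]
print(has_passive_voice(example))                # True
print(has_comparative_or_superlative(example))   # False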
Experiments and Results

Here, we discuss the evaluation experiments of our approach for identifying the syntax-based rhetorical devices. First, we describe the newly created evaluation dataset. Then, we talk about the experimental settings and report on the obtained results. Finally, we address the limitations of our approach and perform an error analysis of its output.
Evaluation Dataset: Creating a dataset for rhetorical devices using manual annotation, even with crowd-sourcing, is extremely expensive and time consuming (Java, 2015); the reason is the large number of devices, the potential overlaps between them, and the possibility for some devices to be spread across phrases, sentences, or even paragraphs. Thereby, we decided to follow several related research studies (e.g., (Java, 2015)) and build the evaluation dataset as follows: We first identify a set of trustworthy sources on the web which address the rhetorical devices and have credibility as being developed by experts in rhetoric. Most of the selected sources are either mentioned or already used in some research studies, which speaks for their trustworthiness. From those sources, we use meta-data information (e.g., "Example of Pysma:") to collect a set of instances for our rhetorical devices. We found that targeting about 60 examples for each device is reasonable considering the size of the content in the selected sources. We verified all the examples and ensured that there are no duplicates. Additionally, we accounted for the possible overlaps between the devices and minimized them adequately, i.e., all the examples for a device belong solely to this device. Unfortunately, two devices turned out to be covered by only a few sources, and hence we got fewer than 60 examples for them. We also collected 60 examples where none of the devices covered by our work is used. Overall, we collected 1718 examples: 1658 examples distributed among the 26 devices and around 60 examples that belong to 'other'. The distribution is shown in Table 2. This dataset, despite its relatively small size, is significantly larger than those that have been used for rhetorical devices in related work (Java, 2015).
Experimental Settings: The implementation of our approach was carried out using Apache Ruta (Rule-based Text Annotation) (Kluegl et al., 2016). This tool provides a flexible language for identifying patterns in text spans intuitively. Thus, it facilitates identifying sophisticated patterns with a few lines of code. The implementation is performed on top of the outputs of the Stanford Parser (Manning et al., 2014), version 3.8.0. We evaluated our approach using one-vs.-rest classification. That means we performed one classification experiment for each device; the instances of this device in the evaluation dataset are considered the positive class, and the instances of the remaining devices as well as the 'other' instances are the negative class. The classifiers' effectiveness is reported in terms of precision, recall, and F1-score.
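The one-vs.-rest evaluation can be summarized in a few lines of code. The sketch below simply recomputes precision, recall, and F1 for one device from parallel lists of gold and predicted labels; the label names and data layout are placeholders, not the evaluation code actually used in the experiments.

def one_vs_rest_scores(gold, predicted, device):
    # Treat `device` as the positive class and everything else as negative.
    tp = sum(1 for g, p in zip(gold, predicted) if g == device and p == device)
    fp = sum(1 for g, p in zip(gold, predicted) if g != device and p == device)
    fn = sum(1 for g, p in zip(gold, predicted) if g == device and p != device)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold      = ["epizeuxis", "other", "asyndeton", "epizeuxis"]
predicted = ["epizeuxis", "epizeuxis", "asyndeton", "other"]
print(one_vs_rest_scores(gold, predicted, "epizeuxis"))  # (0.5, 0.5, 0.5)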
Classification Results: Table 2 shows the results of our experiments. Overall, we manage to identify the 26 devices with an average F1-score of 0.70, which indicates a high effectiveness of our approach.
As for the "figurative" devices, the approach got high scores for the balance devices, including F 1score of 1.00 for 'pysma'. The 'isocolon' is the most challenging with F 1 -score of 0.68. As for the omission devices, the F 1 -scores range from 0.39 for 'asyndeton' and 0.69 for the 'hypozeugma'. These results are a bit lower than the other types. Most of the repetition devices have F 1 -score of about 0.73, except 'mesarchia' with 0.59, and 'mesodiplosis' with 0.39. Besides, "ordinary" devices got scores between 0.56 and 1.00. Interestingly, despite their simple syntax, comparatives and superlatives devices got the lowest scores. Table 3 shows the results of our approach regarding the six rhetorical categories that group the 26 devices. The F 1 scores range from 0.54 to 0.87. The best result is obtained for passive voice (0.87) and conditionals (0.82). Omission and repetition are the hardest to identify with 0.54 and 0.67 F 1 . Figure 2 shows an excerpt from a news editorial along with several rhetorical devices that our approach manages to identify.
Error Analysis: Despite the high effectiveness of our approach, it still fails in some cases.
Concerning the "figurative" category, identifying the balance devices seems to be precise except for 'isocolon'. The identification of this device is based on the outputs of the syntax parser (i.e., POS tags) which are sometimes inaccurate, especially for long sentences. This has a negative impact on the precision score; for instance, "It looks like the Libertarian candidate is racking up the percentage points in recent polls. As far as I can see the Libertarian candidate has over . . . .". Here, the 'Libertarian candidate' makes the classifier of 'isocolon' treats it wrongly as a valid instance. For the omission devices, our approach manages to get 0.93 recall score for 'asyndeton' device, but only 0.25 for precision. We found that the abundance of commas, which we use as an indicator of the lack of conjunctions is insufficient to distinguish 'asyndeton' from other devices, especially 'enumeration'. For example, "Old McDonald However, other experts say the current reliance on the Sabin oral live-virus vaccine has worked so well that great care should be taken before changing policies.
Passive Voice
Passive Voice The injectable '' killed-virus '' vaccine was largely replaced by an oral vaccine made from live viruses, which is still being given to millions of American children.
The development of the new form of the Salk vaccine opens the way for it to be used in combination with other childhood vaccinations. Some health o cials, noting that it has been used in Europe and tested in the developing world, believe that it can be an e ective way to reduce immunizations and associated costs.
Passive Voice
Asyndeton Passive Voice
Dr. Frederick C. Robbins of Case Western Reserve University, chairman of the panel, said early attempts at the combined vaccine were abandoned in this country because of potency problems. Later successes with this approach in Europe, using an enhanced polio vaccine, have rekindled the idea of a combination approach, including the possibility of using both types of polio vaccines to merge their bene ts, he added.
Hypozeugma Asyndeton Asyndeton Passive Voice Study Under Way The Institute of Medicine, an adjunct of the National Academy of Sciences, is studying polio policy and is expected to submit recommendations to the Federal health authorities by April. At a recent public meeting in Washington, the committee heard suggestions for bringing back the inactivated-virus vaccine by combining it with the diphtheria, tetanus and pertussis shots. Enumeration Hypozeugma Hypozeugma Enumeration Passive Voice Asyndeton Figure 2: An excerpt from a NYT news editorial. The rhetorical devices in each sentence are identified using our approach.
had a pig, a dog, a cow and a horse." is identified as 'asyndeton', while it is actually 'enumeration'. As regards repetition, two devices there got low scores: the 'mesarchia' and 'mesodiplosis'. These devices have the least number of instances in our evaluation dataset. We also observed that our heuristic rules for defining the beginning and middle of sentences are the reason for some errors. For the "ordinary" category, the approach has promising results. However, the scores for the 'comparatives and superlatives' are moderate. Observing the errors there, we found that the main reason is again the inaccurate POS tags. For example, in the sentence 'the airport is further than the train station.', 'further' is tagged as comparative adverb instead of comparative adjective.
The 'other' class got a low F1-score. In addition to the restrictive way of evaluation that we followed, this score indicates that some devices' classifiers tend to produce many false positives.
To have a better idea regarding the effectiveness of our approach, we performed a manual inspection of the classifiers' outputs on a set of ten newspaper articles. We found that some devices such as 'isocolon' and 'asyndeton' indeed have many false positives. Besides, we found that the classifiers make more mistakes with very long sentences.
Analysis of Rhetorical Devices
We rely on our identification approach to analyze the usage patterns of rhetorical devices in argumentative newspaper articles and presidential debates. First, we describe the acquisition and sampling of the analysis datasets. Then, we discuss the distribution of rhetorical devices there along with different article and debate aspects. The computed distributions illustrate various patterns of rhetorical devices and lead to several interesting insights.
Analysis Datasets: To conduct insightful analysis, we constructed two datasets for newspaper articles and presidential debates.
(1) Newspaper dataset: To construct this dataset, we used the NYT annotated corpus (Sandhaus, 2008). The corpus comprises more than 1.8 million high-quality articles written by professional writers. It comes with many types of meta-data labeled by NYT staff, including the type of material (e.g., editorial), the author name, and the topic (e.g., sport). From this corpus, we sampled three subsets, each of which represents one of the three properties of genre, topic, and author. To conduct a controlled analysis, the sampling should account for confounding variables. For example, the style in articles on a specific 'topic' can be influenced by their genres and authors. Hence, we first tried to resolve this issue with the stratification method (Tripepi et al., 2010), which turned out to be unsuccessful; despite the large size of the corpus, we found no information about the authors of about 40% of the articles. Also, the distribution of articles across the three properties is very skewed. The corpus includes many more reviews than editorials, for example. Many articles are written for 'politics' and few for 'sport', and some authors wrote tens of articles while others wrote only one. Therefore, we tried the matching technique (de Graaf et al., 2011), with which we successfully sampled the three subsets.
To preserve the balance between the subsets, we consider three instances for each property, i.e., the 'topic' subset includes 114 articles belonging to science, education, and art. The 'genre' subset includes 89 articles belonging to biography, editorial, and review. Finally, the 'authors' subset includes 159 articles written by Martin, Lewis, and Hevesi.
(2) Debate dataset: We acquired this dataset based on the presidential debates from the American presidency project (Woolley and Gerhard, 2017). In particular, we extracted the entire set of debates that involve Donald Trump and/or Hillary Clinton. These two characters differ in many aspects, such as ideology, background, experience, and opinions on different topics. This difference could be reflected in their styles, leading to interesting patterns. We created three subsets of the dataset: 'Trump vs. Clinton', 'Trump vs. Not-Clinton', and 'Clinton vs. Not-Trump'. In this way, we can analyze the style of the two characters, and also address the question of whether they change their styles according to the debate opponent. In total, Clinton has 226 turns in her debates with Trump, and 1216 in her debates with the other candidates. Trump, on the other hand, has 342 turns in his debates with Clinton, and 778 in his debates with the rest of the candidates.
Analysis Method: Basically, we applied our identification approach (see Section 3) to the analysis datasets. In particular, a classifier for each device is applied to the articles or debate turns, resulting in the frequency of the device there. However, since our identification approach is not perfect, it is crucial to account for its errors. Hence, we followed the method used in (Al-Khatib et al., 2017): for the frequency n of a rhetorical device rd in an instance i of a dataset, we computed a confidence interval for n, where the lower bound is n * precision(rd) and the upper bound is n / recall(rd). Ultimately, the mean of the upper and lower bounds is the new frequency, which is normalized by the number of sentences in the articles/debate turns belonging to i. Accordingly, we computed the distributions of rhetorical devices in the analysis datasets and their subsets. The chi-squared test with a 0.01 significance level is used to check whether the difference in the usage of rhetorical devices in the datasets and across their instances is significant, and Cramér's V is used to measure the effect size of the distributions' difference.
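The error-correction step described above can be written out directly. In the sketch below, only the formula (the mean of n * precision(rd) and n / recall(rd), normalized by the number of sentences) follows the description; the concrete numbers in the usage example are illustrative.

def corrected_frequency(raw_count, precision, recall, num_sentences):
    # Bound the true count by the classifier's known precision and recall,
    # then take the midpoint and normalize per sentence.
    lower = raw_count * precision
    upper = raw_count / recall if recall > 0 else raw_count
    return ((lower + upper) / 2) / num_sentences

# Example with illustrative numbers: 12 detected instances of a device whose
# classifier has precision 0.67 and recall 0.88, in a 40-sentence article.
print(corrected_frequency(12, 0.67, 0.88, 40))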
Analysis Results: Figure 3 shows the distribution of rhetorical devices among the three authors (a), the three genres (b), the three topics (c), and the debate subsets (d). As expected, the style in newspaper articles (monologue) is significantly different from that in debates (dialogue). Some analysis results for each of the datasets are as follows.
(1) Newspaper dataset: In addition to the significant difference among the three properties under study, the results show a significant difference among the three authors. For example, Lewis and Hevesi use more repetition than Martin. Also, Lewis barely considers conditionals, in contrast to the other two authors. The results also show a significant difference between 'biography' and 'editorial' as well as between 'editorial' and 'review', but not between 'review' and 'biography'. The reason might be that the articles in these two genres are written mainly to describe an entity. Interestingly, there is no significant difference between the three topics. Overall, our analysis suggests that the "style" identified by syntax-based rhetorical devices is primarily influenced by the 'author' and 'genre', while 'topic' has the least impact.
(2) Debate dataset: Interestingly, the results show that Clinton is more fond of 'comparatives' and 'passive voice' than Trump, which actually contradicts a widespread assumption (Gingell, 2016;Raskin, 2016). However, our findings are mainly related to the debate genre. The style could be different in speeches, for example. We also found that Clinton uses 'asyndeton' more often than Trump. Since this device is very effective for making the turns easier to grasp, our finding this time is in line with (Raskin, 2016), where they find that Clinton's language is 13% clearer and more direct than Trump's. The results indicate a significant difference between Clinton and Trump styles. More interestingly, while Clinton's style is significantly different when she debates with Trump than when she debates with the rest, Trump's style has no significant difference between his debates with Clinton and his debates with the rest. Apparently, unlike Clinton, Trump does not change his style depending on the opponent.
Conclusion and Future Work
Writing style analysis has become a mature discipline, but it is mostly tackled from the recognition perspective. That is, it can give strong classification results that, because of their intrinsic nature, cannot be transferred to constrained text generation or computational writing assistance. We address this shortcoming by proposing an approach for the explicit encoding and identification of rhetorical devices. In carefully designed experiments, we study the usage of these devices in different argumentative articles and presidential debates. The distributions show different patterns of style among three text properties and provide new insights regarding style usage within the studied topics. The achieved F1 classification performance (0.70) can be considered very good for a concrete multi-class classification setting; it shows that the applied approach has the potential to find its way into real-world argumentative text synthesis tools. We plan in the future to improve our grammars to minimize mistakes and to increase the number of devices, considering the inversion type of figurative syntax devices.
Figure 1: Envisioned tool for style checking and suggestion.
Figure 3: The distribution of rhetorical devices among the authors (a), the genres (b), and the topics (c) in the newspaper articles dataset, and the distribution of rhetorical devices in the debate dataset (d).

Table 1: An overview of the (B) Balance, (O) Omission, and (R) Repetition figurative devices, and the (C) Conditionals, (CS) Comparatives and Superlatives, and (PV) Passive Voice ordinary devices. Each entry gives the definition, the formalization, and an example.
(B1) Enumeration: lists a series of details, words or phrases. <... W [CC | COMMA] W ...> Example: Diligence, talent and passion will drive anybody to success.
(B2) Isocolon: similarly structured elements with the same length. <... <Phr>a <Phr>a ... <Phr>a ...> Example: Fill the armies, rule the air, and pour out the munitions.
(B3) Pysma: asking multiple questions successively. <... <Cl?> <Cl?> ...> Example: Who are you? Why are you doing here?
(O1) Asyndeton: omission of conjunctions between clauses. <Cl>a COMMA <Cl>b COMMA <Cl>c ... Example: I came, I saw, I conquered.
(O2) Hypozeugma: placing last, in a construction containing several elements of equal value, the word(s) on which all of them depend. <... [W]a, [W]b, [W]c ... Vb> Example: Friends, Romans, countrymen, lend me your ears...
(O3) Epizeugma: placing the verb that holds together the entire sentence either at the very beginning or the very ending of that sentence. <Vb ...> or <... Vb> Example: Neither a borrower nor a lender be.
(R1) Epanalepsis: repetition at the end of a line of the word(s) that occurred at the beginning of the same line. <[W]a ... [W]a> Example: Believe not all you can hear, tell not all you believe.
(R2) Mesarchia: repetition of the same word(s) at the beginning and middle of successive sentences. <[W]a ... [W]b ...> <[W]a ... [W]b ...> Example: I was looking for a paper. I was anxious for a paper.
(R3) Epiphoza: repetition of the same word(s) at the end of successive sentences. <... [W]a> <... [W]a> Example: O apple! wretched apple! Miserable apple!
(R4) Mesodiplosis: repetition of the same word(s) in the middle of successive sentences. <... [W]a ...> <... [W]a ...> Example: There's no time like the future! There's no time like the past!
(R5) Anadiplosis: repetition of the last word(s) from the previous sentence at the beginning of the next. <... [W]a> <[W]a ...> Example: We ordered a pizza pie. A pizza pie that changed our lives.
(R6) Diacope: repetition of a word with one or more in between. <... [W]a ... [W]a ...> Example: The horror! Oh, the horror!
(R7) Epizeuxis: repetition of words with no others between. <[W]a [W]a> Example: Awake, awake and stand up O Jerusalem.
(R8) Polysyndeton: several conjunctions in close succession (mainly between clauses). <Cl>a CC <Cl>b CC <Cl>c ... Example: He ran and jumped and laughed for joy.
(C1) If-conditional Zero: conditionals which express general truths. If [VB/VBP/VBZ], then [VB/VBP/VBZ]. Example: If you heat ice, it melts.
(C2) If-conditional One: expresses situations which are very likely to happen in the future. If [VB/VBP/VBZ/VBG], then [MD+VB]. Example: If it rains, you will get wet.
(C3) If-conditional Two: expresses consequences that will not likely happen in the future. If [VBD], then [MD+VB]. Example: If it rained, you would get wet.
(C4) If-conditional Three: used to explain that present circumstances would be different if something different had happened in the past. If [VBD+VBN], then [MD+VBN]. Example: If I had worked harder, I would have passed the exam.
(C5) If-counterfactual: statements that examine how a hypothetical change in a past experience could have affected the outcome of that experience. If [VBD+VBN], then [past modals]. Example: If I were you, I wouldn't come.
(C6) Unless-conditional: restricted version of the if-conditional (its intrinsic meaning is narrowed down to "Q in the case other than P"). <... unless ...> Example: You can't go on vacation unless you save some money.
(C7) Whether-conditional: expresses alternative (disjunctive) conditions. <... whether ... or ...> Example: Whether you are overweight or not, it is always better to watch your diet.
(CS) Comparative/Superlative Adjectives and Adverbs: used to compare differences between the two objects/states they modify. <... [JJR/JJS/RBR/RBS] ...> Example (comparative adjective): My house is larger.
(PV) Passive Voice: occurs when the object of an action is changed into the subject of a sentence. <... [to be] ... [VBN] ...> Example: The problem is solved.
Table 2: The precision, recall, and F1-score for identifying the 26 rhetorical devices. Notes: ... (60), tricolon (60) and tetracolon (60). † We applied the 26 classifiers on the 'other' instances; if any of them labels an instance with its device, we consider the instance as wrongly classified.

Table 3: The precision, recall, and F1-score for identifying the rhetorical devices by category.
Category              Instances   Prec.   Recall   F1
(B) Balance           300         0.67    0.88     0.76
(O) Omission          180         0.40    0.81     0.54
(R) Repetition        420         0.60    0.77     0.67
(C) Conditionals      420         0.85    0.80     0.82
(CS) Comp.&Super.     278         0.59    0.62     0.60
(PV) Passive voice    60          0.78    0.98     0.87
[1] The inversion is left to future work due to its complexity.
Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, and Benno Stein. 2017. Patterns of Argumentation Strategies across Topics. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 17), pages 1362-1368. Association for Computational Linguistics.
Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 12), pages 327-337. Association for Computational Linguistics.
G. Burton. 2007. The forest of rhetoric (silva rhetoricae). Accessed on 16.08.2017.
Byron C. Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans Require Context to Infer Ironic Intent (so Computers Probably do, too). In Proceedings of the 2014 Annual Meeting of the Association for Computational Linguistics (ACL 14), Volume 1.
E. P. J. Corbett. 1990. Classical rhetoric for the modern student. USA: Oxford University Press, 3rd edition.
M. A. de Graaf, K. J. Jager, C. Zoccali, and F. W. Dekker. 2011. Matching, an Appealing Method to Avoid Confounding? Nephron Clin Pract.
R. Declerck and S. Reed. 2001. Conditionals: A Comprehensive Empirical Analysis. Beitrage zur Alexander-von-Humboldt-Forschung. Mouton de Gruyter.
Rory Duthie, Katarzyna Budzynska, and Chris Reed. 2016. Mining Ethos in Political Debate. In 6th International Conference on Computational Models of Argument (COMMA 16), pages 299-310.
Jeanne Fahnestock. 2003. Verbal and Visual Parallelism. Written Communication, 20(2):123-152.
Vikas Ganjigunte Ashok, Song Feng, and Yejin Choi. 2013. Success with Style: Using Writing Style to Predict the Success of Novels. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1753-1764. Association for Computational Linguistics.
Jakub J. Gawryjołek, Randy A. Harris, and Chrysanne DiMarco. 2009. An annotation tool for automatically detecting rhetorical figures. In Proceedings, CMNA IX (Computational Models of Natural Argument).
Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or Not: Word Embeddings to Predict the Literal or Sarcastic Meaning of Words. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 15).
James Gingell. 2016. Why superlatives are the absolute worst (unless you're Donald Trump). https://www.theguardian.com/media/mind-your-language/2016/apr/15/. Visited on 24.10.17.
Spence Green, Marie-Catherine de Marneffe, John Bauer, and Christopher D. Manning. 2011. Multiword Expression Identification with Tree Substitution Grammars: A Parsing Tour De Force with French. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), pages 725-735. Association for Computational Linguistics.
R. Harris and C. DiMarco. 2009. Constructing a Rhetorical Figuration Ontology. In Symposium on Persuasive Technology and Digital Behaviour Intervention.
Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward Controlled Generation of Text. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1587-1596. PMLR.
James Java. 2015. Characterization of Prose by Rhetorical Structure for Machine Learning Classification. Ph.D. thesis.
R. Johnson. 2016. The Alphabet of Rhetoric. BiblioLife.
Peter Kluegl, Martin Toepfer, Philip-Daniel Beck, Georg Fette, and Frank Puppe. 2016. UIMA Ruta: Rapid development of rule-based information extraction applications. Natural Language Engineering, 22:1-40.
John Lawrence, Jacky Visser, and Chris Reed. 2017. Harnessing rhetorical figures for argument mining. Argument & Computation, 8:289-310.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
André Martinet. 1960. Elements of General Linguistics. Faber and Faber Ltd., London.
Brett McKay and Kate McKay. 2010. Classical Rhetoric 101. Accessed on 14.08.2017.
Wolfgang G. Müller. 2006. Style. In Thomas O. Sloane, editor, Encyclopedia of Rhetoric. Oxford University Press, February.
Vlad Niculae and Cristian Danescu-Niculescu-Mizil. 2014. Brighter than gold: Figurative language in user generated comparisons. In Proceedings of EMNLP, October.
Robin Raskin. 2016. Hillary Clinton's acceptance speech as seen by the algorithms. The Huffington Post. https://www.huffingtonpost.com/robin-raskin. Visited on 18.11.17.
Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show Me Your Evidence - An Automatic Method for Context Dependent Evidence Detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 15), pages 440-450. Association for Computational Linguistics.
Evan Sandhaus. 2008. The New York Times Annotated Corpus LDC2008T19. DVD. Philadelphia: Linguistic Data Consortium.
Claus W. Strommer. 2011. Using rhetorical figures and shallow attributes as a metric of intent in text. Ph.D. thesis, University of Waterloo, Waterloo, Ontario, Canada.
Kalaivani Sundararajan and Damon L. Woodard. 2018. What represents "style" in authorship attribution? In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), Santa Fe, New Mexico, USA, August 20-26, 2018, pages 2814-2822.
Giovanni Tripepi, Kitty J. Jager, Friedo W. Dekker, and Carmine Zoccali. 2010. Stratification for confounding - part 1: The Mantel-Haenszel formula.
John T. Woolley and Peters Gerhard. 2017. American Presidency Project. http://www.presidency.ucsb.edu/. Visited on 18.11.17.
Caixia Yuan, Xiaojie Wang, and Qianhui He. 2015. Response Generation in Dialogue Using a Tailored PCFG Parser. In Proceedings of the 15th European Workshop on Natural Language Generation (ENLG), pages 81-85. Association for Computational Linguistics.
6,147,316 | Discontinuous Incremental Shift-Reduce Parsing | We present an extension to incremental shift-reduce parsing that handles discontinuous constituents, using a linear classifier and beam search. We achieve very high parsing speeds (up to 640 sent./sec.) and accurate results (up to 79.52 F1 on TiGer). | [
5754528,
11493954,
216094149,
12807398,
2218985,
15506666,
173611,
15416717,
3146611,
17254964,
2029816,
6794841,
2739834,
447387,
17403101,
2304543
] | Discontinuous Incremental Shift-Reduce Parsing
Wolfgang Maier ([email protected])
Institut für Sprache und Information, Universitätsstr. 1, Universität Düsseldorf, 40225 Düsseldorf, Germany

Discontinuous Incremental Shift-Reduce Parsing

Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China, July 26-31, 2015. Association for Computational Linguistics.

We present an extension to incremental shift-reduce parsing that handles discontinuous constituents, using a linear classifier and beam search. We achieve very high parsing speeds (up to 640 sent./sec.) and accurate results (up to 79.52 F1 on TiGer).
Introduction
Discontinuous constituents consist of more than one continuous block of tokens. They arise through phenomena which traditionally in linguistics would be analyzed as being the result of some kind of "movement", such as extraposition or topicalization. The occurrence of discontinuous constituents does not necessarily depend on the degree of freedom in word order that a language allows for. They can be found, e.g., in almost equal proportions in English and German treebank data (Evang and Kallmeyer, 2011).
Generally, discontinuous constituents are accounted for in treebank annotation. One annotation method consists of using trace nodes that denote the source of a movement and are co-indexed with the moved constituent. Another method is to annotate discontinuities directly by allowing for crossing branches. Fig. 1 shows an example for the latter approach with which we are concerned in this paper, namely, the annotation of (1). The tree contains a discontinuous VP due to the fact that the fronted pronoun is directly attached.

Several methods have been proposed for parsing such structures. Trace recovery can be framed as a separate pre-, post- or in-processing task to PCFG parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun, 2003; Levy and Manning, 2004; Schmid, 2006; Cai et al., 2011, among others); see particularly Schmid (2006) for more details. Directly annotated discontinuous constituents can be parsed with a dependency parser, given a reversible transformation from discontinuous constituency trees to non-projective dependency structures. Transformations have been proposed by Hall and Nivre (2008), who use complex edge labels that encode paths between lexical heads, and recently by Fernández-González and Martins (2015), who use edge labels to encode the attachment order of modifiers to heads.
Direct parsing of discontinuous constituents can be done with Linear Context-Free Rewriting System (LCFRS), an extension of CFG which allows its non-terminals to cover more than one continuous block (Vijay-Shanker et al., 1987). LCFRS parsing is expensive: CYK chart parsing with a binarized grammar can be done in O(n^(3k)) where k is the block degree, the maximal number of continuous blocks a non-terminal can cover (Seki et al., 1991). For a typical treebank LCFRS (Maier and Søgaard, 2008), k ≈ 3, instead of k = 1 for PCFG. In order to improve on otherwise impractical parsing times, LCFRS chart parsers employ different strategies to speed up search: Kallmeyer and Maier (2013) use A* search; van Cranenburgh (2012) and van Cranenburgh and Bod (2013) use a coarse-to-fine strategy in combination with Data-Oriented Parsing; Angelov and Ljunglöf (2014) use a novel cost estimation to rank parser items. Maier et al. (2012) apply a treebank transformation which limits the block degree and therewith also the parsing complexity.
Recently Versley (2014) achieved a breakthrough with EaFi, a classifier-based parser that uses an "easy-first" approach in the style of Goldberg and Elhadad (2010). In order to obtain discontinuous constituents, the parser uses a strategy known from non-projective dependency parsing (Nivre, 2009; Nivre et al., 2009): For every non-projective dependency tree, there is a projective dependency tree which can be obtained by reordering the input words. Non-projective dependency parsing can therefore be viewed as projective dependency parsing with an additional reordering of the input words. The reordering can be done online during parsing with a "swap" operation that allows to process input words out of order. This idea can be transferred, because also for every discontinuous constituency tree, one can find a continuous tree by reordering the terminals. Versley (2014) uses an adaptive gradient method to train his parser. He reports a parsing speed of 40-55 sent./sec. and results that surpass those reported for the above mentioned chart parsers.
In (continuous) constituency parsing, incremental shift-reduce parsing using the structured perceptron is an established technique. While the structured perceptron for parsing has first been used by Collins and Roark (2004), classifier-based incremental shift-reduce parsing has been taken up by Sagae and Lavie (2005). A general formulation for the application of the perceptron algorithm to various problems, including shift-reduce constituency parsing, has been introduced by Zhang and Clark (2011b). Improvements have followed (Zhu et al., 2012;Zhu et al., 2013). A similar strategy has been shown to work well for CCG parsing (Zhang and Clark, 2011a), too.
In this paper, we contribute a perceptron-based shift-reduce parsing architecture with beam search (following Zhu et al. (2013) and Bauer (2014)) and extend it such that it can create trees with crossing branches (following Versley (2014)). We present strategies to improve performance on discontinuous structures, such as a new feature set.
Our parser is very fast (up to 640 sent./sec.), and produces accurate results. In our evaluation, where we pay particular attention to the parser performance on discontinuous structures, we show among other things that surprisingly, a grammarbased parser has an edge over a shift-reduce approach concerning the reconstruction of discontinuous constituents.
The remainder of the paper is structured as follows. In subsection 2.1, we introduce the general parser architecture; the subsections 2.2 and 2.3 introduce the features we use and our strategy for handling discontinuous structures. Section 3 presents and discusses the experimental results, section 4 concludes the article.
Discontinuous Shift-Reduce Parsing
Our parser architecture follows previous work, particularly Zhu et al. (2013) and Bauer (2014).
Shift-reduce parsing with perceptron training
An item in our parser consists of a queue q of token/POS-pairs to be processed, and a stack s, which holds completed constituents. 1 The parser uses different transitions: SHIFT shifts a terminal from the queue on to the stack. UNARY-X reduces the first element on the stack to a new constituent labeled X. BINARY-X-L and BINARY-X-R reduce the first two elements on the stack to a new X constituent, with the lexical head coming from the left or the right child, respectively. FINISH removes the last element from the stack. We additionally use an IDLE transition, which can be applied any number of times after FINISH, to improve the comparability of analyses of different lengths (Zhu et al., 2013). The application of a transition is subject to restrictions. UNARY-X, e.g., can only be applied when there is at least a single item on the stack. We implement all restrictions listed in the appendix of Zhang and Clark (2009), and add additional restrictions that block transitions involving the root label when not having arrived at the end of a derivation. We do not use an underlying grammar to filter out transitions which have not been seen during training.
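To make the transition inventory concrete, the following Python sketch (ours, not the authors' Java implementation) models a parser item as a stack/queue pair and applies the SHIFT, UNARY-X and BINARY-X-L/R transitions described above; the Node type and all function names are illustrative assumptions only.

```python
from collections import namedtuple

# A tree node on the stack: constituent label, lexical head (word, tag), children.
Node = namedtuple("Node", ["label", "head", "children"])

def shift(stack, queue):
    # SHIFT: move the next token/POS pair from the queue onto the stack.
    word, tag = queue[0]
    return stack + [Node(tag, (word, tag), [])], queue[1:]

def unary(stack, queue, label):
    # UNARY-X: reduce the top stack element to a new constituent labeled X.
    child = stack[-1]
    return stack[:-1] + [Node(label, child.head, [child])], queue

def binary(stack, queue, label, head_left):
    # BINARY-X-L / BINARY-X-R: reduce the top two stack elements to X,
    # taking the lexical head from the left or the right child.
    left, right = stack[-2], stack[-1]
    head = left.head if head_left else right.head
    return stack[:-2] + [Node(label, head, [left, right])], queue

# Example: shift one token, then apply UNARY-NP.
stack, queue = [], [("books", "NNS")]
stack, queue = shift(stack, queue)
stack, queue = unary(stack, queue, "NP")
assert stack[-1].label == "NP" and queue == []
```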
For decoding, we use beam search (Zhang and Clark, 2011b). Decoding is started by putting the start item (empty stack, full queue) on the beam. Then, repeatedly, a candidate list is filled with all items that result from applying legal transitions to the items on the beam, followed by putting the highest scoring n of them back on the beam (given a beam size of n). Parsing is finished if the highest scoring item on the beam is a final item (stack holds one item labeled with the root label, queue is empty), which can be popped. Item scores are computed as in Zhang and Clark (2011b): The score of the i + 1th item is computed as the sum of the score of the ith item and the dot product of a global feature weight vector and the local weight vector resulting from the changes induced by the corresponding transition to the i + 1th item. The start item has score 0. We train the global weight vector with an averaged Perceptron with early update (Collins and Roark, 2004).
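The decoding loop can be pictured with the following generic beam-search sketch. The toy successor function, scores, and finality test stand in for the real transition system and perceptron feature weights; in the parser, the score increment of each successor is the dot product of the global weight vector with the features triggered by the transition.

```python
def beam_search(start_state, successors, is_final, beam_size=4):
    """Beam search over (score, state) items: successor scores are the parent
    score plus the (perceptron) score of the transition that produced them;
    search stops when the highest-scoring item on the beam is final."""
    beam = [(0.0, start_state)]
    while True:
        best_score, best_state = max(beam, key=lambda x: x[0])
        if is_final(best_state):
            return best_score, best_state
        candidates = [(s + delta, nxt)
                      for s, state in beam
                      for delta, nxt in successors(state)]
        if not candidates:
            return best_score, best_state
        beam = sorted(candidates, key=lambda x: -x[0])[:beam_size]

# Toy example: a state is the list of transitions taken so far; every state
# allows two transitions with fixed scores; a state is final after 3 steps.
def successors(state):
    if len(state) >= 3:
        return []
    return [(1.0, state + ["SHIFT"]), (0.5, state + ["BINARY-X-L"])]

score, state = beam_search([], successors, lambda s: len(s) == 3, beam_size=2)
print(score, state)   # 3.0 ['SHIFT', 'SHIFT', 'SHIFT']
```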
Parsing relies on binary trees. As in previous work, we binarize the incoming trees head-outward with binary top and bottom productions. Given a constituent X which is to be binarized, all intermediate nodes which are introduced will be labeled @X. Lexical heads are marked with Collins-style head rules. As an example, Fig. 2 shows the binarized version of the tree of Fig. 1.
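A much-simplified sketch of binarization with @X intermediate labels follows. For brevity it is left-headed and right-branching, whereas the parser itself binarizes head-outward using Collins-style head rules and binary top and bottom productions; the node representation is our own.

```python
def binarize(label, children):
    """Binarize an n-ary constituent labeled `label`, introducing
    intermediate nodes labeled '@' + label."""
    if len(children) <= 2:
        return (label, children)
    inter = label if label.startswith("@") else "@" + label
    head, rest = children[0], children[1:]
    return (label, [head, binarize(inter, rest)])

# Children are (label, children) pairs; leaves carry an empty child list.
tree = binarize("S", [("PROAV", []), ("VMFIN", []), ("PIS", []), ("VVINF", [])])
print(tree)
# ('S', [('PROAV', []), ('@S', [('VMFIN', []), ('@S', [('PIS', []), ('VVINF', [])])])])
```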
Finally, since we are learning a sparse model, we also exploit the work of Goldberg and Elhadad (2011) who propose to include a feature in the calculation of a score only if it has been observed ≥ MINUPDATE times.
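The MINUPDATE cutoff can be sketched as a simple filter over the learned weight vector; the feature names and counts below are invented for illustration, and the exact point at which the filter is applied in the real parser is not specified here.

```python
def prune_weights(weights, update_counts, min_update=5):
    """Keep a feature's weight only if the feature has been observed at
    least `min_update` times (the MINUPDATE cutoff)."""
    return {f: w for f, w in weights.items()
            if update_counts.get(f, 0) >= min_update}

weights = {"s0wc=the/NP": 0.7, "q0wt=rare/NN": 0.1}
counts = {"s0wc=the/NP": 12, "q0wt=rare/NN": 2}
print(prune_weights(weights, counts))   # {'s0wc=the/NP': 0.7}
```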
Features
Features are generated by applying templates to parser items. They reflect different configurations of stack and queue. As BASELINE features, we use the feature set from Zhang and Clark (2009) without the bracketing features (as used in Zhu et al. (2013)). We furthermore experiment with features that reflect the presence of separating punctuation ",", ":", ";" (SEPARATOR) (Zhang and Clark, 2009), and with the EXTENDED features of Zhu et al. (2013), which look deeper into the trees on the stack, i.e., up to the grand-children instead of only to children. Fig. 3 shows all the feature templates:

unigrams: s0tc, s0wc, s1tc, s1wc, s2tc, s2wc, s3tc, s3wc, q0wt, q1wt, q2wt, q3wt, s0lwc, s0rwc, s0uwc, s1lwc, s1rwc, s1uwc
bigrams: s0ws1w, s0ws1c, s0cs1w, s0cs1c, s0wq0w, s0wq0t, s0cq0w, s0cq0t, s1wq0w, s1wq0t, s1cq0w, s1cq0t, q0wq1w, q0wq1t, q0tq1w, q0tq1t
trigrams: s0cs1cs2w, s0cs1cs2c, s0cs1cq0w, s0cs1cq0t, s0cs1wq0w, s0cs1wq0t, s0ws1cs2c, s0ws1cq0t
extended: s0llwc, s0lrwc, s0luwc, s0rlwc, s0rrwc, s0ruwc, s0ulwc, s0urwc, s0uuwc, s1llwc, s1lrwc, s1luwc, s1rlwc, s1rrwc, s1ruwc
separator: s0wp, s0wcp, s0wq, s0wcq, s0cs1cp, s0cs1cq, s1wp, s1wcp, s1wq, s1wcq

Note that si and qi stand for the ith stack and queue item, w stands for the head word, t for the head tag and c for the constituent label (w, t and c are identical on POS-level). l and r (ll and rr) represent the left and right children (grand-children) of the element on the stack; u handles the unary case. Concerning the separator features, p is a unique separator punctuation between the head words of s0 and s1, q is the count of any separator punctuation between s0 and s1.
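To illustrate how such templates are instantiated, here is a toy feature extractor over a simplified parser item; the string encoding of the features and the item representation are ours.

```python
def baseline_features(stack, queue):
    """Instantiate a handful of the Fig. 3 templates as strings.
    `stack` holds (word, tag, constituent-label) triples for completed
    subtrees; `queue` holds (word, tag) pairs still to be processed."""
    feats = []
    if stack:
        w, t, c = stack[-1]                       # s0
        feats += ["s0wc=%s/%s" % (w, c), "s0tc=%s/%s" % (t, c)]
    if queue:
        w, t = queue[0]                           # q0
        feats.append("q0wt=%s/%s" % (w, t))
    if stack and queue:                           # an s0/q0 bigram template
        feats.append("s0cq0t=%s/%s" % (stack[-1][2], queue[0][1]))
    return feats

print(baseline_features([("entered", "VVFIN", "VP")], [("the", "ART")]))
# ['s0wc=entered/VP', 's0tc=VVFIN/VP', 'q0wt=the/ART', 's0cq0t=VP/ART']
```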
Handling Discontinuities
In order to handle discontinuities, we use two variants of a swap transition which are similar to swap-eager and swap-lazy from Nivre (2009) and Nivre et al. (2009). The first variant, SINGLESWAP, swaps the second item of the stack back on the queue. The second variant COMPOUNDSWAP i bundles a maximal number of adjacent swaps. It swaps i items starting from the second item on the stack, with 1 ≤ i < |s|. Both swap operations can only be applied if 1. the item has not yet been FINISHed and the last transition has not been a transition with the root category, 2. the queue is not empty, 3. all elements to be swapped are pre-terminals, and 4. the first item of the stack has a lower index than the second (this inhibits swap loops).
SINGLESWAP can only be applied if there are at least two items on the stack. For COMPOUNDSWAP i, there must be at least i + 1 items.
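A schematic check of the legality conditions for COMPOUNDSWAP i might look as follows; the item representation (dicts with a 'preterminal' flag and a terminal 'index') and the two boolean flags are simplifications of the real parser state, not the authors' data structures.

```python
def can_compound_swap(stack, queue, i, finished, last_was_root):
    """Conditions for COMPOUNDSWAP_i: the derivation is not finished, the last
    transition did not involve the root category, the queue is non-empty,
    the i items below the stack top are pre-terminals, the stack is deep
    enough, and the top has a lower terminal index than the item below it."""
    if finished or last_was_root or not queue:
        return False
    if len(stack) < i + 1:
        return False
    to_swap = stack[-(i + 1):-1]                  # the i items below the top
    if not all(item["preterminal"] for item in to_swap):
        return False
    return stack[-1]["index"] < stack[-2]["index"]   # inhibits swap loops

top   = {"preterminal": True, "index": 0}
below = {"preterminal": True, "index": 2}
print(can_compound_swap([below, top], [("noch", "ADV")], 1, False, False))  # True
```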
Transition sequences are extracted from treebank trees with an algorithm that traverses the tree bottom-up and collects the transitions. For a given tree τ , intuitively, the algorithm works as follows. We start out with a queue t containing the preterminals of τ , a stack σ that receives finished constituents, a counter s that keeps track of the number of terminals to be swapped, and an empty sequence r that holds the result. First, the first element of t is pushed on σ and removed from t.
While |σ| > 0 or |t| > 0, we repeat the following two steps.
1. Repeat while transitions can be added:
(a) if the top two elements on σ, l and r, have the same parent p labeled X and l/r is the head of p, add BINARY-X-l/r to r, pop two elements from σ and push p; (b) if the top element on σ is the only child of its parent p labeled X, add UNARY-X, pop an element of σ and push p.
2. If |t| > 0, while the first element of t is not equal to the leftmost pre-terminal dominated by the right child of the parent of the top element on σ (i.e., while there are terminals that must be swapped), add SHIFT to r, increment s, push the first element of t on σ and remove it from t. Finally, add another SHIFT to r, push first element of t to σ and remove it from t (this will contribute to the next reduction). If s > 0, we must swap. Either we add s many SWAP transitions or one COMPOUNDSWAP s to r. Then we move s many elements from σ to the front of t, starting with the second element of σ. Finally we set s = 0.
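Rather than reproduce the full oracle, the following toy simulation (ours) shows only the reordering effect that SHIFT and SINGLESWAP have on terminal indices, which is what lets a later reduction combine non-adjacent tokens into a discontinuous constituent.

```python
def simulate(transitions, n_tokens):
    """Track only terminal indices: SINGLESWAP moves the second item of the
    stack back to the front of the queue, so two non-adjacent tokens can end
    up adjacent on the stack and be reduced together."""
    stack, queue = [], list(range(n_tokens))
    for t in transitions:
        if t == "SHIFT":
            stack.append(queue.pop(0))
        elif t == "SINGLESWAP":
            queue.insert(0, stack.pop(-2))
    return stack, queue

# After shifting tokens 0-2 and swapping token 1 back, tokens 0 and 2 sit
# next to each other on the stack.
print(simulate(["SHIFT", "SHIFT", "SHIFT", "SINGLESWAP"], 4))
# ([0, 2], [1, 3])
```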
As an example, consider the transition sequence we would extract from the tree in Fig. 2. Using SINGLESWAP, we would obtain SHIFT, SHIFT, SHIFT, SHIFT, SINGLESWAP, SINGLESWAP, BINARY-VP-R, SHIFT, BINARY-@S-R, SHIFT, BINARY-S-L, FINISH. Using COMPOUNDSWAP i, instead of two SINGLESWAPs, we would just obtain a single COMPOUNDSWAP 2.

Fig. 4 lists the features for discontinuous structures:
unigrams: s0xwc, s1xwc, s2xwc, s3xwc, s0xtc, s1xwc, s2xtc, s3xwc, s0xy, s1xy, s2xy, s3xy
bigrams: s0xs1c, s0xs1w, s0xs1x, s0ws1x, s0cs1x, s0xs2c, s0xs2w, s0xs2x, s0ws2x, s0cs2x, s0ys1y, s0ys2y, s0xq0t, s0xq0w

We explore two methods which improve the performance on discontinuous structures. Even though almost a third of all sentences in the German NeGra and TiGer treebanks contains at least one discontinuous constituent, among all constituents, the discontinuous ones are rare, making up only around 2%. The first, simple method addresses this sparseness by raising the importance of the features that model the actual discontinuities by counting all feature occurrences at a gold swap transition twice (IMPORTANCE).
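The IMPORTANCE strategy can be sketched as a doubled occurrence count at swap transitions; exactly how these counts feed into training is not spelled out here, so the function below is only schematic and its names are ours.

```python
def feature_counts(gold_sequence, feature_fn, importance=True):
    """Accumulate feature occurrence counts over a gold transition sequence,
    counting features at swap transitions twice (the IMPORTANCE strategy)."""
    counts = {}
    for state, transition in gold_sequence:
        weight = 2 if importance and "SWAP" in transition else 1
        for f in feature_fn(state, transition):
            counts[f] = counts.get(f, 0) + weight
    return counts

seq = [("s1", "SHIFT"), ("s2", "SINGLESWAP")]
print(feature_counts(seq, lambda s, t: ["%s:%s" % (s, t)]))
# {'s1:SHIFT': 1, 's2:SINGLESWAP': 2}
```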
Secondly, we use a new feature set (DISCO) with bigram and unigram features that conveys information about discontinuities. The features condition the possible occurrence of a gap on previous gaps and their properties. 2 The feature templates are shown in Fig. 4. x denotes the gap type of a tree on the stack. There are three possible values, either "none" (tree is fully continuous), "pass" (there is a gap at the root, i.e., this gap must be filled later further up in the tree), or "gap" (the root of this tree fills a gap, i.e., its children have gaps, but the root does not). Finally, y is the sum of all gap lengths.
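The gap type x used by the DISCO features can be computed from the terminal indices covered by a node and by its children, for example as in the following sketch (our own helper names; the sum-of-gap-lengths feature y is omitted for brevity).

```python
def blocks(indices):
    """Group a set of terminal indices into maximal continuous blocks."""
    out, run = [], []
    for i in sorted(indices):
        if run and i != run[-1] + 1:
            out.append(run)
            run = []
        run.append(i)
    out.append(run)
    return out

def gap_type(node_indices, child_indices_list):
    """'pass' if the node itself still has a gap at its root, 'gap' if the
    node is continuous but its children have gaps (the node fills the gap),
    'none' if everything is fully continuous."""
    if len(blocks(node_indices)) > 1:
        return "pass"
    if any(len(blocks(c)) > 1 for c in child_indices_list):
        return "gap"
    return "none"

print(gap_type({0, 3}, [{0}, {3}]))              # pass
print(gap_type({0, 1, 2, 3}, [{0, 3}, {1, 2}]))  # gap
```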
Experiments
Data
We use the TiGer treebank release 2.2 (TIGER), and the NeGra treebank (NEGRA). For TIGER, we use the first half of the last 10,000 sentences for development and the second half for testing. 3 We also recreate the split of Hall and Nivre (2008) (TIGERHN), for which we split TiGer in 10 parts, assigning sentence i to part i mod 10. The first of those parts is used for testing, the concatenation of the rest for training.
From NeGra, we exclude all sentences longer than 30 words (in order to make a comparison with rparse possible, see below), and split off the last 10% of the treebank for testing, as well as the previous 10% for development. As a preprocessing step, in both treebanks we remove spurious discontinuities that are caused by material which is attached to the virtual root node (mainly punctuation). All such elements are attached to the least common ancestor node of their left and right terminal neighbors (as proposed by Levy (2005), p. 163). We furthermore create a continuous variant NEGRACF of NEGRA with the method usually used for PCFG parsing: For all maximal continuous parts of a discontinuous constituent, a separate node is introduced (Boyd, 2007). Subsequently, all nodes that do not cover the head child of the discontinuous constituent are removed.
No further preprocessing or cleanup is applied.
Experimental Setup
Our parser is implemented in Java. We run all our experiments with Java 8 on an Intel Core i5, allocating 15 GB per experiment. All experiments are carried out with gold POS tags, as in previous work on shift-reduce constituency parsing (Zhang and Clark, 2009). Grammatical function labels are discarded.
For the evaluation, we use the corresponding module of discodop. 4 We report several metrics (as implemented in discodop):
• Extended labeled bracketing, in which a bracket for a single node consists of its label and a set of pairs of indices, delimiting the continuous blocks it covers (illustrated in the sketch following this list). We do not include the root node in the evaluation and ignore punctuation. We report labeled precision, recall and F1, as well as exact match (all brackets correct).
• Leaf-ancestor (Sampson and Babarczy, 2003), for which we consider all paths from leaves to the root.
• Tree edit distance (Emms, 2008), which consists of the minimum edit distance between gold tree and parser output.
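As a simplified illustration of the extended labeled bracketing metric in the first item above, the following sketch identifies a bracket by its label and the set of terminal indices it covers (equivalent information to the block-delimiting index pairs) and computes labeled F1; it omits the root exclusion and punctuation handling used in discodop, and the tree encoding is ours.

```python
def brackets(tree):
    """Collect (label, frozenset-of-terminal-indices) brackets from a tree
    given as (label, children), with leaves encoded as ('tag', index)."""
    label, rest = tree
    if isinstance(rest, int):
        return set(), {rest}
    out, span = set(), set()
    for child in rest:
        c_out, c_span = brackets(child)
        out |= c_out
        span |= c_span
    out.add((label, frozenset(span)))
    return out, span

def labeled_f1(gold, pred):
    g, _ = brackets(gold)
    p, _ = brackets(pred)
    tp = len(g & p)
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec) if tp else 0.0

gold = ("S", [("VP", [("PROAV", 0), ("VVINF", 3)]), ("VMFIN", 1), ("PIS", 2)])
pred = ("S", [("VP", [("VVINF", 3)]), ("PROAV", 0), ("VMFIN", 1), ("PIS", 2)])
print(labeled_f1(gold, pred))   # 0.5 (the discontinuous VP bracket is missed)
```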
Aside from a full evaluation, we also evaluate only the constituents that are discontinuous. We perform 20 training iterations unless indicated otherwise. When training stops, we average the model (as in Daumé III (2006)).
We run further experiments with rparse 5 (Kallmeyer and Maier, 2013) to facilitate a comparison with a grammar-based parser.
Results
We start with discontinuous parsing experiments on NEGRA and TIGER, followed by continuous parsing experiments, and a comparison to grammar-based parsing.
Discontinuous Parsing
NeGra The first goal is to determine the effect of different beam sizes with BASELINE features and the COMPOUNDSWAP i operation. We run experiments with beam sizes 1, 2, 4 and 8; Fig. 5 shows the results obtained on the dev set after each iteration. Fig. 6 shows the average decoding speed during each iteration for each beam size (both smoothed).
Tracking two items instead of one results in a large improvement. Raising the beam size from 2 to 4 results in a smaller improvement. The improvement obtained by augmenting the beam size from 4 to 8 is even smaller. This behavior is mirrored by the parsing speeds during training: The differences in parsing speed roughly align with the result differences. Note that fast parsing during training means that the parser does not perform well (yet) and that therefore, early update is done more often. Note finally that the average parsing speeds on the test set after the last training iteration range from 640 sent./sec. (greedy) to 80 sent./sec. (beam size 8).

For further experiments on NeGra, we choose a beam size of 8. Tab. 1 shows the bracketing scores for various parser setups. In Tab. 2, the corresponding TED and Leaf-Ancestor scores are shown.
Table 2: Results NEGRA TED and Leaf-Ancestor.

In the first block of the tables, we compare SWAP with COMPOUNDSWAP i. On all constituents, the latter beats the former by 0.8 (F1). On discontinuous constituents, using COMPOUNDSWAP i gives an improvement of more than four points in precision and of about 0.8 points in recall. A manual analysis confirms that as expected, particularly discontinuous constituents with large gaps profit from bundling swap transitions.
In the second block, we run the BASELINE features with COMPOUNDSWAP i combined with SEPARATOR, EXTENDED and DISCO. The SEPARATOR features were not as successful as they were for Zhang and Clark (2009). All scores for discontinuous constituents drop (compared to the baseline). The EXTENDED features are more effective and give an improvement of about half a point F1 on all constituents, as well as the highest exact match among all experiments. On discontinuous constituents, precision rises slightly but we lose about 1.4% in recall (compared to the baseline). The latter seems to be due to the fact that in comparison to the baseline, with EXTENDED, more sentences get erroneously analyzed as not containing any crossing branches. This effect can be explained with data sparseness and is less pronounced when more training data is available (see below). Similarly to EXTENDED, the new DISCO features lead to a slight gain over the baseline (on all constituents). As with EXTENDED, on discontinuous constituents, we again gain precision (3%) but lose recall (0.5%), because more sentences are wrongly analyzed as not having discontinuities than in the BASELINE. A category-based evaluation of discontinuous constituents reveals that EXTENDED has an advantage over DISCO when considering all constituents. However, we can also see that the DISCO features yield better results than EXTENDED particularly on the frequent discontinuous categories (NP, VP, AP, PP), which indicates that the information about gap type and gap length is useful for the recovery of discontinuities. IMPORTANCE (see Sec. 2.3) is not very successful, yielding results which lie in the vicinity of those of the BASELINE.
In the third block of the tables, we test the performance of the DISCO features in combination with other techniques, i.e., we use the BASELINE and DISCO features with COMPOUNDSWAP i and combine it with EXTENDED and SEPARATOR features as well as with the IMPORTANCE strategy. All experiments beat the BASELINE/DISCO combination in terms of F1. EXTENDED and DISCO give a cumulative advantage, resulting in an increase of precision of almost 4%, resp. over 6%, on discontinuous constituents, compared to the use of DISCO, resp. EXTENDED, alone. Adding the SEPARATOR features to this combination does not bring an advantage. The IMPORTANCE strategy is the most successful one in combination with DISCO, causing a boost of almost 10% on precision of discontinuous constituents, leading to the highest overall discontinuous F1 of 29.41 (notably more than 12 points higher than the baseline); also on all constituents we obtain the third-highest F1. Combining DISCO with IMPORTANCE and EXTENDED leads to the highest overall F1 on all constituents of 76.95; however, the results on discontinuous constituents are slightly lower than for IMPORTANCE alone. This confirms the previously observed behavior: The EXTENDED features help when considering all constituents, but they do not seem to be effective for the recovery of discontinuities in particular.
In the TED and LA scores (Tab. 2), we see much less variation than in the bracketing scores. As reported in the literature (e.g., Rehbein and van Genabith (2007)), this is because of the fact that with bracketing evaluation, a single wrong attachment can "break" brackets which otherwise would be counted as correct. Nevertheless, the trends from bracketing evaluation repeat.
To sum up, the COMPOUNDSWAP i operation works better than SWAP because the latter misses long gaps. The most useful feature sets were EXTENDED and DISCO, both when used independently and when used together. DISCO was particularly useful for discontinuous constituents. SEPARATOR yielded no usable improvements. IMPORTANCE has also proven to be effective, yielding the best results on discontinuous constituents (in combination with DISCO). Over almost all experiments, a common error is that on root level, CS and S get confused, indicating that the present features do not provide sufficient information for disambiguation of those categories. We can also confirm the tendency that discontinuous VPs in relatively short sentences are recognized correctly, as reported by Versley (2014).
TiGer We now repeat the most successful experiments on TIGER. Tab. 3 shows the parsing results for the test set.
Some of the trends seen in the experiments with NEGRA are repeated. EXTENDED and DISCO yield an improvement on all constituents. However, now not only DISCO, but also EXTENDED lead to improved scores on discontinuous constituents. As mentioned above, this can be explained with the fact that for the EXTENDED features to be effective, the amount of training data available in NEGRA was not enough. Other than in NEGRA, the DISCO features are now more effective when used alone, leading to the highest overall F1 on discontinuous constituents of 19.45. They are, however, less effective in combination with EXTENDED. This is partially remedied by giving the swap transitions more IMPORTANCE, which leads to the highest overall F1 on all constituents of 74.71.
The models we learn are sparse; therefore, as mentioned above, we can exploit the work of Goldberg and Elhadad (2011). They propose to only include the weight of a feature in the computation of a score if it has been seen more than MINUPDATE times. We repeat the BASELINE experiment with two different MINUPDATE settings (see Tab. 3). As expected, the MINUPDATE models are much smaller. The final model with the baseline experiment uses 8.3m features (parsing speed on test set 73 sent./sec.), with MINUPDATE 5 3.3m features (121 sent./sec.) and with MINUPDATE 10 1.8m features (124 sent./sec.). With MINUPDATE 10, the results do degrade. However, with MINUPDATE 5, in addition to the faster parsing, we consistently improve over the baseline.
Finally, in order to check the convergence, we run a further experiment in which we limit training iterations to 40 instead of 20, together with beam size 4. We use the BASELINE features with COMPOUNDSWAP i combined with DISCO, EXTENDED, and IMPORTANCE. The parsing speed on the test set drops to around 39 sentences per second. However, we achieve 75.10 F1, i.e., a slight improvement over the experiments in Tab. 3 that confirms the tendencies visible in Fig. 5.
Continuous Parsing
We investigate the impact of the swap transitions on both speed and parsing results by running an experiment with NEGRACF using the BASELINE and EXTENDED features. The corresponding results are shown in Tab. 4.
Particularly high frequency categories (NP, VP, S) are much easier to find in the continuous case and show large improvements. This explains why without the swap transition, F1 with BASELINE features is 6.9 points higher than the F1 on discontinuous constituents (with COMPOUNDSWAP i). With the EXTENDED features, we obtain a small improvement.
Note that with the shift-reduce approach, the difference between the computational cost of producing discontinuous constituents vs. the cost of producing continuous constituents is much lower than for a grammar-based approach. When producing continuous constituents, parsing is only 20% faster than with the swap transition, namely 97 instead of 81 sentences per second.
In order to give a different perspective on the role of discontinuous constituents, we perform two further evaluations. First, we remove the discontinuities from the output of the discontinuous baseline parser using the procedure described in Sec. 3.1 and evaluate the result against the continuous gold data. We obtain an F1 of 76.70, 5.5 points lower than the continuous baseline. Secondly, we evaluate the output of the continuous baseline parser against the discontinuous gold data. This leads to an F1 of 78.89, 2.9 points more than the discontinuous baseline. Both evaluations confirm the intuition that parsing is much easier when discontinuities (i.e., in our case the swap transition) do not have to be considered.
Comparison with other Parsers
rparse In order to compare our parser with a grammar-based approach, we now parse NEGRA with rparse, with the same training and test sets as before (i.e., we do not use the development set). We employ markovization with v = 1, h = 2 and head driven binarization with binary top and bottom productions. The first thing to notice is that rparse is much slower than our parser. The average parsing speed is about 0.3 sent./sec.; very long sentences require over a minute to be parsed. The parsing results are shown in Tab. 5. They are about 5 points worse than those reported by Kallmeyer and Maier (2013). This is due to the fact that they train on the first 90% of the treebank, and not on the first 80% as we do, which leads to an increased number of unparsed sentences. In comparison to the baseline setting of the shift-reduce parser with beam size 8, the results are around 10 points worse. However, rparse reaches an F1 of 26.61 on discontinuous constituents, which is 5.9 points more than we achieved with the best setting with our parser.
In order to investigate why the grammar-based approach outperforms our parser on discontinuous constituents, we count the frequency of LCFRS productions of a certain gap degree in the binarized grammar used in the rparse experiment. The average occurrence count of rules with gap degree 0 is 12.18. Discontinuous rules have a much lower frequency, the average count of productions with one, two and three gaps being 3.09, 2.09, and 1.06, respectively. In PCFG parsing, excluding low frequency productions does not have a large effect (Charniak, 1996); however, this does not hold for LCFRS parsing, where they have a major influence (cf. Maier (2013, p. 205)): This means that removing low frequency productions has a negative impact on the parser performance particularly concerning discontinuous structures; however, it also means that low frequency discontinuous productions get triggered reliably. This hypothesis is confirmed by the fact that the our parser performs much worse on discontinuous constituents with a very low frequency (such as CS, making up only 0.62% of all discontinuous constituents) than it performs on those with a high frequency (such as VP, making up 60.65% of all discontinuous constituents), while rparse performs well on the low frequency constituents.
EaFi and Dependency Parsers
We run an experiment with 40 iterations on TIGERHN, using DISCO, EXTENDED and IMPORTANCE. Tab. 6 lists the results, together with the corresponding results of Versley (2014), Hall and Nivre (2008) (H&N) and Fernández-González and Martins (2015) (F&M).
Our results exceed those of EaFi 6 and the exact match score of H&N. We are outperformed by the F&M parser. Note that particularly the comparison to EaFi must be handled with care, since Versley (2014) uses additional preprocessing: PP-internal NPs are annotated explicitly, and the parenthetical sentences are changed to be embedded by their enclosing sentence (instead of vice versa).
We postpone a thorough comparison with both EaFi and the dependency parsers to future work. 6 Note that Versley (2014) reports a parsing speed of 40-55 sent./sec.; depending on the beam size and the training set size, per second, our parser parses 39-640 sentences.
Discussion
To our knowledge, surprisingly, numerical scores for discontinuous constituents have not been reported anywhere in previous work. The relatively low overall performance with both grammar-based and shift-reduce based parsing, along with the fact that the grammar-based approach outperforms the shift-reduce approach, is striking. We have shown that it is possible to push the precision on discontinuous constituents, but not the recall, to the level of what can be achieved with a grammar-based approach.
Particularly the outcome of the experiments involving the EXTENDED features and IMPORTANCE drives us to the conclusion that the major problem when parsing discontinuous constituents is data sparseness. More features cannot be the only solution: A more reliable recognition of discontinuous constituents requires a more robust learning from larger amounts of data.
Conclusion
We have presented a shift-reduce parser for discontinuous constituents which combines previous work in shift-reduce parsing for continuous constituents with recent work in easy-first parsing of discontinuous constituents. Our experiments confirm that an incremental shift-reduce architecture with a swap transition can indeed be used to parse discontinuous constituents. The swap transition is associated with a low computational cost. We have obtained a speed-up of up to 2,000% in comparison to the grammar-based rparse, and we have shown that we obtain better results than with the grammar-based parser, even though the grammar-based strategy does better at the reconstruction of discontinuous constituents.
In future work, we will concentrate on methods that could remedy the data sparseness concerning discontinuous constituents, such as self-training. Furthermore, we will experiment with larger feature sets that add lexical information. A formal investigation of the expressivity of our parsing model is currently under way.
Figure 1: Example annotation with discontinuous constituents from TiGer.
Figure 2: Binarization example.
Figure 3: Feature templates.
Figure 4: Features for discontinuous structures.
Figure 5: NEGRA dev results (F1) for different beam sizes.
Figure 6: NEGRA dev average parsing speeds per sentence for different beam sizes.
Table 3: Results TIGER, beam size 4.
Table 4: Results NEGRACF.

          LR     LP     LF1    E
BASELINE  81.89  82.49  82.19  49.05
EXTENDED  82.20  82.70  82.45  49.54
Table 5: Results NEGRA rparse.
Table 6: Results TIGERHN, sentence length ≤ 40.
1 As in other shift-reduce approaches, we assume that POS tagging is done outside of the parser.
2 See Maier and Lichte (2011) for a formal account on gaps in treebanks.
3 This split, which corresponds to the split used in the SPMRL 2013 shared task (Seddah et al., 2013), was proposed in Farkas and Schmid (2012). We exclude sentences 46,234 and 50,224, because of annotation errors. Both contain nodes with more than one parent node.
4 http://github.com/andreasvc/discodop
5 http://github.com/wmaier/rparse
Acknowledgments I wish to thank Miriam Kaeshammer for enlightening discussions and the three anonymous reviewers for helpful comments and suggestions. This work was partially funded by Deutsche Forschungsgemeinschaft (DFG).
Krasimir Angelov and Peter Ljunglöf. 2014. Fast statistical parsing with parallel multiple context-free grammars. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 368-376, Gothenburg, Sweden.
John Bauer. 2014. Stanford shift-reduce parser. http://nlp.stanford.edu/software/srparser.shtml.
Adriane Boyd. 2007. Discontinuity revisited: An improved conversion to context-free representations. In Proceedings of The Linguistic Annotation Workshop (LAW) at ACL 2007, pages 41-44, Prague, Czech Republic.
Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 212-216, Portland, OR.
Eugene Charniak. 1996. Tree-bank grammars. Technical Report CS-96-02, Department of Computer Science, Brown University, Providence, RI.
Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 111-118, Barcelona, Spain.
Hal Daumé III. 2006. Practical Structured Learning Techniques for Natural Language Processing. Ph.D. thesis, University of Southern California, Los Angeles, CA.
Péter Dienes and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 33-40, Sapporo, Japan.
Martin Emms. 2008. Tree distance and some other variants of Evalb. In Proceedings of the Sixth International Language Resources and Evaluation (LREC'08), pages 1373-1379, Marrakech, Morocco.
Kilian Evang and Laura Kallmeyer. 2011. PLCFRS parsing of English discontinuous constituents. In Proceedings of the 12th International Conference on Parsing Technologies (IWPT 2011), pages 104-116, Dublin, Ireland.
Richard Farkas and Helmut Schmid. 2012. Forest reranking through subtree ranking. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1038-1047, Jeju Island, Korea.
Daniel Fernández-González and André F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Beijing, China. To appear.
Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742-750, Los Angeles, CA.
Yoav Goldberg and Michael Elhadad. 2011. Learning sparser perceptron models. Technical report, Ben Gurion University of the Negev.
Johan Hall and Joakim Nivre. 2008. Parsing discontinuous phrase structure with grammatical functions. In Bengt Nordström and Aarne Ranta, editors, Advances in Natural Language Processing, volume 5221 of Lecture Notes in Computer Science, pages 169-180. Springer, Gothenburg, Sweden.
Valentin Jijkoun. 2003. Finding non-local dependencies: Beyond pattern matching. In The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics, pages 37-43, Sapporo, Japan.
Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136-143, Philadelphia, PA.
Laura Kallmeyer and Wolfgang Maier. 2013. Data-driven parsing using probabilistic linear context-free rewriting systems. Computational Linguistics, 39(1):87-119.
Roger Levy and Christopher Manning. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 327-334, Barcelona, Spain.
Roger Levy. 2005. Probabilistic Models of Word Order and Syntactic Discontinuity. Ph.D. thesis, Stanford University.
Wolfgang Maier and Timm Lichte. 2011. Characterizing discontinuity in constituent treebanks. In Formal Grammar. 14th International Conference, FG 2009, Bordeaux, France, July 25-26, 2009, Revised Selected Papers, volume 5591 of LNCS/LNAI, pages 167-182. Springer-Verlag.
Wolfgang Maier and Anders Søgaard. 2008. Treebanks and mild context-sensitivity. In Philippe de Groote, editor, Proceedings of the 13th Conference on Formal Grammar (FG-2008), pages 61-76, Hamburg, Germany. CSLI Publications.
Wolfgang Maier, Miriam Kaeshammer, and Laura Kallmeyer. 2012. Data-driven PLCFRS parsing revisited: Restricting the fan-out to two. In Proceedings of the Eleventh International Conference on Tree Adjoining Grammars and Related Formalisms (TAG+11), pages 126-134, Paris, France.
Wolfgang Maier. 2013. Parsing Discontinuous Structures. Dissertation, University of Tübingen.
Joakim Nivre, Marco Kuhlmann, and Johan Hall. 2009. An improved oracle for dependency parsing with online reordering. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 73-76, Paris, France.
Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351-359, Singapore.
Ines Rehbein and Josef van Genabith. 2007. Evaluating evaluation measures. In Proceedings of the 16th Nordic Conference of Computational Linguistics NODALIDA-2007, pages 372-379, Tartu, Estonia.
Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125-132, Vancouver, BC.
Geoffrey Sampson and Anna Babarczy. 2003. A test of the leaf-ancestor metric for parse accuracy. Journal of Natural Language Engineering, 9:365-380.
Helmut Schmid. 2006. Trace prediction and recovery with unlexicalized PCFGs and slash features. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 177-184, Sydney, Australia.
Djamé Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho D. Choi, Richárd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Yuval Marton, Joakim Nivre, Adam Przepiórkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woliński, and Alina Wróblewska. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146-182, Seattle, WA.
Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On Multiple Context-Free Grammars. Theoretical Computer Science, 88(2):191-229.
Andreas van Cranenburgh and Rens Bod. 2013. Discontinuous parsing with an efficient and accurate DOP model. In Proceedings of The 13th International Conference on Parsing Technologies, Nara, Japan.
Andreas van Cranenburgh. 2012. Efficient parsing with linear context-free rewriting systems. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 460-470, Avignon, France.
Yannick Versley. 2014. Experiments with easy-first nonprojective constituent parsing. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 39-53, Dublin, Ireland.
K. Vijay-Shanker, David Weir, and Aravind K. Joshi. 1987. Characterising structural descriptions used by various formalisms. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 104-111, Stanford, CA.
Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 162-171, Paris, France.
Yue Zhang and Stephen Clark. 2011a. Shift-reduce CCG parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 683-692, Portland, OR.
Yue Zhang and Stephen Clark. 2011b. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.
Muhua Zhu, Jingbo Zhu, and Huizhen Wang. 2012. Exploiting lexical dependencies from large-scale data for better shift-reduce constituency parsing. In Proceedings of COLING 2012, pages 3171-3186, Mumbai, India.
Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434-443, Sofia, Bulgaria.
1,999,239 | Splitting Complex English Sentences | This paper applies parsing technology to the task of syntactic simplification of English sentences, focusing on the identification of text spans that can be removed from a complex sentence. We report the most comprehensive evaluation to-date on this task, using a dataset of sentences that exhibit simplification based on coordination, subordination, punctuation/parataxis, adjectival clauses, participial phrases, and appositive phrases. We train a decision tree with features derived from text span length, POS tags and dependency relations, and show that it significantly outperforms a parser-only baseline. | [
2382276,
15700645,
5336265,
7207849,
14068874,
3231246,
33850544,
2245040,
17817489,
2935285,
15636533,
18228350,
17133917,
4896510
] | Splitting Complex English Sentences
Splitting Complex English Sentences
John Lee ([email protected])
J. Buddhika K. Pathirage Don ([email protected])
Department of Linguistics and Translation, City University of Hong Kong
Hong Kong Applied Science and Technology Research Institute
Proceedings of the 15th International Conference on Parsing Technologies, Pisa, Italy, September 20-22, 2017. Association for Computational Linguistics.
This paper applies parsing technology to the task of syntactic simplification of English sentences, focusing on the identification of text spans that can be removed from a complex sentence. We report the most comprehensive evaluation to-date on this task, using a dataset of sentences that exhibit simplification based on coordination, subordination, punctuation/parataxis, adjectival clauses, participial phrases, and appositive phrases. We train a decision tree with features derived from text span length, POS tags and dependency relations, and show that it significantly outperforms a parser-only baseline.
Introduction
The task of text simplification aims to rewrite a sentence so as to reduce its complexity, while preserving its meaning and grammaticality. The rewriting may involve various aspects, including lexical simplification, syntactic simplification, content deletion, and content insertion for clarification. This paper focuses on syntactic simplification and, specifically, on splitting a complex sentence into two simpler sentences. 1 Consider the input sentence S in Table 1, a complex sentence containing a participial phrase, "carrying numerous books". After removing this phrase from S, the system generates S 2 from the phrase by turning the participle "carrying" into the finite form "was carrying", and by generating the pronoun "he" as the subject.
S:  The man, carrying numerous books, entered the room.
S1: The man entered the room.
S2: He was carrying numerous books.
Table 1: Example input (S) and output sentences (S1, S2) for the task of syntactic simplification.

Footnote 1: The simplified sentences can in turn be split iteratively.
Author note: The second author completed this work as a Postdoctoral Fellow at City University of Hong Kong.

A number of systems can already perform this task (Chandrasekar et al., 1996; Siddharthan, 2002a; Inui et al., 2003; Belder and Moens, 2010; Bott et al., 2012; Saggion et al., 2015). While some systems have undergone task-based evaluations, such as reading comprehension, most have adopted holistic assessment, which commonly includes human ratings on the grammaticality, fluency, meaning preservation, and simplicity of the system output (Štajner et al., 2016). These ratings are indeed helpful in indicating the overall quality of a system; however, the need for human intervention restricts the scale of the evaluations, and makes direct comparisons difficult. Other systems have been evaluated with automatic metrics, such as BLEU and readability metrics (Aluisio et al., 2010; Narayan and Gardent, 2014), which overcome the limitations of human ratings, but do not reveal what aspects of the simplification process caused the most difficulties.
The contribution of this paper is two-fold. First, it presents the first publicly available dataset that facilitates detailed, automatic evaluation on syntactic simplification. Second, we report the results of a decision tree approach that incorporates parse features, giving a detailed analysis on its performance for various constructs.
Previous Work
The phrase-based and syntax-based machine translation approaches have been used in many text simplification systems (Vickrey and Koller, 2008; Zhu et al., 2010; Coster and Kauchak, 2011; Wubben et al., 2012). While these approaches are effective for lexical substitution and deletion, they are less so for sentence splitting, sentence reordering, and morphological changes (Bott et al., 2012; Siddharthan, 2014). Most syntactic simplification systems start by analyzing the input sentence via a parse tree, or a deep semantic representation (Narayan and Gardent, 2014). For identifying the referent NP (clause attachment), accuracy can reach 95%; for identifying the clause boundary, accuracy is at 97% when there is punctuation, and 80% in general (Siddharthan, 2002b). Noun post-modifiers can be extracted at an F-measure of 92% (Dornescu et al., 2014; Stanovsky and Dagan, 2016).

Table 2: Distribution of syntactic constructs in the test set (Section 3). The complex sentence can be split into two simpler sentences by (i) removing the text span (italicized); and (ii) transforming the text span into a new sentence with the referent (underlined). This paper focuses on step (i).

Given a syntactic analysis of the input sentence, the system then applies manually written transformation rules (Siddharthan, 2002a; Belder and Moens, 2010; Bott et al., 2012; Saggion et al., 2015). These rules identify specific constructs in a parse tree, such as the participial phrase in S in Table 1; they then determine whether the construct should be split, and if so, generate an independent sentence from it. For example, Aluísio et al. (2008) used a set of transformation rules to treat 22 syntactic constructs. Siddharthan (2011) used 63 rules, which handle coordination of both verb phrases and full clauses, subordination, apposition and relative clauses, as well as conversion of passive voice to active voice. Siddharthan and Angrosh (2014) used 26 hand-crafted rules for apposition, relative clauses and combinations of the two, as well as 85 rules that handle subordination and coordination.
Data
Many evaluation datasets are available for lexical simplification (Paetzold and Specia, 2016), but there is not yet any that enables automatic evaluation of syntactic simplification systems. We created an annotated dataset for this purpose, based on the 167,689 pairs of "normal" and simplified sentences from Wikipedia and Simple Wikipedia aligned by Kauchak (2013). While a majority of these pairs are one-to-one alignments, 23,715 of them are one-to-two alignments (footnote 2). These aligned sentences, in their raw form, can serve as triplets of S, S1 and S2 (Table 1).

However, as pointed out by Xu et al. (2015), Wikipedia and Simple Wikipedia contain rather noisy data; indeed, upon manual inspection, not all triplets are good examples of syntactic simplification. These fall into two cases. In the first case, significant content from S is deleted and appears neither in S1 nor S2; these triplets provide examples of content deletion rather than splitting. In the second case, S2 (or S1) consists mostly of new content. In some instances, S1 (or S2) is so similar to S that no real splitting occurs. In others, the new content puts into doubt whether the splitting of S was motivated by syntactic complexity alone, or was influenced by the new content. To reduce the noise, we employed human annotation to create the test set, and an automatic procedure to clean up the training set.
<S><ref>The man</ref> <split type="participial">, carrying numerous books,</split> entered the room.</S>
<S1><ref1>The man</ref1> entered the room.</S1>
<S2><ref2>He</ref2> was carrying numerous books.</S2>
Table 3: Each triplet in our corpus contains a complex sentence (S) and the two shorter sentences (S1, S2) into which it was re-written.
Test set
An annotator marked up 1,070 triplets of (S, S1, S2) in the format shown in Table 3, with the following items (footnote 3):
Removed text span The <split> element encloses the text span that is removed from S. This text span usually, though not necessarily, appears in S 1 or S 2 .
Construct Type
The type attribute inside the <split> element indicates the construct type of the removed text span. Table 2 gives a list of the construct types and their distribution.
Re-ordering If the removed text span forms the basis of S 1 (S 2 ), the dest attribute inside the <split> element takes the value S1 (S2). This attribute indicates whether sentence reordering has occurred.
Referent There are often referring expressions in S 1 and S 2 for an entity in S. For example, in Table 1, the words "the man" and "he" in S 1 and S 2 refer to "the man" in S. These referring expressions are marked as <ref1> and <ref2>, and the entity in S is marked as <ref>.
Training set
The rest of the triplets form our training instances.
To filter out instances that are not genuine splits (see above) and to determine the value of dest, we require at least 75% of the words in either S 1 or S 2 to appear in S. To determine the value of type, we ran the baseline system (Section 4), which is unsupervised and has relatively high recall, on S. Thus, the training set has all the annotations in Table 3, except for <ref>, <ref1> and <ref2>.
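A minimal sketch of this filtering step is given below; it assumes whitespace-tokenized sentences with light punctuation stripping, and the function and variable names are ours rather than the authors'. The 0.75 threshold follows the description above.

def coverage(part, whole):
    # fraction of tokens of `part` that also occur in `whole`
    part_tokens = [t.strip(".,;:!?") for t in part.lower().split()]
    whole_tokens = {t.strip(".,;:!?") for t in whole.lower().split()}
    if not part_tokens:
        return 0.0
    return sum(t in whole_tokens for t in part_tokens) / len(part_tokens)

def is_genuine_split(s, s1, s2, threshold=0.75):
    # keep a (S, S1, S2) triplet only if S1 or S2 is mostly covered by S
    return coverage(s1, s) >= threshold or coverage(s2, s) >= threshold

if __name__ == "__main__":
    s = "The man, carrying numerous books, entered the room."
    s1 = "The man entered the room."
    s2 = "He was carrying numerous books."
    print(is_genuine_split(s, s1, s2))   # True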
Approach
Baseline system. We manually developed tree patterns, in the form of dependency relations and POS tags, to identify text spans that should be removed from a complex sentence (Table 4). These patterns are designed to yield high recall but lower precision. The system parses the input sentence with the Stanford parser (Manning et al., 2014), and then performs breadth-first search on the dependency tree for these patterns, returning the first match it finds. This algorithm removes at most one text span from each complex sentence; this assumption is consistent with the material in our dataset, which was derived from one-to-two sentence alignments.
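The sketch below illustrates this kind of breadth-first pattern matching over a dependency parse. The pattern list is only illustrative, since Table 4 is not reproduced in full here, and the parse is represented as plain parallel lists rather than through the Stanford parser's own API.

from collections import deque

# Illustrative (relation, child-POS-prefix) patterns; the actual patterns
# of Table 4 are richer (e.g. VBN/VBG-headed participial phrases, appositions).
PATTERNS = [("acl", "VB"), ("advcl", "VB"), ("appos", "NN"), ("conj", "VB")]

def subtree(node, children):
    # token indices of the subtree headed by `node`
    span, stack = [], [node]
    while stack:
        n = stack.pop()
        span.append(n)
        stack.extend(children[n])
    return sorted(span)

def find_split_span(deprels, heads, postags, patterns=PATTERNS):
    # breadth-first search for the first node matching a pattern;
    # heads are 0-based indices, -1 marks the root
    children = {i: [] for i in range(len(heads))}
    roots = []
    for i, h in enumerate(heads):
        (roots if h < 0 else children[h]).append(i)
    queue = deque(roots)
    while queue:
        node = queue.popleft()
        for rel, pos in patterns:
            if deprels[node] == rel and postags[node].startswith(pos):
                return subtree(node, children)   # span to be removed
        queue.extend(children[node])
    return None                                  # no split proposed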
Proposed system. Even if one of the constructs in Table 2 is present in a complex sentence, it may not be appropriate or worthwhile to remove it. To refine the tree patterns developed for the baseline system, we trained a decision tree with the scikit-learn package. For each word in the sentence, the decision tree considers the features listed in Table 5. If the decision tree predicts a split, the text span headed by the word is removed from the sentence.
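Since Table 5 is not reproduced here, the sketch below only uses the kinds of features named in the paper (dependency relation, POS tags, span length, punctuation between the word and its head); the exact feature set and hyper-parameters are not claimed to be the authors'.

from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def word_features(sent, i):
    # `sent` is a list of dicts with keys: pos, deprel, head (index or -1),
    # span_length (length of the subtree headed by the word),
    # comma_to_head (is there a comma between the word and its head?)
    tok = sent[i]
    head = sent[tok["head"]] if tok["head"] >= 0 else None
    return {
        "deprel": tok["deprel"],
        "pos": tok["pos"],
        "head_pos": head["pos"] if head is not None else "ROOT",
        "head_is_root": head is None,
        "comma_to_head": tok.get("comma_to_head", False),
        "span_length": tok.get("span_length", 1),
    }

def train(feature_dicts, labels):
    # labels[i] = 1 if the span headed by word i was removed in the gold split
    vec = DictVectorizer(sparse=False)
    X = vec.fit_transform(feature_dicts)
    clf = DecisionTreeClassifier().fit(X, labels)
    return vec, clf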
Evaluation
We evaluate our system's performance at identifying a text span, if any, in a complex sentence that should be removed to form an independent sentence.
As expected, the baseline system achieved relatively high recall (0.88) but low precision (0.34), since it always tries to split a text span that matches any of the tree patterns in Table 4. The decision tree was able to substantially increase the precision (0.63) by learning some useful rules, at the expense of lower recall (0.72).
Some rules that substantially contributed to the performance gain are as follows. Consider the rule that a comma should separate the word from its parent when the dependency relation is xcomp. It was able to block the system from mistakenly taking the phrase "conducting at Montreux ..." out of the sentence "He began conducting at Montreux ...". Another useful rule was that the parent word in the conj relation must be root; otherwise, the structure is likely coordinated NPs rather than coordinated clauses. Further, when the modifier is tagged as TO (i.e., an infinitive), or when the subject of the sentence is a determiner (e.g., "this", "that"), no splitting should be done. Finally, shorter text spans are less likely to be split up.

Table 4: Manually crafted tree patterns, written in Stanford dependencies (Manning et al., 2014), that are used in the baseline system. If the pattern exists in S, the text span headed by the child word (e.g., VBN/VBG for participial phrases) is to be removed from S.
Among the different constructs, the proposed system performed best for punctuation/parataxis, with precision at 0.92 and recall at 0.95. This construct is not only clearly marked, but also more consistently split up. The most challenging construct turned out to be appositive phrases, with precision at 0.36 and recall at 0.56. Many of the errors trickled down from inaccurate analysis by the automatic parser, especially mistakes in relative clause attachment and clause boundary identification.
The precision figures can be viewed as lower bounds. In post-hoc analysis, we found that many of the text spans proposed by our system can be acceptable, but they were not deemed necessary to split up by the editors of Simple Wikipedia. Ultimately, the decision to split a complex sentence should be made in consideration of the reader's proficiency, but our current dataset does not support the modelling of this factor.
Conclusions and Future Work
We have presented a study on syntactic simplification, focusing on the identification of text spans that should be removed from a complex sentence.
We trained a decision tree to learn to recognize these text spans, using dependencies, POS tags and text span length as features. Experimental results showed that it outperformed a parser-only baseline.
We have reported the most detailed evaluation to-date on this task. This evaluation was made possible with a new dataset, derived from Wikipedia and Simple Wikipedia, that covers coordinated clauses, subordinated clauses, punctuation/parataxis, adjectival clauses, participial clauses, appositive phrases, and prepositional phrases.
In future work, we plan to investigate the next steps in syntactic simplification, i.e., sentence reordering and the generation of referring expressions. Our dataset, which traces sentence order and annotates referring expressions, is well suited for automatic evaluation for these tasks.
Table 5: Features used by the decision tree (derived from text span length, POS tags and dependency relations).

Table 6: Precision, recall and F-measure (P/R/F) for identifying the text span to be removed from S.

Construct              Baseline (P/R/F)   Proposed (P/R/F)
Coordination           0.31/0.84/0.45     0.61/0.80/0.69
Adjectival clause      0.29/0.97/0.45     0.59/0.79/0.68
Participial phrase     0.33/0.90/0.48     0.56/0.58/0.57
Appositive phrase      0.21/0.91/0.34     0.36/0.56/0.44
Subordination          0.39/0.84/0.53     0.70/0.74/0.72
Punctuation/Parataxis  0.78/0.99/0.87     0.92/0.95/0.93
Overall                0.34/0.88/0.49     0.63/0.72/0.67
Footnote 2: There are no one-to-n alignments for n > 2.
Footnote 3: The annotations on re-ordering and referent were not used in this study, but will be useful for evaluation on sentence re-generation.
Acknowledgments
This work was partially supported by the Innovation and Technology Fund (Ref: ITS/132/15) of the Innovation and Technology Commission, the Government of the Hong Kong Special Administrative Region.
References
Sandra Aluisio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability Assessment for Text Simplification. In Proc. 5th Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-9.
Sandra Aluísio, Lucia Specia, T. A. Pardo, E. G. Maziero, and R. P. Fortes. 2008. Towards Brazilian Portuguese Automatic Text Simplification Systems. In Proc. 8th ACM Symposium on Document Engineering.
M. A. Angrosh, Tadashi Nomoto, and Advaith Siddharthan. 2014. Lexico-syntactic text simplification and compression with typed dependencies. In Proc. COLING.
J. De Belder and M. F. Moens. 2010. Text Simplification for Children. In Proc. SIGIR Workshop on Accessible Search Systems.
Stefan Bott, Horacio Saggion, and David Figueroa. 2012. A Hybrid System for Spanish Text Simplification. In Proc. Workshop on Speech and Language Processing for Assistive Technologies.
R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and Methods for Text Simplification. In Proc. COLING.
William Coster and David Kauchak. 2011. Learning to Simplify Sentences using Wikipedia. In Proc. Workshop on Monolingual Text-to-Text Generation.
Iustin Dornescu, Richard Evans, and Constantin Orǎsan. 2014. Relative Clause Extraction for Syntactic Simplification. In Proc. Workshop on Automatic Text Simplification: Methods and Applications in the Multilingual Society, Dublin, Ireland.
Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text Simplification for Reading Assistance: A Project Note. In Proc. 2nd International Workshop on Paraphrasing.
David Kauchak. 2013. Improving Text Simplification Language Modeling using Unsimplified Text Data. In Proc. ACL.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proc. ACL System Demonstrations, pages 55-60.
Shashi Narayan and Claire Gardent. 2014. Hybrid Simplification using Deep Semantics and Machine Translation. In Proc. ACL.
Gustavo H. Paetzold and Lucia Specia. 2016. Benchmarking Lexical Simplification Systems. In Proc. LREC.
Horacio Saggion, Sanja Štajner, Stefan Bott, Simon Mille, Luz Rello, and Biljana Drndarevic. 2015. Making It Simplext: Implementation and Evaluation of a Text Simplification System for Spanish. ACM Transactions on Accessible Computing (TACCESS), 6(4).
Advaith Siddharthan. 2002a. An Architecture for a Text Simplification System. In Proc. Language Engineering Conference (LEC).
Advaith Siddharthan. 2002b. Resolving attachment and clause boundary ambiguities for simplifying relative clause constructs. In Proc. Student Workshop, ACL.
Advaith Siddharthan. 2011. Text Simplification using Typed Dependencies: A Comparison of the Robustness of Different Generation Strategies. In Proc. 13th European Workshop on Natural Language Generation.
Advaith Siddharthan. 2014. A Survey of Research on Text Simplification. International Journal of Applied Linguistics, 165(2):259-298.
Advaith Siddharthan and M. A. Angrosh. 2014. Hybrid Text Simplification using Synchronous Dependency Grammars with Hand-written and Automatically Harvested Rules. In Proc. EACL.
Gabriel Stanovsky and Ido Dagan. 2016. Annotating and Predicting Non-Restrictive Noun Phrase Modifications. In Proc. ACL.
David Vickrey and Daphne Koller. 2008. Sentence Simplification for Semantic Role Labeling. In Proc. ACL.
Sanja Štajner, Maja Popović, Horacio Saggion, Lucia Specia, and Mark Fishel. 2016. Shared task on quality assessment for text simplification. In Proc. LREC.
Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence Simplification by Monolingual Machine Translation. In Proc. ACL.
Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.
Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A Monolingual Tree-based Translation Model for Sentence Simplification. In Proc. ACL. |
232,765,104 | [] | Development of Smartcall Vietnamese Text-to-Speech for VLSP 2020
Manh Cuong Nguyen, Khuong Duy Trieu, Ba Quyen Dam, Thu Phuong Nguyen, Bao Quoc Nguyen
Smartcall JSC and ICTU, Thai Nguyen University
Development of Smartcall Vietnamese Text-to-Speech for VLSP 2020
Index Terms: End-to-end TTS, Tacotron-2, HiFi-GAN, WaveGlow, vocoder
An end-to-end text-to-speech (TTS) system (e.g. consisting of Tacotron-2 and a WaveGlow vocoder) can achieve state-of-the-art quality in the presence of a large, professionally-recorded training database. However, the drawbacks of using neural vocoders such as WaveGlow include 1) a time-consuming training process, 2) a slow inference speed, and 3) resource hunger when synthesizing waveforms from spectral features. Moreover, the waveform synthesized by the neural vocoder can inherit the noise of an imperfect training corpus. This paper deals with the task of building Vietnamese TTS systems from moderate-quality training data with noise. Our system utilizes an end-to-end TTS architecture that takes advantage of the Tacotron-2 acoustic model, and a custom vocoder combining a High Fidelity Generative Adversarial Network (HiFiGAN)-based vocoder and a WaveGlow denoiser. Specifically, we used the HiFiGAN vocoder to achieve better performance in terms of inference efficiency and speech quality. Unlike previous works, we used WaveGlow as an effective denoiser to address the noisy synthesized speech. Moreover, the provided training data was thoroughly preprocessed using voice activity detection, automatic speech recognition and prosodic punctuation insertion. Our experiment showed that the proposed TTS system (a combination of Tacotron-2, a HiFiGAN-based vocoder, and a WaveGlow denoiser) trained on the preprocessed data achieved a mean opinion score (MOS) of 3.77 compared to 4.22 for natural speech, which is the best result among participating systems in VLSP 2020's TTS evaluation.
Introduction
Text-to-speech synthesis plays a crucial role in speech-based interaction systems. In the last two decades, there have been many attempts to build high quality Vietnamese TTS systems. A data processing scheme proved its efficacy in optimizing the naturalness of end-to-end TTS systems trained on Vietnamese found data (Phung et al., 2020). Text normalization methods were explored, utilizing regular expressions and a language model (Tuan et al., 2012). New prosodic features (e.g. phrase breaks) were investigated, which showed their efficacy in improving the naturalness of Vietnamese hidden Markov model (HMM)-based TTS systems (Trang et al., 2013). Different types of acoustic models were investigated, such as HMMs, deep neural networks (DNN) (Nguyen et al., 2019), and sequence-to-sequence models (Phung et al., 2020). For postfiltering, it was shown that a global variance scaling method may destroy the tonal information; therefore, exemplar-based voice conversion methods were utilized in postfiltering to preserve the tonal information (Tuan et al., 2016). To our knowledge, there is little to no research on vocoders for Vietnamese TTS systems, especially when the training data is moderately noisy.
In the International Workshop on Vietnamese Language and Speech Processing (VLSP) 2020, a TTS challenge (Trang et al., 2020) required participants to build Vietnamese TTS systems from a provided, moderately noisy corpus. The corpus included raw text and corresponding audio files. However, the corpus contains mispronounced foreign-language words, slight buzzer sounds in the audio data, and many incorrectly labeled words, which pose significant challenges to participants. For example, a general neural vocoder will learn the buzzer sounds from the corpus and introduce them into the synthesized speech.
In the previous VLSP 2019 TTS evaluation, Tacotron-2 and the WaveGlow neural vocoder were combined to achieve the best speech quality in Vietnamese speech synthesis (Lam et al.). However, the HiFiGAN vocoder significantly outperformed the WaveGlow vocoder in terms of vocoding quality and efficiency (Kong et al., 2020). In this paper, we present the complete steps of building our end-to-end TTS system, combining data preprocessing (Phung et al., 2020) and end-to-end modeling, and show that the system addressed the data problems and achieved high performance and high efficiency.
In particular, we introduced a solution that combines HiFiGAN and a WaveGlow denoiser into a custom vocoder to enhance the quality of the final synthesized sound. Specifically, in Section II, we present the TTS system architecture consisting of a Tacotron-2 network followed by the HiFiGAN model as a vocoder and the WaveGlow model as a denoiser. The use of HiFiGAN both improved inference speed and reduced resource size, and utilizing the WaveGlow denoiser significantly reduces unexpected noise in the synthesized speech. The challenges of naturalness, background noise and buzzer noises in the synthetic sound were also overcome by combining Tacotron-2, a HiFiGAN-based vocoder and a WaveGlow denoiser.
SYSTEM ARCHITECTURE
Data Preprocessing
We inherited the data processing method (as shown in Figure 1) proposed in (Phung et al., 2020). We remove non-speech segments from the audio files using a Voice Activity Detection (VAD) model (Kim and Hahn, 2018). As for textual data, we normalized the original text to lower case without punctuation, then used the output of an Automatic Speech Recognition (ASR) model (Peddinti et al., 2015) to identify unvoiced intervals for automatic punctuation insertion, which improves the naturalness and prosody of the synthesized voices (Phung et al., 2020). Moreover, there is an enormous number of English words in the provided database, so our solution is to borrow Vietnamese sounds to read the English words. The English words can even consist of Vietnamese syllables and English fricative sounds (for example, the x sound) if necessary (for instance, "study" becomes "x-ta-di"), which can make it easier for the model to learn the fricative sounds. Also, by selecting the pronunciation of English words, we introduced uncommon Vietnamese syllables, which enriched the vocabulary of the training data set. The overall text normalization was carried out using regular expressions and a dictionary. Finally, we manually reviewed and corrected the transcription. The data processing scheme is shown in Figure 1.
Voice Activity Detection
We used the Voice Activity Detection (VAD) module to split long audio files containing many sentences into short speech segments corresponding to new, shorter sentences. Additionally, long silences at the beginning and the end of each audio file were removed. We utilized the VAD model of (Kim and Hahn, 2018), which includes a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN)-based classifier.
Automatic Speech Recognition and Speech Punctuation
We utilized an Automatic Speech Recognition (ASR) system to obtain the time stamps of each word or each sound in each sentence. Moreover, the within-sentence pauses were identified and considered as potential punctuation. We marked a pause as punctuation when its duration was greater than a threshold of 0.12 seconds. Then, the punctuation was added to the input text. Without the added punctuation, Tacotron-2 may align short pauses to any word or phoneme, which significantly reduces the quality of the synthesized voice. The ASR acoustic model is the state-of-the-art Time Delay Neural Network (Peddinti et al., 2015). To achieve the best performance on the provided VLSP data, the language model is trained to over-fit the provided data.
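A minimal sketch of this pause-based punctuation insertion, assuming word-level start/end times from the ASR alignment; the 0.12-second threshold follows the text, while the choice of a comma as the inserted mark and the exact interface are our simplifications.

def insert_pause_punctuation(words, starts, ends, threshold=0.12, mark=","):
    # words, starts and ends are parallel lists from forced alignment
    out = []
    for i, w in enumerate(words):
        token = w
        if i + 1 < len(words) and starts[i + 1] - ends[i] > threshold:
            token = w + mark        # a pause longer than the threshold
        out.append(token)
    return " ".join(out)

# insert_pause_punctuation(["xin", "chao", "cac", "ban"],
#                          [0.0, 0.3, 0.8, 1.0], [0.25, 0.6, 0.95, 1.3])
# -> "xin chao, cac ban"   (0.8 - 0.6 = 0.2 s > 0.12 s)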
Proposed text-to-speech systems
We proposed a text-to-speech system which is robust to noisy training data. Our system (as shown in Figure 2) was composed of a recurrent sequence-to-sequence feature prediction network, Tacotron-2, which maps text embeddings to acoustic features, followed by a vocoder based on Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis (HiFiGAN). When using the HiFiGAN-based vocoder alone, we realized that the synthesized speech was noisy. As a result, we utilized the WaveGlow model to denoise the synthesized sound. Therefore, our proposed speech synthesis system includes Tacotron-2 as the acoustic model, a HiFiGAN-based vocoder, and a WaveGlow denoiser.

- Tacotron-2: In the previous VLSP 2019 TTS evaluation, Tacotron-2 was utilized in Vietnamese speech synthesis to achieve the best speech quality (Lam et al.). Therefore, we utilized Tacotron-2 as our TTS acoustic model. Our network architecture was almost identical to (Shen et al., 2017), with some modifications. Firstly, character embedding was used instead of phoneme embedding, which can take advantage of a more flexible and diverse pronunciation dictionary for the Vietnamese dataset. Lastly, we changed some parameters to better fit the data set, which has a sampling rate of 22050 Hz, a minimum frequency of 75 Hz, and a maximum frequency of 7600 Hz.
- HiFiGAN: To achieve better vocoding quality and higher efficiency, we utilized a HiFiGAN-based vocoder instead of the WaveGlow vocoder. Our network architecture was similar to config V1 of (Kong et al., 2020). The generator takes a mel-spectrogram as input and upsamples it through transposed convolutions until the length of the output sequence matches the temporal resolution of a raw waveform.
- WaveGlow: Our network architecture was similar to (Prenger et al., 2019). However, we only use WaveGlow for noise reduction. First, we generate a bias audio signal from the Tacotron-2 mel-spectrogram with sigma=0.0, and then transform this bias audio into a bias mel-spectrogram. Next, for noise reduction, we subtract the bias mel-spectrogram, scaled by a "denoiser strength" of 0.15, from the mel-spectrogram computed from the HiFiGAN output. Finally, we convert the resulting mel-spectrogram back to sound.
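The sketch below only illustrates the denoising idea described above; it is not the actual implementation, and the helper callables (hifigan, waveglow, audio_to_mel) are placeholders for the pretrained models and the mel extraction used in the system. Averaging the bias spectrum over time is our own simplification.

import numpy as np

def denoise_mel(mel_from_tacotron, hifigan, waveglow, audio_to_mel, strength=0.15):
    # hifigan(mel) and waveglow(mel, sigma) return waveforms;
    # audio_to_mel(wave) returns a (n_mels, n_frames) array
    noisy_audio = hifigan(mel_from_tacotron)
    bias_audio = waveglow(mel_from_tacotron, sigma=0.0)   # the vocoder's "noise floor"
    noisy_mel = audio_to_mel(noisy_audio)
    bias_mel = audio_to_mel(bias_audio)
    # subtract a scaled, time-averaged bias profile from every frame
    bias_profile = bias_mel.mean(axis=1, keepdims=True)
    clean_mel = np.maximum(noisy_mel - strength * bias_profile, 0.0)
    return clean_mel   # finally converted back to audio by the vocoder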
Experiments
The goal of the subjective experiments is to show the efficacy of our proposed method when the training data is noisy. We used the Tacotron-2 acoustic model in combination with different vocoders, including 1) the WaveGlow vocoder (denoted as WaveGlow), 2) the HiFiGAN vocoder (denoted as HiFiGAN), and 3) our proposed method combining the HiFiGAN-based vocoder and the WaveGlow denoiser (denoted as HiFiGAN+Denoiser). The target natural speech is denoted as NAT.
Network Training
The original corpus contained 9 hours and 23 minutes of speech from a female speaker. After removing the unvoiced parts, the corpus had 8 hours and 21 minutes of speech. All of this data was used to train the Tacotron-2 model from scratch. We also trained our HiFiGAN and WaveGlow models on the ground-truth-aligned predictions.
Experimental Results
We submitted our proposed system (described in Section 2) to the VLSP 2020 TTS evaluation. The system was evaluated using the VLSP organizers' subjective MOS test. There were 24 participants listening to the stimuli of synthesized and natural speech. The participants gave each utterance a score on a 5-point scale: "very bad", "bad", "fair", "good", and "very good". Details of the results of this MOS test are given in Table 1.
System       MOS
Our system   3.77
NAT          4.22
Table 1: Average MOS of our proposed system (described in Section 2) from VLSP's TTS evaluation.
We conducted a second Mean Opinion Score (MOS) test to evaluate the performance of the three vocoder configurations (WaveGlow, HiFiGAN, and HiFiGAN+Denoiser) and natural speech. Each listener listened to 20 test sentences and rated the quality of each sentence on a 5-point scale: "very bad", "bad", "fair", "good", and "very good". In total, there are 20 (sentences) × 4 (systems) = 80 (trials) (footnote 1) in a Latin-square design. We need 80 ÷ 20 = 4 listeners to cover all the trials. There were 12 participants in the test.
We summarize the perceptual characteristics of each speech synthesis system in Table 2. Figure 3 shows that our proposed system (denoted as HiFiGAN+Denoiser) has the highest MOS. The proposed system is rated above natural speech (NAT) due to the fact that the target natural speech is noisy. The results showed that the HiFiGAN vocoder outperformed the WaveGlow vocoder when the training data is noisy.
System              Evaluation
WaveGlow            Each pronounced word has a buzzer; moreover, the background noise is noticeable
HiFiGAN             The sound quality of each word has been improved; the background noise is moderate
HiFiGAN+Denoiser    The sound is clean
Table 2: Experimental reviews.

We also ran the benchmarks for the three models on the same Nvidia GTX 1080 Ti GPU hardware, with the same set of samples, to show the inference efficiency of using the HiFiGAN-based vocoder. Statistics of real-time factor (RTF) values, which tell how many seconds of speech are generated in 1 second of wall time, are shown in Table 3. The results show that the speech synthesis rate of the model with the HiFiGAN vocoder is 1.8 times that of the model with the WaveGlow vocoder, which hugely improves the speed performance of the system. For the system with both HiFiGAN and WaveGlow, the speed performance is approximately the same as the model using only HiFiGAN, because the denoising process of WaveGlow is not computationally expensive. The results indicate that the HiFiGAN-based vocoder has better inference efficiency than the WaveGlow vocoder.
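A small sketch of how such an RTF figure can be measured; synthesize is a placeholder for the full Tacotron-2 plus vocoder pipeline, and the 22050 Hz rate matches the corpus described above.

import time

def real_time_factor(synthesize, text, sample_rate=22050):
    # seconds of audio produced per second of wall-clock time
    start = time.time()
    audio = synthesize(text)            # 1-D array of samples
    elapsed = time.time() - start
    return (len(audio) / sample_rate) / elapsed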
On the other hand, the resource consumption of our proposed model increases due to the use of both HiFiGAN and the WaveGlow denoiser. While the number of HiFiGAN's parameters is 13.92 million, WaveGlow has about six times more parameters than HiFiGAN (as shown in Table 4), and the total number of parameters when using both models is 101.65 million.
CONCLUSION AND FUTURE WORKS
In this report, we have presented our Vietnamese TTS system for VLSP 2020. For this challenge, our approach yields a MOS result quite close to that of natural speech. By testing various solutions to the challenges, we found that combining the methods to develop a custom vocoder played a significant role in the quality of the synthesized speech.
The system efficiency was also significantly improved. As a result, the challenges of naturalness, background noise and buzzer noises in the synthetic sound have been overcome. We plan to investigate other types of neural vocoders for improving the quality of speech synthesis.
Figure 1: Data processing scheme.

Figure 3: Average MOS of four systems. Dashed lines show statistically significant differences with p-value < 10^-8.
Table 3: RTF results.

Model                    Parameters (M)
WaveGlow                 87.73
HiFiGAN                  13.92
HiFiGAN and WaveGlow     101.65
Table 4: Number of parameters.
Footnote 1: Samples are available at: https://proptitclub.github.io/paper/index.html
References
Anh Tuan Dinh, Thanh Son Phan, Tat Thang Vu, and Chi Mai Luong. 2013. Vietnamese HMM-based speech synthesis with prosody information. In Eighth ISCA Workshop on Speech Synthesis, Barcelona, Spain.
J. Kim and M. Hahn. 2018. Voice activity detection using an adaptive context attention model. IEEE Signal Processing Letters, 25(8):1181-1185.
Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. ArXiv, abs/2010.05646.
Phung Viet Lam, Phan Huy Kinh, Dinh Anh Tuan, Trieu Khuong Duy, and Nguyen Quoc Bao. Development of Zalo Vietnamese text-to-speech for VLSP 2019. http://vlsp.org.vn/sites/default/files/2019-10/VLSP2019-TTS-PhungVietLam.pdf. Accessed: Oct, 2019.
Thinh Van Nguyen, Bao Quoc Nguyen, Kinh Huy Phan, and Hai Van Do. 2019. Development of Vietnamese speech synthesis system using deep neural networks. Journal of Computer Science and Cybernetics, 34(4):349-363.
V. Peddinti, D. Povey, and S. Khudanpur. 2015. A time delay neural network architecture for efficient modeling of long temporal contexts. In Sixteenth Annual Conference of the International Speech Communication Association.
T. Phan, T. Duong, A. Dinh, T. Vu, and C. Luong. 2013. Improvement of naturalness for an HMM-based Vietnamese speech synthesis using the prosodic information. In The 2013 RIVF International Conference on Computing and Communication Technologies - Research, Innovation, and Vision for Future (RIVF), pages 276-281.
Viet Lam Phung, Phan Huy Kinh, Anh Tuan Dinh, and Quoc Bao Nguyen. 2020. Data processing for optimizing naturalness of Vietnamese text-to-speech system. arXiv:2004.09607.
R. Prenger, R. Valle, and B. Catanzaro. 2019. WaveGlow: A flow-based generative network for speech synthesis. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3617-3621.
Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, R. J. Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu. 2017. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. CoRR, abs/1712.05884.
Nguyen Thi Thu Trang, Nguyen Hoang Ky, Pham Quang Minh, and Vu Duy Manh. 2020. Vietnamese text-to-speech shared task VLSP 2020: Remaining problems with state-of-the-art techniques. In Proceedings of the Seventh International Workshop on Vietnamese Language and Speech Processing (VLSP 2020).
Nguyen Thi Thu Trang, Albert Rilliard, Tran Do Dat, and Christophe d'Alessandro. 2013. Prosodic phrasing modeling for Vietnamese TTS using syntactic information. In Proceedings of Interspeech, Lyon, France.
Dinh Anh Tuan, Phi Tung Lam, and Phan Dang Hung. 2012. A study of text normalization in Vietnamese for text-to-speech system. In Proceedings of Oriental COCOSDA Conference, Macau, China.
Dinh Anh Tuan, Phan Thanh Son, and Masato Akagi. 2016. Quality improvement of Vietnamese HMM-based speech synthesis system based on decomposition of naturalness and intelligibility using non-negative matrix factorization. In Advances in Information and Communication Technology (ICTA 2016), Advances in Intelligent Systems and Computing, vol 538. Springer, Cham. |
||
8,012,923 | Deep Linguistic Processing with GETARUNS for spoken dialogue understanding | In this paper we will present work carried out to scale up the system for text understanding called GETARUNS, and port it to be used in dialogue understanding. The current goal is that of extracting automatically argumentative information in order to build argumentative structure. The long term goal is using argumentative structure to produce automatic summarization of spoken dialogues. Very much like other deep linguistic processing systems, our system is a generic text/dialogue understanding system that can be used in connection with an ontology -WordNet -and other similar repositories of commonsense knowledge. We will present the adjustments we made in order to cope with transcribed spoken dialogues like those produced in the ICSI Berkeley project. In a final section we present preliminary evaluation of the system on two tasks: the task of automatic argumentative labeling and another frequently addressed task: referential vs. non-referential pronominal detection. Results obtained fair much higher than those reported in similar experiments with machine learning approaches. | [
17186498,
200520,
18153644,
1245740,
5509911,
9475256,
1840697
] | Deep Linguistic Processing with GETARUNS for spoken dialogue understanding
Rodolfo Delmonte ([email protected]), Antonella Bristot
Department of Language Science, Università "Ca' Foscari", 30123 Venezia
Pallotta ([email protected])
Department of Computer Science, Webster University, Geneva, Switzerland
Deep Linguistic Processing with GETARUNS for spoken dialogue understanding
In this paper we will present work carried out to scale up the system for text understanding called GETARUNS, and port it to be used in dialogue understanding. The current goal is that of automatically extracting argumentative information in order to build argumentative structure. The long term goal is using argumentative structure to produce automatic summarization of spoken dialogues. Very much like other deep linguistic processing systems, our system is a generic text/dialogue understanding system that can be used in connection with an ontology (WordNet) and other similar repositories of commonsense knowledge. We will present the adjustments we made in order to cope with transcribed spoken dialogues like those produced in the ICSI Berkeley project. In a final section we present a preliminary evaluation of the system on two tasks: the task of automatic argumentative labeling and another frequently addressed task, referential vs. non-referential pronominal detection. Results obtained fare much higher than those reported in similar experiments with machine learning approaches.
Introduction
In this paper we will present work carried out to scale up the system for text understanding called GETARUNS, and port it to be used in dialogue understanding. Very much like other deep linguistic processing systems (Allen et al., 2007), our system is a generic text/dialogue understanding system that can be used in connection with an ontology (WordNet) and/or a repository of commonsense knowledge like CONCEPTNET. Word sense disambiguation takes place at the level of semantic interpretation and is represented in the Discourse Model. We will present the adjustments we made in order to cope with transcribed spoken dialogues like those produced in the ICSI Berkeley project. The low-level component is organized according to LFG theory; the system also does pronominal binding, quantifier raising and temporal interpretation. The high-level component is where the Discourse Model is created from the Logical Form of an utterance. For longer sentences the system switches from the top-down to the bottom-up parser. In case of failure it will back off to the partial system, which produces a very lean and shallow semantics with no inference rules.
The system presented here has been developed for over two decades with the goal of developing a broad-coverage, domain-general natural language understanding system. The underlying grammar, the lexicon, the semantics and all intermediate modules are intended to be domain-general and to be easily portable to different applications. As is the case with all rule-based systems (but see also Allen et al., 2007), we have no need to collect and annotate corpora for specific subtasks because the system already has good performance in all current parsing and semantic related tasks (see Delmonte et al. 2006; Delmonte 2007 and 2008). However, when we started last year to use the system to parse ICSI dialogues, we realized that the semantic representation and the output of the parser were both inadequate. So we started to work on the deficiencies that we detected in an empirical manner. This approach made us aware of the peculiarities of spoken dialogue texts such as the ones made available in the ICSI Berkeley project. These dialogues are characterized by the need to argue exhaustively about the topics to be debated, which are the theme of each multiparty dialogue. The mean length of utterances/turns in each dialogue we parsed was rather long. This makes ICSI dialogues hard to compute. From a count of the number of words per turn, we came up with the following mean figures:
- percent of turns made of one single word: 30%
- percent of turns made of up to three words: 40%
- number of words per turn overall: 7
- number of words per turn after subtracting short utterances: 11
These values correspond to those found for the Penn Treebank corpus, where we can count up to 94K sentences for 1M words, again 11 words per sentence. In analyzing ICSI, we found turns with as many as 54 words, depending on the topic under discussion and on the people on the floor. Computing semantic representations for spoken dialogues is a particularly hard task which requires at least the following information to be made available:
- adequate treatment of fragments;
- adequate treatment of short turns, in particular one-word turns;
- adequate treatment of first person singular and plural pronominal expressions;
- adequate treatment of disfluencies, thus including cases of turns made up of just one or more such expressions, or cases when they are found inside the utterance;
- adequate treatment of overlaps;
- adequate treatment of speaker identity for pronominal coreference.
In addition, we decided that every dialogue turn had to receive one polarity label, indicating negativity or positivity; this is computed by looking into a dictionary of polarity items.
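A small sketch of how these per-turn figures can be recomputed from a transcript; each turn is simply a list of word tokens, and treating turns of at most three words as the "short utterances" that are subtracted is our reading of the counts above.

def turn_statistics(turns):
    # turns: list of turns, each a list of word tokens
    n = len(turns)
    lengths = [len(t) for t in turns]
    long_lengths = [l for l in lengths if l > 3]     # turns beyond the "short" ones
    return {
        "pct_one_word": 100.0 * sum(l == 1 for l in lengths) / n,
        "pct_up_to_three_words": 100.0 * sum(l <= 3 for l in lengths) / n,
        "mean_words_per_turn": sum(lengths) / n,
        "mean_words_per_long_turn": sum(long_lengths) / max(len(long_lengths), 1),
    }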
We will address each of these topics in what follows.
The Spoken Dialogue Additions
We will proceed by addressing each problem presented above in the order in which it is handled by the system, i.e. as follows:
- overlaps
- dialogue act labeling
- fragment analysis
- disfluency treatment
- pronominal binding special routines
- non-referential linguistic elements
- anaphora resolution routines
- current speaker as Subject of Point of View
The Algorithm for Overlaps
Overlaps are an important component of all spoken dialogue analysis. In all dialogue transcriptions, an overlap is treated as a separate turn from the one in which it occurs, which it usually follows. This is clearly wrong from a computational point of view. For this reason, when computing overlaps we set as our first goal that of recovering the temporal order. This is done because:
- overlaps may introduce linguistic elements which influence the local context;
- eventually, they may determine the interpretation of the current utterance.
For these reasons, they cannot be moved to a separate turn: they must be semantically interpreted where they temporally belong. In addition, overlaps are very frequent. The algorithm we built looks at time stamps; every time the following turn begins at a time preceding the ending time of the current turn, it enters a special recursive procedure. It looks for the internal interruption in the current turn and splits the utterance where the interruption occurs. Then it parses the split initial portion of the current utterance and continues with the overlapping turn. This may be reiterated in case another overlap follows which again begins before the end of the current utterance. Eventually, it returns to the analysis of the current turn with the remaining portion of the current utterance.
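A simplified sketch of this splitting procedure is given below. Turns are (speaker, start, end, words) tuples sorted by start time; since word-level timestamps are not assumed, the split point inside the interrupted turn is approximated by linear interpolation, which is a simplification of the recursive procedure described above.

def resolve_overlaps(turns):
    # turns: list of (speaker, start, end, words), sorted by start time
    ordered, i = [], 0
    while i < len(turns):
        spk, start, end, words = turns[i]
        j = i + 1
        while j < len(turns) and turns[j][1] < end:        # an overlap of type WHILE
            cut = turns[j][1]
            frac = (cut - start) / max(end - start, 1e-6)
            k = max(1, int(len(words) * frac))             # approximate split point
            ordered.append((spk, start, cut, words[:k]))   # portion before the overlap
            ordered.append(turns[j])                       # the overlapping turn
            start, words = cut, words[k:]
            j += 1
        ordered.append((spk, start, end, words))           # remaining portion of the turn
        i = j
    return ordered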
In Table 1 below we present data related to overlaps for the first 10 dialogues we computed. We classified overlaps into two types, WHILE and AFTER, according to whether they take place inside the turn of the current speaker or at its end, the second case being regarded as normal and non-disruptive of the current speaker's conversational plan. As can be easily noticed, the case constituted by Inter_Change, which is the most interesting from a semantic and pragmatic point of view, is in fact the least frequent. We assume, however, that this may be determined by other factors attaining to the type of conversation being entertained by the participants, as well as by the nature of the topics discussed, and eventually by the personalities of the interlocutors.
The Treatment of Fragments and Short Turns
Fragments and short turns are filtered by a lexical lookup procedure that searches for specific linguistic elements which are part of a list of backchannels, acknowledgement expressions and other similar speech acts. If this procedure succeeds, no further computation takes place. However, this only applies to utterances shorter than 5 words, which must be made up only of such special words. No other linguistic element should be present apart from non-words, that is, words which are only partially produced and have been transcribed with a dash at the end.
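A sketch of such a filter is shown below; the list of backchannel items is only illustrative (the system uses a much larger hand-built lexicon), and trailing-dash tokens stand for the partially produced non-words mentioned above.

BACKCHANNELS = {"yeah", "uh-huh", "mm-hmm", "mmhmm", "okay", "ok", "right", "yep", "sure", "yes", "no"}

def is_short_turn(utterance, lexicon=BACKCHANNELS):
    # true if the turn is shorter than 5 words and made up only of
    # backchannel/acknowledgement items or partially produced words ("wor-")
    words = utterance.lower().split()
    if not 0 < len(words) < 5:
        return False
    return all(w.endswith("-") or w.strip(".,!?") in lexicon for w in words)

# is_short_turn("Yeah, okay.")  ->  True ; no further parsing is needed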
The system is also equipped with:
- graceful failure procedures for ungrammatical sentences, which might be full-fledged utterances but semantically non-interpretable due to the presence of repetitions, false starts and similar disfluency phenomena; or else they may be just fragments, i.e. partial or incomplete utterances, hence non-interpretable as such. This is done by imposing grammatical constraints of well-formedness in the parser;
- failure procedures for utterances which are constituted just by disfluency items and no linguistically interpretable words. These must be treated as semantically empty utterances and are recognizable by the presence of orthographic signs indicating that the word/s have not been completed and are just incomprehensible. This is done by inspecting the input in search of special orthographic marks and preventing the utterance from being passed down to the partial/deep parser.
On the contrary, we implemented a principled treatment of elliptical utterances, which contribute one specific speech act or communicative act. They may express agreement/disagreement, acknowledgement, assessment, continuers etc. All these items are computed as complements of the abstract verb SAY, which is introduced in the analysis and has, as subject, the name of the current speaker.
Automatic Argumentative Annotation
First we shall provide an overview of the state of the art, and then we shall comment on our approach in detail.
Detecting Argumentative structure -issues and theories
As shown by Rosemberg and Silince (1999), tracking argumentative information from meeting discussions is of central importance for building summaries of project memories since, in addition to the "strictly factual, technical information", these memories must also store relevant information about decision-making processes. In a business context, the information derived from meetings is useful for future business processes, as it can explain phenomena and past decisions and can support future actions by mining and assessment (Pallotta et al., 2004). In a section below we will describe in detail how discourse processing takes place. Here we want to highlight the main features of this process. This first level of processing is based on the shallow dialogue model proposed in (Armstrong, 2003), of which it is a modified version. This model provides a simple operational structure of dialogues based on three categories: • a dialog is a non-empty set of episodes; a new episode is identified by a topic/speaker shift.
• an episode is a non-empty set of turns; turns are individuated at the prosodic level (more on turns below).
• a turn is a non-empty sequence of clauses/utterances, whose boundary is a long pause.
In addition to the shallow dialogue model, we adopt a deeper structured representation based on argumentation theory. We assume that meeting dialogues are better viewed from the Collaborative Decision Making (CDM) perspective. In CDM, a meeting is defined as a multi-party (multi-agent) decision-making process: a collaborative process in which agents perform a series of communicative actions in order to establish a common ground on the dimensions of the problem. The main dimensions of the CDM process are:
• an overall task issue;
• a set of alternative proposals;
• a set of arguments for or against each proposal;
• a collection of choice criteria (perspectives and preferences) agreed upon by the participants;
• a decision (or evaluation) function that combines criteria to judge the alternatives.
This definition focuses on the processes which take place during meetings and on how these processes contribute to the accomplishment of a joint goal. In order to capture the above dimensions, we adopted and extended a suitable argumentative model of discussions, namely the IBIS model proposed by (Kunz and Rittel, 1970). The IBIS model provides us with an abstract description of the discussion's rationale by outlining the important points discussed, the conflicts that arose and, hopefully, were solved, and the decisions that have been made. The IBIS model abstracts away from the dynamics of the discussion, which needs to be modeled as well in order to extract the IBIS structures from meeting events. Relevant meeting events are special types of Dialogue Acts that have an argumentative force. These Dialogue Acts, called Argumentative Acts, are backward-looking acts with forward-looking expectations (Goffman 1981). The sketch below illustrates the resulting dialogue and argumentation structures.
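The following data-structure sketch shows how the shallow dialogue hierarchy (dialog, episode, turn, clause) and IBIS-style argumentative elements can be represented; class and field names are ours, not the original implementation's.

```python
# Data-structure sketch of the shallow dialogue model and IBIS/CDM elements
# discussed above. Class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Clause:
    text: str

@dataclass
class Turn:                      # bounded by a long pause
    speaker: str
    clauses: List[Clause] = field(default_factory=list)

@dataclass
class Episode:                   # opened by a topic/speaker shift
    topic: str
    turns: List[Turn] = field(default_factory=list)

@dataclass
class Dialog:
    episodes: List[Episode] = field(default_factory=list)

@dataclass
class IBISNode:                  # issue / proposal / position, linked by replies_to
    kind: str                    # e.g. "issue", "proposal", "accept", "reject"
    turn: Optional[Turn] = None
    replies_to: Optional["IBISNode"] = None
```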
Within the Adjacency Pairs model (Schegloff & Sacks 1973), the importance of tracking agreement and disagreement in discussions has also been recognized in (Galley et al., 2004; Hillard, Ostendorf, and Shriberg, 2003). Although these methods have the great advantage of being automatic, they only partially help in reconstructing the argumentative information we need in order to answer real user queries. This model has been adopted by (Niekrasz et al. 2005) for the real-time reconstruction of an argumentative structure by overhearing discussions in design meetings. Finally, (Rienks and Verbree 2006) propose the Twente Annotation Schema, which is based on fewer categories but more relation types, being inspired by Rhetorical Structure Theory (Mann and Thompson 1988). The argumentative structure defines the different patterns of argumentation used by participants in the dialogue, as well as their organization and synchronization in the discussion. The limits of a sequential analysis of conversation (Schegloff & Sacks 1973) have already been pointed out by (Goffman 1981), who proposed to extend the notion of adjacency pair with that of chains of interaction rounds. As for other related work, we also see similarities between our approach and the argumentation dependency grammar proposed by (Lo Cascio 1991), although his work considers only the argumentative structure of monologues. In fact, when analyzing dialogues, adjacency pairs are not enough to represent the hierarchical structure of the discussion. To that end we need to add a relation that links non-adjacent pairs. We call this relation "replies_to". The "replies_to" relation links a (re)action to one or more previous (possibly earlier in time) actions and induces an argumentative chain structure on the dialogue, which is local to each action and which enables the visualization of its context. For instance, the context of the action of "accepting a clarification" will be a chain of linked actions, namely the action of the clarification, that of the proposal that is clarified, and the action of raising the issue for which the proposal was made, as in the sketch below. Argumentative actions can overlap in time, as for instance in those cases where the acceptance of a justification is uttered as a "backchannel" during the presentation of the justification.
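The sketch below shows how the "replies_to" relation induces the local chain that serves as the context of an action; the class and function names are illustrative, the relation itself is the one just described.

```python
# Sketch of how "replies_to" links a (re)action to previous actions, inducing
# the chain that serves as its local argumentative context. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Act:
    kind: str                          # e.g. "issue", "proposal", "clarification", "accept"
    replies_to: Optional["Act"] = None

def context_chain(act):
    """Follow replies_to links back to the originating issue."""
    chain = []
    while act is not None:
        chain.append(act.kind)
        act = act.replies_to
    return chain

# Example from the text: accepting a clarification of a proposal made for an issue.
issue = Act("issue")
proposal = Act("proposal", replies_to=issue)
clarification = Act("clarification", replies_to=proposal)
accept = Act("accept", replies_to=clarification)
print(context_chain(accept))   # ['accept', 'clarification', 'proposal', 'issue']
```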
Argumentative actions such as REQUEST, ACCEPT and REJECT may correspond to basic dialogue acts (Clark and Popescu-Belis 2004). In this case we have refined the concepts of dialogue act and adjacency pair by specifying the role of dialogue acts in constructing the argumentative structure of the discussion through the "replies_to" relation. When using the IBIS mark-up labels, a meeting is decomposed into several stages such as issues, proposals, and positions, each stage being possibly related to specific aggregations of elementary dialogue acts. Moreover, argumentative interactions may be viewed as specific parts of the discussion where several dialogue acts are combined to build such an interaction; for instance, a disagreement could be seen as an aggregation of several acts of rejecting and accepting the same proposal. From this perspective, we elaborated an argumentative coding scheme, the Meeting Description Schema (Pallotta et al. 2004), which takes into account the different stages (or episodes) defined by the IBIS model and extends the concept of adjacency pairs to relate these episodes to each other and to the corresponding argumentative function. In MDS, the argumentative structure of a meeting is composed of a set of topic discussion episodes (discussions about a specific topic). Within each topic discussion there is a set of issue discussion episodes. An issue is generally a local problem within a larger topic to be discussed and solved. Participants propose alternatives, solutions, opinions, ideas, etc. in order to achieve a satisfactory decision. Meanwhile, participants either express their positions and standpoints through acts of accepting or rejecting proposals, or ask questions related to the current proposals. Hence, for each issue there is a corresponding set of proposal episodes (solutions, alternatives, ideas, etc.) that are linked to a certain number of related position episodes (for example, the rejection of a proposed alternative for the issue under discussion) or questions and answers.
Our Approach
Automatic Argumentative Annotation is carried out by a special module activated at the very end of the computation of each dialogue. This module takes as input the complete semantic representation produced by the system, recorded as Prolog facts in the Discourse Model (henceforth DM). The elements of the semantic representation we use are the following:
- all Situation Semantics facts contained in the Discourse Model, which include individuals, sets, classes, cardinality, and properties related to entities by means of their semantic indices;
- facts related to spatiotemporal locations of events, with logical operators and semantic indices;
- vectors of informational structure containing semantic information at the propositional level, computed for each clause;
- vectors of discourse structure with discourse relations computed for each clause from informational structure and the previous discourse state (for an evaluation of the system's performance see Delmonte et al. 2007);
- dialogue act labels associated with each utterance or turn, following the ICSI classification;
- overlap information computed at utterance level;
- topic labels associated with the semantic indices of each entity marked as a topic of discourse;
- all utterances with their indices, as they have been automatically split by the system.
To produce the Argumentative annotation, the system uses the following 21 Discourse Relation labels:
statement, narration, adverse, result, cause, motivation, explanation, question, hypothesis, elaboration, permission, inception, circumstance, obligation, evaluation, agreement, contrast, evidence, hypoth, setting, prohibition
These are then mapped onto five general argumentative labels. In addition, we use the label DISFLUENCY for all those turns containing fragments which are non-sentences and are semantically non-interpretable.
ACCEPT, REJECT/DISAGREE, PROPOSE/SUGGEST, EXPLAIN/JUSTIFY, REQUEST, DISFLUENCY
The algorithm works in the following manner:
1. It recovers the Dialogue Acts for each dialogue turn as assigned by the system. These labels coincide with the ICSI labels (BKC, ACK, FGB, FHD, RHQ - that is, Backchannel, Acknowledge, Floor Grabber, Floor Holder, Rhetorical Question), with the addition of NEG (negation), ASS (assent), MTV (motivation), PRP (proposal), GRT (greeting) and CNL (conclusion);
2. It recovers the Overlaps as marked during the analysis;
3. It produces an Opinion label which we call Polarity, which can take one of two values, Positive or Negative, according to whether the sentence contains positive or negative linguistic descriptions;
4. It produces a list of Hot Spots and builds up Episodes, where a Hot Spot is simply a sequence of turns in which the interlocutors overlap each other frequently. Episodes, on the contrary, are sets of turns in which a single speaker argues his/her topics, possibly interrupted occasionally by overlaps or by short continuers, backchannels or other similar contributions from other speakers who do not, however, grab the floor;
5. Then the main predicate that assigns argumentative labels is called by a recursive routine:
i. First it tries exceptions, which are strongly pragmatically marked, on the basis of the actual words contained in the turn. These exceptions may be constituted by greetings, specific speech acts, or conventional utterances pronounced in specific situations, such as thanking, etc.;
ii. Then short utterances are checked: if they end with a question mark they are labeled as Questions; otherwise the Dialogue Act label is considered. Negations are also computed here;
iii. Now the main call is activated. In order to start matching the rules, the semantic information for the current turn is recovered, clause by clause, in a recursive manner;
iv. Once the semantic information has been recovered, the rules are fired. There are some 33 rules which take as input the following vector of features: assignargument(NoCl, [Pol,DialAct], DiscDom, DiscRel, Relev, DomPointView, Output), where Output is the output label chosen by the rule; DiscDom may be Factive or NonFactive, Suggestion or Proposal; Relev(ance) may be foreground or background; DomPointView may be objective or subjective. Rules are applied by matching input labels in a Finite State Automaton manner; however, additional conditions and constraints sometimes apply. For instance, analyzecontext(NoCl) checks whether the current speaker holds the floor in the 2 preceding or following clauses;
v. The rules produce a set of argumentative labels, one for each clause. The system then chooses the label to associate with the turn utterance from a hierarchy of argumentative labels graded for Pragmatic Relevance, which establishes that, for instance, Question is more relevant than Negation, which is more relevant than Raise Issue, and so on. A simplified sketch of this labelling flow is given below.
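The following sketch illustrates steps 1-5 in simplified form. The relevance hierarchy and the small rule table are our own illustrative assumptions, not the system's actual 33 rules, which operate over the richer feature vector described above.

```python
# Simplified sketch of the argumentative labelling flow (steps 1-5 above).
# The hierarchy and the tiny rule table below are illustrative assumptions only.

# Pragmatic relevance hierarchy used to pick one label per turn (later = more relevant).
RELEVANCE = ["EXPLAIN/JUSTIFY", "PROPOSE/SUGGEST", "ACCEPT", "REJECT/DISAGREE", "REQUEST"]

# Hypothetical partial mapping from discourse relations to argumentative labels.
RULES = {
    "agreement": "ACCEPT",
    "contrast": "REJECT/DISAGREE",
    "question": "REQUEST",
    "explanation": "EXPLAIN/JUSTIFY",
    "motivation": "PROPOSE/SUGGEST",
}

def label_clause(discourse_relation, dialogue_act, polarity):
    """Assign a label to one clause; fall back on dialogue act and polarity."""
    if discourse_relation in RULES:
        return RULES[discourse_relation]
    if dialogue_act in ("BKC", "ACK", "ASS"):
        return "ACCEPT"
    if dialogue_act == "NEG" or polarity == "negative":
        return "REJECT/DISAGREE"
    return "EXPLAIN/JUSTIFY"

def label_turn(clauses):
    """Label every clause, then keep the most pragmatically relevant label."""
    labels = [label_clause(*c) for c in clauses]
    return max(labels, key=RELEVANCE.index)

turn = [("explanation", "FHD", "positive"), ("question", "RHQ", "positive")]
print(label_turn(turn))   # REQUEST: a question outranks an explanation in this toy hierarchy
```

Below we report a portion of the General Summary extracted from Dialogue 1. In this way we are able to evaluate the degree of collaboration vs. competitiveness of each participant in the conversation and to make a general statement like the following one, produced automatically by means of canned sentences: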
GENERAL INFORMATION ON PARTICIPANTS
The participants to the meeting are 6. Participants less actively involved are Adam and Andreas who only intervened respectively for 9 and 78 turns.
LEVEL OF INTERACTIVITY IN THE DISCUSSION
The speaker that has held the majority of turns is Don with a total of 549 turns, followed by Morgan with a total of 512 turns, followed by Jane with a total of 292. The speaker that has undergone the majority of overlaps is Morgan followed by Don. The speaker that has done the majority of overlaps is Morgan followed by Jane. Morgan is the participant that has been most competitive. Andreas only intervened after turn no. 1091.
DISCUSSION TOPICS
The main topics have been introduced by the second most important speaker of the meeting, Jane. The most frequent entities in the whole dialogue partly coincide with the best topics, and are the following, in decreasing order: level, format, stuff, tag, utterance, guess, frame, file, type, representation, phone, annotations, sentence, information, x_m_l, point, p_file, start, segment, equals, 'ATLAS', prosodic, mean, link, database, data, change, structure, diff, a_p_i_, tool, sort, sequence, program, pitch, external, end, channel, boundaries, work, versions, translate, timeline, text, speaker, overlap, file_format, value, transcripts, store, prosody, phrase, perl, need, lattice, idea, feature, useful, turn, structured, separate, segmentation, search, output, node, meeting, library, language, input, help, handle, example, codes, bunch, alignment, NIST, ICSI.
ARGUMENTATIVE CONTENT
The following participants Andreas, Dave, Don, Jane, Morgan expressed their dissent 44 times. However Andreas, Dave and Morgan expressed dissent in a consistently smaller percentage. The following participants Adam, Andreas, Dave, Don, Jane, Morgan asked questions 53 times. The remaining 1239 turns expressed positive content by proposing, explaining or raising issues. However Adam, Dave and Andreas suggested and raised new issues in a consistently smaller percentage. The following participants Adam, Andreas, Dave, Don, Jane, Morgan expressed acceptance 320 times.
The system has been used to parse the first 10 dialogues of the ICSI corpus, for a total of 98,523 words and 13,803 turns. This was done to "train" the system: for the first 5 dialogues we had to take care of failures, and we also had to tune all the modules and procedures carefully. In particular, the module for automatic argumentative classification was incrementally improved in order to cover all conventional ways of expressing agreement. For this reason, we then chose two additional random dialogues to test this second task.
Experimental Results
We had one skilled linguist provide a turn-level annotation of argumentative labels; we do not have any agreement measure in this case, even though we expect the annotation to be in line with current experiments on the same subject (Pallotta et al. 2007). In the following table we report data for the experiment on automatic annotation of argumentative categories. Out of a total of 2304 turns, 2251 received an automatic argumentative classification, with a Recall of 97.53%. As can be gathered from Table 2, the F-score is fairly high compared to current results reported in the literature on the same topic, which are all below 80%.

Table 2. Overall count of argumentative labels

              Correct   Incorrect   Total Found
Accept           662        16          678
Reject            64        18           82
Propose          321        74          395
Request          180         1          181
Explain          580       312          892
Disfluency        19         -           19
Total           1826       421         2247
We computed Precision as the ratio between Correct and Found argumentative labels, which corresponds to 81.26%. The resulting F-score is 88.65%.
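Using the totals from Table 2 (1826 correct labels out of 2247 found, over 2304 turns), the reported figures can be reproduced with the standard definitions; a quick check:

```python
# Quick check of the reported figures: precision = correct/found,
# recall = found/total turns, F = harmonic mean of the two.
correct, found, total_turns = 1826, 2247, 2304
precision = correct / found
recall = found / total_turns
f_score = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2%} R={recall:.2%} F={f_score:.2%}")
# P=81.26% R=97.53% F=88.66% (the paper rounds the F-score to 88.65%)
```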
The Anaphora Resolution Module
The problem represented by pronominal expressions in dialogues needs to be addressed fully, and not by means of ad hoc solutions. This requires a full-fledged system for anaphora resolution. One such system is shown in Fig. 1 below, which highlights the architecture and the main processes operating at the anaphora level. First of all, note the subdivision of the system into two levels. At the clause level (intrasentential pronominal phenomena), all pronominal expressions contained in modifiers, adjuncts or complement clauses receive their antecedent locally: possessive pronouns and pronouns contained in relative and complement clauses preferentially choose their antecedents from the list of higher-level referring expressions. Not so for pronouns contained in matrix clauses; in particular, those in subject position are to be coreferred in the discourse. This requires the system to be equipped with a History List of all referring expressions, to be used when needed. In the system, three levels are indicated:
Clause level, i.e. simple sentences; Utterance level, i.e. complex sentences; Discourse level, i.e. intersentential relations. Our system computes semantic structures in a sentence-by-sentence fashion, and any information useful for carrying out anaphoric processes is made available to the following stretch of dialogue. A sketch of this strategy is given below.
Figure 1. Anaphoric Processes in GETARUNS
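The sketch below illustrates the two-level strategy just described; the data structures and the agreement check are illustrative assumptions, not the actual GETARUNS implementation.

```python
# Sketch of the two-level strategy (illustrative only): pronouns inside
# possessives, relative or complement clauses look for a local antecedent first,
# while matrix-subject pronouns consult the discourse-level History List of
# referring expressions.
def resolve_pronoun(pronoun, clause_referents, history_list, position):
    """Return a candidate antecedent (most recent first); None if unresolved."""
    local_first = position in ("possessive", "relative_clause", "complement_clause")
    search_spaces = [clause_referents, history_list] if local_first else [history_list]
    for space in search_spaces:
        for referent in reversed(space):          # most recent mention first
            if referent["agreement"] == pronoun["agreement"]:
                return referent
    return None

history = [{"head": "meeting", "agreement": "sg-neut"},
           {"head": "annotators", "agreement": "pl"}]
clause = [{"head": "format", "agreement": "sg-neut"}]
it = {"form": "it", "agreement": "sg-neut"}
print(resolve_pronoun(it, clause, history, position="matrix_subject"))
# {'head': 'meeting', 'agreement': 'sg-neut'}
```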
The Experiments
We set up a number of experiments in order to test the new version of the system. However, we will concentrate only on one of them, namely distinguishing referential from non-referential uses of the personal pronouns YOU and WE and of the pronoun IT. Below is a table containing total values for the pronouns WE/YOU/IT in all 10 dialogues analysed.
Table 3. Overall count of pronominal expressions

         Referential   Generic   Total   Found
WE           1186         706     1892    1356
YOU          1045         742     1787    1132
IT           1593        1008     2601    1627
Total        3824        2456     6280    4115

We had two skilled linguists annotate the WE/IT/YOU pronouns as either referential or non-referential. Their agreement on this task was very high, with a kappa score of 0.71. Results for the experiment are reported in Table 4.
Conclusions and Future Work
We have presented work carried out to extend and adapt a system for text understanding in order to make it fit for dialogue understanding. We proposed a set of extensions to cope with typical dialogue-related problems, such as the presence of non-sentential fragments, elliptical fragments interpretable as speech acts, the massive presence of generic non-referential pronominal expressions, etc. We implemented a number of additional components: an algorithm that takes care of overlaps and uses that information to split current utterances and temporally realign the conversational flow, and a module that computes automatic Argumentative classification labels from a small label set, on top of discourse relations and other semantic markers determined by the semantic component of the system. The system has been evaluated on two of its most important components, the newly implemented pronominal binding module and the argumentative classification module. Results are very encouraging. However, we note that in the latter task, labels which may cause great uncertainty and are highly ambiguous have been lumped together to facilitate the classification task. Of course we intend to complete the analysis of all dialogues contained in the ICSI corpus and to refine our algorithms. We would then like to use the system in a totally different scenario, for instance the Switchboard two-party dialogues, and see whether the "training" carried out on the basis of multiparty dialogues may be fruitfully applied to such a reduced conversational framework. In particular, we still need to work at the level of DECISION labeling, which is something we intend to do at the Episode level. We also need to improve the discrimination of genuinely argumentative from pragmatically irrelevant utterances, a choice that in some cases is hard to make on an automatic basis.
Table 1. Overlaps and their effects on Planning.
Table 4. Results for pronominal expressions

        Recall    Precision   F-Score
WE      71.67%      81.2%      76.14%
YOU     63.34%      89.3%      74.11%
IT      62.52%      84.6%      72.19%
Allen, J., M. Dzikovska, M. Manshadi, and M. Swift (2007). Deep linguistic processing for spoken dialogue systems. In ACL 2007 Workshop on Deep Linguistic Processing, pp. 49-56.
Armstrong, S. et al. (2003). Natural language queries on natural language data: a database of meeting dialogues. In Proceedings of the NLDB'2003 conference, Burg/Cottbus, Germany.
Bergsma, S., D. Lin and R. Goebel (2008). Distributional Identification of Non-Referential Pronouns. In ACL-HLT 2008, Columbus, Ohio, pp. 10-18.
Bresnan, J. (2000). Lexical-Functional Syntax. Blackwell.
Bunt, H. (1979). Conversational principles in question-answer dialogues. In D. Krallmann (ed.), Zur Theorie der Frage, pp. 119-141. Narr Verlag, Tübingen.
Clark, A. and A. Popescu-Belis (2004). Multi-level Dialogue Act Tags. In Proceedings of SIGDIAL'04, pp. 163-170, Cambridge, MA, USA.
Delmonte, R., A. Bristot, M. A. Piccolino Boniforti and S. Tonelli (2006). Another Evaluation of Anaphora Resolution Algorithms and a Comparison with GETARUNS' Knowledge Rich Approach. In ROMAND 2006, 11th EACL, Trento, Association for Computational Linguistics, pp. 3-10.
Delmonte, R. (2007). Computational Linguistic Text Processing - Logical Form, Semantic Interpretation, Discourse Relations and Question Answering. Nova Science Publishers, New York.
Delmonte, R. (2008). Semantic and Pragmatic Computing with GETARUNS. In Bos & Delmonte (eds.), STEP, College Publications, London, pp. 287-298.
Fellbaum, C. (ed.) (1998). WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
Galley, M., K. McKeown, E. Fosler-Lussier and H. Jing (2003). Discourse Segmentation of Multi-Party Conversation. In Proceedings of ACL 2003, pp. 562-569, Sapporo, Japan.
Goffman, E. (1981). Forms of Talk. University of Pennsylvania Press, Philadelphia.
Gupta, S., M. Purver and D. Jurafsky (2007). Disambiguating Between Generic and Referential "You" in Dialog. In Proceedings of ACL 2007 short papers, Prague, Czech Republic.
Hillard, D., M. Ostendorf and E. Shriberg (2003). Detection of agreement vs. disagreement in meetings: Training with unlabeled data. In Proceedings of HLT-NAACL 2003.
Javanovich, N. and R. op den Akker (2004). Towards Addressee Identification in Multi-party Dialogues. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, ACL, Pennsylvania, pp. 89-92.
Kunz, W. and H. W. J. Rittel (1970). Issues as elements of information systems. Technical Report WP-131, University of California, Berkeley.
Lo Cascio, V. (1991). Grammatica dell'Argomentare: strategie e strutture. La Nuova Italia, Firenze.
Mann, W. C. and S. A. Thompson (1988). Rhetorical Structure Theory: Towards a Functional Theory of Text Organization. Text, 8(3):243-281.
Müller, C. (2007). Resolving It, This, and That in Unrestricted Multi-Party Dialog. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic, pp. 816-823.
Müller, C. (2006). Automatic Detection of Nonreferential It in Spoken Multi-Party Dialog. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy, pp. 49-56.
Niekrasz, J., M. Purver, J. Dowding and S. Peters (2005). Ontology-Based Discourse Understanding for a Persistent Meeting Assistant. In Proceedings of the AAAI Spring Symposium "Persistent Assistants: Living and Working with AI", Stanford.
Pallotta, V., H. Ghorbel, A. Ballim, A. Lisowska and S. Marchand-Maillet (2004). Towards meeting information systems: Meeting knowledge management. In Proceedings of ICEIS 2005, pp. 464-469, Porto, Portugal.
Rosemberg, D. and J. A. A. Silince (1999). Common ground in computer-supported collaborative argumentation. In Proceedings of CLSCL99, Stanford, CA, USA.
Schegloff, E. and H. Sacks (1973). Opening up closings. Semiotica, 8:289-327.
Strube, M. and C. Müller (2003). A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue. In Proceedings of the 41st Annual Meeting of the ACL, Sapporo, Japan, pp. 168-175. |
14,728,664 | Using word similarity lists for resolving indirect anaphora | In this work we test the use of word similarity lists for anaphora resolution in Portuguese corpora. We applied an automatic lexical acquisition technique over parsed texts to identify semantically similar words. After that, we made use of this lexical knowledge to resolve coreferent definite descriptions where the head-noun of the anaphor is different from the head-noun of its antecedent, which we call indirect anaphora. | [
989721,
14122076,
2058981,
1192761
] | Using word similarity lists for resolving indirect anaphora
Caroline Gasperin [email protected]
PIPCA -Unisinos São Leopoldo
Brazil
Renata Vieira [email protected]
PIPCA -Unisinos São Leopoldo
Brazil
Using word similarity lists for resolving indirect anaphora
In this work we test the use of word similarity lists for anaphora resolution in Portuguese corpora. We applied an automatic lexical acquisition technique over parsed texts to identify semantically similar words. After that, we made use of this lexical knowledge to resolve coreferent definite descriptions where the head-noun of the anaphor is different from the head-noun of its antecedent, which we call indirect anaphora.
Introduction
In this work we investigate the use of word similarity lists for treating coreference, especially the cases where the coreferent expressions have semantically related head nouns (instead of the same head noun), which we call indirect anaphora.
We applied a lexical acquisition technique over Portuguese parsed corpora to automatically identify semantically similar words. After that, we made use of this lexical knowledge to resolve the coreferent definite descriptions where the head-noun of the anaphor is different from the head-noun of its antecedent.
Previous work on anaphora resolution for English texts has used acquired lexical knowledge in different ways; examples are (Poesio et al., 2002; Schulte im Walde, 1997; Bunescu, 2003). This paper is organised as follows. The next section explains our notion of indirect anaphora. Section 3 details the tools and techniques used for the construction of our lexical resource. Section 4 presents our heuristic for solving indirect anaphors on the basis of such a resource. Section 5 details the corpus we are using for evaluating the proposed heuristics. Section 6 reports the implementation of the heuristic, and in Section 7 we present our experiments over Portuguese annotated corpora. In Section 8 we discuss our results and compare them to previous works. Finally, Section 9 presents our concluding comments.
Indirect anaphora
Coreference has been defined by (van Deemter and Kibble, 2000) as the relation holding between linguistic expressions that refer to the same extralinguistic entity. A slightly different discourse relation is anaphora. In an anaphoric relation the interpretation of an expression is dependent on previous expressions within the same discourse (in various ways). Therefore, an anaphoric relation may be coreferent or not. An expression may be anaphoric in the strict sense that its interpretation is only possible on the basis of the antecedent, as it is in general the case of pronouns in written discourse. On the other hand, it might be just coreferent, in the sense that the entity has been mentioned before in the text.
In this work, we focus on expressions that are both anaphoric and coreferent, and, restricting the scope even further, just on the indirect cases, in which the antecedent head-noun and the anaphor head-noun are not the same but are semantically related.
To clarify what we mean by indirect anaphora, we detail the classification adopted in our previous work. Our classes of analysis were based on the analyses of English texts presented in (Poesio and Vieira, 1998), with the difference that we divided the Bridging class of their analyses into two different classes, separating coreferent (Indirect Anaphora) from non-coreferent (Other Anaphora) cases. Each definite description (d) is thus classified into one of four classes: direct coreference, indirect anaphora, discourse new, or other anaphora.
In (Schulte im Walde, 1997) acquired lexical knowledge is used for solving bridging descriptions, a broader class of anaphoric relations that includes our class, indirect anaphora. (Poesio et al., 2002) presents alternative techniques, based on syntactic patterns, focusing on meronymy relations. Finally, (Bunescu, 2003) deals with another class of anaphoric descriptions, also included in the bridging class, called associative anaphora, following (Hawkins, 1978), where associative anaphora is an anaphoric relation between non-coreferent entities.
Lexical resource
Our lexical resource consists of lists of semantically related words. These lists are constructed automatically by a syntax-based, knowledge-poor technique. The technique used is described in earlier work and is an extension of the technique presented in (Grefenstette, 1994).
Briefly, this technique consists of extracting specific syntactic contexts for every noun in the whole parsed corpus and then applying a similarity measure (the weighted Jaccard measure) to compare nouns by the contexts they have in common (the more contexts they share, the more similar they are). By syntactic context we mean any word that establishes a syntactic relation with a given noun in the corpus. An example of one kind of syntactic context considered is subject/verb, meaning that two nouns that occur as subject of the same verb share this context. Other examples of syntactic contexts are verb/object, modifier/noun, etc. Each context is assigned a global and a local weight: the first related to the context's frequency in the corpus, and the second related to its frequency as a context of the noun in focus. As output, we have a list of the most similar nouns to each noun in the corpus, ordered by the similarity value. We present the similarity list for the noun acusação (accusation) in Table 1 as an example.
acusação (accusation): denúncia (denunciation), escândalo (scandal), crime (crime), pedido (demand), declaração (declaration), proposta (proposal), notícia (news), carta (letter), lista (list), cargo (post), ataque (attack), arma (gun), caso (case), impressão (impression), reclamação (complaint)
The similarity lists can contain any kind of semantic relation between the words (e.g. synonymy, hyponymy, etc.), but these relations are not classified. In general, the similarity lists for the less frequent words in the corpus contain some non-semantically related words (noise), since their relations were based on few syntactic contexts shared along the corpus.
The main advantage of this technique is the possibility of having a corpus-tuned lexical resource built completely automatically. This resource closely reflects the semantic relations present in the corpus used to create the lists. We therefore believe the similarity lists are more suitable as lexical knowledge for resolving the anaphoras than a generic lexical base (e.g. WordNet), since they focus on the semantic relations between the terms that appear in the corpus, without considering extra meanings that some words might have. New lists can be generated from each corpus whose anaphoras one aims to resolve. A sketch of the similarity computation is given below.
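The sketch below illustrates the idea of comparing nouns by their weighted syntactic contexts. The exact global/local weighting scheme of the paper is not reproduced; this shows one common weighted Jaccard formulation (sum of minima over sum of maxima), with invented context names and weights.

```python
# Sketch of the similarity computation: each noun is represented by the set of
# syntactic contexts it occurs in (e.g. "obj_of:fazer"), each with a weight.
# A weighted Jaccard measure then compares two nouns. Contexts and weights
# below are illustrative assumptions.
def weighted_jaccard(ctx_a, ctx_b):
    contexts = set(ctx_a) | set(ctx_b)
    num = sum(min(ctx_a.get(c, 0.0), ctx_b.get(c, 0.0)) for c in contexts)
    den = sum(max(ctx_a.get(c, 0.0), ctx_b.get(c, 0.0)) for c in contexts)
    return num / den if den else 0.0

acusacao = {"obj_of:fazer": 1.2, "obj_of:negar": 0.8, "mod:grave": 0.5}
denuncia = {"obj_of:fazer": 1.0, "obj_of:negar": 0.6, "mod:anonima": 0.4}
print(round(weighted_jaccard(acusacao, denuncia), 3))   # 0.552
```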
To generate the similarity lists for Portuguese we utilised a 1,400,000-word corpus from the Brazilian newspaper 'Folha de São Paulo', containing news about different subjects (sports, economics, computers, culture, etc.). This corpus includes the set of texts that was hand-annotated with coreference information in previous work. The corpus was parsed by the Portuguese parser PALAVRAS (Bick, 2000), provided by the VISL project 1.
We created two different sets of similarity lists: one considering just nouns and the other considering nouns and proper names. The first set thus includes one list for each noun in the corpus, each list being composed of other common nouns. The second set has one list for each noun and proper name in the corpus, each list being composed of other nouns and proper names. The first set contains 8019 lists and the second 12275, corresponding to the different nouns (and proper names) appearing in the corpus. Each similarity list contains the 15 words that are most similar to the word in focus, according to the calculated similarity values.
Having lexical information about the proper names in the corpus is important, since many of our coreference cases have a proper name as anaphor or antecedent. But when generating the similarity lists, proper names bring noise (in general they are less frequent than common nouns), and the lists become more heterogeneous (they include more non-semantically related words).
Using similar words lists to solve indirect anaphora
From the manual annotation and classification of 680 definite descriptions we selected those cases classified as indirect anaphora (95). For each of them there is a list of candidate antecedents. This list is formed by all NPs that occur in the text. We consider as candidates all the NPs that occur in the text before the anaphor being mentioned. Our heuristic for solving indirect anaphoras using lists of similar words is the following. Consider:
• H ana is the head-noun of the anaphor
• H can i is the head-noun of the antecedent candidate i
• L ana is the anaphor's list of similar nouns
• L can i is the list of similar nouns for candidate i
Then H can i is considered the antecedent of H ana if:
(1) H can i ∈ L ana, or
(2) H ana ∈ L can i, or
(3) there is some H j ∈ L ana such that H j ∈ L can i.
We call (1) 'right direction', (2) 'opposite direction', and (3) 'indirect way'.
We consider (1) > (2) > (3) when regarding the reliability of the semantic relatedness between H ana and H can i .
If the application of the heuristic resulted in more than one possible antecedent, we adopted a weighting scheme to choose only one among them. The candidate with the lowest weight wins. For ranking the possible antecedents, we considered two parameters:
• reliability: how the possible antecedent was selected, according to (1), (2) or (3). A penalising value is added to its weight: 0, 40 or 200, respectively. The higher penalty for the 'indirect way' is due to our expectation that it could cause many false positives;
• recency: we consider the distance in words between the anaphor and the possible antecedent.
The penalty values for the reliability parameter were chosen so as to be of the same magnitude as the recency parameter values, which are measured in words. For example, if candidate A is 250 words away from the anaphor and was selected by (1) (getting weight = 250), and candidate B is 10 words away from the anaphor and was selected by (3) (getting weight = 210), candidate B will be selected as the correct antecedent. A sketch of this selection procedure is given below.
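The following sketch implements the conditions (1)-(3) and the weighting just described; the data structures and the small similarity lists in the example are illustrative assumptions, chosen to reproduce the candidate A/B scenario above.

```python
# Sketch of the antecedent selection procedure: a candidate qualifies via
# condition (1), (2) or (3), receives the corresponding reliability penalty
# (0, 40, 200) plus its distance in words from the anaphor, and the
# lowest-weighted candidate wins. Data structures are illustrative.
PENALTY = {1: 0, 2: 40, 3: 200}

def condition(anaphor, candidate, sim_lists):
    """Return 1, 2 or 3 if some condition holds, else None."""
    l_ana = sim_lists.get(anaphor, [])
    l_can = sim_lists.get(candidate, [])
    if candidate in l_ana:
        return 1                                   # right direction
    if anaphor in l_can:
        return 2                                   # opposite direction
    if any(w in l_can for w in l_ana):
        return 3                                   # indirect way
    return None

def choose_antecedent(anaphor, candidates, sim_lists):
    """candidates: list of (head_noun, distance_in_words) pairs."""
    scored = []
    for head, distance in candidates:
        cond = condition(anaphor, head, sim_lists)
        if cond is not None:
            scored.append((PENALTY[cond] + distance, head))
    return min(scored)[1] if scored else None

# Hypothetical similarity lists in which "rodovia" qualifies by (1) and "ponte" by (3):
sim_lists = {"estrada": ["rodovia", "rua"], "ponte": ["rua", "viaduto"]}
candidates = [("rodovia", 250), ("ponte", 10)]
print(choose_antecedent("estrada", candidates, sim_lists))   # 'ponte' (weight 210 < 250)
```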
Our evaluation corpus
As a result of previous work, we have a Portuguese corpus manually annotated with coreference information. This corpus is considered our gold standard for evaluating the performance of the heuristic presented in the previous section. The study aimed to verify whether we could get a similar distribution of types of definite descriptions for Portuguese and English, which would serve as an indication that the same heuristics tested for English (Vieira et al., 2000) could apply to Portuguese. The main annotation task in this experiment was identifying antecedents and classifying each definite description according to the four classes presented in Section 2.
For the annotation task, we adopted the MMAX annotation tool (Müller and Strube, 2001), which requires all data to be encoded in XML format. The corpus is encoded by <word> elements with sequential identifiers, and the output - the anaphors and their antecedents - is encoded as <markable> elements, with the anaphor markable pointing to the antecedent markable via a 'pointer' attribute.
The annotation process was split into 4 steps: selecting coreferent terms; identifying the antecedents of coreferent terms; classifying coreferent terms (direct or indirect); and classifying non-coreferent terms (discourse new or other anaphora). About half of the anaphoras were classified as discourse-new descriptions, which account for about 70% of the non-coreferent cases. Among the coreferent cases, the number of direct coreference cases is twice the number of indirect coreference cases. This confirms previous work done for English.
For the present work, we then took the 95 cases classified as indirect coreference to serve as our evaluation set. In 14 of these cases the relation between anaphor and antecedent is synonymy, in 43 cases the relation is hyponymy, and in 38 the antecedent or the anaphor is a proper name.
Implementing heuristics for indirect anaphora in ART
Our heuristics were implemented as an XSL stylesheet on the basis of the Anaphora Resolution Tool (ART). The tool integrates a set of heuristics, each corresponding to one or more stylesheets, to resolve different sorts of anaphora. The heuristics may be applied in a sequence defined by the user. Since resolving direct anaphoric descriptions (the ones where anaphor and antecedent have the same head noun) is a much simpler problem with high performance rates, as shown in previous results (Vieira et al., 2000; Bean and Riloff, 1999), these heuristics should be applied first in a system that resolves definite descriptions. In this work, however, we decided to consider for the experiments just the anaphoras that were previously annotated as indirect, and to check whether the proposed heuristic is able to find the correct antecedent.
ART allows the user to define the set of anaphors to be resolved; in our case they are selected from previously classified definite descriptions. The stylesheet for indirect anaphora takes as input this list of indirect anaphors, a list of candidates and the similarity lists. We consider all NPs in the text as candidates, and for each anaphor we consider just the candidates that appear before it in the text (we are ignoring cataphora at the moment).
All the input and output data are in XML format, based on the data format used by MMAX. Our stylesheet for solving indirect anaphora takes the <markable> elements with an empty 'pointer' attribute (coming unresolved from the previously applied stylesheets/heuristics) and creates an intermediate file with <anaphor> elements to be resolved. The resolved <anaphor>s are again encoded as <markable>s, with the 'pointer' attribute filled, as sketched below. A detailed description of our data encoding is presented in previous work.
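The sketch below illustrates the same data flow in Python for readability; the actual implementation uses XSL stylesheets over MMAX-style XML, and the element/attribute values shown are invented examples.

```python
# Illustrative sketch of the data flow described above: unresolved <markable>
# elements (empty 'pointer') are candidates for resolution, and resolved ones
# get their 'pointer' filled with the antecedent's id. The ids/spans are made up.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<markables>'
    '  <markable id="m1" span="word_3..word_4"/>'
    '  <markable id="m2" span="word_10..word_11" pointer=""/>'
    '</markables>')

resolved = {"m2": "m1"}                     # output of the indirect-anaphora heuristic
for m in doc.findall("markable"):
    if m.get("pointer") == "" and m.get("id") in resolved:
        m.set("pointer", resolved[m.get("id")])

print(ET.tostring(doc, encoding="unicode"))
```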
Experiments
We ran two experiments: one using the similarity lists with proper names and another with the lists containing just common nouns.
With these experiments we verify the values of precision, recall and false positives on the task of choosing a semantically similar antecedent for each indirect anaphor. Our annotated corpus has 95 indirect anaphors with nominal antecedents, 57 of which do not involve proper names (as anaphor or antecedent). We use a non-annotated version of this corpus for the experiments. It contains around 6000 words, from 24 news texts of 6 different newspaper sections.
Firstly, we reduced both sets of similarity lists to contain just the list for the words present in this portion of the corpus (660 lists without proper names and 742 including proper names).
Experiment 1
Considering the 57 indirect anaphoras to be solved (the ones that do not include any proper name), we could solve 19 of them. This leads to a precision of 52.7% and a recall of 33.3%. Table 2 shows the results of our study considering the set of common noun lists.
Most of the cases could be resolved by the 'right direction', which represents the most intuitive way. 21 of the cases did not get any antecedent. We got 17 false positives, with different causes:
1. the right antecedent was not in the lists, so it could not be found, but other, wrong antecedents were retrieved. For example, in meu amigo Ives Gandra da Silva Martins escreveu para esse jornal ... o conselheiro Ives (my friend Ives_Gandra_da_Silva_Martins wrote to this newspaper ... the councillor Ives), two other candidate head-nouns are similar words to "conselheiro" (councillor): "arquiteto" (architect) and "consultor" (consultant), but not "amigo" (friend); 2. the right antecedent was in the lists but another, wrong antecedent was preferred because of its proximity to the anaphor, as in the example a rodovia Comandante João Ribeiro de Barros ... próximo a ponte ... ao tentar atravessar a estrada (the highway Comandante Joao Ribeiro de Barros ... near the bridge ... while trying to cross the road). Here, the correct antecedent of "a estrada" (the road) is "rodovia" (the highway), which is present in "estrada"'s similarity list (right direction), but so is "ponte" (the bridge), which is closer to the anaphor in the text.
As expected, most of the false positives (11 cases) were 'resolved' by "indirect way".
Considering all similar words found among the candidates, not just the top-ranked one, we could find the correct antecedent in 24 cases (42%). The average number of similar words among the candidates was 2.8, taking into account both the positive and the false positive cases. These numbers indicate how well the similarity lists encode the semantic relations present in the corpus. 64% of the synonymy cases and 28% of the hyponymy cases could be resolved. 35% of the hyponymy cases resulted in false positives; the same happened with just 14% of the synonymy cases.
Experiment 2
We replicated the previous experiment, now using the similarity lists that include proper names. Table 3 shows the results for the set of lists of nouns and proper names. Considering the 95 indirect anaphoras to be solved, we could solve 21 of them. This leads to a precision of 36.8% and a recall of 22.1%. No antecedent was found for 38 anaphors, and 36 anaphors got wrong antecedents (half of them by the "indirect way"). We observed the same causes of false positives as the two presented for experiment 1.
Considering all resolved cases (correct and false ones), we could find the correct antecedent among the similar words of the anaphor in 31 cases (32.6%). The average number of similar words among the candidates was 2.75. The figures for synonymy and hyponymy cases were the same as in experiment 1 - 64% and 28% respectively. As for proper names, 50% of the cases were false positives and 50% remained unresolved. This means that none of the cases that include proper names could be resolved, but it does not mean they had no influence on other nouns' similarity lists. In 26% of the false positive cases, the correct antecedent (a proper name) was in the anaphor's similarity list (but was not selected due to the weighting strategy).
The experiment with the similarity lists that include proper names was able to solve more cases, but experiment 1 got better precision and recall values.
Related work
An evaluation of the use of WordNet for treating bridging descriptions is presented in (Poesio et al., 1997). This evaluation considers 204 bridging descriptions, distributed as follows, where NPj is the anaphor and NPi is the antecedent.
• synonymy relation between NPj and NPi: 12 cases;
• hypernymy relation between NPj and NPi: 14 cases;
• meronymy between NPj and NPi: 12;
• NPj related with NPi being a proper name: 49;
• NPj sharing a same noun in NPi other than head (compound nouns): 25;
• NPj with antecedent being an event 40;
• NPj with antecedents being an implicit discourse topic: 15;
• other types of inferences holding between NPj and antecedent: 37.
Due to the nature of the relations, only some of them were expected to be found in WordNet. For synonymy, hypernymy and meronymy, 39% of the 38 cases could be resolved on the basis of WordNet. From this related work we can see the large variety of cases one can find in a class such as bridging. In our work we concentrated on coreference relations; these can be related to the synonymy, hypernymy and proper name sub-classes evaluated in (Poesio et al., 1997).
The technique presented in (Schulte im Walde, 1997), based on lexical acquisition from the British National Corpus, was evaluated against the same cases as in (Poesio et al., 1997). For synonymy, hypernymy and meronymy, it was reported that 22% of the 38 cases were resolved. In (Poesio et al., 2002) the inclusion of syntactic patterns improved the resolution of meronymy in particular, resulting in 66% of the meronymy cases being resolved. Bunescu (Bunescu, 2003) reports for his method for resolving associative anaphora (an anaphoric relation between non-coreferent entities) a precision of 53% at a recall of 22.7%.
Concluding remarks
We tested the use of word similarity lists for resolving indirect anaphoras in Portuguese newspaper texts. We presented our heuristic for searching word similarity lists in order to find the relation between an anaphor and its antecedent. We considered similarity lists containing proper names and lists containing just common nouns. Our heuristic was able to resolve 33.3% of the cases, with a precision of 52.7%, when considering just common nouns, and we got 22.1% recall with a precision of 36.8% when including proper names. Even though considering proper names gives us the possibility of treating more anaphora cases, we got lower precision than when using the lists with only nouns, since such lists are more homogeneous. These results are comparable to previous work dealing with such complex anaphora.
As future work, we intend to integrate our heuristic for indirect anaphora with other anaphora resolution heuristics in ART and to investigate the best combination and order of application. Concerning the refinement of the proposed heuristic, we intend to run more experiments aiming to tune the penalising weights used when choosing an antecedent among the candidates selected by the search on the similarity lists.
Table 1: Similarity list for the noun acusação
1 See http://visl.hum.sdu.dk/visl/pt/
Table 2: Results considering just nouns

Description                               Numbers
Total indirect anaphors                   57
Correctly resolved - right direction      8
Correctly resolved - opposite direction   5
Correctly resolved - indirect way         6
Correctly resolved - TOTAL                19 (33.3%)
Unsolved anaphors                         21
Table 3: Results considering nouns and proper names

Description                               Numbers
Total indirect anaphors                   95
Correctly resolved - right direction      13
Correctly resolved - opposite direction   3
Correctly resolved - indirect way         5
Correctly resolved - TOTAL                21 (22.1%)
Unsolved anaphors                         38
Acknowledgements
We would like to thank CNPq (Brazil) / INRIA (France) for their financial support, and Susanne Salmon-Alt for her collaboration in this work.
Bean, D. and E. Riloff (1999). Corpus-based identification of non-anaphoric noun phrases. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL-99).
Bick, E. (2000). The Parsing System PALAVRAS: Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. Ph.D. thesis, Århus University, Århus.
Bunescu, R. (2003). Associative anaphora resolution: A web-based approach. In Proceedings of the EACL 2003 Workshop on The Computational Treatment of Anaphora, Budapest.
Gasperin, C., P. Gamallo, A. Agustini, G. Lopes and V. Lima (2001). Using syntactic contexts for measuring word similarity. In Proceedings of the Workshop on Semantic Knowledge Acquisition and Categorisation, Helsinki, Finland.
Gasperin, C., R. Vieira, R. Goulart and P. Quaresma (2003). Extracting XML syntactic chunks from Portuguese corpora. In Traitement automatique des langues minoritaires - TALN 2003, Batz-sur-mer, France.
Gasperin, C. V. (2001). Extração automática de relações semânticas a partir de relações sintáticas. Master's thesis, PUCRS, Porto Alegre.
Grefenstette, G. (1994). Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, USA.
Hawkins, J. A. (1978). Definiteness and Indefiniteness. Humanities Press, Atlantic Highland, NJ.
Müller, C. and M. Strube (2001). MMAX: A tool for the annotation of multi-modal corpora. In Proceedings of IJCAI 2001, pp. 45-50, Seattle.
Poesio, M. and R. Vieira (1998). A corpus-based investigation of definite description use. Computational Linguistics, 24(2):183-216.
Poesio, M., R. Vieira and S. Teufel (1997). Resolving bridging descriptions in unrestricted texts. In Proceedings of the Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, Madrid.
Poesio, M., T. Ishikawa, S. Schulte im Walde and R. Vieira (2002). Acquiring lexical knowledge for anaphora resolution. In Proceedings of LREC 2002, Las Palmas de Gran Canaria.
Salmon-Alt, S. and R. Vieira (2002). Nominal expressions in multilingual corpora: Definites and demonstratives. In Proceedings of LREC 2002, Las Palmas de Gran Canaria.
Schulte im Walde, S. (1997). Resolving Bridging Descriptions in High-Dimensional Space. Master's thesis, Institut für Maschinelle Sprachverarbeitung, University of Stuttgart, and Center for Cognitive Science, University of Edinburgh.
van Deemter, K. and R. Kibble (2000). On coreferring: Coreference in MUC and related annotation schemes. Computational Linguistics, 26(4).
Vieira, R., S. Salmon-Alt and E. Schang (2002). Multilingual corpora annotation for processing definite descriptions. In Proceedings of PorTAL 2002, Faro.
Vieira, R., C. Gasperin and R. Goulart (2003). From manual to automatic annotation of coreference. In Proceedings of the International Symposium on Reference Resolution and Its Applications to Question Answering and Summarization, Venice.
Vieira, R. et al. (2000). Extração de sintagmas nominais para o processamento de co-referência. In Anais do V Encontro para o processamento computacional da Língua Portuguesa escrita e falada - PROPOR, Atibaia. |
6,861,543 | Negotiation of Antibiotic Treatment in Medical Consultations: A Corpus based Study | Doctor-patient conversation is considered a contributing factor to antibiotic overprescription. Some language practices have been identified as parent pressuring doctors for prescribing; other practices are considered as likely to engender parent resistance to non-antibiotic treatment recommendations. In social science studies, approaches such as conversation analysis have been applied to identify those language practices. Current research for dialogue systems offer an alternative approach. Past research proved that corpusbased approaches have been effectively used for research involving modeling dialogue acts and sequential relations. In this proposal, we propose a corpus-based study of doctor-patient conversations of antibiotic treatment negotiation in pediatric consultations. Based on findings from conversation analysis studies, we use a computational linguistic approach to assist annotating and modeling of doctor-patient language practices, and analyzing their influence on antibiotic over-prescribing. | [
2336451,
780171,
10079468,
10766958,
215825908,
6133269,
5371286,
3148637,
136511
] | Negotiation of Antibiotic Treatment in Medical Consultations: A Corpus based Study
July 30 - August 4, 2017
Nan Wang [email protected]
University of California
Los Angeles
Negotiation of Antibiotic Treatment in Medical Consultations: A Corpus based Study
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics-Student Research Workshop
the 55th Annual Meeting of the Association for Computational Linguistics - Student Research Workshop, Vancouver, Canada, July 30 - August 4, 2017. DOI: 10.18653/v1/P17-3023
Doctor-patient conversation is considered a contributing factor to antibiotic over-prescription. Some language practices have been identified as parents pressuring doctors to prescribe; other practices are considered likely to engender parent resistance to non-antibiotic treatment recommendations. In social science studies, approaches such as conversation analysis have been applied to identify those language practices. Current research on dialogue systems offers an alternative approach. Past research has shown that corpus-based approaches can be used effectively for modeling dialogue acts and sequential relations. In this proposal, we propose a corpus-based study of doctor-patient conversations negotiating antibiotic treatment in pediatric consultations. Based on findings from conversation analysis studies, we use a computational linguistic approach to assist in annotating and modeling doctor-patient language practices, and in analyzing their influence on antibiotic over-prescribing.
Introduction
"How to do things with words" has long been a topic of interest to researchers from various disciplines, such as pragmatics (Austin, 1962; Levinson, 1983), conversation analysis (CA) (Drew and Heritage, 1992; Heritage and Maynard, 2006), and computational linguistics (Stolcke et al., 2000; Williams et al., 2013; Schlöder and Fernandez, 2015). Although computational methods have been widely used for text mining tasks such as detecting reader bias and predicting mood shifts in large populations (Flaounas et al., 2013; Lansdall-Welfare et al., 2012; Ritter et al., 2011), studies on computational modeling of natural human conversational acts are rare, especially ones investigating associations with social behavioral outcomes.
Doctor-patient conversations have been shown to be highly consequential for many worldwide public health problems and population health outcomes (Zolnierek and DiMatteo, 2009; Mangione-Smith et al., 2015). Over-prescription of antibiotics is often related to interaction-generated problems arising from doctor-patient conversations, which have little to do with rational medical judgments (Macfarlane et al., 1997). For example, some parent language practices are frequently understood by physicians as advocating antibiotics, resulting in a significantly higher likelihood of inappropriate prescriptions (Mangione-Smith et al., 1999; Stivers, 2002, 2007).
This antibiotic resistance and over-prescription phenomenon is also present in China. Prescription rates of antibiotics are high (Li et al., 2012; Wang et al., 2014; Xiao et al., 2012), and multiple types of antibiotic-resistant pathogens have been discovered nationwide. However, the determinants of the over-prescription problem in China have not been well studied, especially the impact of doctor-patient conversation in medical consultations.
In this proposal, we propose a corpus-based study examining doctor-patient conversations negotiating antibiotic treatment in Chinese pediatric settings, using a mixed methodology of conversation analysis and computational linguistics. In particular, we aim to discover (1) how parent requests for antibiotic prescriptions are made in doctor-patient conversations and their effects on prescribing decision outcomes, and (2) how physicians' non-antibiotic treatment recommendations are delivered and responded to by parents. Our findings about doctor-patient conversation are expected to extend beyond the medical setting to natural human conversations. These findings include:
• How actions are formulated with various forms of language practice in conversations;
• How the meaning of language practices is understood by speakers as performing a certain action;
• How the choice of one form of language practice for performing an action is associated with the various kinds of responses it receives.
In conducting this study, we attempt to bridge the gap between social scientific methods and computational methods in researching the aforementioned questions.
In the following sections, we will introduce our corpus, preliminary findings from CA, and related computational approaches. This is followed by a discussion of contributions of the proposed study.
Data
The corpus of this study is constructed from natural human conversations. In order to obtain the conversations, 318 video-recorded doctor-patient conversations were collected in 6 Chinese hospitals between September and December in 2013. Each conversation is around 5 minutes in length, resulting in 30 hours of video-recordings in total. The conversations were mostly between doctors and patients' caregivers regarding patients' health conditions and lifestyle-related issues that are commonly discussed in pediatrics.
Video-recordings were then transcribed manually. Six researchers were employed to transcribe the data, including one manager and five annotators. All of them are native speakers of Chinese. The five annotators received basic training in CA and its transcribing conventions before they started transcribing. The manager is a specialist in CA, who controlled the work flow and troubleshot during the transcribing process.
Following the Jeffersonian transcribing conventions (Jefferson, 2004), the video-recorded conversational data were transcribed in considerable detail with respect to speech production, including the speech text verbatim and paralinguistic features such as intonation, overlaps, visible non-verbal activities, and noticeably timed silences (Auer et al., 1992). To answer our research questions, we developed an annotation schema capturing the following aspects of the conversations: (1) turn-taking and speakership (TID, UID); (2) multi-turn dependency relations, such as adjacency pairs (SID; a basic sequential relationship in conversation analysis, explained in the next section) and rhetorical relations (RID; information reflecting topical relevance across turns). In addition, the speech text was word-segmented following the Chinese Penn Treebank segmentation guidelines (Xia et al., 2000). An example of the corpus is shown in Table 1.
The current annotated corpus contains 318 conversations with nearly 40K turns and 470K Chinese characters. It has on average 123 turns and 81 adjacency pairs in each conversation. The average number of participants is 3 in each conversation, with a minimum of 2 speakers and a maximum of 8 speakers.
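To make the schema concrete, the following is a minimal sketch of how such an annotated turn could be represented programmatically. The Python representation, the field names, and the toy utterances are purely illustrative and are not the actual storage format of the corpus.

```python
# Minimal sketch of one possible record structure for the annotation schema
# described above (turn/utterance IDs, adjacency-pair and rhetorical-relation
# links, and word-segmented text). Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Utterance:
    tid: int                    # turn ID
    uid: int                    # utterance ID within the turn
    speaker: str                # e.g. "doctor", "parent"
    tokens: List[str]           # word-segmented Chinese text
    sid: Optional[int] = None   # adjacency-pair ID (links first and second pair parts)
    rid: Optional[int] = None   # rhetorical-relation ID (topical relevance across turns)

@dataclass
class Conversation:
    conv_id: str
    utterances: List[Utterance] = field(default_factory=list)

# Toy example: a question-answer adjacency pair sharing the same SID.
conv = Conversation("H1_001", [
    Utterance(1, 1, "parent", ["能", "开", "点", "消炎药", "吗", "?"], sid=1),
    Utterance(2, 1, "doctor", ["不", "需要", "抗生素", "。"], sid=1),
])
print(len(conv.utterances))
```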
Conversation Analysis
Conversation analysis (CA) is used to identify the dialogue acts in the corpus. CA views sequence organization as a core feature of conversation that is important for understanding the meaning of an utterance and its significance as an action in conversation (Schegloff, 1968). The idea is that the action which some talk is doing can be grounded in its position, not just its composition. Therefore, some talk (e.g. "It's raining.") can be heard as an answer to a question (e.g. "Are we going to the game?"), even if they are apparently semantically unrelated. The relationship of adjacency between turns is central to the ways in which talk in conversation is organized and understood (Schegloff, 2007). The adjacency relationship most powerfully operates in two ways: (1) backwards - next turns are understood by co-participants to display their speaker's understanding of the prior turn and to embody an action responsive to the prior turn so understood; (2) prospective - a first pair part in an adjacency pair projects some prospective relevance rules for the second pair part. Specifically, it makes relevant a limited set of possible second pair parts, and thereby sets some terms by which a next turn will be understood (Schegloff, 2007).
The methodology of CA relies on audio or video recordings of naturally occurring conversations, which are then transcribed in detail for analyses of the turns and sequences in the conversation (Sidnell et al., 2013) and of the embodied actions that speakers use to accomplish their goals in social interactions (Drew and Heritage, 1992; Drew et al., 2001). In general, CA looks for patterns in the conversation which form evidence of systematic usage that can be identified as a 'practice' through which people accomplish a social action. To be identified as a practice, a particular communication behavior must be seen to be recurrent and to be routinely treated by recipients in a way such that it can be discriminated from related or similar practices (Heritage, 1984; Stivers, 2005).
Utilizing CA, we identify parent practices of making requests and physician practices of making treatment recommendations in our corpus. These findings are then used to develop an annotation schema for computational modeling of these dialogue acts and of the associations with their responses or action outcomes.
Preliminary Results
Based on conversation analytical study, we find that four parent language practices are recurrently treated by physicians as requesting antibiotic treatment:
• Explicit requests of an antibiotic treatment;
• Desire statements of an antibiotic treatment;
• Inquiries about an antibiotic treatment;
• Evaluations of a past treatment.
Among the four language practices, only the first takes the canonical form of a request (e.g., "Can you prescribe me some antibiotics?"), while the other three practices take less explicit language formats, putting varying degrees of imposition on physicians' responsive acts.
For example, an explicit request for antibiotic treatment is the strongest form of request, as it puts the highest degree of imposition on physicians' responsive action by making physicians' granting or rejection of the request relevant in the next turn. In contrast, a statement of desire for antibiotic treatment does not put physicians under any constraint to grant an antibiotic prescription, but it generates an understanding that prescribing antibiotics is a desirable act under the circumstances. Similarly, an inquiry about antibiotics raises antibiotic treatment as a topic for discussion and implicates a preference for the treatment, yet it does not put physicians under the constraint that an explicit request does. Moreover, a positive evaluation of past experience with antibiotics may be understood by physicians as expressing a desire for antibiotics for the current condition, yet it does not even require any response from physicians, as an inquiry about antibiotics does.
The CA study of the requesting practices enables us to identify the utterances that are recurrently understood, or subject to speakers' understanding, as doing the act of requesting. In addition, we find that explicit requests are the least frequently used by parents, while less explicit forms of requests occur more frequently. Table 2 describes the frequency (number of cases) and percentage of the requesting practices out of the total number of cases in the corpus.
In order to quantitatively investigate the correlation between the presence of the requesting practices and the prescribing decision outcomes, we conduct a Pearson's χ2 test between two variables X and Y, where X is whether parents use at least one of the four requesting practices, and Y is whether they receive an antibiotic treatment by the end of the consultation. The χ2 test suggests that parents' use of at least one of the four requesting practices is significantly associated with receiving an antibiotic treatment (χ2 = 5.625, df = 1, p = 0.018, significant at the 0.05 level). It is worth noting that this is an approximation of the correlation between parent use of the requesting practices and the prescribing outcomes. Investigation of the correlations between individual parent requesting practices and the prescribing outcomes will be carried out in our ongoing work. Moreover, computational methods will also be introduced to examine the correlations.
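For illustration, a test of this kind can be run with a standard statistics library. The sketch below uses scipy's chi-squared test of independence on a 2x2 contingency table; the counts are placeholders rather than the actual corpus figures, and the same procedure applies to the later test on recommending practices and parent responses.

```python
# Hedged sketch of the Pearson chi-squared test described above, using
# scipy.stats.chi2_contingency on a 2x2 contingency table. The counts below
# are placeholders, not the actual corpus figures.
from scipy.stats import chi2_contingency

# Rows: parents used at least one requesting practice (yes / no)
# Columns: antibiotic prescribed (yes / no)
observed = [[60, 40],
            [80, 138]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2={chi2:.3f}, df={dof}, p={p:.3f}")
```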
In examining what kind of treatment recommendations are more likely to be resisted by parents, we investigate the association between physicians' non-antibiotic treatment recommendations and parents' responses in the next turn.
One way to distinguish the delivery format of a non-antibiotic treatment recommendation is whether it is negative-format or positive-format (Stivers, 2005). A negative-format recommendation recommends against a particular treatment (e.g., "She doesn't need any antibiotics."), while a positive-format one recommends for a particular treatment (e.g., "I'm gonna give her some cough medicine."). Findings from American pediatric settings show that physicians' positive-format recommendations are less likely than negative-format recommendations to engender resistant parent responses to a non-antibiotic treatment recommendation, which suggests that recommendations delivered in an affirmative, specific form are the most readily accepted by parents for non-antibiotic treatment (Stivers, 2005).
Beyond distinguishing recommendations into positive-format and negative-format, many other features could be taken into consideration with regard to their consequences for parents' responses, e.g., the epistemic and deontic stances embodied in the recommending practices. The epistemic stance refers to speakers' orientation toward their relative primacy or subordination in terms of knowledge access (see Heritage and Raymond, 2005); the deontic stance refers to speakers' orientation toward their relative primacy or subordination in terms of their rights to decide future events (see Stevanovic and Peräkylä, 2014). For example, physicians' treatment recommendations can be produced as assertions, proposals, or offers. Assertions are recommendations such as "You have to take some fever medicine."; proposals are such as "Why don't you take some cough syrup?"; offers are mostly recommendations offered following parents' indication of their treatment preferences or desires, e.g., "I'll give you some fever medicine if you want.". Assertions index higher physician epistemic and deontic rights in terms of who knows best about the treatment and who determines what the patient needs to do, respectively. Compared to assertions, physicians claim less epistemic and deontic authority by using the proposal format, and offers embody the least epistemic and deontic primacy. Table 3 describes the distribution of physicians' practices of making treatment recommendations across the corpus. We also conduct a Pearson's χ2 test between physicians' choice of recommending practice and parent response. The test shows that we cannot reject the null hypothesis that physicians' choices of recommending practice type are independent of parent response types (χ2 = 0.327, df = 2, p = 0.849). Thus our ongoing work is to examine other complexities of treatment recommending practices and their effect on parents' responses.
Computational Approach
Conversation analysis allows us to manually identify language practices that are recurrently understood, and subject to speaker understanding, as doing a particular act, while the computational approach is used to assist in tasks such as entity type recognition, dialogue act classification, and analysis of the correlations of interest in a more scalable way.
Early research (Jurafsky et al., 1998; Stolcke et al., 2000) on computational modeling of conversational language has demonstrated that automatic modeling based on manually transcribed conversational data, including features such as speakership and dependency relations, achieves superior performance compared to models that do not use such features. Several computational techniques will be used in our study. In general, we can divide our computational tasks into two categories: fundamental tasks and dialogue-specific tasks.
Fundamental Tasks
Fundamental tasks mainly involve solving general problems that cut across all language processing tasks, e.g. named entity recognition and coreference resolution. This part of the work lays the foundation for the more advanced dialogue-specific tasks discussed in the next section.
Named Entity Recognition
Entities are very important in spoken language understanding, as they convey key information for determining task objectives, intents, etc. In the medical domain, entity recognition is particularly crucial for identifying information such as treatments or prescriptions. As a fundamental natural language processing (NLP) technique used in various tasks, e.g. machine translation (Babych and Hartley, 2003) and information retrieval (Guo et al., 2009), named entity recognition (NER) (Nadeau and Sekine, 2007) is also used in our study. Using NER in our study poses several challenges. For example, utterances in dialogues are shorter than other types of text. Also, NER is conducted on Chinese, so domain-specific word segmentation (Song and Xia, 2012) is a prerequisite if we extend our work to larger datasets in a more scalable way. However, using NER in our study has the advantage that utterances in dialogues are not isolated: the sequential relations between utterances potentially provide us with more information for building a better model. Previous work (Weston et al., 2015) showed that information extraction that takes into account information from previous utterances with recurrent neural networks was more effective. NER in our study can provide more in-depth annotations for the corpus, allowing models trained on it to incorporate more information. To accelerate the annotation process, semi-supervised methods are used for dialogue act recognition and classification. Specifically, we annotate some seed data, use the trained model to automatically annotate the rest, and finally check the automatically generated annotations manually.
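As a rough illustration of this seed-annotate-check loop, the sketch below uses a character n-gram logistic-regression classifier as a stand-in for the full NER model; the utterances, labels, and scoring are toy assumptions rather than the system actually used.

```python
# Hedged sketch of the semi-supervised "seed, auto-annotate, check" loop
# described above. A character n-gram logistic-regression classifier stands in
# for the full NER model; all sentences and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_texts = ["开点消炎药", "吃点退烧药", "多喝水", "注意休息"]
seed_labels = [1, 1, 0, 0]          # 1 = contains a treatment mention
unlabeled = ["给他开点抗生素", "回家多休息", "打一针消炎针"]

vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
X_seed = vec.fit_transform(seed_texts)
clf = LogisticRegression().fit(X_seed, seed_labels)

# Auto-annotate the unlabeled utterances; high-confidence predictions would
# then be manually checked before being added to the training set.
probs = clf.predict_proba(vec.transform(unlabeled))[:, 1]
for text, p in zip(unlabeled, probs):
    print(f"{text}\tP(treatment mention)={p:.2f}")
```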
Coreference Resolution
In natural language, reference is used widely for communication efficiency. In dialogue environments, person references and even omissions are very common. Therefore, coreference resolution can help us add useful semantic information into our language models (Recasens et al., 2013; Clark and Manning, 2015). General coreference resolution is usually performed on multiple sentences in a document, where the relations between these sentences are vague. Based on our multi-turn rhetorical relation annotations, information that is absent or abstract in a turn can be extracted from turns that are rhetorically related. This could effectively enhance the performance of coreference resolution and provide more accurate information about the referent. For example, what a pronoun refers to may not be clear from a single utterance; however, coreference resolution links it to previous turns that contain information about its referent.
Dialogue Specific Tasks
Our research is closely related to the studies on dialogue systems (Henderson, 2015), in which models are built to structure conversations. To achieve our research goals, models are built to track states in a dialogue and to build connections between utterances and action outcomes.
Dialogue State Modeling
One important task is to classify types of an utterance and types of the action required. For example, to judge whether an utterance is a question, answer, or other dialogue act, classification can be performed, taking into account turns in previous context. Previous work (Henderson et al., 2013;Ren et al., 2014) demonstrated that using a classifier was effective for modeling user intents and utterance types. In our research, we will use this approach to classify utterances into different types such as dialogue acts, parent responses and treatment decisions. In order to perform such classification, further annotations are conducted based on the findings of conversation analyses, including:
• Dialogue act -parent requests for antibiotic treatment, physician treatment recommendations;
• Treatment type -antibiotic or non-antibiotic treatment;
• Response type -grant or rejection to recommendation.
Using these classifiers allows us to investigate the features that are most important for classifying the utterances, and then align them with the qualitative findings from CA studies. Another way to model dialogue states is to treat dialogues as sequences of observations and then build models (e.g., CRF (Lafferty et al., 2001), LSTM (Hochreiter and Schmidhuber, 1997)) to perform labeling or classification over them. This is a natural way of modeling dialogues given the structure of the problem. Current state-of-the-art studies suggest that an LSTM is a good choice for modeling not only sequences of turns, but also sequences of words (or other basic units) within a turn (Zilka and Jurcícek, 2015). Using our corpus, an LSTM model can be trained to achieve the same goal as static classifiers for practice type classification, and to model the sequential relationships between turns in real conversations.
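The following is a minimal sketch of such an LSTM-based utterance classifier in PyTorch; the vocabulary size, dimensions, and three-way label set are illustrative assumptions rather than the actual model configuration.

```python
# Minimal PyTorch sketch of an LSTM-based utterance-type classifier of the kind
# discussed above (dialogue act / treatment type / response type). Vocabulary,
# dimensions, and labels are placeholders.
import torch
import torch.nn as nn

class UtteranceClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)      # (batch, seq_len, emb_dim)
        _, (hidden, _) = self.lstm(embedded)      # hidden: (1, batch, hidden_dim)
        return self.out(hidden.squeeze(0))        # (batch, n_classes)

# Toy forward pass on a batch of two padded utterances.
model = UtteranceClassifier(vocab_size=1000)
batch = torch.randint(1, 1000, (2, 12))
logits = model(batch)
print(logits.shape)  # torch.Size([2, 3])
```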
Previous studies (Lee and Eskenazi, 2013;Williams, 2014;Henderson et al., 2014) found that systems combining the classifier approach and the sequence model approach showed competitive results. In doing so, one can train several different models with different sets of parameters and join their results accordingly (Henderson et al., 2014). For the aforementioned classification and sequence modeling tasks, the combined model is expected to outperform individual models.
Domain Adaptation
Since our data belongs to the particular domain of medicine, domain adaptation is another task involved in our research. Almost all of the aforementioned tasks can be affected by domain-specific variance. In addition, conversational data in the medical domain is scarce. Therefore, acquiring more data from other or general domains can be useful for completing the tasks in the medical domain and improving conversational understanding. Training data selection and acquisition (Axelrod et al., 2011; Song et al., 2012) could be the first step toward solving the problem of domain variance, without the need to modify the existing models to fit our domain. Moreover, when this work is extended to other domains, e.g., law or education, domain adaptation is required to transfer the knowledge from this domain to another.
Discussion
In this proposal, we propose a study of doctor-patient conversations based on a corpus of naturally occurring medical conversations that are transcribed and annotated manually. Combining the social science research method of conversation analysis with computational methods for language modeling, we aim to discover how language practices in doctor-patient conversation influence antibiotic over-prescribing.
Although previous studies (Macfarlane et al., 1997; Mangione-Smith et al., 1999; Stivers, 2007) showed that doctor-patient conversations are consequential for medical decision-making and population health outcomes, findings from the extant social science research are still limited in answering the question of in what way the language practices that doctors and patients use in medical consultations influence the decision outcomes.
Based on our preliminary findings from the CA studies, we propose to use the computational approach to help answer our research questions. In doing so, language patterns that are of interest in CA studies can be automatically modeled and predicted with classifier or sequence models, leading us to more interesting findings. By using the computational approach, we can also build a dialogue system based on our corpus. This system can be useful for analyzing doctor-patient conversation and assisting the decision-making process in medical consultations.
In addition, we constructed a manually transcribed and annotated corpus. Our ongoing work involves formalizing and adding additional annotations to the corpus. We will release the corpus to the community in the near future. It will be a unique resource for both social scientific and computational linguistic studies of conversations in the medical domain.
Table 1: An example of an annotated conversation.
Table 2: Distribution of requesting practices in the corpus. Each cell reports the number of cases (and percentage) containing the practice.
Table 3: Distribution of recommending practices in the corpus. Each cell reports the number of cases (and percentage) containing the corresponding practice.
Peter Auer, Frank Muller, and Elizabeth Couper-Kuhlen. 1992. Language in Time: The Rhythm and Tempo of Spoken Interaction. Oxford University Press, New York.
John Langshaw Austin. 1962. How to do things with words. William James Lectures. Oxford University Press.
Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-2011), pages 355-362.
Bogdan Babych and Anthony Hartley. 2003. Improving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT Workshop on MT and Other Language Technology Tools, Improving MT Through Other Language Technology Tools: Resources and Tools for Building MT, EAMT '03, pages 1-8. Association for Computational Linguistics, Stroudsburg, PA, USA.
Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Association for Computational Linguistics (ACL).
Paul Drew, John Chatwin, and Sarah Collins. 2001. Conversation Analysis: A Method for Research into Interactions between Patients and Health-care Professionals. Health Expectations 4:58-70.
Paul Drew and John Heritage. 1992. Analyzing talk at work: an introduction. In Paul Drew and John Heritage, editors, Talk at work: interaction in institutional settings, pages 3-65. Cambridge University Press, Cambridge.
Ilias Flaounas, Omar Ali, Thomas Lansdall-Welfare, Tijl De Bie, Nick Mosdell, Justin Lewis, and Nello Cristianini. 2013. Research methods in the age of digital journalism. Digital Journalism 1(1):102-116.
Jiafeng Guo, Gu Xu, Xueqi Cheng, and Hang Li. 2009. Named entity recognition in query. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '09, pages 267-274. ACM, New York, NY, USA.
Matthew Henderson. 2015. Machine learning for dialog state tracking: A review. In Proceedings of The First International Workshop on Machine Learning in Spoken Language Processing.
Matthew Henderson, Blaise Thomson, and Steve Young. 2013. Deep neural network approach for the dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, Metz, France, pages 467-471.
Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Philadelphia, PA, U.S.A., pages 292-299.
John Heritage. 1984. Garfinkel and Ethnomethodology. Polity Press, Cambridge.
John Heritage and Douglas W. Maynard. 2006. Communication in Medical Care: Interaction between Primary Care Physicians and Patients. Number 20 in Studies in Interactional Sociolinguistics. Cambridge University Press, Cambridge.
John Heritage and Geoffrey Raymond. 2005. The terms of agreement: Indexing epistemic authority and subordination in assessment sequences. Social Psychology Quarterly 68:15-38.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.
Gail Jefferson. 2004. Glossary of transcript symbols with an introduction. In Gene H. Lerner, editor, Conversation Analysis: Studies from the First Generation, chapter 2, pages 13-31. John Benjamins, Amsterdam / Philadelphia.
Daniel Jurafsky, Rebecca Bates, Noah Coccaro, Rachel Martin, Marie Meteer, Klaus Ries, Elizabeth Shriberg, Andreas Stolcke, Paul Taylor, and Carol Van Ess-Dykema. 1998. Switchboard discourse language modeling project report. Center for Speech and Language Processing, Johns Hopkins University, Baltimore, MD.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Thomas Lansdall-Welfare, Vasileios Lampos, and Nello Cristianini. 2012. Effects of the recession on public mood in the UK. In Proceedings of the 21st International Conference on World Wide Web, pages 1221-1226. ACM, New York, NY, USA.
Sungjin Lee and Maxine Eskenazi. 2013. Recipe for building robust spoken dialog state trackers: Dialog state tracking challenge system description. In Proceedings of the SIGDIAL 2013 Conference, Metz, France, pages 414-422. Association for Computational Linguistics.
Stephen C. Levinson. 1983. Pragmatics. Cambridge University Press.
Yongbin Li, Jing Xu, Fang Wang, Bin Wang, Liqun Liu, Wanli Hou, Hong Fan, Yeqing Tong, Juan Zhang, and Zuxun Lu. 2012. Overprescribing in China, driven by financial incentives, results in very high use of antibiotics, injections, and corticosteroids. Health Affairs (Project Hope) 31(5):1075-1082.
J. Macfarlane, W. Holmes, R. Macfarlane, and N. Britten. 1997. Influence of patients' expectations on antibiotic management of acute lower respiratory tract illness in general practice: questionnaire study. BMJ 315(7117):1211-1214.
R. Mangione-Smith, E. A. McGlynn, M. N. Elliott, P. Krogstad, and R. H. Brook. 1999. The relationship between perceived parental expectations and pediatrician antimicrobial prescribing behavior. Pediatrics 103(4):711-718.
R. Mangione-Smith, C. Zhou, J. D. Robinson, J. A. Taylor, M. N. Elliott, and J. Heritage. 2015. Communication practices and antibiotic use for acute respiratory tract infections in children. Annals of Family Medicine 13(3):221-227.
David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes 30(1):3-26.
Marta Recasens, Matthew Can, and Dan Jurafsky. 2013. Same referent, different words: Unsupervised mining of opaque coreferent mentions. In North American Association for Computational Linguistics (NAACL).
Hang Ren, Weiqun Xu, and Yonghong Yan. 2014. Markovian discriminative modeling for dialog state tracking. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Philadelphia, PA, U.S.A., pages 327-331.
Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven Response Generation in Social Media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 583-593. Stroudsburg, PA, USA.
Emanuel Schegloff. 2007. Sequence organization in interaction: Volume 1: A primer in conversation analysis. Cambridge University Press.
Emanuel A. Schegloff. 1968. Sequencing in Conversational Openings. American Anthropologist 70(6):1075-1095.
Julian J. Schlöder and Raquel Fernandez. 2015. Clarifying Intentions in Dialogue: A Corpus Study. In Proceedings of the 11th International Conference on Computational Semantics, London, UK, pages 46-51.
Jack Sidnell and Tanya Stivers, editors. 2013. The Handbook of Conversation Analysis. Wiley-Blackwell.
Yan Song, Prescott Klassen, Fei Xia, and Chunyu Kit. 2012. Entropy-based Training Data Selection for Domain Adaptation. In Proceedings of COLING-2012, Mumbai, India, pages 1191-1200.
Yan Song and Fei Xia. 2012. Using a Goodness Measurement for Domain Adaptation: A Case Study on Chinese Word Segmentation. In Proceedings of LREC-2012, Istanbul, Turkey, pages 3853-3860.
Melisa Stevanovic and Anssi Peräkylä. 2014. Three Orders in the Organization of Human Action: On the Interface between Knowledge, Power, and Emotion in Interaction and Social Relations. Language in Society 43(2):185-207.
Tanya Stivers. 2002. Participating in Decisions about Treatment: Overt Parent Pressure for Antibiotic Medication in Pediatric Encounters. Social Science and Medicine 54(7):1111-1130.
Tanya Stivers. 2005. Non-antibiotic treatment recommendations: Delivery formats and implications for parent resistance. Social Science & Medicine 5(60):949-964.
Tanya Stivers. 2007. Prescribing under Pressure: Parent-physician Conversations and Antibiotics. Oxford University Press, London.
Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics 26(3):339-373.
Jin Wang, Pan Wang, Xinghe Wang, Yingdong Zheng, and Yonghong Xiao. 2014. Use and prescription of antibiotics in primary health care settings in China. JAMA Internal Medicine 174(12):1914-1920.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. ArXiv pre-prints abs/1502.05698.
Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The Dialog State Tracking Challenge. In Proceedings of the SIGDIAL 2013 Conference, Metz, France, pages 404-413.
Jason D. Williams. 2014. Web-style ranking and SLU combination for dialog state tracking. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Philadelphia, PA, U.S.A., pages 282-291.
Fei Xia, Martha Palmer, Nianwen Xue, Mary Ellen Okurowski, John Kovarik, Fudong Chiou, Shizhe Huang, Tony Kroch, and Mitch Marcus. 2000. Developing Guidelines and Ensuring Consistency for Chinese Text Annotation. In Proceedings of the Second Language Resources and Evaluation Conference (LREC).
YH Xiao, P Shen, ZQ Wei, YB Chen, and HS Kong. 2012. Mohnarin report of 2011, monitoring of bacterial resistance in China. Chinese Journal of Nosocomiology 22:4946-4952.
Lukás Zilka and Filip Jurcícek. 2015. Incremental LSTM-based dialog state tracker. ArXiv pre-prints abs/1507.03471.
Kelly B. Haskard Zolnierek and M. Robin DiMatteo. 2009. Physician communication and patient adherence to treatment: A meta-analysis. Medical Care 47(8):826-834. |
244,464,096 | [] | Disambiguating Grammatical Number and Gender With BERT
Sep 1-3, 2021
Annegret Janzso [email protected]
Department of Language Science and Technology
Saarland University
66123SaarbrückenGermany
Disambiguating Grammatical Number and Gender With BERT
Proceedings of the Student Research Workshop associated with RANLP-2021
the Student Research Workshop associated with RANLP-2021, Sep 1-3, 2021. 10.26615/issn.2603-2821.2021_01169
Accurately dealing with any type of ambiguity is a major task in Natural Language Processing, with great advances recently achieved due to the development of context-dependent language models and the use of word or sentence embeddings. In this context, our work aimed at determining how the popular language representation model BERT handles ambiguity of nouns in grammatical number and gender in different languages. This work shows that models trained on one specific language achieve better results in the disambiguation process than multilingual models. Also, ambiguity is generally dealt with better in grammatical number than in grammatical gender, reaching greater distance values between one sense and another in direct comparisons of individual word sense embeddings. The overall results also show that the amount of data needed for training monolingual models, as well as for application, should not be underestimated.
Introduction
A challenge in Natural Language Processing (NLP) resides in the accurate automatic sense disambiguation of words and phrases. An often cited example of ambiguity is the one between bank ("An institution where one can place and borrow money and take care of financial affairs.") and bank ("An edge of river, lake, or other watercourse.") in English. 1 While a person is able to get the right meaning of the word from context, the same skill is now expected from contextual language models.
The goal of our work, however, is to find out if and how well contextual meaning representations can handle sense ambiguities in grammatical gender and number. Further, this paper aims to show differences in disambiguation between different languages, ambiguity types, and pre-existing models. For these tasks, German and Spanish are used for gender and number ambiguities, and English for number ambiguities only (see section 3). Ambiguity in grammatical number is similar to the example mentioned above, as the plural form of a word might also mean something different from the mere quantity. In this case, the Language Model (LM) needs to disambiguate the meaning of plurals considering only the current context. For grammatical gender ambiguity - that is, words that can occur in more than one gender, with their meaning dependent on that factor - additional cues to the current meaning can be found in words that are connected to the currently observed ambiguous word, such as accordingly gendered determiners or adjectives. This specific disambiguation skill of LMs is tested on BERT word embeddings (see section 4.2), based on data from three different languages: German (section 3.1) and Spanish (section 3.2), which contain both types of ambiguity, and English (section 3.3), which contains only ambiguity in grammatical number, but has been included due to the large amount of available data for testing (see section 4.1).
Related Work
While existing work on ambiguity mostly focuses on a more general definition of the term, grammatical gender and number have mostly been left aside. However, there has been research on gender bias in language models. Bartl et al. (2020) found a gender bias for English in BERT similar to the gender bias occurring with names of professions. For German, this effect has been found to be less strong, due to morphological gender markings. Since a language without grammatical gender shows a higher gender bias than a language with it, this topic appears to drift toward biological gender and the social prejudice connected with gender, which, according to Bartl et al. (2020), can be mitigated by grammatical gender in the form of morphological markings, as occurs in German. These results suggest the possibility of searching for a gender bias in the disambiguation process of BERT for languages such as German or Spanish.
Concerning ambiguity in grammatical number, Gromann and Declerck (2019) analyse how plural entries included in Princeton WordNet (Princeton University, 2010) are encoded in word embeddings using Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), thereby showing the convergence between distributed representations of plurals bearing a specific sense and their symbolic representation proposed in WordNet. They bring up examples like people - peoples: both words occur in plural form. However, people can be treated as a singular term, meaning 'the body of citizens of a state or country' (definition given by Princeton WordNet (Princeton University, 2010)). Peoples can be used as the plural of people, but can also refer to 'the human beings of a particular nation or community or ethnic group' (definition given by Princeton WordNet (Princeton University, 2010)). This shows that:
1. the singular and plural form of a word don't necessarily share all their meanings and 2. a word can function as a plural form for some senses, but as a singular form for other senses This shows further that there is a lot of possibility for ambiguity in grammatical number, making it just as relevant as grammatical gender in a task of disambiguation.
Considering sense already during training, instead of investigating existing models, has been shown to be a promising approach for English data (see Blevins and Zettlemoyer (2020), Levine et al. (2019)). These approaches add information on senses or glosses to the training process, thereby making this additional knowledge available in all downstream usage.
Morphological approaches have also shown promising results in English, German (Cotterell and Schütze, 2018) and Hebrew (Avraham and Goldberg, 2017). Future research in that direction will show if these approaches can improve disambiguation of those ambiguities that contain relevant morphological cues (see Corbett (2007)).
Observations of morphological content of sentences, including grammatical number and gender, have shown that BERT achieves better results for morphologically simple languages, in comparison to morphologically more complex languages such as German or Russian (see Edmiston (2020)). This work shows that BERT is able to detect and disambiguate such morphological content based on contextual cues, but does not achieve human-like results.
Ambiguity in Grammatical Gender and Number
The present work focuses on ambiguity occurring in grammatical number and gender. Ambiguity in grammatical number occurs if a word w has a plural p that has a standing meaning of its own, one that does not merely express a quantity associated with the word. An example in English is given by the word glass; its plural glasses can either mean multiple items of the object named glass, or an object worn on one's face to improve eyesight. While both meanings of glasses appear to be etymologically related, they do mean different things, and one meaning is independent of the meaning of the singular word glass.
Ambiguity in grammatical gender is given when a word w is ambiguous and w can be assigned two or more different grammatical genders g and g', with the standing meaning dependent on the assigned gender. A German example is displayed in Table 1. This work deals with grammatical number ambiguity in English, German, and Spanish, and grammatical gender ambiguity in German and Spanish. The following sections handle language-specific details related to such ambiguities.
German
The German language contains both grammatical gender and grammatical number ambiguity.
Grammatical number ambiguity often occurs when two words w and p that look like they form a singular-plural pair have different meanings. In fact, the two words do not form such a pair, but are a case of a singulare tantum and a case of a plurale tantum, as can be seen in Table 2.
Table 2:
Number | Word | Meaning
Singular | Schuld | guilt
Plural | Schulden | debt
Here, w equals Schuld and p equals Schulden. The morphological -en ending is often added to German nouns to create a plural, making w and p appear as a singular-plural pair, but they are a pair of singulare tantum and plurale tantum, each with its particular meaning. German has three different grammatical genders: masculine, feminine, and neuter. When the grammatical gender of a word changes, so do its article and adjectives in the sentence, which are a good clue for gender detection, and therefore possibly also for meaning disambiguation. Grammatical gender ambiguity, as described in section 3, occurs when an ambiguous word w can be assigned two or more grammatical genders, and is assigned a different meaning for each. The last part is important, because some words can be used in more than one grammatical gender without having a different meaning, as shown in Table 3. An example of actual gender ambiguity in German is shown in Table 1.
Spanish
Just as in German, it is possible to observe both types of ambiguity defined above in section 3 in Spanish. Grammatical number ambiguity usually occurs when the plural p of a word w is ambiguous, and one meaning has no singular term, or one that is different from w. An example can be found in the word esposa, as shown in Table 4. For grammatical gender ambiguity, there are two grammatical genders in Spanish: masculine and feminine. Other than that, it is defined the same way as for German in section 3.1. An example is the word cólera, as shown in Table 5.
English
Since English is (for the most part) genderless, it is enough to state that the present work in English deals only with ambiguity in grammatical number. Number ambiguity in English usually occurs when the plural p of a word w is ambiguous, and one meaning has no singular term, or one that is different from w. Other cases - such as a plural p being able to be used as a singular term w, which can be assigned another plural term p' - have been observed in other works (e.g. Gromann and Declerck (2019)) with examples such as person for w, people for p, and peoples for p'. These types of occurrences, however, have not been included in this work.
An example of grammatical number ambiguity in English is described in section 3 with the terms glass and glasses.
Methodology
In the following sections, the methods used to compute and compare embedding vectors are described. Section 4.1 describes the data and the necessary information about it. Section 4.2 lists the pre-trained BERT models that have been used to retrieve results, which have then been evaluated using the methods described in section 4.3. The actual results and evaluation outcomes are described later in section 5.
Data
Wiktionary (Wikimedia, 2021b) is a free online dictionary, created by the Wikimedia Foundation (Wikimedia, 2021a), providing a range of XML data dumps. It contains information on over 170 languages, and provides detailed information on every word, including pronunciation, etymology, translations, and of course grammatical number and gender, as well as the corresponding meanings and example sentences, when any information of a type is available.
Our work uses three so-called Wiktionary dumps (one per language) from July 2021. Each dump has been parsed, taking individual differences between dumps into account, and information relevant to the task (title, possible senses, example sentences, possible plurals, grammatical number, grammatical gender) has been extracted for all noun entries. Some entries of the dump of one language are written for words in another language, e.g. there might be an entry for a Spanish word in the English dump. When this happened for one of the three languages used in this work, these entries have also been parsed and saved in additional files, so they could later be used as additional data for their languages.
Entries without the necessary information (e.g. missing example sentences for context) have been discarded, if there was no other way to retrieve the needed information from another dump.
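A minimal sketch of this extraction step is shown below, assuming the standard MediaWiki XML dump format and Python 3.8+ for the namespace wildcard; the noun-marker template and the helper names are illustrative, since the paper does not specify the parser it used:

```python
import bz2
import re
import xml.etree.ElementTree as ET

def iter_pages(dump_path):
    """Stream (title, wikitext) pairs from a MediaWiki XML dump.
    Works on plain or bz2-compressed dump files."""
    opener = bz2.open if dump_path.endswith(".bz2") else open
    with opener(dump_path, "rb") as handle:
        for _, elem in ET.iterparse(handle, events=("end",)):
            if elem.tag.endswith("page"):
                title = elem.findtext(".//{*}title") or ""
                wikitext = elem.findtext(".//{*}text") or ""
                yield title, wikitext
                elem.clear()  # free memory for pages that were already yielded

# Illustrative noun filter for the German dump; the exact wikitext templates
# matched by the paper's parser are not specified.
NOUN_MARKER = re.compile(r"\{\{Wortart\|Substantiv\|Deutsch\}\}")

def german_noun_entries(dump_path):
    return [(title, text) for title, text in iter_pages(dump_path)
            if NOUN_MARKER.search(text)]
```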
There are significant differences in the size of each dataset, which have been further adjusted to eliminate entries irrelevant for the tasks at hand, such as word types other than nouns, proper names, and entries without any of the observed ambiguities.
To gather additional example sentences for English entry data, NLTK WordNet (Princeton University, 2010) has been used. For any English noun without an example sentence, the word was looked up in WordNet and all available senses and example sentences have been added to the collected data.² An idea of how much data is available per language is given by the overview of iterations used when computing word embeddings (see Table 12 in the appendix). For each language, only a few examples are used for presentation purposes in this paper. Unfortunately, only a small amount of data is available for Spanish, which allows only a small number of comparisons. Overall, there was not enough Spanish data to compute embedding-vector distances, which is why this part is not included in the results in section 5.

² Some example sentences in WordNet contain a synonym of a word instead of the word that is needed. In these cases, the synonym has been replaced by the word in question. This may lead to grammatical errors within the sentence, which have been ignored.
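A small sketch of the WordNet lookup described above, using NLTK's WordNet interface; restricting the lookup to noun senses is an assumption about the exact procedure:

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch the WordNet data once

def wordnet_examples(noun):
    """Return (sense definition, example sentence) pairs for the noun senses of `noun`."""
    pairs = []
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for example in synset.examples():
            pairs.append((synset.definition(), example))
    return pairs

# e.g. wordnet_examples("glass") returns example sentences for the container and
# the material senses; "glasses" (spectacles) is looked up as its own entry.
```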
BERT
BERT (Devlin et al., 2018) is a bidirectional language model and by now a commonly used tool for NLP. The goal is to test its ability to disambiguate with regard to grammatical gender and number, as a representative of current methods, by computing word embeddings of ambiguous words in specific contexts and with specific meanings. Also, the many pre-existing models available for BERT eliminate the need to train any models for the purposes of our work.
Four pre-trained models have been used here:
• BERT base uncased (Devlin et al., 2018)
• BERT base multilingual uncased (Devlin et al., 2018)
• BERT base German uncased³
• BERT base Spanish wwm uncased (Cañete et al., 2020)
For each of the three languages there is thus one language-specific pre-trained model; the multilingual model includes all three languages, among others.
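The paper does not state which layer or subword pooling is used when a word embedding is extracted from these models. The sketch below assumes the Hugging Face `transformers` interface, takes the final hidden layer, and averages over the target word's subword pieces:

```python
import torch
from transformers import AutoModel, AutoTokenizer

def word_embedding(model_name, sentence, target_word):
    """Contextual embedding of `target_word` inside `sentence`.
    The pooling choice (mean of the last hidden layer over the word's subword
    pieces) is an assumption; the paper does not specify layer or pooling."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    encoding = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**encoding).last_hidden_state[0]          # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0])
    pieces = tokenizer.tokenize(target_word)
    # Crude position lookup: keep every token that is one of the target's pieces.
    positions = [i for i, tok in enumerate(tokens) if tok in pieces]
    return hidden[positions].mean(dim=0)

# Usage sketch with the German model referenced in footnote 3:
# v1 = word_embedding("dbmdz/bert-base-german-uncased", "die band spielt laut", "band")
# v2 = word_embedding("dbmdz/bert-base-german-uncased", "das band ist zerrissen", "band")
```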
Evaluation Methods
To compare the word embedding vectors created by BERT, three methods for computing distances in vector space have been used: cosine, Euclidean, and Manhattan distance. Cosine distance is derived from the angle between the two vectors. If the cosine distance equals 0, the vectors point in the same direction. The higher the distance value, the more different the vectors are from each other. Under the assumption that words with the same meaning have the same or similar embedding vectors, while words with distinct meanings do not, this means that if BERT can properly disambiguate words in this task, the cosine distance should be rather large for ambiguous words with different meanings. If, however, the cosine distance is small and close to 0, this would mean that BERT does not properly disambiguate the word in its multiple meanings.
Given that computing distance by angle in a multi-dimensional vector space might be imprecise in some cases, additional methods have been used. Euclidean distance is the shortest (straight-line) distance from one point in vector space to another, regardless of dimensionality. As with cosine distance, a value of 0 means the embedding vectors appear to be identical, whereas larger values indicate the opposite.
A third method used is Manhattan distance, which is computed as the sum of the absolute differences of the vector components, i.e., the sum of the side lengths of the axis-aligned rectangle spanned by the two points in vector space. The resulting values are to be interpreted in the same way as the cosine and Euclidean distances.
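A small sketch of the three distance computations, here using SciPy; it assumes the two embeddings come from an extraction step such as the one sketched in section 4.2:

```python
import numpy as np
from scipy.spatial.distance import cityblock, cosine, euclidean

def embedding_distances(v1, v2):
    """All three distance measures between two contextual word embeddings."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return {
        "cosine": cosine(v1, v2),        # 1 - cos(angle between the vectors)
        "euclidean": euclidean(v1, v2),  # straight-line distance
        "manhattan": cityblock(v1, v2),  # sum of absolute coordinate differences
    }

# Identical vectors give 0 under all three measures; larger values mean the two
# contextual uses of the word are represented more differently.
```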
Results
Distances are compared between ambiguity types: comparing the results for number ambiguity to those for gender ambiguity shows which of the two is processed better. Comparing either to 'classic' ambiguity, without a change in number or gender, shows whether there is any improvement due to the additional contextual cues. Distances between two different, non-ambiguous words are treated as a baseline: when ambiguous words reach a distance similar to that between non-ambiguous words, the disambiguation process was successful. These results are presented in section 5.1.
Differences occurring due to the use of different models (multilingual or language specific) show the relevance of being specific to one language. These comparisons can be found in section 5.2.
Due to a huge lack of data, especially in example sentences that would be required to provide context, no distances could be computed on Spanish data. Computed data on Spanish ambiguity in grammatical gender only contained words where the sense stays the same with either possibility of gender. Computed data on Spanish ambiguity in grammatical number only contained words in singular, but none in plural. Therefore, this language is left out in further comparisons. For all other languages, the words and senses that have been used for comparisons in this paper can be found in Table 13 in the appendix.
Cosine distances have been left out of the result tables, as most were computed to be 0.0. Those with other values showed tendencies similar to the Euclidean and Manhattan distances, which are easier to compare.
Sg           Pl            Euc        Man
kitchening   kitchenings   1.54e+75   7.68e+75
wood         woods         0.321458   8.897009
gen          gens          1.55e+75   7.74e+75
W            W'
kitchening   wood          1.54e+75   7.69e+75
kitchening   gen           1.54e+75   7.69e+75
wood         gen           0.0        8.550618
Number and Gender Ambiguity
Cosine, Euclidean, and Manhattan distance have been computed for word pairs on grammatical gender for German and on grammatical number for English and German. Comparisons of individual words out of those pairs, which do not stand in such a relation of grammatical gender or number (respectively), are used as a baseline. For ambiguity in grammatical number in English, results are found in Tables 6 and 7 for the mono- and multilingual models, respectively. Results regarding the differences between the two models can be found in section 5.2. The distances are similar for ambiguous and non-ambiguous word pairs.
For ambiguity in grammatical gender in German, results are found in Tables 8 and 9 for the mono- and multilingual models, respectively. The distances are similar for ambiguous and non-ambiguous word pairs; however, the ambiguous word pairs achieve slightly larger distances in some cases.
For ambiguity in grammatical number in German, results are found in Tables 10 and 11 for the mono- and multilingual models, respectively. Just like for ambiguity in grammatical gender, results for ambiguous and non-ambiguous word pairs are similar, but slightly larger distances have been computed for non-ambiguous word pairs, with the exception of the word pair Schuld and Schulden, which overall achieved rather high distance values.
Monolingual and Multilingual Models
For ambiguity in grammatical number in English, results can be found in Table 6 and Table 7 for the mono- and multilingual models, respectively. Overall, the monolingual model was able to disambiguate better than the multilingual model, resulting in greater distances for both distance measures. Distances are greater in the monolingual model for the ambiguous as well as the non-ambiguous case.
For ambiguity in grammatical gender in German, results can be found in Table 8 and Table 9 for the mono- and multilingual models, respectively. Overall, the models achieved very similar results. Both models perform slightly better on ambiguous than on non-ambiguous data. For ambiguity in grammatical number in German, results can be found in Table 10 and 11 for the mono- and multilingual models, respectively. Overall, the monolingual model was able to disambiguate better than the multilingual model, resulting in greater distances for both distance measures. Distances are slightly greater in the monolingual model for the ambiguous case, but much greater for the non-ambiguous case.
Discussion
Given the results in section 5.1, BERT resolves grammatical number ambiguity in English about as well as it separates non-ambiguous word pairs. The same holds for grammatical gender ambiguity in German. Grammatical number ambiguity in German could also be disambiguated well overall, but not as well as the same type of data in English. A reason for this outcome might be the larger amount of data available for computing word embeddings, or possible differences in the amount of data per language during training of the individual models.
Ambiguity in grammatical number could be disambiguated better than ambiguity in grammatical gender, possibly due to additional morphological cues within the word itself, or in some cases a more notable difference within the provided context.
Given the results in section 5.2, the monolingual model bert-base-uncased outperformed the multilingual model bert-base-multilingual on English data. Concerning ambiguity in grammatical gender in German, both models performed similarly well on both ambiguous and non-ambiguous data. In the case of ambiguity in grammatical number in German, the monolingual model outperformed the multilingual model, achieving slightly greater distances for ambiguous data and much greater distances for non-ambiguous data. Overall, the multilingual model performed better on German data, especially on grammatical gender ambiguity, while it performed similarly well on grammatical number ambiguity for both languages. The English monolingual model achieved much larger distances overall in comparison to the German monolingual model.
Using monolingual models is rewarded with better disambiguation, showing that the time invested in creating such models is well spent; the multilingual counterpart does well, but not nearly as well. The multilingual model is, however, a good way to include minority languages for which there is not enough data to train individual models. For languages like English and German, monolingual models appear to be the way to go, at least for disambiguation tasks.
Conclusion
This work shows that basic BERT models for English are better adapted to specific types of ambiguity than those for other languages like German, as indicated by the greater distances reached under the given evaluation methods. However, the multilingual model performed better on German data than on English data, showing that a change of focus can improve results on languages with less data, although it still does not perform as well as language-specific models.
This work also shows a need for well-trained monolingual models, which appear better suited to disambiguation tasks. Furthermore, the amount of available data, such as that provided by Wiktionary (Wikimedia, 2021b), needs to grow.
Interesting extensions of this work could come from approaches such as SenseBERT (Levine et al., 2019), a BERT model that is trained on WordNet (Princeton University, 2010) supersenses, which might achieve better disambiguation results and aligns with the findings of Blevins and Zettlemoyer (2020). Switching from the word level to the morphological level could also yield interesting results (Avraham and Goldberg (2017), Cotterell and Schütze (2018)).
Table 2: Different meanings depending on the grammatical number in German.
Table 3: Differences in gender for the German word Paprika without a change in meaning.
Table 4: Change in meaning depending on grammatical number in Spanish.
Table 5: Change in meaning depending on grammatical gender in Spanish.
Table 6: Results for ambiguity in grammatical number in English in comparison to words without relation of number (monolingual model). Sg=Singular, Pl=Plural, Euc=Euclidean Distance, Man=Manhattan Distance, W=Word

Sg           Pl            Euc        Man
kitchening   kitchenings   0.213219   5.906484
wood         woods         0.072258   2.002483
gen          gens          1.51e+75   7.54e+75
W            W'
kitchening   wood          0.098116   2.719078
kitchening   gen           0.106438   2.949704
wood         gen           0.008322   0.230625

Table 7: Results for ambiguity in grammatical number in English in comparison to words without relation of number (multilingual model). Sg=Singular, Pl=Plural, Euc=Euclidean Distance, Man=Manhattan Distance, W=Word
W        G    G'   Euc        Man
Band     m    n    0.209174   5.799105
Gehalt   n    m    0.086309   2.403371
Leiter   f    m    0.194705   5.395817
W        W'        G    Euc        Man
Band     Leiter    m    0.219066   3.717639
Band     Gehalt    n    0.160602   4.445368
Leiter   Gehalt    m    0.144773   4.006747

Table 8: Results for ambiguity in grammatical gender in German in comparison to words without relation of gender (monolingual model). W=Word, G=Gender, Euc=Euclidean Distance, Man=Manhattan Distance

Table 9: Results for ambiguity in grammatical gender in German in comparison to words without relation of gender (multilingual model). W=Word, G=Gender, Euc=Euclidean Distance, Man=Manhattan Distance
Sg       Pl         Euc        Man
Schuld   Schulden   0.124265   3.444058
Sasse    Sassen     0.093512   2.590606
Barre    Barren     0.0        1.315709
W        W'         Euc        Man
Schuld   Sasse      0.173317   4.812258
Schuld   Barre      1.48e+75   7.27e+75
Sasse    Barre      1.48e+75   7.27e+75

Table 10: Results for ambiguity in grammatical number in German in comparison to words without relation of number (monolingual model). Sg=Singular, Pl=Plural, Euc=Euclidean Distance, Man=Manhattan Distance
Sg       Pl         Euc        Man
Schuld   Schulden   0.066630   1.846514
Sasse    Sassen     0.167143   4.638568
Barre    Barren     0.017129   0.474700
W        W'         Euc        Man
Schuld   Sasse      0.103715   2.899168
Schuld   Barre      0.034782   0.963905
Sasse    Barre      0.138497   3.853023

Table 11: Results for ambiguity in grammatical number in German in comparison to words without relation of number (multilingual model). Sg=Singular, Pl=Plural, Euc=Euclidean Distance, Man=Manhattan Distance
Definitions taken from Wiktionary: https://en.wiktionary.org/wiki/bank
https://huggingface.co/dbmdz/bert-base-german-uncased
Acknowledgements
This work has been partly supported by the Horizon 2020 research and innovation programme with the project Prêt-à-LLOD (grant agreement no. 825182) at DFKI Saarbrücken. I thank Thierry Declerck, Josef van Genabith, and Lucia Donatelli for their helpful advice.
Oded Avraham and Yoav Goldberg. 2017. The interplay of semantics and morphology in word embeddings. CoRR, abs/1704.01938.
Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating BERT's gender bias. CoRR, abs/2010.14534.
Terra Blevins and Luke Zettlemoyer. 2020. Moving down the long tail of word sense disambiguation with gloss-informed biencoders. CoRR, abs/2005.02590.
José Cañete, Gabriel Chaperon, Rodrigo Fuentes, Jou-Hui Ho, Hojin Kang, and Jorge Pérez. 2020. Spanish pre-trained BERT model and evaluation data. PML4DC at ICLR 2020.
Greville G. Corbett. 2007. Canonical typology, suppletion, and possible words. Language, 83(1):8-42.
Ryan Cotterell and Hinrich Schütze. 2018. Joint semantic synthesis and morphological analysis of the derived word. Trans. Assoc. Comput. Linguistics, 6:33-48.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Daniel Edmiston. 2020. A systematic analysis of morphological content in BERT models for multiple languages. CoRR, abs/2004.03032.
Dagmar Gromann and Thierry Declerck. 2019. Towards the detection and formal representation of semantic shifts in inflectional morphology. In 2nd Conference on Language, Data and Knowledge (LDK 2019), volume 70 of OpenAccess Series in Informatics (OASIcs), pages 21:1-21:15, Dagstuhl, Germany. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
Yoav Levine, Barak Lenz, Or Dagan, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2019. SenseBERT: Driving some sense into BERT. CoRR, abs/1908.05646.
Tomás Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on EMNLP, pages 1532-1543.
Princeton University. 2010. About WordNet. https://wordnet.princeton.edu/.
Wikimedia. 2021a. Wikimedia Foundation. https://wikimediafoundation.org/.
Wikimedia. 2021b. Wiktionary. https://www.wiktionary.org/. |
||
892,757 | The RWTH Phrase-based Statistical Machine Translation System Human Language Technology and Pattern Recognition | We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation 2005.We use a two pass approach. In the first pass, we generate a list of the N best translation candidates. The second pass consists of rescoring and reranking this N -best list. We will give a description of the search algorithm as well as the models that are used in each pass.We participated in the supplied data tracks for manual transcriptions for the following translation directions: Arabic-English, Chinese-English, English-Chinese and Japanese-English. For Japanese-English, we also participated in the C-Star track. In addition, we performed translations of automatic speech recognition output for Chinese-English and Japanese-English. For both language pairs, we translated the single-best ASR hypotheses. Additionally, we translated Chinese ASR lattices. | [
14386564,
1659910,
8252230,
3151217,
1854610,
284436,
1559412,
7164502,
1435098,
5219389
] | The RWTH Phrase-based Statistical Machine Translation System Human Language Technology and Pattern Recognition
Richard Zens, Oliver Bender, Saša Hasan, Shahram Khadivi, Evgeny Matusov, Jia Xu, Yuqi Zhang, Hermann Ney
Lehrstuhl für Informatik VI, Computer Science Department, RWTH Aachen University, D-52056 Aachen, Germany
The RWTH Phrase-based Statistical Machine Translation System
Human Language Technology and Pattern Recognition
We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation 2005. We use a two pass approach. In the first pass, we generate a list of the N best translation candidates. The second pass consists of rescoring and reranking this N-best list. We will give a description of the search algorithm as well as the models that are used in each pass. We participated in the supplied data tracks for manual transcriptions for the following translation directions: Arabic-English, Chinese-English, English-Chinese and Japanese-English. For Japanese-English, we also participated in the C-Star track. In addition, we performed translations of automatic speech recognition output for Chinese-English and Japanese-English. For both language pairs, we translated the single-best ASR hypotheses. Additionally, we translated Chinese ASR lattices.
Introduction
We give an overview of the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2005.
We use a two pass approach. First, we generate a word graph and extract a list of the N best translation candidates. Then, we apply additional models in a rescoring/reranking approach.
This work is structured as follows: first, we will review the statistical approach to machine translation and introduce the notation that we will use in the later sections. Then, we will describe the models and algorithms that are used for generating the N -best lists, i.e., the first pass. In Section 4, we will describe the models that are used to rescore and rerank this N -best list, i.e., the second pass. Afterward, we will give an overview of the tasks and discuss the experimental results.
Source-channel approach to SMT
In statistical machine translation, we are given a source language sentence $f_1^J = f_1 \ldots f_j \ldots f_J$, which is to be translated into a target language sentence $e_1^I = e_1 \ldots e_i \ldots e_I$. Among all possible target language sentences, we will choose the sentence with the highest probability:

$$\hat{e}_1^{\hat{I}} = \operatorname*{argmax}_{I, e_1^I} \; Pr(e_1^I \mid f_1^J) \qquad (1)$$
$$\phantom{\hat{e}_1^{\hat{I}}} = \operatorname*{argmax}_{I, e_1^I} \; \left\{ Pr(e_1^I) \cdot Pr(f_1^J \mid e_1^I) \right\} \qquad (2)$$

This decomposition into two knowledge sources is known as the source-channel approach to statistical machine translation [1]. It allows an independent modeling of the target language model $Pr(e_1^I)$ and the translation model $Pr(f_1^J \mid e_1^I)$.¹ The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. The argmax operation denotes the search problem, i.e., the generation of the output sentence in the target language.
Log-linear model
An alternative to the classical source-channel approach is the direct modeling of the posterior probability $Pr(e_1^I \mid f_1^J)$. Using a log-linear model [2], we obtain:

$$Pr(e_1^I \mid f_1^J) = \frac{\exp\left(\sum_{m=1}^{M} \lambda_m h_m(e_1^I, f_1^J)\right)}{\sum_{e'^{I'}_1} \exp\left(\sum_{m=1}^{M} \lambda_m h_m(e'^{I'}_1, f_1^J)\right)} \qquad (3)$$

The denominator represents a normalization factor that depends only on the source sentence $f_1^J$. Therefore, we can omit it during the search process. As a decision rule, we obtain:

$$\hat{e}_1^{\hat{I}} = \operatorname*{argmax}_{I, e_1^I} \; \sum_{m=1}^{M} \lambda_m h_m(e_1^I, f_1^J) \qquad (4)$$
This approach is a generalization of the source-channel approach. It has the advantage that additional models $h(\cdot)$ can be easily integrated into the overall system. The model scaling factors $\lambda_1^M$ are trained according to the maximum entropy principle, e.g., using the GIS algorithm. Alternatively, one can train them with respect to the final translation quality measured by an error criterion [3]. For the IWSLT evaluation campaign, we optimized the scaling factors with respect to a linear interpolation of WER, PER, BLEU and NIST using the Downhill Simplex algorithm from [4].
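A minimal sketch of the decision rule in Equation 4 and of Downhill-Simplex weight tuning, not the actual RWTH implementation; the `dev_error` callback standing in for the interpolated WER/PER/BLEU/NIST criterion is a placeholder:

```python
import numpy as np
from scipy.optimize import minimize

def loglinear_score(lambdas, features):
    """Score of one translation candidate: sum_m lambda_m * h_m (Equation 4)."""
    return float(np.dot(lambdas, features))

def best_hypothesis(lambdas, nbest_features):
    """Index of the highest-scoring candidate in an N-best list of feature vectors."""
    return int(np.argmax([loglinear_score(lambdas, h) for h in nbest_features]))

def tune_weights(initial_lambdas, dev_error):
    """Downhill Simplex (Nelder-Mead) tuning of the scaling factors.
    `dev_error(lambdas)` must return the development-set error criterion,
    e.g. a linear interpolation of WER, PER, BLEU and NIST (placeholder)."""
    result = minimize(dev_error, np.asarray(initial_lambdas, dtype=float),
                      method="Nelder-Mead")
    return result.x
```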
Phrase-based approach
The basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations. This idea is illustrated in Figure 1. Formally, we define a segmentation of a given sentence pair (f J 1 , e I 1 ) into K blocks:
$$k \rightarrow s_k := (i_k;\, b_k, j_k), \quad \text{for } k = 1 \ldots K \qquad (5)$$

Here, $i_k$ denotes the last position of the $k$-th target phrase; we set $i_0 := 0$. The pair $(b_k, j_k)$ denotes the start and end positions of the source phrase that is aligned to the $k$-th target phrase; we set $j_0 := 0$. Phrases are defined as nonempty contiguous sequences of words. We constrain the segmentations so that all words in the source and the target sentence are covered by exactly one phrase. Thus, there are no gaps and there is no overlap. For a given sentence pair $(f_1^J, e_1^I)$ and a given segmentation $s_1^K$, we define the bilingual phrases as:

$$\tilde{e}_k := e_{i_{k-1}+1} \ldots e_{i_k} \qquad (6)$$
$$\tilde{f}_k := f_{b_k} \ldots f_{j_k} \qquad (7)$$

Note that the segmentation $s_1^K$ contains the information on the phrase-level reordering. The segmentation $s_1^K$ is introduced as a hidden variable in the translation model. Therefore, it would be theoretically correct to sum over all possible segmentations. In practice, we use the maximum approximation for this sum. As a result, the models $h(\cdot)$ depend not only on the sentence pair $(f_1^J, e_1^I)$, but also on the segmentation $s_1^K$, i.e., we have models $h(f_1^J, e_1^I, s_1^K)$.
Search algorithms
The RWTH phrase-based system supports two alternative search strategies that will be described in this section. Translating a source language word graph. The first search strategy that our system supports takes a source language word graph as input and translates this graph in a monotone way [5]. The input graph can represent different reorderings of the input sentence so that the overall search can generate nonmonotone translations. Using this approach, it is very simple to experiment with various reordering constraints, e.g., the constraints proposed in [6].
Alternatively, we can use ASR lattices as input and translate them without changing the search algorithm, cf. [7]. A disadvantage when translating lattices with this method is that the search is monotone. To overcome this problem, we extended the monotone search algorithm from [5,7] so that it is possible to reorder the target phrases. We implemented the following idea: while traversing the input graph, a phrase can be skipped and processed later.
Source cardinality synchronous search. For single-word based models, this search strategy is described in [8]. The idea is that the search proceeds synchronously with the cardinality of the already translated source positions. Here, we use a phrase-based version of this idea. To make the search problem feasible, the reorderings are constrained as in [9].
Word graphs and N-best lists. The two described search algorithms generate a word graph containing the most likely translation hypotheses. Out of this word graph we extract N-best lists. For more details on word graphs and N-best list extraction, see [10, 11].
Models used during search
We use a log-linear combination of several models (also called feature functions). In this section, we will describe the models that are used in the first pass, i.e., during search. This is an improved version of the system described in [12]. More specifically, the models are: a phrase translation model, a word-based translation model, a deletion model, word and phrase penalties, a target language model and a reordering model.
Phrase-based model
The phrase-based translation model is the main component of our translation system. The hypotheses are generated by concatenating target language phrases. The pairs of source and corresponding target phrases are extracted from the wordaligned bilingual training corpus. The phrase extraction algorithm is described in detail in [5]. The main idea is to extract phrase pairs that are consistent with the word alignment. Thus, the words of the source phrase are aligned only to words in the target phrase and vice versa. This criterion is identical to the alignment template criterion described in [13].
We use relative frequencies to estimate the phrase translation probabilities:
$$p(\tilde{f} \mid \tilde{e}) = \frac{N(\tilde{f}, \tilde{e})}{N(\tilde{e})} \qquad (8)$$

Here, the number of co-occurrences of a phrase pair $(\tilde{f}, \tilde{e})$ that are consistent with the word alignment is denoted as $N(\tilde{f}, \tilde{e})$. If one occurrence of a target phrase $\tilde{e}$ has $N > 1$ possible translations, each of them contributes to $N(\tilde{f}, \tilde{e})$ with $1/N$. The marginal count $N(\tilde{e})$ is the number of occurrences of the target phrase $\tilde{e}$ in the training corpus. The resulting feature function is:

$$h_{\text{Phr}}(f_1^J, e_1^I, s_1^K) = \log \prod_{k=1}^{K} p(\tilde{f}_k \mid \tilde{e}_k) \qquad (9)$$
To obtain a more symmetric model, we use the phrase-based model in both directions, $p(\tilde{f} \mid \tilde{e})$ and $p(\tilde{e} \mid \tilde{f})$.
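A sketch of the relative-frequency estimation of Equation 8, in both directions, from a list of extracted phrase pairs; the fractional counts for ambiguous extractions mentioned above are omitted for brevity:

```python
from collections import Counter

def phrase_translation_tables(phrase_pairs):
    """Relative-frequency phrase translation probabilities (Equation 8) in both
    directions, from extracted (source_phrase, target_phrase) pairs."""
    pair_count = Counter(phrase_pairs)
    src_count = Counter(src for src, _ in phrase_pairs)
    tgt_count = Counter(tgt for _, tgt in phrase_pairs)
    p_f_given_e = {(f, e): c / tgt_count[e] for (f, e), c in pair_count.items()}
    p_e_given_f = {(f, e): c / src_count[f] for (f, e), c in pair_count.items()}
    return p_f_given_e, p_e_given_f

# pairs = [("das haus", "the house"), ("das haus", "the building"), ("haus", "house")]
# p_fe, p_ef = phrase_translation_tables(pairs)  # p_ef[("das haus", "the house")] == 0.5
```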
Word-based lexicon model
We use relative frequencies to estimate the phrase translation probabilities. Most of the longer phrases occur only once in the training corpus. Therefore, pure relative frequencies overestimate the probability of those phrases. To overcome this problem, we use a word-based lexicon model to smooth the phrase translation probabilities. The score of a phrase pair is computed similarly to IBM model 1, but here, we are summing only within a phrase pair and not over the whole target language sentence:
$$h_{\text{Lex}}(f_1^J, e_1^I, s_1^K) = \log \prod_{k=1}^{K} \prod_{j=b_k}^{j_k} \sum_{i=i_{k-1}+1}^{i_k} p(f_j \mid e_i) \qquad (10)$$
The word translation probabilities p(f |e) are estimated as relative frequencies from the word-aligned training corpus. The word-based lexicon model is also used in both directions p(f |e) and p(e|f ).
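A sketch of the inner part of Equation 10 for a single phrase pair; the `lex` dictionary of word translation probabilities and the floor value for unseen pairs are assumptions of this illustration:

```python
import math

def lexical_phrase_score(src_phrase, tgt_phrase, lex, floor=1e-10):
    """Word-based lexicon score of one phrase pair (inner part of Equation 10):
    for each source word, sum p(f_j | e_i) over the target words of the phrase,
    then combine the source positions in the log domain.
    `lex` maps (f, e) -> p(f | e); unseen pairs fall back to a small floor."""
    score = 0.0
    for f in src_phrase:
        total = sum(lex.get((f, e), 0.0) for e in tgt_phrase)
        score += math.log(max(total, floor))
    return score

# h_Lex of a whole segmentation is the sum of this score over all phrase pairs.
```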
Deletion model
The deletion model [14] is designed to penalize hypotheses that miss the translation of a word. For each source word, we check if a target word with a probability higher than a given threshold τ exists. If not, this word is considered a deletion. The feature simply counts the number of deletions. Last year [15], we used this model during rescoring only, whereas this year, we integrated a within-phrase variant of the deletion model into the search:
$$h_{\text{Del}}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} \sum_{j=b_k}^{j_k} \prod_{i=i_{k-1}+1}^{i_k} \left[\, p(f_j \mid e_i) < \tau \,\right] \qquad (11)$$
The word translation probabilities p(f |e) are the same as for the word-based lexicon model. We use [·] to denote a true or false statement [16], i.e., the result is 1 if the statement is true, and 0 otherwise. In general, we use the following convention:
$$[\, C \,] = \begin{cases} 1, & \text{if condition } C \text{ is true} \\ 0, & \text{if condition } C \text{ is false} \end{cases} \qquad (12)$$
Word and phrase penalty model
In addition, we use two simple heuristics, namely word penalty and phrase penalty:
h WP (f J 1 , e I 1 , s K 1 ) = I (13) h PP (f J 1 , e I 1 , s K 1 ) = K(14)
These two models affect the average sentence and phrase lengths. The model scaling factors can be adjusted to prefer longer sentences and longer phrases.
Target language model
We use the SRI language modeling toolkit [17] to train a standard n-gram language model. The smoothing technique we apply is the modified Kneser-Ney discounting with interpolation. The order of the language model depends on the translation direction. For most tasks, we use a trigram model, except for Chinese-English, where we use a five-gram language model. The resulting feature function is:

$$h_{\text{LM}}(f_1^J, e_1^I, s_1^K) = \log \prod_{i=1}^{I} p(e_i \mid e_{i-n+1}^{i-1}) \qquad (15)$$
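A toy illustration of the feature in Equation 15 with a dictionary-based n-gram lookup and a crude backoff; it is a stand-in, not the modified Kneser-Ney model trained with the SRI toolkit:

```python
import math

def lm_log_prob(sentence, ngram_prob, order=3, unk=1e-7):
    """Target language model feature of Equation 15 for a tokenized sentence.
    `ngram_prob` maps tuples such as (e_{i-2}, e_{i-1}, e_i) -> probability;
    shorter tuples serve as a crude backoff (an assumption of this sketch)."""
    padded = ["<s>"] * (order - 1) + list(sentence) + ["</s>"]
    logp = 0.0
    for i in range(order - 1, len(padded)):
        for n in range(order, 0, -1):            # back off to shorter histories
            key = tuple(padded[i - n + 1:i + 1])
            if key in ngram_prob:
                logp += math.log(ngram_prob[key])
                break
        else:
            logp += math.log(unk)                # completely unseen word
    return logp
```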
Reordering model
We use a very simple reordering model that is also used in, for instance, [13,15]. It assigns costs based on the jump width:
$$h_{\text{RM}}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} |b_k - j_{k-1} - 1| \;+\; J - j_K \qquad (16)$$
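A small sketch of the jump-width cost in Equation 16, computed from the source spans of a segmentation; the representation of spans as 1-based inclusive tuples is an assumption of this illustration:

```python
def reordering_cost(source_spans, source_length):
    """Jump-width reordering cost of Equation 16. `source_spans` lists the
    (b_k, j_k) source spans (1-based, inclusive) in target production order."""
    cost = 0
    prev_end = 0                       # j_0 := 0
    for begin, end in source_spans:
        cost += abs(begin - prev_end - 1)
        prev_end = end
    cost += source_length - prev_end   # final jump to the sentence end
    return cost

# A monotone segmentation covering the whole sentence has cost 0:
# reordering_cost([(1, 2), (3, 5), (6, 6)], 6) == 0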
Rescoring models
The usage of N -best lists in machine translation has several advantages. It alleviates the effects of the huge search space which is represented in word graphs by using a compact excerpt of the N best hypotheses generated by the system. Especially for small tasks, such as the IWSLT supplied data track, rather small N -best lists are already sufficient to obtain good oracle error rates, i.e., the error rate of the best hypothesis with respect to an error measure (such as WER or BLEU). N -best lists are suitable for easily applying several rescoring techniques because the hypotheses are already fully generated. In comparison, word graph rescoring techniques need specialized tools which traverse the graph appropriately. Additionally, because a node within a word graph allows for many histories, one can only apply local rescoring techniques, whereas for N -best lists, techniques can be used that consider properties of the whole target sentence.
In the next sections, we will present several rescoring techniques.
Clustered language models
One of the first ideas in rescoring is to use additional language models that were not used in the generation procedure. In our system, we use clustered language models based on regular expressions [18]. Each hypothesis is classified by matching it to regular expressions that identify the type of the sentence. Then, a cluster-specific (or sentence-type-specific) language model is interpolated into a global language model to compute the score of the sentence:
$$h_{\text{CLM}}(f_1^J, e_1^I) = \log \sum_{c} R_c(e_1^I) \left( \alpha_c \, p_c(e_1^I) + (1 - \alpha_c) \, p_g(e_1^I) \right) \qquad (17)$$

where $p_g(e_1^I)$ is the global language model, $p_c(e_1^I)$ the cluster-specific language model, and $R_c(e_1^I)$ denotes the true-or-false statement (cf. Equation 12) which is 1 if the $c$-th regular expression $R_c(\cdot)$ matches the target sentence $e_1^I$ and 0 otherwise.²

² The clusters are disjunct, thus only one regular expression matches.
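A sketch of Equation 17 with the regex-based cluster selection; the example pattern, the interpolation weights and the language-model callables are illustrative placeholders:

```python
import math
import re

def clustered_lm_score(sentence, clusters, global_lm):
    """Cluster LM feature of Equation 17. `clusters` is a list of
    (compiled_regex, alpha, cluster_lm) triples with disjoint patterns;
    `global_lm` and each `cluster_lm` return a sentence probability."""
    p_global = global_lm(sentence)
    for pattern, alpha, cluster_lm in clusters:
        if pattern.search(sentence):
            return math.log(alpha * cluster_lm(sentence) + (1.0 - alpha) * p_global)
    return math.log(p_global)   # no sentence type matched: fall back to the global LM

# Illustrative cluster definition: treat questions as one sentence type.
# clusters = [(re.compile(r"\?\s*$"), 0.5, question_lm)]
```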
IBM model 1
IBM model 1 rescoring rates the quality of a sentence by using the probabilities of one of the easiest single-word based translation models:
$$h_{\text{IBM1}}(f_1^J, e_1^I) = \log \frac{1}{(I+1)^J} \prod_{j=1}^{J} \sum_{i=0}^{I} p(f_j \mid e_i) \qquad (18)$$
Despite its simplicity, this model achieves good improvements [14].
IBM1 deletion model
During the IBM model 1 rescoring step, we make use of another rescoring technique that benefits from the IBM model 1 lexical probabilities:
$$h_{\text{Del}}(f_1^J, e_1^I) = \sum_{j=1}^{J} \prod_{i=0}^{I} \left[\, p(f_j \mid e_i) < \tau \,\right] \qquad (19)$$
We call this the IBM1 deletion model. It counts all source words whose lexical probability given each target word is below a threshold $\tau$. In the experiments, $\tau$ was chosen between $10^{-1}$ and $10^{-4}$.
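A sketch of the two rescoring features in Equations 18 and 19 given an IBM model 1 lexicon table; the NULL word handling and the floor value are assumptions of this illustration:

```python
import math

def ibm1_score(src_words, tgt_words, lex, floor=1e-10):
    """IBM model 1 rescoring feature (Equation 18); the target side is extended
    by the empty word NULL at position i = 0. `lex` maps (f, e) -> p(f | e)."""
    targets = ["NULL"] + list(tgt_words)
    score = -len(src_words) * math.log(len(targets))     # the 1/(I+1)^J term
    for f in src_words:
        total = sum(lex.get((f, e), 0.0) for e in targets)
        score += math.log(max(total, floor))
    return score

def ibm1_deletions(src_words, tgt_words, lex, tau=1e-2):
    """IBM1 deletion feature (Equation 19): how many source words have no target
    word (including NULL) with lexical probability of at least tau."""
    targets = ["NULL"] + list(tgt_words)
    return sum(1 for f in src_words
               if all(lex.get((f, e), 0.0) < tau for e in targets))
```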
Hidden Markov alignment model
The next step after IBM model 1 rescoring is HMM rescoring. We use the HMM to compute the log-likelihood of a sentence pair $(f_1^J, e_1^I)$:

$$h_{\text{HMM}}(f_1^J, e_1^I) = \log \sum_{a_1^J} \prod_{j=1}^{J} p(a_j \mid a_{j-1}, I) \cdot p(f_j \mid e_{a_j}) \qquad (20)$$

In our experiments, we use a refined alignment probability $p(a_j - a_{j-1} \mid G(e_{a_j}), I)$ that conditions the jump widths of the alignment positions $a_j - a_{j-1}$ on the word class $G(e_{a_j})$. This is the so-called homogeneous HMM [19].
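A forward-algorithm sketch of Equation 20; a simple jump-width transition distribution and a uniform start distribution stand in for the word-class-conditioned probabilities of the homogeneous HMM, which are not reproduced here:

```python
import math

def hmm_log_likelihood(src_words, tgt_words, lex, jump_prob, floor=1e-12):
    """HMM rescoring feature (Equation 20) via the forward algorithm.
    `lex` maps (f, e) -> p(f | e); `jump_prob(d, I)` returns p(a_j - a_{j-1} = d | I).
    Word-class conditioning is omitted and the first source word starts from a
    uniform distribution (assumptions of this sketch)."""
    I = len(tgt_words)
    forward = [lex.get((src_words[0], e), floor) / I for e in tgt_words]
    for f in src_words[1:]:
        new_forward = []
        for i, e in enumerate(tgt_words):
            trans = sum(forward[prev] * jump_prob(i - prev, I) for prev in range(I))
            new_forward.append(trans * lex.get((f, e), floor))
        forward = new_forward
    return math.log(max(sum(forward), floor))

# A toy jump distribution decaying with the jump width:
# def jump_prob(d, I):
#     norm = sum(0.5 ** abs(k) for k in range(-I, I + 1))
#     return (0.5 ** abs(d)) / norm
```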
Word penalties
Several word penalties are used in the rescoring step:
$$h_{\text{WP}}(f_1^J, e_1^I) = \begin{cases} I & \text{(a)} \\ I/J & \text{(b)} \\ 2\,|I - J| / (I + J) & \text{(c)} \end{cases} \qquad (21)$$
The word penalties are heuristics that affect the generated hypothesis length. In general, sentences that are too short should be avoided.
Integrating ASR and MT
In the experiments on coupling speech recognition and machine translation, we used the phrase-based MT system described in Section 2 to translate ASR lattices. In addition to the models described in Section 3, we use the acoustic model and the source language model of the ASR system in the loglinear model. These models are integrated into the search and the scaling factors are also optimized.
A significant obstacle for integrating speech recognition and translation is the mismatch between the vocabularies of the ASR and MT system. For the Chinese-English task, the number of out-of-vocabulary (OOV) words was rather high. Ideally, the vocabulary of the recognition system should be a subset of the translation system source vocabulary. In the IWSLT evaluation, we had no control over the recognition experiments. For this reason, the reported improvements might have been larger with a proper handling of the vocabularies.
Tasks and corpora
The experiments were carried out on the Basic Travel Expression Corpus (BTEC) task [20]. This is a multilingual speech corpus which contains tourism-related sentences similar to those that are found in phrase books. The corpus statistics are shown in Table 1. For the supplied data track, 20 000 sentences training corpus and two test sets (C-Star'03 and IWSLT'04) were made available for each language pair. As additional training resources for the C-Star track, we used the full BTEC for Japanese-English and the Spoken Language DataBase (SLDB) [21], which consists of transcriptions of spoken dialogs in the domain of hotel reservations 3 . For the Japanese-English supplied data track, the number of OOVs in the IWSLT'05 test set is rather high, both in comparison with the C-Star'03 and IWSLT'04 test sets and in comparison with the number of OOVs for the other language pairs. As for any data-driven approach, the performance of our system deteriorates due to the high number of OOVs. Using the additional corpora in the C-Star track, we are able to reduce the number of OOVs to a noncritical number.
As the BTEC is a rather clean corpus, the preprocessing consisted mainly of tokenization, i.e., separating punctuation marks from words. Additionally, we replaced contractions such as it's or I'm in the English corpus and we removed the case information. For Arabic, we removed the diacritics and we split common prefixes: Al, w, f, b, l. There was no special preprocessing for the Chinese and the Japanese training corpora.
We used the C-Star'03 corpus as development set to optimize the system, for instance, the model scaling factors and the GIZA++ [19] parameter settings. The IWSLT'04 test set was used as a blind test corpus. After the optimization, we added the C-Star'03 and the IWSLT'04 test sets to the training corpus and retrained the whole system.
We performed speech translation experiments on the Chinese-English and Japanese-English supplied data tracks. For Japanese-English we translated the single-best ASR hypotheses only, whereas for Chinese-English we also translated ASR lattices. The preprocessing and postprocessing steps are the same as for text translation. Table 2 contains the Chinese ASR word lattice statistics for the three test sets. The ASR WER and the graph error rate (GER) were measured at the word level (and not at the character level). The GER is the minimum WER among all paths through the lattice.
Experimental results
The automatic evaluation criteria are computed using the IWSLT 2005 evaluation server. For all the experiments, we report the two accuracy measures BLEU [22] and NIST [23] as well as the two error rates WER and PER. For the primary submissions, we also report the two accuracy measures Meteor [24] and GTM [25]. All those criteria are computed with respect to multiple references (with the exception of English-Chinese where only one reference is available).
Primary submissions
The translation results of the RWTH primary submissions are summarized in Table 3. Note that for English-Chinese, only one reference was used. Therefore the scores are in a different range.
Results for text input
In Table 4, we compare the translation performance of the RWTH 2004 system [15] and our current system. The evaluation is done on the IWSLT'04 test set for the supplied data track using the IWSLT 2005 evaluation server. Note that the reported numbers for the 2004 system differ slightly from the numbers in [15] due to a somewhat different computation. We observe significant improvements for all evaluation criteria and for both language pairs. For the Chinese-English system, for instance, the BLEU score increases by 4.9% and the WER decreases by 5%. Similar improvements are obtained for the Japanese-English system. In Table 5, we present some translation examples for Japanese-English. As already mentioned in the previous section, our data-driven approach suffers from the high number of OOVs for the supplied data track. This becomes apparent when looking at the translation hypotheses. Furthermore, the incorporation of additional training data improves the translation quality significantly, not only in terms of the official results (cf. Table 3) but also when considering the examples in Table 5. In all three examples, the C-Star data track system is able to produce one of the reference translations. On the other hand, the output of the supplied data track system is of much lower quality. In the first example, we see the effect of a single unknown word. In the second example, the word choice is more or less correct, but the fluency of the output is very poor. The translation in the final example is entirely incomprehensible for the supplied data track system. The effects of the N-best list rescoring for the IWSLT'04 test set are summarized in Table 6. On the development set (C-Star'03), which was used to optimize the model scaling factors, all models gradually help to enhance the overall performance of the system, e.g., BLEU is improved from 45.5% to 47.4%. For the IWSLT'04 blind test set, the results are not as smooth, but still the overall system (using all models that were described in Section 4) achieves improvements in all evaluation criteria. In Table 7, we show some examples where the impact of the rescoring models can be seen.
Results for ASR input
The translation results for the IWSLT'05 test set for ASR input in the Chinese-English supplied data track are summarized in Table 8.
We report the results for the two search strategies described in Section 2. Using the first strategy (Graph), we are able to translate ASR lattices. We observe significant improvements in translation quality over the translations of the single-best (1-Best) recognition results. This is true for the monotone search (Mon) as well as for the version which allows for reordering of target phrases (Skip). The improvements are consistent among all evaluation criteria. Using the second search strategy (SCSS), we are limited to the single-best ASR hypotheses as input. This is the same system that is used to translate the manual transcriptions. Despite the limitation to the single-best hypotheses, this system performs best in terms of the automatic evaluation measures (except for the NIST score).
The RWTH Chinese-English primary systems for ASR did not include rescoring. After the evaluation, we applied the rescoring techniques (described in Section 4) to the primary system. The improvements from rescoring are similar to the text system, e.g., 1.9% for the BLEU score.
Even though our primary system did not use lattices, a subjective comparison of the two systems showed positive effects when translating lattices for a large number of sentences. Recognition errors that occur in the single-best ASR hypotheses are often corrected when lattices are used. Some translation examples for improvements with lattices are shown in Table 9.
Conclusions
We have described the RWTH phrase-based statistical machine translation system that was used in the evaluation campaign of the IWSLT 2005. We use a two pass approach. In the first pass, we use a dynamic programming beam search algorithm to generate an N -best list. The second pass consists of rescoring and reranking of this N -best list.
One important advantage of our data-driven machine translation systems is that virtually the same system can be used for the different translation directions. Only a marginal portion of the overall performance can be attributed to language-specific methods.
We have shown significant improvements compared to the RWTH system of 2004 [15].
We have shown that the translation of ASR lattices can yield significant improvements over the translation of the ASR single-best hypotheses.
Figure 1: Illustration of the phrase segmentation.

Table 2: Statistics for the Chinese ASR lattices of the three test sets.

Test Set    WER [%]   GER [%]   Density
C-Star'03   41.4      16.9      13
IWSLT'04    44.5      20.2      13
IWSLT'05    42.0      18.2      14
Table 4: Progress over time: comparison of the RWTH systems of the years 2004 and 2005 for the supplied data track on the IWSLT'04 test set.

Translation Direction   System   BLEU [%]   NIST   WER [%]   PER [%]
Chin.-Engl.             2004     40.4       8.59   52.4      42.2
                        2005     46.3       8.73   47.4      39.7
Jap.-Engl.              2004     44.8       9.41   50.0      37.7
                        2005     49.8       9.52   46.5      36.8
Table 1: Corpus statistics after preprocessing.

                                      Supplied Data Track                        C-Star Track
                                      Arabic    Chinese   Japanese   English     Japanese    English
Train      Sentences                                   20 000                         240 672
           Running Words              180 075   176 199   198 453    189 927     1 951 311   1 775 213
           Vocabulary                 15 371    8 687     9 277      6 870       26 036      14 120
           Singletons                 8 319     4 006     4 431      2 888       8 975       3 538
C-Star'03  Sentences                                     506
           Running Words              3 552     3 630     4 130      3 823       4 130       3 823
           OOVs (Running Words)       133       114       61         65          34          -
IWSLT'04   Sentences                                     500
           Running Words              3 597     3 681     4 131      3 837       4 131       3 837
           OOVs (Running Words)       142       83        71         58          36          -
IWSLT'05   Sentences                                     506
           Running Words              3 562     3 918     4 226      3 909       4 226       3 909
           OOVs (Running Words)       146       90        293        69          10          -
Table 3: Official results for the RWTH primary submissions on the IWSLT'05 test set.

Data Track  Input    Translation Direction   BLEU [%]   NIST    Meteor [%]   GTM [%]   WER [%]   PER [%]
Supplied    Manual   Arabic-English          54.7       9.78    70.8         65.6      37.1      31.9
                     Chinese-English         51.1       9.57    66.5         60.1      42.8      35.8
                     English-Chinese         20.0       5.09    12.6         55.2      61.2      52.7
                     Japanese-English        40.8       7.86    58.6         48.6      53.6      44.4
            ASR      Chinese-English         38.3       7.39    54.0         48.8      56.5      47.2
                     Japanese-English        42.7       8.53    62.0         49.6      51.2      41.2
C-Star      Manual   Japanese-English        77.6       12.91   85.4         78.7      24.3      18.6
Table 5: Translation examples for the Japanese-English supplied and C-Star data tracks.

Data Track   Translation
Supplied     What would you like
C-Star       What would you like for the main course
Reference    What would you like for the main course
Supplied     Is that flight two seats available
C-Star       Are there two seats available on that flight
Reference    Are there two seats available on that flight
Supplied     Have a good I anything new
C-Star       I prefer something different
Reference    I prefer something different
Table 6: Rescoring: effect of successively adding models for the Chinese-English IWSLT'04 test set.

System     BLEU [%]   NIST   WER [%]   PER [%]
Baseline   45.1       8.56   48.9      40.1
+CLM       45.9       8.24   48.6      40.7
+IBM1      45.9       8.48   47.8      39.7
+WP        45.4       8.91   47.8      39.4
+Del       46.0       8.71   47.8      39.6
+HMM       46.3       8.73   47.4      39.7
Table 7: Translation examples for the Chinese-English supplied data track: effect of rescoring.

System       Translation
Baseline     Your coffee or tea
+Rescoring   Would you like coffee or tea
Reference    Would you like coffee or tea
Baseline     A room with a bath
+Rescoring   I would like a twin room with a bath
Reference    A twin room with bath
Baseline     How much is that will be that room
+Rescoring   How much is that room including tax
Reference    How much is the room including tax
Baseline     Onions
+Rescoring   I would like onion
Reference    I would like onions please

Table 8: Translation results for ASR input in the Chinese-English supplied data track on the IWSLT'05 test set (*: late submissions).

System           Input     BLEU [%]   NIST   WER [%]   PER [%]
Graph  Mon*      1-Best    31.1       6.18   62.1      52.7
                 Lattice   34.1       7.20   58.3      48.1
       Skip      1-Best    33.1       6.51   61.3      51.7
                 Lattice   35.1       7.53   57.7      47.2
SCSS (primary)   1-Best    38.3       7.39   56.5      47.2
+Rescoring*                40.2       7.33   55.1      46.5

Table 9: Translation examples for ASR input in the Chinese-English supplied data track.

Input       Translation
1-Best      Is there a pair of room with a bath
Lattice     I would like a twin room with a bath
Reference   A double room including a bath
1-Best      Please take a picture of our
Lattice     May I take a picture here
Reference   Am I permitted to take photos here
1-Best      I'm in a does the interesting
Lattice     I'm in an interesting movie
Reference   A good movie is on
¹ The notational convention will be as follows: we use the symbol $Pr(\cdot)$ to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol $p(\cdot)$.
³ The Japanese-English training corpora (BTEC, SLDB) that we used in the C-Star track were kindly provided by ATR Spoken Language Translation Research Laboratories, Kyoto, Japan.
Acknowledgments
This work was partly funded by the DFG (Deutsche Forschungsgemeinschaft) under the grant NE572/5-1, project "Statistische Textübersetzung" and by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org).
[1] P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, "A statistical approach to machine translation," Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990.
[2] F. J. Och and H. Ney, "Discriminative training and maximum entropy models for statistical machine translation," in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 295-302.
[3] F. J. Och, "Minimum error rate training in statistical machine translation," in Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL), Sapporo, Japan, July 2003, pp. 160-167.
[4] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C++. Cambridge, UK: Cambridge University Press, 2002.
[5] R. Zens, F. J. Och, and H. Ney, "Phrase-based statistical machine translation," in 25th German Conf. on Artificial Intelligence (KI2002), ser. Lecture Notes in Artificial Intelligence (LNAI), M. Jarke, J. Koehler, and G. Lakemeyer, Eds., vol. 2479. Aachen, Germany: Springer Verlag, September 2002, pp. 18-32.
[6] S. Kanthak, D. Vilar, E. Matusov, R. Zens, and H. Ney, "Novel reordering approaches in phrase-based statistical machine translation," in 43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, Ann Arbor, MI, June 2005, pp. 167-174.
[7] E. Matusov and H. Ney, "Phrase-based translation of speech recognizer word lattices using loglinear model combination," in Proc. IEEE Automatic Speech Recognition and Understanding Workshop, Cancun, Mexico, Nov/Dec 2005, to appear.
[8] C. Tillmann and H. Ney, "Word reordering and a dynamic programming beam search algorithm for statistical machine translation," Computational Linguistics, vol. 29, no. 1, pp. 97-133, March 2003.
[9] R. Zens, H. Ney, T. Watanabe, and E. Sumita, "Reordering constraints for phrase-based statistical machine translation," in COLING '04: The 20th Int. Conf. on Computational Linguistics, Geneva, Switzerland, August 2004, pp. 205-211.
[10] R. Zens and H. Ney, "Word graphs for statistical machine translation," in 43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, Ann Arbor, MI, June 2005, pp. 191-198.
[11] N. Ueffing, F. J. Och, and H. Ney, "Generation of word graphs in statistical machine translation," in Proc. of the Conf. on Empirical Methods for Natural Language Processing (EMNLP), Philadelphia, PA, July 2002, pp. 156-163.
[12] R. Zens and H. Ney, "Improvements in phrase-based statistical machine translation," in Proc. of the Human Language Technology Conf. (HLT-NAACL), Boston, MA, May 2004, pp. 257-264.
[13] F. J. Och, C. Tillmann, and H. Ney, "Improved alignment models for statistical machine translation," in Proc. Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, University of Maryland, College Park, MD, June 1999, pp. 20-28.
[14] F. J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev, "Syntax for statistical machine translation," Johns Hopkins University 2003 Summer Workshop on Language Engineering, Center for Language and Speech Processing, Baltimore, MD, Tech. Rep., August 2003.
[15] O. Bender, R. Zens, E. Matusov, and H. Ney, "Alignment Templates: the RWTH SMT System," in Proc. of the Int. Workshop on Spoken Language Translation (IWSLT), Kyoto, Japan, September 2004, pp. 79-84.
[16] R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics, 2nd ed. Reading, Mass.: Addison-Wesley Publishing Company, 1994.
[17] A. Stolcke, "SRILM - an extensible language modeling toolkit," in Proc. Int. Conf. on Spoken Language Processing, vol. 2, Denver, CO, 2002, pp. 901-904.
[18] S. Hasan and H. Ney, "Clustered language models based on regular expressions for SMT," in Proc. of the 10th Annual Conf. of the European Association for Machine Translation (EAMT), Budapest, Hungary, May 2005.
[19] F. J. Och and H. Ney, "A systematic comparison of various statistical alignment models," Computational Linguistics, vol. 29, no. 1, pp. 19-51, March 2003.
[20] T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world," in Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC), Las Palmas, Spain, May 2002, pp. 147-152.
[21] T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki, "A speech and language database for speech translation research," in Proc. of the 3rd Int. Conf. on Spoken Language Processing (ICSLP'94), Yokohama, Japan, September 1994, pp. 1791-1794.
[22] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "Bleu: a method for automatic evaluation of machine translation," in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 311-318.
[23] G. Doddington, "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics," in Proc. ARPA Workshop on Human Language Technology, 2002.
[24] S. Banerjee and A. Lavie, "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments," in 43rd Annual Meeting of the Assoc. for Computational Linguistics: Proc. Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, Ann Arbor, MI, June 2005.
[25] J. P. Turian, L. Shen, and I. D. Melamed, "Evaluation of machine translation and its evaluation," Computer Science Department, New York University, Tech. Rep. Proteus technical report 03-005, 2003.
244,870,174 | On the Stability of System Rankings at WMT | The current approach to collecting human judgments of machine translation quality for the news translation task at WMT -segment rating with document context -is the most recent in a sequence of changes to WMT human annotation protocol. As these annotation protocols have changed over time, they have drifted away from some of the initial statistical assumptions underpinning them, with consequences that call the validity of WMT news task system rankings into question. In simulations based on real data, we show that the rankings can be influenced by the presence of outliers (high-or low-quality systems), resulting in different system rankings and clusterings. We also examine questions of annotation task composition and how ease or difficulty of translating different documents may influence system rankings. We provide discussion of ways to analyze these issues when considering future changes to annotation protocols. | [
16794216,
201741133,
236486317,
16125805,
219573654,
14421595,
6395516,
53247198
] | On the Stability of System Rankings at WMT
November 10-11, 2021
Rebecca Knowles
National Research Council
Canada
On the Stability of System Rankings at WMT
Proceedings of the Sixth Conference on Machine Translation (WMT)
the Sixth Conference on Machine Translation (WMT), November 10-11, 2021, 464
The current approach to collecting human judgments of machine translation quality for the news translation task at WMT -segment rating with document context -is the most recent in a sequence of changes to WMT human annotation protocol. As these annotation protocols have changed over time, they have drifted away from some of the initial statistical assumptions underpinning them, with consequences that call the validity of WMT news task system rankings into question. In simulations based on real data, we show that the rankings can be influenced by the presence of outliers (high-or low-quality systems), resulting in different system rankings and clusterings. We also examine questions of annotation task composition and how ease or difficulty of translating different documents may influence system rankings. We provide discussion of ways to analyze these issues when considering future changes to annotation protocols.
Introduction
At the WMT (now Conference on Machine Translation) shared task on news translation, research groups build machine translation systems to accurately translate news data, as tested on test sets of recent news documents. The systems are clustered and ranked on their performance as judged by human annotators. The way that human judgments of translation quality have been collected has varied over the course of WMT's history.
In this work, we examine how changes in the collection of human judgments over the last three years have resulted in rankings that are now less robust to the effects of outliers (high- or low-performing systems) and overall annotation task composition. We replicate the human judgment rankings from 2018-2020, perform simulations for reranking, and examine issues of annotation task composition and translation difficulty. We find that sampling sentences for annotators to annotate by document -intended as a step towards evaluating sentences in context -reintroduces a known problem from the earlier era of relative rankings, namely that systems suffer or benefit in their rankings based on the quality of the other data being rated alongside them in the same annotation tasks.
We begin with a discussion of the progression of direct assessment (DA) styles employed in WMT evaluations ( §2) and how scoring is performed ( §3), before delving into theoretical and practical understandings of the z-scores used to rank systems ( §4 and §5), including simulations and analysis of specific examples. We also discuss issues around document distribution and translation difficulty ( §6), and close with considerations for downstream impacts ( §7) and future study ( §8).
Historical Context
In 2016, WMT added direct assessment (DA) scoring of system outputs as an investigatory ranking, with relative ranking (RR) remaining the official scoring mechanism (Bojar et al., 2016). In relative ranking, five system outputs for a given segment were ranked in comparison to one another, from which pairwise translation comparisons were generated; these were then used to produce overall system rankings by means of the TrueSkill algorithm (Herbrich et al., 2007; Sakaguchi et al., 2014). Relative ranking can be used to compare systems, but does not provide an absolute score, thus obscuring how close a good system is to a "perfect" translation or, at the other extreme, how poor a system is as compared to others.
The following year, 2017, WMT adopted DA as its main assessment format on the basis of high Pearson correlations between RR and DA in the previous year's investigations (Bojar et al., 2017). In DA (Graham et al., 2013, 2014), annotators provide an absolute numerical score (0-100) for MT output adequacy (at the sentence level or at the document level) using a sliding scale.
The use of DA has changed since it was first introduced to WMT. In 2016, it was trialed for monolingual evaluations of translation fluency and monolingual evaluations of adequacy. Here we provide an overview of changes from the 2016 task to the present, based on the Findings papers' descriptions. Bojar et al. (2016) noted that the 2016 version of DA assessments has the potential to avoid a known bias of the RR setup. In RR, each rating task consisted of ranking the outputs of five systems on the same input segment, and "a system may suffer more losses if often compared to the reference, and similarly it may benefit from being compared to a poor competitor" (Bojar et al., 2011). In the 2016 DA setup, translations were annotated in sets of 100, including quality assurance tasks, but each segment was annotated individually, rather than in direct comparison to other system output for the same segment. 1 Quality assurance tasks can include references (which should score highly), "bad" references (which should score poorly; these are produced by randomly replacing substrings in references to degrade quality), and repeat assessments of a segment (which should be scored consistently).
In 2017, DA was adopted as the main annotation style, with exact duplicate segment translations being able to be scored just once (rather than once per system that produced them) and with human assessment scores "standardized according to each individual human assessor's overall mean and standard deviation score" (Bojar et al., 2017). Bojar et al. (2018) describes two setups for the 2018 DA tasks, a standard structure (with repeat pairs, "bad" references, and references, as quality assurance) and an alternate setup where an additional constraint was imposed, such that within each 100-translation task, for each input the task would include the corresponding output of all MT systems. This makes a tradeoff between the aim of DA (to make absolute score judgments rather than relative ones) and getting a single annotator to provide scores for all systems' output of the same source input (which risks reintroducing some form of relative judgement to the task). This is also the first year that the findings paper explicitly spells out the goal of the way tasks (referred to here using the Amazon Mechanical Turk nomenclature "Human Intelligence Task" or HIT) are built in the standard HIT structure:
[...] within each 100-translation HIT, the same proportion of translations are included from each participating system for that language pair. This ensures the final dataset for a given language pair contains roughly equivalent numbers of assessments for each participating system. This serves three purposes for making the evaluation fair. Firstly, for the point estimates used to rank systems to be reliable, a sufficient sample size is needed and the most efficient way to reach a sufficient sample size for all systems is to keep total numbers of judgments roughly equal as more and more judgments are collected. Secondly, it helps to make the evaluation fair because each system will suffer or benefit equally from an overly lenient/harsh human judge. Thirdly, despite DA judgments being absolute, it is known that judges "calibrate" the way they use the scale depending on the general observed translation quality. With each HIT including all participating systems, this effect is averaged out. 2
The 2018 shared task also introduced source-based DA, trialling a bilingual version of the task. Rather than scoring MT output against a reference, this version scores it against the source segment, which allows human references to be scored as a "human system" rather than solely as a QA task. They raise a number of potential cautions against drawing strong conclusions, namely that bilingual DA is not yet validated, the alternate task structure may introduce biases, the year's sample size for source-based DA was smaller than 1,500 judgments per system, and that there may be quality issues with some reference segments.
In 2019, WMT introduced additional versions of DA (Barrault et al., 2019). They used monolingual (reference-based) assessment for translation into English and for language pairs that did not include English at all. For translation out of English, they performed bilingual (source-based) DA. The style of DA used in previous years is renamed to SR-DC (Segment Rating without Document Context), as a new style, SR+DC (Segment Rating with Document Context) is introduced. In the new SR+DC style, the full translation of a single document by a single MT system is shown to the annotator in order (but still scored segment-by-segment); 3 a task consists of multiple such documents. The generation of annotation tasks is described as follows: all documents translated by all systems are pooled, then sampled (without replacement) until up to 70 segments are selected, at which point quality control documents are added, and finally the order of documents in the task is shuffled. Barrault et al. (2020) uses both SR-DC and SR+DC styles.
Scoring
In order to experiment with questions surrounding human evaluation, it is necessary to understand and be able to replicate the official scores produced by WMT. For the human annotations of interest (segment-level evaluation, with or without document context), there are two main types of scores: raw scores and z-scores, with the latter used as the official ranking. These are presented in a table, ordered by z-score, and clusters of systems deemed statistically significantly different (according to a Wilcoxon rank-sum test p < 0.05) are separated by horizontal lines. 4 Following the approach used at WMT, after removing any HITs deemed unacceptable due to quality issues, we calculate raw and z-scores for systems as follows. First, any worker ID whose scores have a standard deviation of 0 is removed. Given a raw score x generated by the worker with worker ID W , its corresponding z-score z is computed as
z = (x − mean(y ∈ W)) / std(y ∈ W)    (1)
where mean(y ∈ W) is the mean of all raw scores generated by worker W, and std(y ∈ W) is the standard deviation of all raw scores generated by that worker. When we say that the mean and standard deviation are computed from all raw scores from a given worker ID, this includes references (which are treated as systems in SR+DC but are treated as quality assurance in SR-DC), "bad references" (which are only ever used for quality assurance), and repeats. 5 However, after computing the mean and standard deviation, only a subset of scores are used to actually compute system averages: those with type "SYSTEM" or "REPEAT" (discarding "BAD_REF" and "REF" types). 6 To compute averages (raw or z-score), first an average is computed for any "SYSTEM" or "REPEAT" scores that share the same system ID, the same document ID, and the same sentence ID; that is, if a given sentence of a given document was annotated multiple times for a particular system, we first average those scores (so that more frequently annotated sentences do not receive more weight). Then, for each system, all of its "SYSTEM" or "REPEAT" type scores are averaged, resulting in a system-level score.

We note that the 2019 and 2020 document context (SR+DC) evaluations differ in their quality assurance (see Table 1). In both 2019 and 2020, references are treated as a "Human" system, to be ranked alongside the other systems, which may explain the lack of "REF" labeled segment types in the data. In 2019, the Appraise interface data used to generate the rankings did not include any segments labeled as "REPEAT", "REF", or "BAD_REF", though these are described as being included in the HITs (Barrault et al., 2019); perhaps they were removed before processing the data. In 2020, the Appraise data did include segments labeled as "BAD_REF", but none labeled as "REPEAT" or as "REF", while the 2020 Mechanical Turk document-level ones included all three. The 2019 data collected using the Turkle platform contains no human or reference data and we do not use it for any of our analysis in this work.

We reimplemented the scoring system using Python and plan to release code for this paper. We were able to exactly replicate the raw scores and z-scores for most of the language pairs of interest from 2018-2020, 7 as well as the significance clusters. 8 See Appendix A for details. We use this reimplementation of the WMT scoring scripts in order to score authentic and modified WMT data, to examine underlying assumptions and hypothesize about how these may impact final system rankings.

3 There is also a Document Rating with Document Context (DR+DC) style, but we do not examine that in this work.
4 A horizontal line is drawn below a system if and only if it is significantly better (p < 0.05) than every system with a lower z-score than it.
5 We compute mean and standard deviation using ad-latest.csv, but use ad-good-raw-redup.csv to compute the individual z-scores and averages. The files are downloaded from the 2018-2020 WMT websites: http://www.statmt.org/wmt18/results.html, http://www.statmt.org/wmt19/results.html, and http://www.statmt.org/wmt20/results.html.
6 "SYSTEM" type are system outputs, while the remainder are quality assurance: "REPEAT" are repeated system outputs which are also valid for computing averages, "BAD_REF" are degraded references, and "REF" are references.
7 In order to match the z-scores generated by the R packages used for WMT, we set ddof equal to 1 when using the stats.zscore function from scipy.
8 We replicated the significance clusters using scipy's stats.mannwhitneyu function.
Dataset                                      SYSTEM   REPEAT   REF     BAD_REF
newstest2018-humaneval                       265387   26489    26003   36924
appraise-doclevel-humaneval-newstest2019     194625   0        0       0
mturk-sntlevel-humaneval-newstest2019        92164    13266    13177   13113
turkle-sntlevel-humaneval-newstest2019       47799    0        0       0
appraise-doclevel-humaneval-newstest2020     186663   0        0       26856
mturk-sntlevel-humaneval-newstest2020        26262    3741     3746    3773
mturk-doclevel-humaneval-newstest2020        93777    12887    12939   12965
Table 1: Counts of sentence types in ad-good-raw-redup.csv files from 2018-2020. We omit the Turkle data from most of our analysis because it contains neither human systems nor reference data.
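A minimal sketch of the scoring procedure described above, assuming a single pandas DataFrame of judgments with hypothetical column names (worker, system, doc, seg, type, raw) rather than the official CSV field names, and ignoring the split between ad-latest.csv and ad-good-raw-redup.csv:

```python
import pandas as pd
from scipy import stats

def score_systems(df: pd.DataFrame) -> pd.Series:
    """Average z-score per system, following the procedure sketched above."""
    # Remove worker IDs whose raw scores have a standard deviation of 0.
    df = df[df.groupby("worker")["raw"].transform("std") > 0].copy()

    # Standardize against ALL of each worker's raw scores (REF and BAD_REF
    # included), using ddof=1 to match the R-based official scripts.
    df["z"] = df.groupby("worker")["raw"].transform(
        lambda s: stats.zscore(s, ddof=1))

    # Only SYSTEM and REPEAT judgments contribute to system averages.
    df = df[df["type"].isin(["SYSTEM", "REPEAT"])]

    # Average repeat judgments of the same (system, doc, seg) first, then
    # average over segments, so multiply-annotated sentences count once.
    per_seg = df.groupby(["system", "doc", "seg"])["z"].mean()
    return per_seg.groupby("system").mean().sort_values(ascending=False)
```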
For 21 language pairs annotated in SR-DC style and 25 in SR+DC style from 2018-2020, we were able to exactly replicate rankings, nearly replicate rankings (e.g., with rounding difference related changes to one significance line), or produce rankings whose differences could be explained by delays in data collection (2020 en-iu). 9 Appendix A provides more details on replication. We use our recalculated rankings and clusters as the starting point for all remaining analysis in this paper.
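For the significance clusters, a sketch of the rule in footnote 4 (a line is drawn below a system only if it is significantly better than every system ranked beneath it), using scipy's Mann-Whitney U test as in footnote 8; the input, a dict mapping each system to its list of segment z-scores, is an assumed structure rather than the official data format:

```python
from scipy import stats

def significance_lines(scores_by_system, alpha=0.05):
    """Return systems ranked by mean z-score and the indices below which a
    cluster-separating line is drawn."""
    ranked = sorted(scores_by_system,
                    key=lambda s: sum(scores_by_system[s]) / len(scores_by_system[s]),
                    reverse=True)
    lines = []
    for i, better in enumerate(ranked[:-1]):
        # A line is drawn below this system only if it is significantly
        # better (p < alpha) than every system ranked beneath it.
        if all(stats.mannwhitneyu(scores_by_system[better],
                                  scores_by_system[worse],
                                  alternative="greater").pvalue < alpha
               for worse in ranked[i + 1:]):
            lines.append(i)
    return ranked, lines
```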
Understanding z-scores
While we've described how the z-score is calculated in the setting of the WMT human annotations, it's important to take a closer look at z-scores to understand how they behave in different scenarios. In this section, we explore z-scores and their underlying assumptions in hypothetical scenarios.
Given a raw score x, a mean µ, and a standard deviation σ, the z-score (or standard score) is the number of standard deviations above or below the mean that x falls. The z-score for a given raw score x can be computed as follows:
z = (x − µ) / σ    (2)
This is a linear transformation; the shape of the distribution of z-scores is the same as that of the raw scores, but now with a mean of 0 and a standard deviation of 1. It is a unitless score.
Intuitively, the z-score provides a potential way of comparing scores from different annotators, but it requires a careful examination of underlying assumptions. If we think of the z-score as a unitless score, perhaps we can think of each annotator as having their own measurement units: we might have a lenient annotator and a harsh annotator, such that a raw score of 50 by the lenient annotator is quite bad while a raw score of 50 for the harsh annotator is actually quite good. In order to directly compare the two annotators' scores, we would like to map them to a shared scale, a unitless z-score. Under what assumptions is it appropriate to calculate z-scores to compare annotators' scores?
We start with perhaps the most obvious (but frequently unstated) assumption: there exists some latent "quality" of a given translation, which can be judged by a human annotator, such that annotators roughly agree about what constitutes a "good" or a "bad" translation. In practice, human annotators may disagree -for any number of reasons (Basile et al., 2021) -about which of two translations of "similar quality" is better, but we assume that the disagreement is not extreme; i.e., we hope that under a correlation coefficient like Pearson's r or Spearman's ρ, the correlation between annotators' scores would be much closer to 1 than to -1. For the sake of simplicity in the following examples, we will assume there exists a "true" and "objective" score for every translation.
Suppose that we have some translations with a true mean score of µ and a true standard deviation of σ. A lenient annotator scores all of the translations such that the distribution of their scores has a mean of µ + n and a standard deviation of σ, while a harsh annotator scores all of the translations such that the distribution of their scores has a mean of µ − m and a standard deviation of σ. 10 When we compute their z-scores, it is easier to directly compare sentence scores, since they are now on the same scale. This seems like a reasonable use of z-scores, but in this scenario annotators are scoring exactly the same data, which doesn't scale to WMT-style annotations; annotators simply don't have time to score all of the data. Now suppose that we have two disjoint sets of sentences scored by two different annotators: the set S X of sentences scored by annotator X and the set S Y of sentences scored by annotator Y . From these raw scores, we can compute µ X and µ Y along with σ X and σ Y . If µ X > µ Y , can we conclude that annotator X is a more lenient annotator than annotator Y and resolve this by computing z-scores? Not without additional information! Imagine that we could see the "true" mean scores of S X and S Y , as annotated by a perfect omniscient annotator. It could be the case that the true means are identical and annotator X is indeed more lenient, but it could also be the case that the true mean of the scores in set S X is actually higher. In the latter case, the annotators could be equally lenient, or it is even possible that annotator Y could be more lenient! In short, without a shared basis for comparison, we don't know whether computing z-scores is normalizing out annotator differences, differences in the data itself, or a combination.
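This confound can be made concrete with a small illustrative simulation (all numbers invented): two equally calibrated annotators who score disjoint item sets of genuinely different quality end up with different raw means, and per-annotator z-scoring erases a difference that is actually in the data, not in the annotators.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "True" quality of two disjoint sets of translations: one easy, one hard.
true_easy = rng.normal(80, 10, 200)
true_hard = rng.normal(50, 10, 200)

# Annotators X and Y are equally calibrated; they just saw different items.
scores_x = true_easy + rng.normal(0, 5, 200)
scores_y = true_hard + rng.normal(0, 5, 200)

print(scores_x.mean(), scores_y.mean())   # roughly 80 vs 50
zx = stats.zscore(scores_x, ddof=1)
zy = stats.zscore(scores_y, ddof=1)
print(zx.mean(), zy.mean())               # both forced to ~0
# The ~30-point real quality gap between the two sets has been normalized
# away, exactly as if X had simply been a more lenient annotator.
```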
z-scores in practice
This raises the question: what is happening in practice when we compute z-scores on WMT DAs? Are we really normalizing away inter-annotator differences, or is the normalization also doing something else, such as normalizing away real differences in HIT and system quality? If it is the latter, even z-scores for DAs may suffer the same bias from comparisons to better (or worse) systems.
We don't have access to an oracle, and we don't have a direct or reliable way to compute interannotator agreement, because in some collections it is rare that two annotators annotate the same text (and for the Appraise data, we only have HIT information, not annotator information). However, we can still examine this in the existing data and modifications thereof. Bojar et al. (2011) noted that systems might suffer from being compared to the reference too frequently under relative ranking, or might benefit from being compared to particularly poor systems. The same could hold true in DA. Consider the following toy example: a HIT contains 4 sentences, with raw scores of 25, 50, 50, 75, respectively. A sentence with a raw score of 50 in this HIT would have a z-score of 0. If, instead, the raw scores were 0, 25, 50, 75, a sentence with a raw score of 50 would have a z-score of 0.39, while for a HIT with raw scores of 25, 50, 75, 100, a sentence with a raw score of 50 would have a z-score of -0.39. While it is possible that such a set of scores could reflect differences in annotator behavior, we could also easily imagine that they might reflect differences in HIT composition, with one containing only system scores, one containing system scores and a bad reference, and one containing system scores and a (good) reference.
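The toy example can be checked directly, again using ddof=1 as in the WMT scripts:

```python
from scipy import stats

for hit in ([25, 50, 50, 75], [0, 25, 50, 75], [25, 50, 75, 100]):
    z = stats.zscore(hit, ddof=1)
    # z-score assigned to a raw score of 50 within each hypothetical HIT
    print(hit, round(z[hit.index(50)], 2))
# -> 0.0, 0.39, and -0.39 respectively
```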
HIT Composition
Thus we examine HIT composition, or, more accurately, the composition of data annotated by any given worker/worker ID. In 2018, all systems were SR-DC, and 100% of workers annotated "BAD_REF" data. 11 However, an "Alternate DA HIT Structure" was employed for a subset of researcher HITs (run in Appraise), which used only "BAD_REF" segments for quality assurance, "omitting repeat pairs and good reference pairs" while also attempting to include "the output of all participating systems for each source input" (to have the same annotator produce annotations across systems). The percentage of (non-rejected) workers who annotated data containing "REF" in 2018 ranged from 4.9% (en-et) to 98.8% (zh-en); the former is an outlier, as the next two lowest are 25.8% (en-cs) and 47.4% (en-fi).
In 2019 annotations into English, 100% of workers annotated both "REF" and "BAD_REF" segments. In 2019 annotations out of English, the final output data does not include any "REF" or "BAD_REF" segments (though these are described as having been included for QA), but human references are treated as systems, and between 37.8% (en-de) and 61.5% (en-kk) of workers annotated at least some human reference data.
The 2020 Appraise annotations differed from prior years as well: 100% of the 2020 into English (Mechanical Turk) workers annotated both "REF" and "BAD_REF" segments. In 2020 annotations out of English (Appraise), between 95.8% (en-iu) and 100% (en-{ja, ta, zh}) of workers 12 annotated "BAD_REF" data. The percentage of Appraise "workers" that annotated data containing human references (treated as a system) ranged from 8.3% (en-iu) to 73.4% (en-zh).
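These percentages reduce to a simple per-worker aggregation; a sketch over the hypothetical DataFrame and column names used earlier:

```python
def pct_workers_with_type(df, seg_type: str) -> float:
    """Share (in %) of worker IDs whose annotated data contains at least one
    segment of the given type, e.g. "REF" or "BAD_REF"."""
    has_type = df.groupby("worker")["type"].apply(lambda t: (t == seg_type).any())
    return 100 * has_type.mean()
```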
Analysis
In an ideal world where z-score normalization is only correcting for annotator variation, removing one system should not result in changes to the relative rankings of the remaining systems. That is to say, the z-scores themselves may be expected to change (shifting up if a very good system is removed, shifting down if a low-quality system is removed), but we wouldn't expect the relative ranking of two systems to change. After all, one stated motivation of the shift to DA was to avoid the known bias in RR of systems being unfairly penalized or benefiting unfairly from comparisons to stronger/weaker systems (Bojar et al., 2016). Similarly, replacing one system -for example with a much better or much worse system -should not result in other systems switching places in the rankings. We simulate these two scenarios using the existing data, and show that rankings produced in SR+DC settings are much more sensitive to removal or modification of systems than SR-DC.

We first examine removing human systems and "REF" -acting as though they had never been annotated at all, so all z-scores are calculated without "REF" or human system scores. 13 We then compute rankings and significance clusters. We compare these against the original rankings generated from all available data, with the significance clusters recomputed after removal of human systems. 14 For each pair of rankings, we check whether there is any change in the order of systems (ignoring significance clusters; we call this ∆ Rank), whether there is any change in clusters (different number or composition of clusters; we call this ∆ Cluster), and/or changes in both (∆ Both).

Table 2 shows the results. Rank changes (ignoring significance clusters) are the most common, and many of these occur within significance clusters as we would expect. However, there are also a number of changes to the significance clusters (clusters merging, splitting, or rearranging), as well as pairs for which both rank and cluster changes occur. Most strikingly, all of these changes are much more common in the SR+DC settings than in the SR-DC. Removing human and "REF" data results in cluster changes to almost half (12/25) of the SR+DC rankings, but less than 5% (1/21) of the SR-DC rankings. No SR-DC rankings exhibit changes in both rank and clusters, but 32% of SR+DC rankings do. This is evidence that the SR+DC rankings are less stable, and consequently less reliable, than the SR-DC rankings. We replicate this result with removing the highest and lowest ranked systems, respectively, as shown in Table 3; the SR+DC rankings are much less robust than the SR-DC rankings to the removal of the best or worst single system.

One might worry that some of this instability is due to the shrinking number of datapoints available when we remove "REF" and human systems, or the highest/lowest ranked systems. To account for this, we run the same experiment and measure the same changes, but instead of removing "REF" and human systems, we degrade their raw scores (dividing each score by 1.25, 1.5, 2, 4, and 10) before computing z-scores, rankings, and significance clusters. This could be viewed as a simulation of what would occur if the high-quality human system were replaced with mediocre (or, in the case of division by 10, very low-quality) systems. 15

13 We observe similar results if we only remove "REF", but in that setting we cannot examine the 2019 and 2020 Appraise SR+DC rankings, as they do not make use of "REF" at all.
14 Relevant to clusters containing or above human system(s).
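Both perturbations can be sketched as below, reusing the hypothetical DataFrame df and the score_systems helper sketched earlier; the human system labels are placeholders:

```python
HUMAN_SYSTEMS = {"HUMAN-A", "HUMAN-B"}   # placeholder labels for human "systems"

def ranking(judgments):
    """Rank MT systems by average z-score; human systems still shape the
    per-worker normalization but are dropped from the compared list."""
    ranked = score_systems(judgments)
    return [s for s in ranked.index if s not in HUMAN_SYSTEMS]

baseline = ranking(df)

# Simulation 1: pretend human and "REF" judgments were never collected, so
# the z-scores themselves are computed without them.
removed = df[(~df["system"].isin(HUMAN_SYSTEMS)) & (df["type"] != "REF")]
print("removal:", "delta-rank" if ranking(removed) != baseline else "stable")

# Simulation 2: keep the human judgments but degrade their raw scores before
# any z-scoring, simulating weaker competitors in the same HITs.
for divisor in (1.25, 1.5, 2, 4, 10):
    degraded = df.copy()
    mask = degraded["system"].isin(HUMAN_SYSTEMS)
    degraded.loc[mask, "raw"] /= divisor
    print(divisor, "delta-rank" if ranking(degraded) != baseline else "stable")
```

The ∆ Cluster check works the same way, rerunning the significance-line computation on each perturbed version and comparing cluster memberships.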
We visualize the result in Figure 1. Once again, the SR+DC evaluations are more brittle to these
changes. However, we see that even the SR-DC evaluations are not immune to the effects of extreme outliers on rankings and clusterings -as the divisor used increases, so does the fraction of pairs that have ranking and/or clustering changes. This makes sense intuitively: if most systems are of similar quality, a slight imbalance in which systems are compared to one another likely won't have dramatic effects, but if one system is much worse (or much better) than the rest, systems that are compared against it more or less frequently than others will see their z-scores benefit or suffer accordingly. We also examined monolingual vs. bilingual tasks in the SR+DC context (all SR-DC tasks were monolingual), but note similar rates of changes to ranks, clusters, and both across the two settings.
We have used a very coarse measurement here: counting whether the ranks or clusters changed at all rather than whether multiple clusters or large numbers of systems were reranked. Indeed, many of these changes are quite subtle, with just a single new significance line appearing or two clusters merging, or two close systems switching ranks (within or across clusters). If that is the case, why should we be concerned with this? The first reason is to better understand what it is that is actually being measured and whether the WMT annotation protocol is succeeding in its goals. If the inclusion of outliers or the degradation of system scores results in other systems shifting ranks, this indicates that the current approach does suffer from a similar comparison bias to RR. Thus we can't always be confident that what is being measured is a property of the system itself and not closely intertwined with HIT composition -this approach is doing something other than only normalizing away interannotator differences. The second reason is to highlight these goals and assumptions so that they can be considered when making future modifications to the annotation process. Many of these issues are currently resulting in small inconsistencies, but if future modifications are made to the annotation process without considering the underlying assumptions and goals, there is no reason to expect that the errors will cancel one another out rather than compound. If we are aware of the underlying assumptions when changes are introduced to the annotation process, we will be better positioned to consider potential problems in the hypothetical and then examine the real data to see if they appear in practice. There is also the question of effects on downstream tasks ( §7). Finally, it also helps us to consider ways to mitigate these challenges before they grow, and we discuss some options for future consideration in §8.
Case Study
Figure 2: Plot showing z-score rankings (top is best) for 2020 en-de (SR+DC), from original rankings and five divisors for raw human scores. Significance lines are marked with black "x". Human systems were used in calculating z-scores but were removed prior to computing clusters for ease of visualization and comparison.

We manually select for examination a relatively dramatic case of rankings and clusters changing, from en-de 2020, pictured in Figure 2. This is an unusual case since it contained multiple human-based systems. 16 Nevertheless, it incorporates several issues we raised in hypotheticals, so we discuss it here. Figure 2 shows the rankings for the original data (human systems were dropped only for the purpose of computing clusters, but were used for calculating z-scores), and each of the rankings computed by degrading raw scores by dividing them by 1.25 through 10 (denoted d-n where n is the divisor). We begin by focusing on PROMT_NMT, whose rank increases with increased degradation
of the human systems. In the original ranking, AFRL and PROMT_NMT appear in the same cluster, with AFRL having a higher score than PROMT_NMT, but not statistically significantly so. When degrading the human raw scores by 1.25 or 1.5, AFRL is in a higher significance cluster than PROMT_NMT, but when dividing by 2, this is reversed: PROMT_NMT is now ranked as significantly better than AFRL, while with a divisor of 4 or 10, they return to the same cluster but with PROMT_NMT scoring higher. Thus we see that, purely by degrading the raw scores of other systems, we observe the full range of possible relative rankings and clusterings for this pair of systems. The same holds true for PROMT_NMT compared with Online-A.

The en-de 2020 rankings may have suffered somewhat from having fewer annotations (1123.6 assessments per system), so we also show results for one of the most-assessed pairs that year: zh-en (2035.1 assessments per system). This is shown in Figure 3. 17 Here we focus on the top system, VolcTrans, which was ranked in Barrault et al. (2020) as significantly better than all other systems. As we degrade the human systems, we see it begin to drop in rank, and this significance cluster merges with the one below it, raising the possibility that the initial finding was an artifact of the distribution of data across HITs rather than an inherent property of the MT quality of that particular system.

17 Note that in the original rankings shown, the human system was omitted when computing significance clusters, and in this case a new significance line (separating Online-A and Online-G) appears where it had been, which was not there in the published rankings that do include human systems.
System Comparisons
There is a distinct difference in the way that systems are distributed across HITs in the SR-DC and SR+DC annotation styles. In SR-DC, almost all HITs contain segments from every single system (though there is no guarantee that they appear in exactly equal proportions to one another). In SR+DC, this is not the case, owing to the fact that HITs are limited to 100 segments, there are often 10 or more systems, and documents are often longer than 10 segments. This means that it may be numerically impossible for a given HIT to cover all systems. We see this in Figure 4. A given system may be paired with any other system in less than half of the HITs in which it appears. These kinds of imbalances mean that systems may be more frequently compared to better or worse systems, resulting in unfair effects on their rankings.
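The imbalance can be measured with a simple co-occurrence count, as in Figure 4; the hit column is an assumed HIT identifier in the same hypothetical DataFrame layout used above:

```python
import itertools
from collections import Counter

import pandas as pd

def hit_cooccurrence(df: pd.DataFrame) -> pd.DataFrame:
    """For each pair of systems, count the HITs in which both have at least
    one judged segment; the diagonal counts HITs containing each system."""
    counts = Counter()
    for _, hit_rows in df.groupby("hit"):
        systems = sorted(hit_rows["system"].unique())
        for a in systems:
            counts[(a, a)] += 1
        for a, b in itertools.combinations(systems, 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    names = sorted(df["system"].unique())
    return pd.DataFrame([[counts[(a, b)] for b in names] for a in names],
                        index=names, columns=names)
```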
Documents
In both SR+DC and SR-DC styles, we don't have a guarantee that every segment-system pair is judged by an annotator, nor that at least one segment from every document-system pair is judged. If we assume approximately uniform translation difficulty across the test set, this isn't necessarily too much of a concern. However, is that really the case, or are some documents "easy" and others "hard"? Figure 5 shows a matrix of document-system pairs, with each cell showing the average of all of the segment raw scores for that system-document pair. 18 The documents are ranked from highest average raw score to lowest average score (top to bottom), while the systems are ranked by highest average raw score to lowest average raw score (left to right). In the leftmost column, we see the "HUMAN" system, which has high scores across all documents. If all documents were equally difficult to translate, we would expect to see a gradient along the x-axis (i.e., across systems), with minimal variation along the y-axis (i.e., across documents). What we observe instead in this en-lt pair from 2019 (and across a number of other language pairs) is a rough gradient from the top left to the bottom right (with the exception of the "HUMAN" system, which remains strong throughout). This suggests that there are some documents that are "easy" for most systems to translate (top) and some that are "hard" (bottom). This raises a concern: when we attempt to compare two systems of very similar quality, they are not being measured on the same test set. An unlucky sample of documents might see one system judged on a "harder" set of documents, calling the resulting rankings into question.
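The document-by-system view in Figure 5 is essentially a pivot of the raw judgments; a sketch under the same assumed column names:

```python
import pandas as pd

def doc_system_matrix(df: pd.DataFrame) -> pd.DataFrame:
    """Average raw score per (document, system) pair; unjudged pairs are NaN.
    Rows (documents) and columns (systems) are sorted from highest to lowest
    average, so an 'easy vs. hard document' gradient shows up along the rows."""
    matrix = df.pivot_table(index="doc", columns="system",
                            values="raw", aggfunc="mean")
    matrix = matrix.loc[matrix.mean(axis=1).sort_values(ascending=False).index]
    return matrix[matrix.mean(axis=0).sort_values(ascending=False).index]
```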
Downstream Consequences
While researchers building MT systems for the shared task may view the human judgment rankings as the end result, the rankings are the input to the metrics tasks at WMT. Thus the reliability of the rankings has a direct impact on the reliability of the metrics task -which in the long term feeds into MT research as researchers decide which automatic metrics to use for evaluating their systems. In system-level metric evaluation at the WMT Metrics shared task, Pearson correlations are computed between metric scores and the z-score human rankings (Mathur et al., 2020b). Note that these correlations are directly between the system average z-scores and the metric scores, and as such do not treat all systems within a given cluster as tied. In practice, this means that even rank-only perturbations in the official ranking can be expected to cause changes to metrics task results.

18 We can also produce such a matrix using z-scores or automatic metric scores, and results are comparable.
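A minimal illustration of that system-level correlation (all numbers invented): the correlation is computed directly over per-system score pairs, so a perturbation that shifts two systems' z-scores past each other changes r even when the official significance clusters do not change.

```python
from scipy import stats

human_z = [0.31, 0.22, 0.21, -0.05, -0.40]   # per-system human z-scores (invented)
metric  = [0.58, 0.55, 0.56,  0.41,  0.30]   # per-system metric scores (invented)

r, _ = stats.pearsonr(human_z, metric)
print(f"system-level Pearson r = {r:.3f}")
```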
Metrics scores are run on the full test set, not the various human-annotated subsets. Citing Graham et al. (2013), the Metrics task papers note that system-level DA scores are "consistent and have been found to be reproducible" even though different sets of segments are assessed for each system. However, that work predates the shift to sampling by document, and our analysis of instability and document difficulty suggests revisiting it.
Recent work has shown that outliers have a concerning impact on metric correlations (Mathur et al., 2020a), and organizers have worked to mitigate this (Mathur et al., 2020b). This paper is a step towards answering questions raised in Mathur et al. (2020b) regarding outliers and unfair advantages. It may seem tempting to remove outliers from human judgment tasks, but this will not solve the other problems and could instead mask their presence.
Proposals for Future Work
The issues discussed in this paper raise concerns about changes to the human evaluation protocols used at WMT and their effects on the validity of WMT system rankings. A partial solution would be to return to SR-DC annotations, perhaps after validation of the 2018 alternate HIT structure that guarantees that for every segment in the HIT, the HIT contains every MT system's output for that sentence. But this may be an unsatisfactory conclusion, and fails to address the interest in pushing MT evaluation toward whole documents.
Document-level and context-inclusive evaluations are growing in popularity, but there is limited study on document-level assessment methodologies for MT. Castilho (2021) examines setups comparable to SR-DC, SR+DC, and document rating with document context (which we omitted from this work), and finds in a controlled experiment using Likert scale ratings that a methodology comparable to SR+DC produces higher levels of interannotator agreement and fewer misevaluations than either whole document scores or individual sentences without context. However, that experimental setup does not suffer from the same task composition issues we observe in WMT; in fact these may be orthogonal issues.
If the choice is made to use SR+DC style annotations, there are some improvements to consider, but as noted in Castilho (2020), it remains "essential to test which methodologies will be best suited for different tasks and domains" prior to adopting them. One option would be to create 2018-alternate-structure style HITs with document context, where a HIT contains all systems' output for one or more documents. The downside to this is that it would likely require longer HITs or HITs that only contain a small number of documents; if systems are of similar quality, we might be concerned about annotator fatigue from repetition. The amount of context needed to adequately assess translations is still a question under consideration (Castilho et al., 2020; Castilho, 2021), which ties into issues of document and HIT length.
Another possibility to consider would be to always normalize over annotators (rather than over HITs), but this isn't a solution on its own -it is still necessary to make sure that annotators see comparable distributions of systems and documents, or the same problems will be reintroduced. Having annotators do calibration HITs, i.e., a set of annotations that all annotators complete, could also be considered. The calibration HITs would provide a consistent basis for computing the parameters of an annotator-specific z-score transformation, which could then be applied to the remainder of the annotator's judgments. This could untangle the issue of annotator strictness/leniency, but would still merit study before implementation (as annotator behavior may depend on HIT composition, so the z-scores learned in calibration may not be as applicable as one might hope if there is a mismatch between calibration HITs and the remainder of the HITs). One could also consider additional ways of modeling annotator behavior beyond z-score normalization (Paun et al., 2018). A simpler starting point to deal with the issue of different systems being annotated over different documents would be to guarantee that all systems are scored over the same subset of documents.
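One way the calibration idea could be realized, sketched under the assumption that every annotator completes the same calibration HITs and using the hypothetical column names from earlier: estimate each annotator's mean and standard deviation on the shared items only, then apply that transformation to the rest of their judgments.

```python
import pandas as pd

def calibrated_z(df: pd.DataFrame, calibration_hits: set) -> pd.DataFrame:
    """Standardize each annotator's raw scores using parameters estimated only
    on shared calibration HITs (columns: worker, hit, raw are assumed names)."""
    params = (df[df["hit"].isin(calibration_hits)]
              .groupby("worker")["raw"].agg(["mean", "std"]))
    out = df.merge(params, left_on="worker", right_index=True, how="left")
    out["z"] = (out["raw"] - out["mean"]) / out["std"]
    return out.drop(columns=["mean", "std"])
```

This separates annotator strictness from HIT composition only to the extent that behavior on the calibration HITs transfers to the rest, which is exactly the caveat raised above.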
All of these are (partially) orthogonal to the questions of what type of annotation tasks result in the most reliable ratings -whether it be direct assessment, ranking, or detailed error annotation -or questions of annotator skills and knowledge (Freitag et al., 2021).
Conclusions
We have shown that the current judgment collection methodology at the WMT news translation task results in SR+DC judgments that are more prone to variation on the basis of outliers than SR-DC judgments, and that HIT composition issues have helped reintroduce the relative ranking problem of unfair comparisons to the WMT rankings. We examined issues of document difficulty and how this interacts with the decision to sample documents (rather than sentences) for judgment. These issues risk undermining the validity of WMT rankings, with real consequences for MT research and downstream tasks on automatic metrics. In examining these issues, we've also presented several approaches to diving into the WMT ranking data that may be helpful to consider when planning future changes to WMT human judgment collection procedures.
A Notes on Replication
As shown in Table 4, we are able to duplicate the following rankings exactly (or with minor differences, as noted). Code to replicate this work will be available at https://github.com/nrc-cnrc/WMT-Stability/.
Language codes are as follows: Chinese (zh), Czech (cs), German (de), English (en), Estonian (et), Finnish (fi), Gujarati (gu), Inuktitut (iu), Japanese (ja), Kazakh (kk), Khmer (km), Lithuanian (lt), Pashto (ps), Polish (pl), Russian (ru), Tamil (ta), Turkish (tr).
• 2018, Mechanical Turk, SR-DC: en-{cs, de, et, fi, ru, tr, zh} and {cs, de, et, fi, ru, zh}-en, but we do not successfully replicate the scores for tr-en (we omit tr-en 2018 from future experiments).
• 2019, Appraise, SR+DC: en-{cs, de, fi, gu, kk, lt, ru, zh}, though we note that en-kk contains a duplicate system that is omitted from the published table.
• 2019, Mechanical Turk, SR-DC: {gu, kk, lt, ru}-en, and fi-en is nearly replicated, but our replication of it is missing a significance line between two clusters due to a rounding difference when computing the significance value.
• 2019, Mechanical Turk, SR+DC: {de, zh}-en are successfully replicated.
• 2019, Turkle, SR-DC: de-cs, de-fr, fr-de, zh-en are all successfully replicated but are not included in the analyses.
• 2020, Appraise, SR+DC: en-{cs, ja, ru, ta, zh} are successfully replicated, while en-pl is missing one significance line due to rounding differences. The ranking for en-de has identical scores except for Human-A and Human-paraphrase. The original en-de ranking in Barrault et al. (2020) included Human-A, Human-B, and Human-paraphrase. The released en-de data only contained Human-A and Human-B, though Human-A was about twice as large as Human-B, suggesting that it may have incorporated the Human-paraphrase data. Finally, the ranking for en-iu is quite different, though we expect this is because of delays in data collection resulting in a mismatch between the reported scores in the findings paper and the released scores. The en-iu scores also contain an additional low-scoring system that was omitted from the published table.
• 2020, Mechanical Turk, SR+DC: {cs, de, ja, pl, ru, ta, zh}-en were all replicated exactly.
• 2020, Mechanical Turk, SR-DC: {iu, km, ps}en were all replicated exactly.
• 2020 en-{km, ps} appear to be missing from the released data.
Figure 1: Effect of dividing raw human system and "REF" scores on overall (z-score) rankings for all SR-DC and SR+DC shown in Table 2. The x-axis shows the divisor (ranging from 1.25 to 10) and the y-axis shows the fraction of pairs for which the rankings, clusters, or both ranks and clusters changed.
Figure 3: Rankings for 2020 zh-en (SR+DC), from original rankings and with divisors, as in Figure 2.
Figure 4: Co-occurrence matrix of systems for en-lt 2019 (SR+DC). Each cell shows the number of HITs that contained segments from the systems at those x and y values. The diagonal shows the total number of HITs that contained each system.
Figure 5: Average raw scores for document-system pairs from en-lt 2019 (SR+DC). Empty cells indicate the pair was not judged. Documents are ranked by average raw score (highest: top) as are systems (highest: left).
Table 2: Effect of removing human and "REF" scores from annotations and recalculating rankings by year, platform (MTurk or Appraise), and annotation style. Values indicate the fraction of language pairs that had changes in rank, clustering, or both rank and clustering.
Table 3: Effect of removing the single lowest ranked or highest ranked system across all years, by data collection type (-/+DC). Values indicate the fraction of language pairs that had changes in rank, clustering, or both rank and clustering.
It is still possible that there may be biases based on the segments observed in any given set of 100.
Here we reproduce this quote from Barrault et al. (2019), though it appears consistent across 2018-2020.
Language codes: Chinese (zh), Czech (cs), German (de), English (en), Estonian (et), Finnish (fi), Gujarati (gu), Inuktitut (iu), Japanese (ja), Kazakh (kk), Khmer (km), Lithuanian (lt), Pashto (ps), Polish (pl), Russian (ru), Tamil (ta), Turkish (tr).
We use the same standard deviation for simplicity, with arbitrary positive values of n and m.
These values are calculated on ad-good-raw-redup.csv files, so only include annotators who successfully passed QA.
12 The definition of "worker" is really a bit fuzzy here; the "WorkerID" produced by Appraise is really a HIT ID, so averages are not necessarily being computed across all of a given worker's annotations, but rather each HIT is being treated as a unique worker.
The reverse -inflating scores of low-performing systems -has a similar effect, but requires consideration of how to handle scores of zero.
See Appendix A for details.
Acknowledgments
Thank you to the anonymous reviewers for their suggestions and comments. Thank you to Chi-kiu Lo, Nitika Mathur, Gabriel Bernier-Colborne, Roland Kuhn, Huda Khayrallah, Adam Poliak, and George Foster for feedback on various drafts of this work. Thank you to Yvette Graham and task organizers for the 2020 data release and pointers to DA-related code. Thank you also to those listed above and a number of other current and former colleagues -including Rachel Rudinger, Eric Joanis, Darlene Stewart, Samuel Larkin, Michel Simard, Serge Léger, Patrick Littell, and Cyril Goutte -for discussions on related topics.

Table 4 (columns: Lang., Year, -DC, +DC, Mono./Bi.): Systems marked with * were not included in any additional analysis.
Loïc Barrault, Magdalena Biesialska, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubešić, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Linguistics.
Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.
Valerio Basile, Michael Fell, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio, and Alexandra Uma. 2021. We need to consider disagreement in evaluation. In Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future, pages 15-21, Online. Association for Computational Linguistics.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Germany. Association for Computational Linguistics.
Ondřej Bojar, Miloš Ercegovčević, Martin Popel, and Omar Zaidan. 2011. A grain of salt for the WMT manual evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 1-11, Edinburgh, Scotland. Association for Computational Linguistics.
Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. Association for Computational Linguistics.
Sheila Castilho. 2020. On the same page? Comparing inter-annotator agreement in sentence and document level human machine translation evaluation. In Proceedings of the Fifth Conference on Machine Translation, pages 1150-1159, Online. Association for Computational Linguistics.
Sheila Castilho. 2021. Towards document-level human MT evaluation: On the issues of annotator agreement, effort and misevaluation. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 34-45, Online. Association for Computational Linguistics.
Sheila Castilho, Maja Popović, and Andy Way. 2020. On context span needed for machine translation evaluation. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3735-3742, Marseille, France. European Language Resources Association.
Markus Freitag, George F. Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. CoRR, abs/2104.14478.
Yvette Graham, Timothy Baldwin, Meghan Dowling, Maria Eskevich, Teresa Lynn, and Lamia Tounsi. 2016. Is all that glitters in machine translation quality estimation really gold? In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3124-3134, Osaka, Japan. The COLING 2016 Organizing Committee.
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. Association for Computational Linguistics.
Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2014. Is machine translation getting better over time? In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 443-451, Gothenburg, Sweden. Association for Computational Linguistics.
Trueskill™: A bayesian skill rating system. Ralf Herbrich, Tom Minka, Thore Graepel, Advances in Neural Information Processing Systems. MIT Press19Ralf Herbrich, Tom Minka, and Thore Graepel. 2007. Trueskill™: A bayesian skill rating system. In Ad- vances in Neural Information Processing Systems, volume 19. MIT Press.
Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. Nitika Mathur, Timothy Baldwin, Trevor Cohn, 10.18653/v1/2020.acl-main.448Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsNitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020a. Tangled up in BLEU: Reevaluating the eval- uation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computa- tional Linguistics.
Qingsong Ma, and Ondřej Bojar. 2020b. Results of the WMT20 metrics shared task. Nitika Mathur, Johnny Wei, Markus Freitag, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationOnline. Association for Computational LinguisticsNitika Mathur, Johnny Wei, Markus Freitag, Qing- song Ma, and Ondřej Bojar. 2020b. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. Association for Computa- tional Linguistics.
Comparing Bayesian models of annotation. Transactions of the Association for. Bob Silviu Paun, Jon Carpenter, Dirk Chamberlain, Udo Hovy, Massimo Kruschwitz, Poesio, 10.1162/tacl_a_00040Computational Linguistics. 6Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing Bayesian models of annotation. Trans- actions of the Association for Computational Lin- guistics, 6:571-585.
Efficient elicitation of annotations for human evaluation of machine translation. Keisuke Sakaguchi, Matt Post, Benjamin Van Durme, 10.3115/v1/W14-3301Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationBaltimore, Maryland, USAAssociation for Computational LinguisticsKeisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annota- tions for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statisti- cal Machine Translation, pages 1-11, Baltimore, Maryland, USA. Association for Computational Linguistics. |
11,703,281 | Using a Machine Learning Model to Assess the Complexity of Stress Systems | We address the task of stress prediction as a sequence tagging problem. We present sequential models with averaged perceptron training for learning primary stress in Romanian words. We use character n-grams and syllable n-grams as features and we account for the consonant-vowel structure of the words. We show in this paper that Romanian stress is predictable, though not deterministic, by using data-driven machine learning techniques. | [
13292908,
6645826,
16648836,
9800133
] | Using a Machine Learning Model to Assess the Complexity of Stress Systems
Ioana Chitoran (Université Paris Diderot, CLILLAC-ARP)
Alina Maria Ciobanu (Center for Computational Linguistics, Faculty of Mathematics and Computer Science, University of Bucharest)
Liviu P. Dinu (Center for Computational Linguistics, Faculty of Mathematics and Computer Science, University of Bucharest)
Vlad Niculae (Max Planck Institute for Software Systems)
Using a Machine Learning Model to Assess the Complexity of Stress Systems
stress prediction, Romanian stress, syllabication, sequence tagging
We address the task of stress prediction as a sequence tagging problem. We present sequential models with averaged perceptron training for learning primary stress in Romanian words. We use character n-grams and syllable n-grams as features and we account for the consonant-vowel structure of the words. We show in this paper that Romanian stress is predictable, though not deterministic, by using data-driven machine learning techniques.
Introduction
The goal of this study is to use a machine learning approach to test the learnability of the stress system of Romanian, and at the same time to verify the analysis of stress proposed by linguists. Romanian is a challenging case study because, at first glance, stress appears to be freely placed and no obvious patterns emerge, suggesting that the only way it can be learned is as part of individual lexical items. The algorithm presented here relies on the analysis of primary stress proposed by Chitoran (1996), which makes crucial use of morphological information to reveal predictable patterns of stress assignment. This analysis is quite complex, and the question of its learnability is an important one. We are specifically interested in determining whether certain parts of the lexicon are less learnable than others, and in analyzing their properties. Syllable structure and stress pattern prove extremely useful in text-to-speech synthesis (TTS), as they provide valuable knowledge with regard to pronunciation modeling, and were therefore thoroughly investigated (Damper et al., 1999; Demberg et al., 2007; Dou et al., 2009; Trogkanis and Elkan, 2010; Bartlett et al., 2008; Dinu, 2003; Dinu and Dinu, 2005; Toma et al., 2009).
The Stress System of Romanian
Traditional Romanian grammars treat primary stress as unpredictable, and therefore lexically assigned (most recently Pană Dindelegan (2013)). Apparent minimal pairs such as ácele (the needles) and acéle (those, fem.) support this conclusion. Chitoran (1996) argues, however, that stress is in fact to a large extent predictable if one takes into account its close dependence on the morphology of the language, specifically the distribution of lexical items by their part of speech and their internal morphological composition. Once this type of information is considered, unpredictability is significantly reduced. For example, if we consider the morphological structure in the pair ácele (noun) - acéle (pronoun), the first lexical item has the structure ac]-e-le (needles), consisting of the root ac, the plural marker -e, and the feminine plural definite article -le. The second item has the structure ace]-le (those, fem.), consisting of a pronoun form and the same definite article -le. Once the forms are decomposed, we see that both bear stress on the final syllable of the root. Stress assignment, therefore, does not include inflectional material that lies beyond the square bracket. Such a fine-grained linguistic analysis shows that a significant amount of unpredictability can be eliminated if the domain of stress is computed over the morphological composition of the lexicon. Nevertheless, lexical marking cannot be entirely dispensed with, because two separate stress patterns coexist in the Romanian lexicon, as a result of historical changes and lexical borrowings in different language contact situations. Each stress pattern is regular and predictable, but no generalizations can be drawn regarding which lexical items belong to which pattern. Chitoran (1996) identifies the regularities in each pattern by considering both phonological and morphological factors. Generalizations emerge when lexical items are grouped by parts of speech. This organization of the data reveals unifying generalizations for verbs, adjectives, and nouns, regarding:
• the distance of the stressed syllable from the right edge of the word;
• the shape of the final syllable -CV or CVC.
We present the relevant generalizations organized by part of speech and the shape of the final syllable for each stress pattern.
Nouns and adjectives. In pattern 1, when the final syllable is CV, stress falls on the penultimate syllable, the second from the end: sá.re (salt).

Verbs. The stress pattern of verbs is related to the conjugation class. Romanian has four verb conjugations. In the following infinitive forms, the final theme vowel which marks the conjugation class is underlined: (I) cântá (to sing), (II) vedeá (to see), (III) spúne (to say), (IV) dormí (to sleep). Verbs of the first, second, and fourth conjugation stress the theme vowel, while third conjugation verbs stress the root. This alternation in the location of stress between the theme vowel and the root is maintained throughout verb paradigms. As for nouns and adjectives, the stress domain excludes inflectional markers and contains only the root and the theme vowel. Consider, for example, the present tense forms of (I) cânta (to sing):
(1) Pattern 1 - cântá (I)
    cânt]        I sing
    cânţ]i       you sing
    cânt]ă       s/he sings
    cânt-ă]m     we sing
    cânt-a]ţi    you sing
    cânt]ă       they sing
The main generalization for the verb system is: stress the rightmost syllable of the stress domain. When the theme vowel is present, it belongs to the rightmost syllable and it is always stressed. Otherwise the rightmost syllable of the root is stressed. As for nouns, a second stress pattern is found for verbs. For example, (I) cumpărá (to buy) is conjugated as follows:
(2) Pattern 2 - cumpărá (I)
    cumpăr]      I buy
    cumper]i     you buy
    cumpăr]ă     s/he buys
    cumpăr-ă]m   we buy
    cumpăr-a]ţi  you buy
    cumpăr]ă     they buy
In the second pattern stress falls on the penultimate syllable of the root when the theme vowel is absent, and on the theme vowel when it is present.
Questions
We are first of all interested in testing the general learnability of this analysis, and in determining whether certain subpatterns are more difficult to identify than others. The proposed linguistic analysis did not include words containing glides. This gives us the opportunity to extend the algorithm as is to these additional forms and to test its performance. This paper also relies on the assumption that the stress system of Romanian is predictable from the distribution of the lexical items among parts of speech. Unlike Chitoran's analysis, our system does not factor out inflections. When applied to fully inflected forms it detects a much higher number of stress pattern classes, with much more complex structures. For instance, while Chitoran distinguishes 6 patterns for disyllabic words (CV-CVC, CVC-CVC, CV-CVCC, CVC-CVCC, CV-CVC and CVC-CVC, differing in the position of stress), we automatically identify 447 distinct patterns for disyllabic words in the RoSyllabiDict dataset (which is described in detail in Section 4), including Chitoran's patterns 1 and 2. We count word types in our analysis. Almost all of these patterns are not deterministic with regard to stress assignment, that is, the dictionary indicates more than one possible stress location. For example, for the CCV-CVC pattern, which has 1,390 occurrences in our dataset, we identify two positions for stress placement, with 1,017 and 373 occurrences respectively.
Data
We run our experiments for Romanian using the RoSyllabiDict (Barbu, 2008) dictionary, which is a dataset of annotated words comprising 525,528 inflected forms for approximately 65,000 lemmas. For each entry, the unsyllabified form, the syllabication, the stressed vowel (and, in case of ambiguities, also grammatical information or type of syllabication) are provided. For example, the word copii (children) has the following representation:
<form w="copii" obs="s."> co-píi</form>
We investigate stress placement with regard to the syllable structure and we provide in Table 1 the percentages of words having the stress placed on different positions, counting syllables both from the beginning and from the end of the words. Dinu and Dinu (2009) show that the probability distribution of the n-syllabic lemmas in RoSyllabiDict follows a Poisson distribution. We investigate the C/V structure of the words in RoSyllabiDict using raw data, i.e., a, ă, â, e, i, î, o, u are always considered vowels and the rest of the letters in the Romanian alphabet are considered consonants. Thus, we identify a very large number of C/V structures, most of which are not deterministic with regard to stress assignment, having more than one choice for placing the stress. For example, for the CCV-CVC structure (1,390 occurrences in our dataset) there are 2 associated stress patterns, with 1,017 and 373 occurrences respectively. Words with 6 syllables cover the highest number of distinct C/V structures (5,749). There are 31 C/V structures (ranging from 4 to 7 syllables) reaching the maximum number of distinct associated stress patterns (6).
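The C/V abstraction just described is straightforward to compute; the sketch below maps a syllabified word to its C/V skeleton, treating a, ă, â, e, i, î, o, u as vowels and everything else as consonants. Stripping the acute stress mark before mapping is our own choice.

```python
import unicodedata

VOWELS = set("aăâeiîou")

def strip_stress_accent(s):
    # remove the acute stress mark but keep Romanian diacritics (ă, â, î, ş, ţ)
    return "".join(ch for ch in unicodedata.normalize("NFD", s) if ch != "\u0301")

def cv_structure(syllables):
    """Map a syllabified word to its C/V skeleton, e.g. ['co', 'píi'] -> 'CV-CVV'."""
    out = []
    for syl in syllables:
        syl = unicodedata.normalize("NFC", strip_stress_accent(syl.lower()))
        out.append("".join("V" if ch in VOWELS else "C" for ch in syl))
    return "-".join(out)

print(cv_structure(["co", "píi"]))   # CV-CVV
print(cv_structure(["cre", "ion"]))  # CCV-VVC
```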
For our experiments, we discard words which do not have the stressed vowel marked (3,430 words), compound words having more than one stressed vowel (1,668 words) and ambiguous words, either regarding their part of speech or type of syllabication, marked in the dataset in the obs. field (20,123 words).
Romanian Stress Prediction
We address the task of stress prediction for Romanian words as a sequence tagging problem, extending the method proposed by Ciobanu et al. (2014). In this paper, we account only for the primary stress, but this approach allows further development in order to account for secondary stress as well.
In order to investigate the predictability of the stress system of Romanian, we employ Chitoran's hypothesis regarding the dependence of the stress placement on the morphology of the language; we conduct a detailed experiment dividing the Romanian words based on their part of speech and for verbs we introduce a further level of granularity by accounting for the conjugation class, as described in Section 2. We train and evaluate four systems for the automatic prediction of stress placement for Romanian words: a "majority-class" type of baseline and three systems using averaged perceptron for parameter estimation: a sequential model with character n-gram features and two cascaded models; each consists of two sequential models trained separately (one for syllabication and another one for stress prediction), the output of the first being used as input for the second. One of the cascaded models uses character n-grams and the other uses syllable n-grams and both systems employ additional information regarding stress placement and word structure.
Baseline
We use a "majority class" type of baseline which employs the C/V structures described in Section 4 and assigns, for a word in the test set, the stress pattern which is most common in the training set for the C/V structure of the word, or places the stress randomly on a vowel if the C/V structure is not found in the training set. For example, the word copii (meaning children) has the following C/V structure: CV-CVV. In our training set there are 659 words with this structure, and three stress patterns occur for it, with 309, 283 and 67 occurrences respectively, each placing the stress on a different vowel position. The most common of these patterns marks the same vowel that is stressed in co-píi, so the stress of copii is, in this case, correctly assigned.
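The baseline amounts to a lookup table from C/V structures to their majority stress position, with a random vowel slot chosen for unseen structures. A minimal sketch, assuming each training item is a (C/V structure, stressed vowel slot) pair, where the slot indexes the dash-free skeleton:

```python
import random
from collections import Counter, defaultdict

def train_baseline(train_items):
    """train_items: (cv_structure, stressed_slot) pairs; stressed_slot is the
    index of the stressed V in the dash-free C/V skeleton."""
    counts = defaultdict(Counter)
    for structure, stress in train_items:
        counts[structure][stress] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

def predict_baseline(model, cv_struct):
    if cv_struct in model:
        return model[cv_struct]
    # unseen structure: place the stress on a random vowel slot
    skeleton = cv_struct.replace("-", "")
    return random.choice([i for i, ch in enumerate(skeleton) if ch == "V"])

model = train_baseline([("CV-CVV", 3), ("CV-CVV", 3), ("CV-CVV", 1)])
print(predict_baseline(model, "CV-CVV"))  # 3: the majority stress position
```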
Sequential Model
We use a simple tagging structure for marking primary stress. The stressed vowel receives the positive tag 1, while all previous characters are tagged 0 and all subsequent ones 2. This structure helps enforce the uniqueness of the positive tag.
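A minimal sketch of this encoding, assuming the stressed vowel is given by its character index; the function name is our own choice.

```python
def stress_tags(word, stressed_index):
    """Encode primary stress as a 0/1/2 tag per character: 0 before the
    stressed vowel, 1 on it, 2 after it."""
    return [0 if i < stressed_index else 1 if i == stressed_index else 2
            for i in range(len(word))]

print(stress_tags("diamond", 1))  # [0, 1, 2, 2, 2, 2, 2]
```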
Cascaded Model with Character n-grams
The cascaded model consists of two sequential models, the output of the first one being used as a form of input (features) for the second one. We use a syllabication model to predict syllable boundaries and for stress prediction we use another one, similar to the baseline and including two additional types of features:
• syllable structure features regarding vowel/consonant sequences: n-grams using, instead of characters, markers for consonants (C) and vowels (V);
• binary indicators of the following positional statements about the current character:
exactly before/after a split; in the first / second / third / fourth syllable of the word, counting from left to right; in the first / second / third / fourth syllable of the word, counting from right to left.
Following the method proposed by Dinu et al. (2013), the syllabication prediction is performed with another sequential model of length n − 1, where each node corresponds to a position between two characters. Based on experimenting and previous work, we adopted the Numbered NB labeling (Bartlett et al., 2008). Each position is labeled with an integer denoting the distance from the previous boundary. For example, for the word diamond, the syllable (above) and stress annotation (below) is:
d   i   a   m   o   n   d
  1   0   0   1   2   3        (syllabication labels, one per position between characters)
0   1   2   2   2   2   2      (stress tags, one per character)
The features used for syllabication are based on the same principle, but because the positions are in-between characters, the window of radius W has length 2W instead of 2W + 1. For this model we used only character n-grams as features.
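The Numbered NB labels can be derived directly from a hyphenated form; the sketch below is our own reconstruction of the labeling rule stated above and reproduces the diamond example.

```python
def numbered_nb_labels(hyphenated):
    """Label each between-character position with its distance from the
    previous syllable boundary, e.g. 'di-a-mond' -> [1, 0, 0, 1, 2, 3]."""
    labels = []
    distance = 1
    for i, ch in enumerate(hyphenated):
        if ch == "-":
            continue
        nxt = hyphenated[i + 1] if i + 1 < len(hyphenated) else ""
        if nxt == "":
            break          # no gap follows the final character
        if nxt == "-":
            labels.append(0)   # this position is a boundary
            distance = 1
        else:
            labels.append(distance)
            distance += 1
    return labels

print(numbered_nb_labels("di-a-mond"))  # [1, 0, 0, 1, 2, 3]
```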
Cascaded Model with Syllable n-grams
This cascaded model is similar to the previous one, but uses, for the second sequential model, syllable n-grams instead of character n-grams. For example, if the current character is the second o in accommodation and W = 2, the feature values would be ac, com, mo, da, tion, accom, commo, moda, dation. For training and model selection, we use the gold syllable structure from the dataset.
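A sketch of this windowed syllable n-gram extraction, reproducing the accommodation example; the way characters and syllables are indexed is our own assumption.

```python
def syllable_ngram_features(syllables, char_index, w=2):
    """Syllable unigrams and bigrams in a window of w syllables around the
    syllable containing the character at char_index."""
    total, syl_idx = 0, 0
    for i, syl in enumerate(syllables):          # locate the containing syllable
        if char_index < total + len(syl):
            syl_idx = i
            break
        total += len(syl)
    lo, hi = max(0, syl_idx - w), min(len(syllables), syl_idx + w + 1)
    window = syllables[lo:hi]
    bigrams = ["".join(window[i:i + 2]) for i in range(len(window) - 1)]
    return window + bigrams

# 'accommodation' = ac-com-mo-da-tion; the second 'o' is character index 6
print(syllable_ngram_features(["ac", "com", "mo", "da", "tion"], 6, w=2))
# ['ac', 'com', 'mo', 'da', 'tion', 'accom', 'commo', 'moda', 'dation']
```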
Experiments and Results Analysis
In this section we present and analyse the main results drawn from our research on Romanian stress assignment.
Experiments
We split the dataset in two subsets: a train set (on which we perform cross-validation to select optimal parameters for our model) and a test set (with unseen words, on which we evaluate the performance of our system). We use the same train/test sets for the two sequential models, but they are trained independently. The output of the first model (used for predicting syllabication) is used for determining feature values for the second one (used for predicting stress placement) for the test set. The second model is trained using gold syllabication (provided in the dataset) and we report results on the test set in both versions: using gold syllabication to determine feature values and using predicted syllabication to determine feature values. The results with gold syllabication are reported only for providing an upper bound for learning and for comparison. We use averaged perceptron training (Collins, 2002) from CRFsuite. For the stress prediction model we optimize hyperparameters using grid search to maximize the 3-fold cross-validation F1 score of class 1, which marks the stressed vowels. We search over {2, 3, 4} for W, and over {1, 5, 10, 25, 50} for the maximum number of iterations. For the stress prediction systems, the optimal window radius W was 4 and the maximum number of iterations 50 when using character n-grams, and when using syllable n-grams the optimal window radius W was 3 and the maximum number of iterations 50. We investigate, during grid search, whether employing C/V markers and binary positional indicators improves the cascaded systems' performance. It turns out that in most cases they do. For the syllabication model, the optimal hyperparameters are 4 for the window radius and 50 for the maximum number of iterations. We evaluate the cross-validation F1 score of class 0, which marks the position of a hyphen. Further, we divide words based on their part of speech (nouns, adjectives and verbs, with one group for each conjugation class) and we train and evaluate the cascaded models independently on each category in the same manner as we did for the entire dataset. We decided to use cross-validation for parameter selection instead of splitting the data in train/dev/test subsets in order to have consistency across all models, because some of these word categories do not comprise enough words for splitting in three subsets (verbs of the fourth conjugation class, for example, have only 1,385 instances). In Table 2 we provide the number of words in each category for the RoSyllabiDict dataset. The results drawn from our research are reported and analysed in the following subsections.
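The model selection protocol can be sketched as a plain grid search over the two hyperparameters; the training and scoring routine is left as a placeholder, since the exact CRFsuite invocation is not reproduced here.

```python
from itertools import product
from statistics import mean

def grid_search(folds, train_and_score):
    """folds: list of (train, dev) splits; train_and_score(train, dev, w, iters)
    returns the F1 of the stressed-vowel class (class 1) on dev."""
    best_params, best_score = None, -1.0
    for w, iters in product([2, 3, 4], [1, 5, 10, 25, 50]):
        score = mean(train_and_score(tr, dv, w, iters) for tr, dv in folds)
        if score > best_score:
            best_params, best_score = (w, iters), score
    return best_params, best_score
```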
Results Analysis
In Table 3 we report the results of all models on the entire RoSyllabiDict dataset. We report word-level accuracy (instance accuracy), that is, we account for words for which the stress pattern was correctly assigned. As expected, all sequential models significantly outperform the baseline. The best performance is obtained by the cascaded model with gold syllabication and character n-grams, which obtains 0.975 instance accuracy.
Model                                       Instance accuracy
Baseline                                    0.637
Sequential                                  0.974
Cascaded (gold, character n-grams)          0.975
Cascaded (predicted, character n-grams)     0.973
Cascaded (gold, syllable n-grams)           0.955
Cascaded (predicted, syllable n-grams)      0.684

Table 3: Instance accuracy for stress prediction.
The cascaded model with character n-grams obtains better performances overall and for each part of speech category as well. The highest overall instance accuracy is 0.975, obtained by the cascaded model with gold syllabication. As expected, when words are divided in groups based on their parts of speech, the systems are able to predict stress placement with higher accuracy. Best performances are obtained for verbs (all four conjugations), followed by adjectives, while stress placement for nouns is predicted with lowest accuracy. The system with character n-grams substantially outperforms the system with syllable n-grams in both versions, with gold and with predicted syllabication.
Error Analysis
In Table 6 we report the distribution of the words for which the best performing system (the cascaded model with gold syllabication and character n-grams) did not correctly predict the stress placement, counting syllables from right to left. For most verbs (first, second and third conjugation) and for nouns, the stress is most frequently misplaced when it is located on the penultimate syllable (the second syllable from right to left). For adjectives, almost half of the errors occur when the stress is placed on the last syllable (48.08%), while for verbs of the fourth conjugation more than half of the errors occur when the stress is placed on the antepenultimate syllable (the third syllable from right to left).
Conclusion and Future Work
Syllable structure is important and helps the task of stress prediction. This is consistent with linguistic analysis, which shows that the syllable is the stress-bearing unit. The cascaded models using gold syllabication outperform their equivalent systems with predicted syllabication by only very little. For real applications, such systems, which require less or no linguistic knowledge, are needed for words that cannot be found in datasets and for which gold splits are therefore not available. We intend to evaluate the system on other languages, as there is nothing language-specific in the pipeline. Both the linguistic and the machine learning approach presented here test the hypothesis that the stress system of Romanian is predictable. They both reach the conclusion that only parts of it are. The main difference between the two lies in the number of different patterns identified. Chitoran (1996) reduces the number of patterns by considering fine details of word structure. The learning model, on the other hand, is applied to raw data, namely citation forms presented to the model in written form. It has thus identified a larger number of separate patterns. This discrepancy in the results motivates further work that would investigate the possibility of adapting the learning model to a more fine-grained linguistic analysis.
Table 1: Stress placement for RoSyllabiDict.

The features used are character n-grams up to n = W in a window of radius W around the current position. For example, if W = 2, the feature template consists of c[-2], c[-1], c[0], c[1], c[2], c[-2:-1], c[-1:0], c[0:1], and c[1:2]. If the current character is the fourth of the word dinosaur, o, the feature values would be i, n, o, s, a, in, no, os, sa.
Table 2: Number of words in each subcategory for RoSyllabiDict.

Further, we perform an in-depth analysis of the cascaded models' performance on part of speech based categories. The test results of both cascaded systems for RoSyllabiDict subsets split based on part of speech are reported in Tables 4, 5, 7 and 8. We account for word-level correct stress placement (instance accuracy) and character-level correct stress placement (item accuracy). The cascaded models using gold syllabication outperform their equivalent systems with predicted syllabication by only very little. For real applications, such systems, which require less or no linguistic knowledge, are needed for words that cannot be found in datasets, and therefore gold splits are not available.
POS         Conj.   Item accuracy   Instance accuracy   Correct predictions
Verbs       1       0.999           0.997               56,324
Verbs       2       0.998           0.996               3,749
Verbs       3       0.999           0.997               691
Verbs       4       0.999           0.999               30,358
Nouns       -       0.993           0.979               130,746
Adjectives  -       0.997           0.992               48,194

Table 4: Results for stress prediction system with character n-grams and gold syllabication for feature extraction.
POS         Conj.   Item accuracy   Instance accuracy   Correct predictions
Verbs       1       0.999           0.997               56,320
Verbs       2       0.998           0.994               3,743
Verbs       3       0.999           0.997               691
Verbs       4       0.999           0.998               30,333
Nouns       -       0.993           0.979               130,696
Adjectives  -       0.997           0.992               48,195

Table 5: Results for stress prediction system with character n-grams and predicted syllabication for feature extraction.
Table 6: The distribution of the words for which the stress placement was not correctly predicted, based on the index of the stressed syllable. We report both the number and the percentage (%) of words in each category. Syllables are counted from right to left.

POS         Conj.   Item accuracy   Instance accuracy   Correct predictions
Verbs       1       0.996           0.986               55,702
Verbs       2       0.984           0.933               3,512
Verbs       3       0.982           0.923               640
Verbs       4       0.992           0.966               29,360
Nouns       -       0.987           0.958               127,929
Adjectives  -       0.993           0.974               47,364

Table 7: Results for stress prediction system with syllable n-grams and gold syllabication for feature extraction.
POS         Conj.   Item accuracy   Instance accuracy   Correct predictions
Verbs       1       0.961           0.842               47,577
Verbs       2       0.954           0.833               3,136
Verbs       3       0.903           0.587               407
Verbs       4       0.878           0.541               16,445
Nouns       -       0.921           0.725               96,844
Adjectives  -       0.924           0.722               35,115

Table 8: Results for stress prediction system with syllable n-grams and predicted syllabication for feature extraction.
Acknowledgements

The authors thank the anonymous reviewers for their helpful and constructive comments. The contribution of the authors to this paper is equal. Research supported by a grant of the Romanian National Authority for Scientific Research, CNCS UEFISCDI, project number PN-II-ID-PCE-2011-3-0959.

References

Ana-Maria Barbu. 2008. Romanian Lexical Data Bases: Inflected and Syllabic Forms Dictionaries. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).
Susan Bartlett, Grzegorz Kondrak, and Colin Cherry. 2008. Automatic Syllabification with Structured SVMs for Letter-to-Phoneme Conversion. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT 2008, pages 568-576.
Ioana Chitoran. 1996. Prominence vs. rhythm: The predictability of stress in Romanian. In Karen Zagona, editor, Grammatical Theory and Romance Languages, pages 47-58.
Alina Maria Ciobanu, Anca Dinu, and Liviu P. Dinu. 2014. Predicting Romanian Stress Assignment. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2014.
Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, EMNLP 2002, pages 1-8.
Robert I. Damper, Yannick Marchand, M. J. Adamson, and K. Gustafson. 1999. Evaluating the pronunciation component of text-to-speech systems for English: a performance comparison of different approaches. Computer Speech & Language, 13(2):155-176.
Vera Demberg, Helmut Schmid, and Gregor Möhler. 2007. Phonological Constraints and Morphological Preprocessing for Grapheme-to-Phoneme Conversion. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, ACL 2007, pages 96-103.
Liviu P. Dinu and Anca Dinu. 2005. A Parallel Approach to Syllabification. In Proceedings of the 6th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2005, pages 83-87.
Anca Dinu and Liviu P. Dinu. 2009. On the Behavior of Romanian Syllables Related to Minimum Effort Laws. In Proceedings of the Workshop on Multilingual Resources, Technologies and Evaluation for Central and Eastern European Languages, pages 9-13.
Liviu P. Dinu, Vlad Niculae, and Octavia-Maria Șulea. 2013. Romanian Syllabication Using Machine Learning. In Proceedings of the 16th International Conference on Text, Speech and Dialogue, TSD 2013, pages 450-456.
Liviu Petrisor Dinu. 2003. An Approach to Syllables via some Extensions of Marcus Contextual Grammars. Grammars, 6(1):1-12.
Qing Dou, Shane Bergsma, Sittichai Jiampojamarn, and Grzegorz Kondrak. 2009. A Ranking Approach to Stress Prediction for Letter-to-Phoneme Conversion. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, ACL 2009, pages 118-126.
Gabriela Pană Dindelegan. 2013. The Grammar of Romanian. Oxford University Press.
S.-A. Toma, E. Oancea, and D. Munteanu. 2009. Automatic rule-based syllabication for Romanian. In Proceedings of the 5th Conference on Speech Technology and Human-Computer Dialogue, SPeD 2009, pages 1-6.
Nikolaos Trogkanis and Charles Elkan. 2010. Conditional Random Fields for Word Hyphenation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, pages 366-374.
16,334,135 | Leveraging Preposition Ambiguity to Assess Compositional Distributional Models of Semantics | Complex interactions among the meanings of words are important factors in the function that maps word meanings to phrase meanings. Recently, compositional distributional semantics models (CDSM) have been designed with the goal of emulating these complex interactions; however, experimental results on the effectiveness of CDSM have been difficult to interpret because the current metrics for assessing them do not control for the confound of lexical information. We present a new method for assessing the degree to which CDSM capture semantic interactions that dissociates the influences of lexical and compositional information. We then provide a dataset for performing this type of assessment and use it to evaluate six compositional models using both co-occurrence based and neural language model input vectors. Results show that neural language input vectors are consistently superior to co-occurrence based vectors, that several CDSM capture substantial compositional information, and that, surprisingly, vector addition matches and is in many cases superior to purpose-built paramaterized models. | [
11691908,
629094,
15659560,
6331273,
11567084,
436023,
18597583,
15616495,
16639476,
85205,
17414711,
990233,
18193242
] | Leveraging Preposition Ambiguity to Assess Compositional Distributional Models of Semantics
Samuel Ritter (Princeton University)
Cotie Long (Indiana University)
Denis Paperno (University of Trento)
Marco Baroni (University of Trento)
Matthew Botvinick (Princeton University)
Adele Goldberg (Princeton University)
Leveraging Preposition Ambiguity to Assess Compositional Distributional Models of Semantics
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics (*SEM 2015), Denver, Colorado, June 4-5, 2015
Complex interactions among the meanings of words are important factors in the function that maps word meanings to phrase meanings. Recently, compositional distributional semantics models (CDSM) have been designed with the goal of emulating these complex interactions; however, experimental results on the effectiveness of CDSM have been difficult to interpret because the current metrics for assessing them do not control for the confound of lexical information. We present a new method for assessing the degree to which CDSM capture semantic interactions that dissociates the influences of lexical and compositional information. We then provide a dataset for performing this type of assessment and use it to evaluate six compositional models using both co-occurrence based and neural language model input vectors. Results show that neural language input vectors are consistently superior to co-occurrence based vectors, that several CDSM capture substantial compositional information, and that, surprisingly, vector addition matches and is in many cases superior to purpose-built parameterized models.
Introduction
Consider the meanings of the following phrases: "red apple," "red hair," and "red state." The meaning of the word "red" in each of these examples interacts with the meaning of the noun it modifies, applying a different color to the first two and a political affiliation to the third. This is an example of a common phenomenon in natural language in which the meaning of a whole expression is not derived from a simple concatenation of its parts, but is composed by interactions among their meanings. (* Please address correspondence to the first author at [email protected].)
Cognitive and computer scientists have pointed out this complexity and proposed various models for accommodating it (Kintsch, 2001;Mitchell and Lapata, 2010;Socher et al., 2013). A dominant modeling approach seeks to learn functions that combine word representations derived from the distributional structure of large natural language corpora (Deerwester et al., 1990;Landauer and Dumais, 1997). Because the word representations to be combined and the compositional functions are generated based on the distributions of words in corpora, these models have been dubbed compositional distributional semantic models, or CDSM (Marelli et al., 2014). CDSM produce fixed-dimensional vector representations of arbitrary sentences and phrases, and the foundational principle of these models is, stated simply, that semantically similar phrases should have vector representations that are close together in the vector space.
CDSM Assessment
Past studies have tested how well CDSM adhere to this principle by comparing the vector similarity of pairs of sentences with similarity ratings given by humans. Many of these studies used datasets in which the amount of lexical overlap between the sentence pairs is not carefully controlled, e.g., the datasets of Dolan and Brockett (2005) and Agirre et al. (2014). One such study obtained the influential result that on such a dataset, simple composition models such as vector addition perform comparably to a state-of-the-art composition model (Blacoe and Lapata, 2012). The success of these simplistic models led to the conjecture that these data sets fail to assess critical aspects of language (Baroni et al., 2014a) and leaves open the question of whether CDSM would outperform simplistic models in a setting in which lexical cues are uninformative.
In the present study, we develop a method for removing the confound of lexical cues from CDSM assessment. The method is to create a set of sentences where each sentence fits into a semantic category and where a sentence's semantic category cannot be determined based on any individual word in the sentence. CDSM are then challenged to create a vector space in which the representations for sentences in a given category cluster together, even though the individual word vectors do not cluster together. This clustering can be tested by training a simple linear classifier on the CDSM representations, then testing it on representations for held out sentences.
Here, we build a suitable test set by leveraging the lexical ambiguity inherent in locative expressions. Locative expressions are phrases that describe a spatial relationship between two objects using two nouns joined by a preposition; for example, "The magnet is on the refrigerator", which describes the relationship of adhesion to a vertical surface. Crucially, the spatial relationship between the two nouns in a locative expression is undetermined by the spatial preposition, and can only be determined based on semantic interactions among the prepositions and the two nouns (Herskovits, 1985).
For example, while "The magnet is on the refrigerator" describes the spatial relationship of adhesion to a vertical surface, "The apple is on the refrigerator" describes support by a horizontal surface. In order to classify a new sentence, e.g., "The magnet is on the papers", into the correct category of support by a horizontal surface, the CDSM vectors for the three sentences must encode the fact that "The magnet is on the papers" shares a common spatial relationship with "The apple is on the refrigerator" and not with "The magnet is on the refrigerator", even though the latter pair of sentences share more words than the former. Given this dissociation between lexical overlap and spatial relationship, we were able to construct a dataset wherein lexical information is uninformative, and models must rely on compositionality to score well in classification.
Relation to Past Work
This approach to CDSM assessment is similar to a previous method wherein polysemous verbs are paired with disambiguating nouns in transitive or intransitive verb phrases. These phrases are then matched with "landmark" verbs that are either similar or not similar in meaning to the full phrase. CDSM are then challenged to create representations of the phrases from which classifiers can determine whether or not a phrase is similar to its landmark verb (Kintsch, 2001;Mitchell and Lapata, 2008;Mitchell and Lapata, 2010;Grefenstette and Sadrzadeh, 2011). Another notable CDSM assessment task involves matching a phrase with a word with a similar meaning, for example, matching a short dictionary definition with the word it defines (Kartsaklis et al., 2012;Turney, 2014).
While these methods are applicable only to simple phrases that can be mapped reasonably to a single word, the present method can, in principle, be applied to any type of phrase. This allowed us to build a dataset that extends the current landmark word and word matching datasets in at least two important ways. First, it includes function words, specifically prepositions. Second, it requires the characterization of interactions among three words in each expression, whereas previous datasets had two words per expression, or subsets of the words did not interact in complex ways.
Other important approaches to CDSM assessment include rating the similarity of sentence pairs, determining whether two sentences are paraphrases (Dolan and Brockett, 2005), classifying the entailment relationship between two sentences (Marelli et al., 2014), classifying the relationship between two entities named in a sentence (Hendrickx et al., 2009), and classifying the valence of the sentiment expressed in a sentence (Socher et al., 2013). These methods have primarily been aimed at assessing CDSM on the full array of constructions inherent in naturally generated language, while our method aims to isolate a specific construction of interest.
Category                        Example
Adhesion to Vertical Surface    "There is a magnet on the refrigerator."
Support by Horizontal Surface   "There is an apple on the refrigerator."
Support from Above              "There is an apple on the branch."
Full Containment                "There is an apple in the refrigerator."
Partial Containment             "There is an apple in the water."

Table 1: Categories and Example Sentences
The Dataset
A list of all of the spatial categories with examples is given in Table 1. The authors chose the set of categories to produce the desired dissociation between lexical meaning and phrase category, taking inspiration from the observations of Herskovits (1985).
To produce a dataset of expressions fitting these categories, the first and second authors, both native English speakers, generated a large set of locative expressions, intending each expression for a specific category. Then all of the expressions were independently rated by the first two authors, and any expression for which the ratings disagreed was excluded from the dataset. In order to achieve a balanced category size, the second author then created additional sentences intended for underrepresented categories. All additional sentences were stripped of labels and rated independently by the first author. If the first and second authors' categorizations did not match, the sentence was not added to the dataset. The dataset contains 500 sentences in total, with 100 sentences per category. There is a large amount of lexical variety in the set, with 242 distinct words occurring in noun position one and 213 occurring in noun position two. The dataset is publicly available for download at www.princeton.edu/~swritter.
Evaluation Setup
Classification among the five categories was performed using a naive Bayes classifier. Two of the categories contained "in" as the preposition in all sentences while the other three contained "on" in all sentences. To be certain that the held out sentences on which the classifier was tested did not contain even a single category-informative noun, we operationally defined informativeness and relegated all sentences with an informative noun to the training set. A noun was deemed informative if it both occurred more than once in the entire data set and it occurred more frequently in one category than in any other. This criterion yielded a set of 80 sentences with no informative nouns, and a set of 420 sentences with at least one informative noun. By this method, we ensured that no component of the models' classification accuracy on the test set is due to the recognition of individual nouns.
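The informativeness criterion and the resulting train/test split can be stated compactly; the sketch below assumes each sentence has been reduced to a (noun1, preposition, noun2, category) tuple, which is our own simplification of the data.

```python
from collections import Counter, defaultdict

def informative_nouns(sentences):
    """A noun is informative if it occurs more than once overall and occurs
    more often in one category than in any other."""
    total = Counter()
    by_cat = defaultdict(Counter)
    for n1, prep, n2, cat in sentences:
        for noun in (n1, n2):
            total[noun] += 1
            by_cat[cat][noun] += 1
    informative = set()
    for noun, count in total.items():
        if count <= 1:
            continue
        per_cat = sorted((by_cat[c][noun] for c in by_cat), reverse=True)
        if len(per_cat) < 2 or per_cat[0] > per_cat[1]:
            informative.add(noun)
    return informative

def split_train_test(sentences):
    inf = informative_nouns(sentences)
    train = [s for s in sentences if s[0] in inf or s[2] in inf]
    test = [s for s in sentences if s[0] not in inf and s[2] not in inf]
    return train, test
```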
In addition to the CDSM, we included two nondistributional models for comparison. The first, referred to as word overlap, consists of a binary feature vector containing one feature per vocabulary item. This model's performance provides an upper-bound on the performance that a model can achieve given only the distribution of word tokens in the training set. The second model, inspired by Srikumar and Roth (2013), contains binary features for Wordnet hypernyms (up to 4 levels) of each sense of the noun and a binary feature for each preposition. This model's score provides an indication of the amount of task-relevant information contained in the taxonomic features of individual words.
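A sketch of the WordNet-based feature extractor using NLTK's WordNet interface; collecting hypernyms up to four levels for every sense of each noun follows the description above, while the exact feature encoding is our assumption.

```python
# requires the WordNet data: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def hypernym_features(noun, max_levels=4):
    """Binary features: names of hypernym synsets up to max_levels above
    any sense of the noun."""
    feats = set()
    frontier = wn.synsets(noun, pos=wn.NOUN)
    for _ in range(max_levels):
        nxt = []
        for syn in frontier:
            for hyp in syn.hypernyms():
                feats.add(hyp.name())
                nxt.append(hyp)
        frontier = nxt
    return feats

def sentence_features(noun1, prep, noun2):
    feats = {"prep=" + prep}
    feats |= {"n1_hyp=" + h for h in hypernym_features(noun1)}
    feats |= {"n2_hyp=" + h for h in hypernym_features(noun2)}
    return feats

print(sorted(sentence_features("magnet", "on", "refrigerator"))[:5])
```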
We compared CDSM to a further control that consisted of the concatenation of the word vectors. The concatenated vectors contain a complete representation of all of the individual word information, so that any performance the CDSM can achieve above the concatenation score can be attributed to semantic interaction information contained in the parameters of the CDSM. [1]

[1] One other experiment we considered was to test the models on the dataset phrases with prepositions removed. However, LF and PLF are undefined for such an input, and the element-wise models trivially perform better with the preposition included because the preposition is the only word that is not stripped of informativeness by design of the task. As such, we excluded this experiment from this report.
Compositional Distributional Models
We compared six models that are currently prominent in the CDSM literature: addition, multiplication (Mitchell and Lapata, 2008), lexical function (LF) (Coecke et al., 2010), practical lexical function (PLF) (Paperno et al., 2014), full additive (FA) (Guevara, 2010;Zanzotto et al., 2010), and the recursive auto-encoder (RAE) (Socher et al., 2011).
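The two element-wise models reduce to summing or multiplying the constituent word vectors; a minimal sketch follows. The embedding lookup is a placeholder, and the parameterized models (LF, PLF, FA, RAE) are not shown here.

```python
import numpy as np

def compose_additive(vectors):
    """Phrase vector as the element-wise sum of the word vectors."""
    return np.sum(vectors, axis=0)

def compose_multiplicative(vectors):
    """Phrase vector as the element-wise product of the word vectors."""
    out = vectors[0].copy()
    for v in vectors[1:]:
        out *= v
    return out

# assuming `emb` maps words to vectors, e.g. for "magnet on refrigerator":
# phrase = [emb["magnet"], emb["on"], emb["refrigerator"]]
# v_add = compose_additive(phrase)
# v_mult = compose_multiplicative(phrase)
```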
The training data for LF, PLF, and FA was the UKWAC+Wikipedia+BNC 2.8 billion word corpus. In training LF, we followed Grefenstette et al. (2013), employing a two-step training regime using corpus-extracted vectors for noun-preposition-noun combinations to estimate matrices of corresponding prepositional phrases, which were in turn used to estimate a three-way tensor of each preposition. For PLF and FA, we learned separate matrices for combining prepositions with each of the two nouns in the construction, using corpus-based vectors of prepositional phrases for training preposition-noun combination. For training composition of the head noun with the prepositional phrase, we used corpusextracted noun+preposition (for lexical matrices in PLF) or attributive adjective+noun (for attributive construction in FA) vectors. Phrase vectors for training were built as DISSECT 'peripheral' spaces from phrase cooccurrence data in the count models. In the predict models, phrase vectors were learned along with word vectors in one pass, feeding all phrases of the relevant type as single tokens.
The RAE vectors were computed using Socher et al.'s implementation which is trained on a 150K sentence subset of the NYT and AP sections of the Gigaword corpus.
For all compositional models, we used as input two varieties of word level representations: cooccurrence based (Turney et al., 2010) and neural language model (Mikolov et al., 2013). Following Baroni et al. (2014b), we will refer to these variants as count and predict models respectively. Both word models were trained on the same corpus as those used to train the compositional models. Count was based on a 5 word window weighted with positive PMI and was reduced to 300 dimensions via SVD, while predict was based on a 5 word window using Mikolov's continuous bag of words approach with negative sampling (Mikolov et al., 2013). These parameters were based on their strong performance in the systematic evaluation by Baroni et al. (2014b). Socher et al.'s RAE implementation composes neural language model vectors described by Collobert and Weston (2008) and supplied by Turian et al. (2010). For comparison with the RAE, we report results for addition, multiplication, and concatenation of these same embeddings.
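A sketch of the count-space construction just described: positive PMI weighting of a word-by-context co-occurrence matrix followed by truncated SVD to 300 dimensions. The co-occurrence matrix is assumed given; the predict vectors would instead come from a CBOW model trained with negative sampling.

```python
import numpy as np

def ppmi(counts):
    """counts: (words x contexts) co-occurrence matrix; returns positive PMI."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    p_wc = counts / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0
    return np.maximum(pmi, 0.0)

def svd_reduce(matrix, k=300):
    u, s, _ = np.linalg.svd(matrix, full_matrices=False)
    k = min(k, len(s))
    return u[:, :k] * s[:k]

# word_vectors = svd_reduce(ppmi(cooccurrence_counts), k=300)
```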
Results
The naive Bayes accuracy scores for all models are displayed in Figure 1. Addition, PLF, and the RAE each substantially outperformed concatenation, indicating that these models' vectors contain information about the semantic interactions between phrase constituents. Addition scored higher than PLF, while the RAE achieved comparable performance to its additive counterpart. In all cases in which predict and count vectors were compared, except FA, predict achieved the higher score. This last result shows that the superiority of predict vectors documented by Baroni et al. (2014b) extends to their use in compositional models.
All of the models performed well above chance accuracy of 0.2. The Wordnet based model achieved accuracy substantially above word overlap using hypernym information, indicating that although each noun is uninformative, its membership in higher level semantic categories is informative. All of the distributional models outperform the nondistributional models, except for LF and FA, which also fail to outperform concatenations of their input vectors. One explanation for the poor performance of LF and FA is that the 2.8B word corpus used to train them did not have sufficient relevant information to specify their large sets of parameters. This explanation is supported by the fact that PLF, a model designed as a parameter-reduced version of LF, performs well.
Discussion
The most important finding of this study is that, even on a test painstakingly designed to exclusively assess composition, vector addition matches or outperforms sophisticated CDSM. This finding implies that the structure of distributional vector spaces admits the effective use of addition for modeling complex interactions between meanings. This suggests that future work should be concerned with understanding the properties of distributional vector spaces that make this possible, as well as with understanding how these properties can be leveraged by sophisticated models.
A further contribution of this work is that it serves as a proof-of-concept for a new method for dissociating the influences of lexical and compositional influences on CDSM performance. Future work can extend this approach by finding alternatives to locative expressions in order to test a wider variety of constructions. More immediately, future work may improve the locative expressions dataset by using crowdsourcing to obtain naive participant ratings to corroborate the expert ratings and to increase the size of the dataset.
Figure 1: Naive Bayes accuracy scores for count and predict variants of several CDSM. Chance performance on this task was 0.2. Overlap refers to the word overlap baseline. CW refers to the vectors from Collobert and Weston (2008).
Acknowledgments
Denis Paperno and Marco Baroni were supported by ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). Samuel Ritter and Matthew Botvinick were supported by Intelligence Advanced Research Projects Activity (IARPA) Grant n. 102-01.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 Task 10: Multilingual semantic textual similarity. In SemEval 2014, page 81.
Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2014a. Frege in space: A program of compositional distributional semantics. Linguistic Issues in Language Technology, 9.
Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014b. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1.
William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In EMNLP, pages 546-556.
Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. arXiv preprint arXiv:1003.4394.
Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160-167. ACM.
Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. JASIS, 41(6):391-407.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of IWP.
Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In EMNLP, pages 1394-1404.
Edward Grefenstette, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. In Proceedings of IWCS, pages 131-142, Potsdam, Germany.
Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of the 2010 Workshop on Geometrical Models of Natural Language Semantics, pages 33-37.
Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 Task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94-99. Association for Computational Linguistics.
Annette Herskovits. 1985. Semantics and pragmatics of locative expressions. Cognitive Science, 9(3):341-378.
Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2012. A unified sentence space for categorical distributional-compositional semantics: Theory and experiments. In Proceedings of COLING: Posters.
Walter Kintsch. 2001. Predication. Cognitive Science, 25(2):173-202.
Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval-2014.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL, pages 236-244.
Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429.
Denis Paperno, Nghia The Pham, and Marco Baroni. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 90-99, Baltimore, Maryland, June.
Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems 24.
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631-1642.
V. Srikumar and D. Roth. 2013. Modeling semantic relations expressed by prepositions. Transactions of the Association for Computational Linguistics, 1:231-242.
Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394. Association for Computational Linguistics.
Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141-188.
Peter D. Turney. 2014. Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908.
Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1263-1271. |
10,760,594 | Avoiding and Resolving Initiative Conflicts in Dialogue * | In this paper, we report on an empirical study on initiative conflicts in human-human conversation. We examined these conflicts in two corpora of task-oriented dialogues. The results show that conversants try to avoid initiative conflicts, but when these conflicts occur, they are efficiently resolved by linguistic devices, such as volume. | [
1006472,
459098,
2570492
] | Avoiding and Resolving Initiative Conflicts in Dialogue *
Association for Computational Linguistics. Copyright Association for Computational Linguistics, April 2007.
Fan Yang
Center for Spoken Language Understanding
OGI School of Science & Engineering Oregon Health & Science University
Peter A Heeman [email protected]
Center for Spoken Language Understanding
OGI School of Science & Engineering Oregon Health & Science University
Avoiding and Resolving Initiative Conflicts in Dialogue *
Proceedings of NAACL HLT 2007
NAACL HLT 2007, Rochester, NY. Association for Computational Linguistics, April 2007.
In this paper, we report on an empirical study on initiative conflicts in human-human conversation. We examined these conflicts in two corpora of task-oriented dialogues. The results show that conversants try to avoid initiative conflicts, but when these conflicts occur, they are efficiently resolved by linguistic devices, such as volume.
Introduction
Current computer dialogue systems tend to be system-initiative. Although there are some mixed-initiative systems that allow the user to make a request or state a goal, such systems are limited in how they follow natural initiative behavior. An example is where the system always releases the turn whenever the user barges in. However, in a complex domain where the computer system and human user are collaborating on a task, the computer system might need to interrupt the human user, or might even need to fight with the human user over the turn. Thus the next generation of computer dialogue systems needs a better model of initiative (Horvitz, 1999). In what situations can the system try to take initiative from the user? What devices can the system use to fight for initiative? We propose examining human-human conversation to answer these questions. Once we understand the conventions people adopt in negotiating initiative, we can implement them in a computer dialogue system to create natural interactivity.
In this research work, we examined two corpora of human-human conversation: the Trains corpus (Heeman and Allen, 1995) and the MTD corpus (Heeman et al., 2005). The purpose of this research is to understand conversants' behavior with initiative conflicts, which we define as situations where both conversants try to direct the conversation at the same time, but one of them fails. We found that (1) conversants try to avoid initiative conflicts; and (2) initiative conflicts, when they occur, are efficiently resolved by linguistic devices, such as volume.
* This work was funded by the National Science Foundation under IIS-0326496.
In Section 2, we review related research work on modeling initiative and turn-taking. Dialogue initiative and turn-taking are two intertwined research topics. When conversants fight to show initiative, they are also fighting for the turn to speak. In Section 3, we describe the two corpora and their annotations. In Section 4, we define initiative conflict and give an example. In Section 5, we present the evidence that conversants try to avoid initiative conflicts. In Section 6, we present evidence that initiative conflicts are efficiently resolved by linguistic devices. We discuss our findings in Section 7 and future work in Section 8.
Related Research
Initiative Models
Researchers have been investigating how people manage dialogue initiative in conversation. Whittaker and Stenton (1988) proposed rules for tracking initiative based on utterance types; for example, statements, proposals, and questions show initiative, while answers and acknowledgements do not. Smith (1993) proposed four different initiative strategies with differing amounts of control by the system. Chu-Carroll and Brown (1998) distinguished dialogue initiative from task initiative, and proposed an evidential model for tracking both of them. Cohen et al. (1998) proposed presenting initiative in different strengths. Some researchers related initiative to discourse structure. Walker and Whittaker (1990) found a correlation between initiative switches and discourse segments. Strayer et al. (2003) proposed the restricted initiative model, in which the initiator of a discourse segment, who introduces the discourse segment purpose, is in control of the segment and shows most of the initiative. These models allow for the possibility that multiple conversants will want to show initiative at the same time; however, none of them addressed initiative conflicts.
Guinn (1998) studied another type of initiative, task initiative, which is about directing the problem-solving of a domain goal. Guinn proposed that the person who is more capable of coordinating the current goal is the person who should be leading the dialogue. Initiative switches between conversants as goals get pushed and popped from the problem-solving stack. However, because conversants only have incomplete information, initiative conflicts might occur when conversants overestimate their own capability or underestimate the other's. Guinn proposed a negotiation model to resolve these conflicts of task initiative. Conversants negotiate by informing each other of positive and negative information about their plans to achieve the goal. By comparing each other's plan, the conversant whose plan has the higher probability of success takes initiative. Guinn's research on conflicts of task initiative, however, has little bearing on conflicts of dialogue initiative. For dialogue initiative, very often, one of the conversants just gives up the attempt very quickly, without giving a justification. As stated by Haller and Fossum (1999): "... conflicts are often simple clashes that result from both participants trying to take the initiative at the same time. Such conflicts do not necessarily require complex negotiation to resolve. Often, unwritten rules based on factors like social roles, personal assertiveness, and the current locus of control play a part in determining who will give away." However, Haller and Fossum did not further investigate how conversants efficiently resolve conflicts of dialogue initiative.
Turn-Taking and Initiative
Turn-taking in conversation is highly related to initiative. Conversants have to possess the turn in order to show initiative. When conversants are fighting for initiative, they are also fighting for the turn to speak. Thus the mechanisms of turn-taking might share some similarity with initiative. On the other hand, turn-taking is different from initiative; for example, an answer takes a turn, but answering does not show initiative.
Turn-taking in conversation has been discussed in the linguistics literature. Duncan (1974) examined cues (gestural, acoustic, and linguistic) that conversants use to signal turn-taking or turn-releasing. A model based on these signals was created to account for conversants' turn-taking behavior. In this model, miscues are the cause of overlapping speech: for example, the hearer misrecognizes the speaker's cue to keep the turn, or the speaker fails to properly signal. Sacks et al. (1974) proposed a set of rules for turn-taking: the current speaker can select somebody else to speak; otherwise, hearers can self-select to speak; otherwise, the speaker can self-select to speak. This model suggested that overlapping speech results from either the hearer waiting too long to speak, or the speaker not waiting long enough. Schegloff (2000) examined overlapping speech in detail in human conversation. He concluded that (1) fights for the turn are often accompanied by sudden acoustic alteration, such as louder volume, higher pitch, and faster or slower speaking rate; (2) the vast majority of fights for the turn are resolved very quickly; and (3) fights for the turn are resolved through an interactive procedure, e.g. syllable-by-syllable negotiation, using devices such as volume, pitch, and speaking rate. However, his analysis consisted of only a few examples; no statistical evidence was given. It is thus unclear whether his conclusions represent human conventions of initiative conflict, or are occasional behavior that would only occur under special circumstances.
Corpora and Annotations
To understand human behavior in initiative conflicts, we examined two corpora, the Trains corpus and the MTD corpus. These two corpora have very different domain setups. The distinct behavior seen in each corpus will help inform us about how domain settings affect initiative, while the common behavior will point to cross-domain human conventions.
The Trains Corpus
The Trains corpus is a collection of human-human taskoriented dialogues, in which two participants work together to formulate a plan involving the manufacture and transportation of goods. One participant, the user, has a goal to solve; and the other participant, the system, knows the detailed domain information including how long it takes to ship and manufacture goods.
We annotated eight Trains dialogues totaling about 45 minutes using the tool DialogueView (Yang et al., 2007). We tagged each utterance with a simplified DAMSL scheme (Core and Allen, 1997). Utterances were tagged as forward or backward functions, stalls, or non-contributions. Forward functions include statements, questions, checks and suggestions. Backward functions include agreements, answers, acknowledgments, repetitions and completions. Examples of stalls are "um" and "let's see", used by a conversant to signal uncertainty of what to say next or how to say it. Non-contributions include abandoned and ignored utterances. The flow of the dialog would not change if non-contributions were removed.
Hierarchical discourse structure was annotated following Strayer et al. (2003). To determine whether a group of utterances form a discourse segment, we took into account whether there exists a shared goal introduced by one of the conversants (cf. Grosz and Sidner, 1986).
The MTD Corpus
The MTD corpus contains dialogues in which a pair of participants play two games via conversation: an ongoing game that takes a relatively long time to finish and an interruption game that can be done in a couple turns but has a time constraint. Both games are done on computers. Players are separated so that they cannot see each other.
In the ongoing game, the two players work together to assemble a poker hand of a full house, flush, straight, or four of a kind. Each player has three cards in hand, which the other cannot see. Players take turns drawing an extra card and then discarding one until they find a poker hand, for which they earn 50 points. To discourage players from simply rifling through the cards to look for a specific card without talking, one point is deducted for each picked-up card, and ten points for a missed or incorrect poker hand. To complete this game, players converse to share card information, and explore and establish strategies based on the combined cards in their hands.
From time to time, the computer generates a prompt for one player to start an interruption game to find out whether the other player has a certain picture on the screen. The interruption game has a time constraint of 10, 25, or 40 seconds, which is (pseudo) randomly determined. Players get five points for the interruption game if the correct answer is given in time. Players are told to earn as many points as possible.
We annotated six MTD dialogues totaling about 90 minutes. Utterances were segmented based on player's intention so that each utterance has only one dialogue act that is to share information, explore strategies, suggest strategies, or maintain an established strategy (Toh et al., 2006). We applied the same simplified DAMSL scheme on utterance tag annotations. Figure 1 shows an annotated excerpt of an MTD dialogue. We grouped utterances into blocks. Block b21 is a game block in which conversants completed a poker hand. Blocks b22 and b23 are two card blocks in which conversants picked up a new card, discussed what they had in hand, and chose a card to discard. Block b24 is an interruption segment in which conversants switched their conversation to the interruption game. No claim is made that the game and card blocks are discourse segments according to Grosz and Sidner's definition (1986).
Defining Initiative Conflicts
An initiative conflict occurs when a conversant's attempt to show initiative fails because someone else is showing initiative at the same time. Following Whittaker and Stenton (1988), we use utterance tags to determine whether an utterance shows initiative: forward functions show initiative while others do not. Non-contributions are viewed as failed attempts to show initiative. Thus we identify initiative conflicts as overlapping utterances that involve either a forward function and a non-contribution or two non-contributions. Figure 2 gives an example of an initiative conflict from the MTD corpus. The top conversant says "that's pair of threes and pair of fours", which ends at time point A. After a short pause, at time B, the bottom conversant asks "how many threes do you have", which is overlapped by the top conversant's second utterance "I'll drop" at time C. The top conversant then abandons the attempt at showing initiative at time D. Hence the bottom speaker is the winner of this initiative conflict. We use the term preceding-pause to refer to the time interval between the end of the previous utterance and the first utterance that is involved in the overlap (from A to B in Figure 2). Offset refers to the interval between the start times of the two overlapped utterances (from B to C). Duration refers to the time interval from the beginning of the overlap till the end of the overlap (from C to D).
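Given utterance start and end times, the three intervals could be computed roughly as in the sketch below (the data structures and the example times are illustrative, not taken from the corpus annotations):

# Illustrative sketch: preceding-pause, offset, and duration of an overlap,
# computed from utterance start/end times (in seconds).
from dataclasses import dataclass

@dataclass
class Utterance:
    start: float
    end: float

def conflict_intervals(previous, first, second):
    # previous: the utterance before the overlap; first/second: the overlapped pair.
    preceding_pause = first.start - previous.end            # A -> B
    offset = second.start - first.start                     # B -> C
    overlap_start = max(first.start, second.start)
    overlap_end = min(first.end, second.end)
    duration = max(0.0, overlap_end - overlap_start)        # C -> D
    return preceding_pause, offset, duration

# Made-up times loosely modelled on the Figure 2 example.
prev = Utterance(0.0, 2.1)     # "that's pair of threes and pair of fours"
u_b = Utterance(2.6, 4.0)      # "how many threes do you have"
u_a = Utterance(2.8, 3.2)      # "I'll drop" (abandoned)
print(conflict_intervals(prev, u_b, u_a))   # roughly (0.5, 0.2, 0.4)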
In the Trains corpus, there are 142 cases of overlapping speech, 28 of which are initiative conflicts. Of the remaining, 96 cases involve a backward function (e.g. an acknowledgment overlapping the end of an inform), and 10 cases involve a stall. The remaining 8 cases are other types of overlap, such as a collaborative completion, or conversants talking about the same thing: for example, one saying "we are a bit early" and the other saying "we are a little better".
In the MTD corpus, there are 383 cases of overlapping speech, 103 of which are initiative conflicts. Of the remaining, 182 cases involve a backward function, 21 cases involve a stall, and 77 cases are others. Initiative conflicts are more frequent in the MTD corpus (103 cases in 90 min) than in the Trains corpus (28 cases in 45 min). There are three cases in the Trains corpus and thirteen cases in the MTD corpus where the preceding-pause is negative, i.e. the first overlapped utterance starts before the other conversant finishes the previous utterance. Sometimes the hearer starts a little bit early to take the turn. If the original speaker does not intend to release the turn, a conflict arises. Because these cases involve three utterances, we exclude them from our current analysis and save them for future research. 1 This leaves 25 cases in the Trains corpus and 90 cases in the MTD corpus for analyzing initiative conflicts.
1 These cases of negative preceding-pause are in fact very interesting. They seem to contradict Sacks et al.'s (1974) model, in which the hearer has priority to self-select to speak. If Sacks et al. are correct, the speaker should wait a certain amount of time in order not to overlap with the hearer, but in these cases we see that the speaker self-selects to speak without taking into account whether the hearer self-selects to speak or not.
Avoiding Initiative Conflicts
In this section, we show that conversants try to avoid initiative conflicts by examining both the offset of initiative conflicts and the urgency levels.
Offset of Initiative Conflicts
The offset of an initiative conflict indicates where the conflict happens. A short offset indicates that the conflict happens at the beginning of an utterance, while a long offset indicates an interruption in the middle. Figure 3 shows the cumulative distribution function (CDF) of offsets for both corpora individually. The mean offset is 138ms for the Trains corpus and 236ms for the MTD corpus. In comparison to the average length of forward utterances (2596ms in the Trains corpus and 1614ms in the MTD corpus), the offset is short. Moreover, in the Trains corpus, 88% of offsets are less than 300ms (and 80% less than 200ms); in the MTD corpus, 75% of offsets are less than 300ms. Thus most initiative conflicts happen at the beginning of utterances. Few initiative conflicts have offsets longer than 500ms. There is one instance in the Trains corpus and eleven in the MTD corpus. Four cases are because the second conversant has something urgent to say. For example, when an interruption game is timing out, conversants would interrupt, sometimes in the middle of an utterance, which results in a long offset. Another six cases are due to miscues. Figure 4 shows an example. Conversant B said "I have two aces" with end-of-utterance intonation, paused for about half a second, and then added "and a seven". The ending intonation and the pause probably misled conversant A to believe that B had finished, and thus A started a new forward utterance, which overlapped with B's extension. A's utterance was then quickly abandoned. In these cases, it is ambiguous whether B's utterance "I have two aces ... and a seven" should be further chopped into two utterances. The final two cases are intrusions, with an example shown in Figure 5. Conversant A cut in probably because he was confident in his decision and wanted to move on to the next card. In such cases, the intruder might be perceived as being rude.
The preponderance of short offsets provides evidence that conversants try to avoid initiative conflicts. When A detects that B is talking, A should not attempt to show initiative until the end of B's utterance in order to avoid conflicts, unless there is an urgent reason. If conversants did not take into account whether someone else is speaking before attempting initiative, we would see many intrusions in the middle of utterances, which in fact rarely happen in the two corpora. As we have shown, initiative conflicts tend to happen at the beginning of utterances. Thus initiative conflicts occur mainly due to unintentional collision, i.e. both conversants happen to start speaking almost at the same time. The fact that the offset of most initiative conflicts is within 300ms confirms this. 2
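The offset statistics reported above (mean offset and the proportion of offsets under a given threshold) can be derived from a list of per-conflict offsets along these lines; the numbers below are synthetic placeholders:

# Illustrative sketch: summarise conflict offsets (mean, empirical CDF points).
import numpy as np

offsets_ms = np.array([40, 80, 120, 150, 180, 210, 250, 290, 340, 620], dtype=float)

print("mean offset:", offsets_ms.mean(), "ms")
for threshold in (200, 300, 500):
    frac = (offsets_ms < threshold).mean()
    print(f"offsets < {threshold}ms: {frac:.0%}")

# Points of the empirical CDF, as plotted in Figure 3.
sorted_offsets = np.sort(offsets_ms)
cdf = np.arange(1, len(sorted_offsets) + 1) / len(sorted_offsets)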
Urgency Level and Initiative Conflicts
To further support the hypothesis that conversants avoid initiative conflicts except for urgent reasons, we examined the MTD corpus for the correlation between the urgency levels of the interruption game and initiative conflicts. For the urgency level of 10 seconds, conversants started 33 interruption games, 8 of which were introduced via initiative conflicts. For 25 seconds, conversants started 36 interruption games, 5 introduced via initiative conflicts. For 40 seconds, conversants started 33 interruption games, 3 introduced via initiative conflicts. Thus the percentages of initiative conflicts for the three urgency levels are 24% for 10 seconds, 14% for 25 seconds, and 9% for 40 seconds. The urgency level of 10 seconds requires conversants to start the interruption game very quickly in order to complete it in time. On the other hand, the urgency level of 40 seconds allows conversants ample time to wait for the best time to start the game (Heeman et al., 2005). Thus we see the percentage of initiative conflicts decreases as it becomes less urgent to the interruption game. These results suggest that conversants try to avoid initiative conflicts if they can, unless there is an urgent reason.
Resolving Initiative Conflicts
In this section, we present evidence that initiative conflicts, if they occur, are resolved very quickly using simple devices.
2 This 300ms might be related to human reaction time.
Duration of Initiative Conflicts
The duration of an initiative conflict, as defined in Section 4, indicates how quickly the conflict is resolved. Figure 6 shows the cumulative distribution function of durations of initiative conflicts and the lengths of forward utterances in the two corpora. The mean duration is 328ms in the Trains corpus and 427ms in the MTD corpus. From Figure 6 we see that the duration is much shorter than the length of forward utterances, which have a mean length of 2596ms in the Trains corpus and 1614ms in the MTD corpus. The difference between the duration of initiative conflicts and the length of forward utterances is statistically significant (p < 10^-5, t-test). On average, the duration of initiative conflicts is about 1/8 the length of forward utterances in the Trains corpus and about 1/4 in the MTD corpus. The short durations suggest that initiative conflicts are resolved very quickly.
According to Crystal and House (1990), the average length of a CVC syllable is about 250ms. Thus, on average, the length of initiative conflicts is about one to two syllables. 3 In fact, 96% of conflicts in the Trains corpus and 73% in the MTD corpus are resolved within 500ms. These observations are consistent with one of Schegloff's (2000) claims about turn-taking conflicts, that they usually take less than two syllables to resolve.
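The duration comparison above amounts to a two-sample test between conflict durations and forward-utterance lengths; a sketch with synthetic values (not the corpus measurements) might look as follows:

# Illustrative sketch: compare conflict durations against forward-utterance
# lengths with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations_ms = rng.normal(loc=350, scale=150, size=25).clip(min=50)      # conflicts
utterances_ms = rng.normal(loc=2600, scale=900, size=300).clip(min=200)  # forward utterances

t_stat, p_value = stats.ttest_ind(durations_ms, utterances_ms, equal_var=False)
print(f"mean duration = {durations_ms.mean():.0f}ms, "
      f"mean utterance length = {utterances_ms.mean():.0f}ms, p = {p_value:.2g}")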
Resolution of Initiative Conflicts
From our definition of initiative conflict, at least one of the speakers has to back off. For expository ease, we refer to the person who gets the turn to contribute as the winner, and the other, who fails, as the yielder. There are two cases in the Trains corpus and three cases in the MTD corpus in which both speakers abandoned their incomplete utterances, paused for a while, and then one of them resumed talking. These five cases are treated as ties (no winners or yielders) and are excluded from our analysis here.
Given how quickly initiative conflicts are resolved, we examined whether the resolution process might be dependent on factors presented before the conflict even begins, namely who was speaker in the previous utterance, and who was interrupted. If we predict that the conversant who spoke prior to the conflict (speaker of u262 in Figure 2) loses, we get 55% accuracy in the Trains corpus and 61% accuracy in the MTD corpus. If we predict the conversant who spoke first in the overlap (speaker of u263 in Figure 2) wins, we get 60% accuracy in the Trains corpus and 53% accuracy in the MTD corpus. These low percentages suggest that they are not robust predictors.
We next examined how conversants resolve the conflicts using devices such as volume, pitch, and others.
Volume
For a stretch of speech, volume is calculated as the mean energy of the spoken words. For each initiative conflict, we calculated each conversant's volume during the overlap, and then normalized it with respect to the conversant's volume throughout the whole conversation. 4 We refer to this as relative volume. In the Trains corpus, the average relative volume of the winner is 1.06, and the average relative volume of the yielder is 0.93. The difference is statistically significant (p < 0.01, ANOVA). In the MTD corpus, the average relative volume of the winner is 1.12, and the average relative volume of the yielder is 0.98. The difference is also statistically significant (p < 10^-6, ANOVA). These results show that the winner is the one speaking at a higher relative volume.
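The relative-volume measure and the "higher relative volume wins" predictor could be computed roughly as sketched below; the energy extraction itself is abstracted away and the arrays are placeholders:

# Illustrative sketch: relative volume during an overlap and the simple
# "higher relative volume wins" predictor.
import numpy as np

def relative_volume(overlap_energy, conversation_energy):
    # Mean energy in the overlap, normalised by the speaker's overall mean energy.
    return np.mean(overlap_energy) / np.mean(conversation_energy)

def predict_winner(conflict):
    # conflict: dict with per-speaker overlap and whole-conversation energy arrays.
    rel = {spk: relative_volume(c["overlap"], c["conversation"])
           for spk, c in conflict.items()}
    return max(rel, key=rel.get)   # speaker with the higher relative volume

# Synthetic example with two speakers.
rng = np.random.default_rng(2)
conflict = {
    "A": {"overlap": rng.normal(1.2, 0.1, 30), "conversation": rng.normal(1.0, 0.1, 3000)},
    "B": {"overlap": rng.normal(0.9, 0.1, 30), "conversation": rng.normal(1.0, 0.1, 3000)},
}
print(predict_winner(conflict))    # likely "A"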
To strengthen our argument, we also calculated the volume ratio as the relative volume of the winner divided by that of the yielder. The average volume ratio in the Trains corpus is 1.16 and in the MTD corpus is 1.18. If a classifier always chooses the speaker with the higher relative volume to be the winner, we achieve about 79% accuracy in both corpora, which is a 29% absolute improvement over random prediction. These results further confirm that the conversant who speaks at a higher relative volume wins the initiative conflict.
Given the importance of volume in the resolution process, we examined whether it has an impact on the duration of initiative conflicts. Figure 7 plots the relation between volume ratio and duration of conflicts for all the cases in the two corpora. For reference, the dotted line divides the data points into two groups: under the line are the cases where volume ratio fails to predict the winner, and above the line are the cases where it succeeds. If we look at the points where volume ratio succeeds, we see that when the duration of an initiative conflict is long, the volume ratio tends to be small: in fact, the average volume ratio for initiative conflicts shorter than 600ms is 1.27, while for those longer than 600ms it is 1.13; the difference is statistically significant (t-test, p < 0.01).
4 Normalization is necessary particularly as conversants heard each other via headsets, and the microphones were not calibrated to have exactly the same gains.
To further understand how volume is used in the resolution procedure, we examined how volume changes during the overlap. For initiative conflicts whose duration is longer than 600ms, we cut the overlapped speech evenly in half, and calculated the relative volume for each half individually. For the first half, the average relative volume of the winner is 1.03, and the yielder is 1.02. The difference is not statistically significant (p = 0.93, paired ttest). For the second half, the average relative volume of the winner is 1.20, and the yielder is 1.02. The difference is statistically significant (p < 0.001, paired ttest). The fact that these long initiative conflicts are not resolved in the first half is probably partially due to the close relative volume.
We then calculated the volume increment by subtracting the relative volume of the first half from that of the second half. The average volume increment of the winner is 0.17; the average volume increment of the yielder is 0. The difference is statistically significant (p < 0.001, paired t-test). These results show that the volume increment during the overlap is larger for the winner than for the yielder. The behavior of increasing volume during the overlap to win the fight suggests that conversants use volume as a device to resolve initiative conflicts.
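The split-half analysis above (relative volume per half of the overlap, and a paired comparison of winner and yielder increments) could be sketched as follows, again with synthetic data:

# Illustrative sketch: split each long overlap in half, compute the relative
# volume of each half, and compare winner vs. yielder increments with a
# paired t-test.
import numpy as np
from scipy import stats

def halves_relative_volume(overlap_energy, conversation_mean):
    mid = len(overlap_energy) // 2
    first = np.mean(overlap_energy[:mid]) / conversation_mean
    second = np.mean(overlap_energy[mid:]) / conversation_mean
    return first, second

rng = np.random.default_rng(3)
n_long_conflicts = 20
winner_increment, yielder_increment = [], []
for _ in range(n_long_conflicts):
    # Winners tend to get louder in the second half; yielders stay flat.
    w1, w2 = halves_relative_volume(np.r_[rng.normal(1.0, 0.05, 40),
                                          rng.normal(1.2, 0.05, 40)], 1.0)
    y1, y2 = halves_relative_volume(rng.normal(1.0, 0.05, 80), 1.0)
    winner_increment.append(w2 - w1)
    yielder_increment.append(y2 - y1)

t_stat, p_value = stats.ttest_rel(winner_increment, yielder_increment)
print(f"mean increments: winner={np.mean(winner_increment):.2f}, "
      f"yielder={np.mean(yielder_increment):.2f}, p={p_value:.2g}")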
Pitch
We used the tool WaveSurfer (Sjölander and Beskow, 2000) to extract the f0 from the audio files. We calculated relative pitch similarly as we did for volume.
In the Trains corpus, the average relative pitch of the winner is 1.02 and the average relative pitch of the yielder is 0.96. The difference is not statistically significant (p = 0.54, ANOVA). In the MTD corpus, the average relative pitch of the winner is 1.09 and the average relative pitch of the yielder is 0.98. The difference is statistically significant (p < 0.001, ANOVA). If we choose the speaker with the higher pitch to be the winner, we achieve about 65% accuracy in the Trains corpus and 62% in the MTD corpus. These results suggest that pitch alone is not a robust predictor of the winner of initiative conflicts, at least not as predictive as volume, although we do see a tendency toward higher pitch by the winner.
We also examined pitch range in the window of 100ms and 300ms respectively. We calculated the pitch range of the overlapping speech and then normalized it with respect to the conversant's pitch range throughout the whole conversation. We did not see a significant correlation between pitch range and the winner of initiative conflicts. Thus pitch does not seem to be a device for resolving initiative conflicts.
Role of Conversants
Human-computer dialogues often have a user interacting with a system, in which the two have very different roles. Hence, we investigated whether the conversant's role has an effect on how initiative conflicts are resolved. We focused on the Trains corpus due to both its rich discourse structure and the difference in the roles that the system and the user have.
In the Trains corpus, if we predict that the initiator of a discourse segment wins the conflicts, we get 65% accuracy. In system-initiated segments, the system wins all eight conflicts; however, in user-initiated segments, the user only wins seven while the system wins eight. The user does not have an advantage during initiative conflicts in his or her own segments. Moreover, if the initiator had an advantage, we would expect the system to have fought more strongly in the user-initiated segments in order to win. However, we do not see that the relative volume of the system when winning in user-initiated segments is statistically higher than in system-initiated segments in this small sample size (p = 0.9, t-test). The initiator does not seem to have a privileged role in the resolution process.
From the above analysis, we see that the system wins the conflicts 16 out of 23 times. Thus if we predict that the system always wins the conflicts, we achieve 70% accuracy. This is not surprising because the system has all the domain information and is more experienced in solving goals. If the system and user want to speak at the same time, both would know that the system probably has the more significant contribution. That the system wins most of the initiative conflicts agrees with Guinn (1998) that capability plays an important role in determining who should show initiative next.
Discussion
In this paper, we present our empirical study of human behavior in initiative conflicts. Our first finding is that conversants try to avoid initiative conflicts. The consequence of initiative conflicts is that at least one of the conversants would have to back off, which makes their effort of contributing in vain. Moreover, the effort of resolving initiative conflicts is overhead to the dialogue. According to the theory of least collaborative effort by Clark and Wilkes-Gibbs (1986), it only makes sense for conversants to interrupt when the loss of not interrupting is higher than the cost of an initiative conflict. Thus the theory of least collaborative effort is consistent with our conclusion that most initiative conflicts are unintentional collisions, except where conversants interrupt in the middle of an utterance for urgency reasons.
The second finding of our research is that initiative conflicts, when they occur, are efficiently resolved. We found that volume plays an important role: the louder speaker wins. We also show how conversants change their volume to resolve initiative conflicts. Conversants probably identify their eagerness of speaking, confidence in what they want to say, and capability of achieving the current goal by means of volume, which resolves the initiative conflicts very quickly.
Domain settings obviously have an impact on conversants' initiative behavior. Initiative conflicts are more frequent in the MTD corpus than in the Trains corpus. Moreover, the roles of the conversants also affect their initiative behavior, as we found that the system wins more initiative conflicts in the Trains corpus. In a teacher-student conversation, one would expect to see not only that the teacher interrupts the student more often than vice versa, but also that the teacher wins more initiative conflicts. Capability, culture, and social relationship are probably some of the underlying elements that influence when and under what conditions conversants seek initiative, while volume is a device for resolving initiative conflicts.
Future Work
In this paper we focused on initiative conflicts in dialogue where two conversants cannot see each other. In face-to-face conversation, there might be other cues, such as eye contact, head nodding, and hand gestures, that conversants use in initiative conflicts. Moreover, in a multi-party conversation, a conversant might talk to different people on different topics, and get interrupted from time to time, which leads to an initiative conflict involving multiple speakers. In our future work, we plan to examine initiative conflicts in face-to-face multi-party conversation, such as the ICSI corpus (Shriberg et al., 2004).
Inspired by the findings on human behavior of initiative conflicts, we speculate that conversants might also have a mechanism to even minimize unintentional initiative conflicts, which probably includes devices such as volume, pause, and other prosodic features. The speaker uses these devices, as opposed to explicitly informing each other of their knowledge to evaluate capability (Guinn, 1998), to implicitly signal his or her eagerness, confidence and capability. The hearer then compares his or her own eagerness with the speaker's, and decides whether to just make an acknowledgement (allowing the speaker to continue the lead) or to take over the initiative when taking the turn to speak. In our future work, we plan to build an initiative model to capture this negotiation process.
Figure 1: An excerpt of an MTD dialogue
Figure 2: An illustration of an initiative conflict
Figure 3: CDF plot for offsets of initiative conflicts
Figure 4: Long offset: miscue
Figure 5: Long offset: intrusion
Figure 6: CDF plot for durations of initiative conflicts together with lengths of forward utterances
Figure 7: Volume ratio and duration of conflicts
3 It would be interesting to examine the length of initiative conflicts in terms of syllables. However, currently we do not have syllable-level alignment for the two corpora. We leave this for future research.
Jennifer Chu-Carroll and Michael K. Brown. 1998. An evidential model for tracking initiative in collaborative dialogue interactions. User Modeling and User Adapted Interaction, 8:215-253.
Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognitive Science, 22:1-39.
Robin Cohen, C. Allaby, C. Cumbaa, M. Fitzgerald, K. Ho, B. Hui, C. Latulipe, F. Lu, N. Moussa, D. Pooley, A. Qian, and S. Siddiqi. 1998. What is initiative? User Modeling and User Adapted Interaction, 8:171-214.
Mark G. Core and James F. Allen. 1997. Coding dialogues with the DAMSL annotation scheme. In Working Notes: AAAI Fall Symposium on Communicative Action in Humans and Machines, pages 28-35, Cambridge.
Thomas H. Crystal and Arthur S. House. 1990. Articulation rate and the duration of syllables and stress groups in connected speech. Journal of the Acoustical Society of America, 88:101-112.
Starkey Duncan. 1974. On the structure of speaker-auditor interaction during speaking turns. Language in Society, 2:161-180.
Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
Curry I. Guinn. 1998. An analysis of initiative selection in collaborative task-oriented discourse. User Modeling and User Adapted Interaction, 8:255-314.
Susan Haller and Timothy Fossum. 1999. Using protocols to model mixed initiative interaction. In Proceedings of the AAAI Workshop on Mixed Initiative Intelligence.
Peter A. Heeman and James F. Allen. 1995. The Trains spoken dialogue corpus. CD-ROM, Linguistics Data Consortium.
Peter A. Heeman, Fan Yang, Andrew L. Kun, and Alexander Shyrokov. 2005. Conventions in human-human multithreaded dialogues: A preliminary study. In Proceedings of Intelligent User Interfaces (short paper session), pages 293-295, San Diego, CA.
Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of CHI, pages 159-166, Pittsburgh, PA.
Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50(4):696-735.
Emanuel A. Schegloff. 2000. Overlapping talk and the organization of turn-taking for conversation. Language in Society, 29:1-63.
E. Shriberg, R. Dhillon, S. Bhagat, J. Ang, and H. Carvey. 2004. The ICSI meeting recorder dialog act corpus. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue.
Kåre Sjölander and Jonas Beskow. 2000. WaveSurfer: An open source speech tool. In Proceedings of ICSLP, pages 4:464-467, Beijing, China.
Ronnie W. Smith. 1993. Effective spoken natural language dialogue requires variable initiative behavior: An empirical study. In AAAI 93 Fall Symposium on Human-Computer Collaboration.
Susan E. Strayer, Peter A. Heeman, and Fan Yang. 2003. Reconciling control and discourse structure. In J. van Kuppevelt and R. W. Smith, editors, Current and New Directions in Discourse and Dialogue, chapter 14, pages 305-323. Kluwer Academic Publishers.
Siew Leng Toh, Fan Yang, and Peter A. Heeman. 2006. An annotation scheme for agreement analysis. In Proceedings of INTERSPEECH, Pittsburgh, PA.
Marilyn Walker and Steve Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th ACL.
Steve Whittaker and Phil Stenton. 1988. Cues and control in expert-client dialogue. In Proceedings of the 28th ACL, pages 123-130.
Fan Yang, Peter A. Heeman, Kristy Hollingshead, and Susan E. Strayer. 2007. DialogueView: Annotating dialogues in multiple views with abstraction. Natural Language Engineering. To appear. |
218,974,251 | [] | Ellogon Casual Annotation Infrastructure
May 2020
Georgios Petasis [email protected]
Institute of Informatics and Telecommunications
National Centre for Scientific Research (N.C.S.R.) "Demokritos"
Aghia Paraskevi, P.O. BOX 60228, GR-153 10, Athens, Greece
Leonidas Tsekouras [email protected]
Institute of Informatics and Telecommunications
National Centre for Scientific Research (N.C.S.R.) "Demokritos"
Aghia Paraskevi, P.O. BOX 60228, GR-153 10, Athens, Greece
Ellogon Casual Annotation Infrastructure
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
the 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, May 2020, 3360. Keywords: Authoring Tools, Corpus (Creation, Annotation, etc.), LR Infrastructures and Architectures, Casual Annotation, Annotation Tools, Collaborative Annotation, Web-based annotation tools
This paper presents a new annotation paradigm, casual annotation, along with a proposed architecture and a reference implementation, the Ellogon Casual Annotation Tool, which implements this paradigm and architecture. The novel aspects of the proposed paradigm originate from the vision to tightly integrate annotation with the casual, everyday activities of users. Annotating in a less "controlled" environment, and removing the bottleneck of selecting content and importing it to annotation infrastructures, casual annotation provides the ability to vastly increase the content that can be annotated and ease the annotation process through automatic pre-training. The proposed paradigm, architecture and reference implementation has been evaluated for more than two years on an annotation task related to sentiment analysis. Evaluation results suggest that, at least for this annotation task, there is a huge improvement in productivity after casual annotation adoption, in comparison to the more traditional annotation paradigms followed in the early stages of the annotation task.
Introduction
The development and maintenance of annotated corpora can be significantly facilitated through the use of annotation tools, as annotation tools can control most aspects of the annotation process, from the presentation of the relevant information to the annotators to the validation of annotated information according to a predefined schema. A plethora of annotation tools has been presented during the last two decades (Uren et al., 2006; Fragkou et al., 2008a; Katakis et al., 2016), covering a wide range of annotation tasks and offering various levels of support. Annotation solutions can be divided into manual and semi-automatic methods: manual solutions provide the required infrastructure (i.e. storage management, graphical user interface, etc.) for annotators to annotate a corpus with a completely manual approach, where all information must be manually entered by the annotators. Semi-automatic solutions, on the other hand, try to pre-annotate corpora, reducing the role of annotators to validation of the existing pre-annotation. However, several of the existing annotation tools are desktop applications, allowing the annotation of corpora found on a single computer. A more recent category of annotation solutions is distributed or collaborative annotation tools, where several annotators (not necessarily co-located) can annotate the same corpus, and in some cases even the same document. However, the construction of annotation tools that operate in a distributed environment is a challenging task, while the majority of these tools are implemented as Web applications (such as Ellogon's Collaborative Annotation Tool 1 (Katakis et al., 2016), WebAnno 2 (Yimam et al., 2013) or BRAT 3 (Stenetorp et al., 2012a)), having to cope with the capabilities offered by browsers. This paper describes a new annotation tool, the Ellogon 4 (Petasis et al., 2002a) casual annotation tool, which proposes a new annotation paradigm ("casual" annotation) and implements an alternative architecture employing an HTTP proxy server.
Casual Annotation
Typically the annotation of textual corpora is a tightly controlled process. It is not uncommon for the annotation process to involve several groups of annotators, often with various roles. Quite frequently the annotation task is facilitated with either custom-made or more generic annotation tools, such as the Ellogon Annotation Infrastructure (Katakis et al., 2016; Petasis et al., 2002a), UIMA 5, BRAT (Stenetorp et al., 2012a), WebAnno (Yimam et al., 2013), etc. All these tools typically involve three main steps: 1) importing documents (usually converted to plain text) into the annotation tool infrastructure; 2) annotating the imported textual documents (manually, semi-automatically or automatically) following the annotation guidelines and procedures; and 3) exporting the annotated texts from the annotation infrastructure according to a predefined format. While this process has been well studied and has been used to produce a vast number of high-quality annotated corpora, there are some disadvantages, such as: 1) formatting is usually lost, as well as contextual features like images or dialog utterances (i.e. in comments or tweets), leading to the interpretation of textual fragments in isolation; 2) data must be imported into the annotation infrastructure, restricting annotation to preselected resources, a task that must be performed without support from the annotation infrastructure; and 3) not all evaluation scenarios can be supported, especially those requiring application to larger corpora than the ones used for training.
In addition, there are a number of tasks that are not well supported by the typical annotation procedure, especially in the context of curating resources, like morphological, sentiment or other types of lexicons, or during the task of selecting suitable textual resources that should be manually annotated. When seeking coverage of a resource or when selecting candidates for further annotation, a closer integration between browsing for resources and using an annotation infrastructure is required. This need has driven the integration of annotation infrastructures into Web browsers, through initiatives like "AnnotateIt" 6 and "Hypothesis" 7, where Web browser plugins or JavaScript "scriptlets" are used to "add" annotation support to Web pages, after they have been rendered within a browser. Although these browser-based annotation infrastructures enable the annotation of arbitrary Web resources, a) they are browser and browser-version dependent as they rely on plugins; and b) they offer limited to no support for preprocessing the displayed content.
The annotation infrastructure presented in this paper goes one step further: instead of relying on browser-specific properties, the infrastructure is placed before the browser, between the browser and the Web, acting as a Web proxy. This placement has some advantages, as a) the annotation infrastructure has increased modification power over content, being able to even block parts of the content and modify HTTP request headers; b) any form of pre-processing can be applied to the content; and c) there is no need for plug-ins or actions applied by the user through "scriptlets", enhancing compatibility with any modern Web browser. The combination of server-side pre-processing with the absence of user actions needed to activate the infrastructure enables the annotation infrastructure to be continuously available, even outside the context of a "controlled" annotation process.
This continuous availability is the main motivation behind casual annotation: casual annotation can be defined as the evaluation and annotation of a manually curated resource while performing everyday tasks, like browsing, reading news or interacting with social media. For example, an instance of casual annotation is the evaluation of a sentence splitter while an annotator is casually reading news, comments in news articles and tweets. In such a case the annotator can easily identify errors and annotate them as such, with the annotations stored in a centralised database, similar to "AnnotateIt" and "Hypothesis". Similarly, in the case of annotating sentiment, the annotator can see news items, comments, tweets or Facebook posts automatically annotated by the sentiment analyser, and annotate errors or omissions, while doing casual tasks like reading the news or visiting their Twitter/Facebook timelines. Casual annotation has the potential to increase annotation productivity, mainly for two reasons: a) the annotation happens while the annotator is engaged in casual, everyday activities, which do not feel as "controlled" as typical annotation tasks but are rather more relaxed; and b) annotation is not restricted to pre-selected artifacts; instead, it can be applied to a wider range of more diverse content.
6 AnnotateIt: https://annotateit.org/
7 Hypothesis.io: https://web.hypothes.is/
Architecture
The architecture of the Ellogon Casual Annotation Tool is centered around the "SQUID Web Proxy Cache" proxy server. Squid 8 is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. As a caching server, it reduces bandwidth and improves response times by caching and reusing frequently requested Web pages. Squid provides the ability to include plugins (written in C/C++) through the "eCAP" 9 software interface, allowing the application of arbitrary operations on a Web resource before the resource is served to a Web browser that utilises the proxy. Through the eCAP plugin for the Tcl language, "ecap-tcl" 10, the Ellogon language infrastructure and its annotation engine can be integrated within Squid, as shown in Figure 1. The "ecap-tcl" plugin includes sample code for the implementation of an HTML processor. When the user requests a Web page through a browser, the browser sends the request to the Squid proxy server. The Squid server retrieves the Web page from its original location and, through eCAP, asks the annotation (Ellogon) infrastructure for a modified version of the page. The annotation infrastructure may decide to modify the resource, according to its configuration. Since all HTTP headers are available during the modification request, the annotation infrastructure has many parameters for deciding whether to modify a resource and what types of modifications are needed, including the URL of the resource, its type, its encoding, etc. The architecture displayed in Figure 1 is generic, in the sense that it is not tied to a specific annotation infrastructure. However, the current implementation has been tested only with Ellogon. The configuration that has been extensively tested (for around four years) modifies all resources of type HTML, and some resources of type JSON, when their origin is within a set of servers (i.e. Twitter or Facebook). Processing of JSON resources is supported, as it may be required in some cases (e.g. for Twitter), especially with Web pages formed through XHR requests. However, the interception of JSON messages typically requires a site-specific implementation 11. Figure 2 shows an example of how a pre-annotated Web page is shown within a browser (Firefox). The annotation task shown in Figure 2 is related to sentiment analysis and the curation of a relevant lexicon of words and phrases. As shown in Figure 2, the whole Web page is annotated with existing lexicon listings (even parts loaded dynamically with JavaScript and AJAX requests), along with the identification of clauses and sentences (denoted by the slanted box areas). The annotator is able to edit existing annotations through a JavaScript user interface based on "AnnotateIt", activated (as a floating window) when clicking on an annotated textual segment. In addition, the annotator can annotate new segments simply by selecting them and then clicking on a small indicator that appears on top of the selection. It is interesting to note that no browser plugins/scriptlets are required, since all the needed modifications (like loading the JavaScript annotation infrastructure into the Web page and pre-annotating the page contents) occur within the Squid proxy server, before the content is received by the browser. As a result, the annotator is able to fully browse sites by following links: all pages will appear pre-annotated and can be further annotated.
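The paper does not reproduce the eCAP/Tcl plugin code, but the proxy-side rewriting step described above can be illustrated with a minimal, self-contained Python sketch: pre-annotate an HTML payload against a lexicon and inject the client-side annotation script before the page reaches the browser. All names (annotate_html, SCRIPT_TAG, the lexicon entries, the script URL) are illustrative assumptions, not the actual Ellogon implementation.

```python
import re

# Hypothetical lexicon of words/phrases with a sentiment polarity (illustrative only).
LEXICON = {"excellent": "positive", "disaster": "negative"}

# Script tag for the client-side annotation UI; the real tool loads an
# "AnnotateIt"-based JavaScript bundle, and this URL is a placeholder.
SCRIPT_TAG = '<script src="https://annotation.example.org/annotator.js"></script>'


def annotate_html(html: str) -> str:
    """Wrap known lexicon entries in <span> markers and inject the annotation UI.

    This mimics, in spirit, what the proxy-side plugin does before the page
    reaches the browser; the real system processes the parsed HTML rather than
    applying regular expressions to the raw markup.
    """
    for phrase, polarity in LEXICON.items():
        pattern = re.compile(r"\b%s\b" % re.escape(phrase), re.IGNORECASE)
        html = pattern.sub(
            lambda m: '<span class="pre-annotation" data-polarity="%s">%s</span>'
            % (polarity, m.group(0)),
            html,
        )
    # Load the annotation front-end so the user can correct or add annotations.
    return html.replace("</body>", SCRIPT_TAG + "</body>")


if __name__ == "__main__":
    page = "<html><body><p>The service was excellent, the food a disaster.</p></body></html>"
    print(annotate_html(page))
```

In the actual architecture this transformation runs inside Squid via eCAP, so the browser only ever receives the already-rewritten page.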
There are various ways to specify which sites will be directed through the annotator proxy server, with the most common ones being a browser extension that toggles the usage of a proxy server (like "Proxy Toggle" for Firefox), or a "Proxy Auto-Configuration File" 12.
11 The current implementation supports JSON from Twitter and DISQUS.
Use case: Annotating Sentiment
The paradigm of casual annotation and the Ellogon Casual Annotation Tool have been applied in the context of an annotation task that has been running for several years (2010 - today). The annotation task relates to sentiment analysis, where the annotators need to annotate both words and larger fragments, such as phrases or sentences. During the first years (from 2010 to roughly 2016) the annotation task followed the traditional paradigm, where relevant content was collected using various criteria, including diversity of thematic domains, words and phrases not already covered, variance over document types (from longer documents to micro-blogging posts), inclusion of several news sources and user-generated content, etc. The collected content was subsequently imported into an annotation infrastructure, where words, segments, phrases, sentences or whole documents were annotated mainly for opinion polarity. At predetermined intervals (e.g. once a month), the annotated data were used to train an automatic annotation pipeline based on machine learning, which could be applied to new unannotated content. Figure 3 shows the total number of annotations performed by a single annotator at various points in time. The points marked with circles (in blue) show how the total number of annotations increased over time. At the beginning of 2017, the annotation task switched to casual annotation, with all annotators switching to the Ellogon Casual Annotation Tool. How the total number of annotations has increased over time (for the same annotator)
is also shown in Figure 3 (denoted by red squares). As can be seen, the slope of the increase has changed drastically, and the annotator was able to annotate much more content over time, increasing annotations from around 14,000 in 2017 to more than 65,000 at the end of 2019 13. There are several factors that have contributed to this productivity increase, including: a) the task of selecting content for annotation has been completely eliminated, saving a lot of time and reducing annotation effort; b) content is constantly pre-annotated, allowing the annotator to quickly see what is annotated, what has been erroneously annotated and must be corrected, and what still needs to be annotated; and c) the annotation process is significantly less "controlled", usually to the point that the annotators do not feel like they are annotating, but rather performing casual activities like reading news or interacting with social media, resulting in them spending more time on the task, at least for the case of sentiment annotation, where errors and omissions are relatively easy to identify and annotate. At the same time, the quality of annotation has remained at the same levels, mainly due to the automatic detection of contradictions among annotators and their presentation to the annotators for resolution, before a commit can be made by each annotator. Unfortunately, the number of conflicts has not been recorded during this annotation use case, but it typically lies in the range of 50-100 conflicts per year for each annotator (in the context of 4,000-6,000 new entries per year by each annotator). In addition, the rapid increase in available annotations and the amount of new annotations on a daily basis allowed us to shorten the cycle of producing an updated automated annotator, from training a new annotator on a monthly basis to training a new annotator on a daily basis.
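The contradiction-detection step mentioned above is not described in implementation detail. A possible sketch, under the assumption that each annotator assigns one polarity label per lexicon entry, is the following; the function name and data layout are hypothetical, not the tool's actual schema.

```python
from collections import defaultdict


def find_conflicts(annotations):
    """Return entries for which annotators assigned different polarity labels.

    `annotations` is assumed to be a list of (annotator, entry, polarity) tuples.
    """
    labels_per_entry = defaultdict(set)
    for annotator, entry, polarity in annotations:
        labels_per_entry[entry].add(polarity)
    # An entry is conflicting if more than one distinct label was used for it.
    return {entry for entry, labels in labels_per_entry.items() if len(labels) > 1}


if __name__ == "__main__":
    data = [
        ("annotator1", "not bad", "positive"),
        ("annotator2", "not bad", "negative"),
        ("annotator1", "awful", "negative"),
    ]
    print(find_conflicts(data))  # {'not bad'}
```

Entries returned by such a check would then be presented to the annotators for resolution before their annotations are committed.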
Note that in such an annotation task, where we aim for maximum coverage of words and phrases over multiple document types and thematic domains, the elimination of the content selection step does not affect the quality or relevance of the annotations. In other tasks, where careful selection of documents and control over the annotation process are important, casual annotation may not be the best annotation paradigm to use, although it could still complement such a process.
Related Work
During the last two decades, a large number of annotation tools has been presented. Each one of them is built upon its own logic and provides a different set of features, while some of them exploit previous experience acquired from their equivalent desktop versions. GATE Teamware 14 is an annotation solution which aims to facilitate the annotation process among teams by leveraging its distributed architecture (Bontcheva et al., 2013). It offers a desktop application which enables users to add annotations, as well as a Web-based user interface from which users are able to manage their projects and monitor their statistics.
13 The slight deceleration that can be observed at the beginning of 2018 was caused by a change in Twitter APIs, causing a temporary inability to annotate Twitter content.
14 https://gate.ac.uk/teamware
Another popular annotation solution is BRAT 15, a Web-based tool for NLP-assisted text annotation. Its users are able to access and annotate their collections through their browsers, without the need to install any additional software (Stenetorp et al., 2012b). BRAT also offers collaboration features, meaning that two or more users can add and modify annotations in the same document simultaneously. The changes take place in real time and everyone has access to the latest version of the document. WebAnno 16 follows the design philosophy of BRAT but differs in multiple ways. It is a Web-based annotation solution which combines BRAT's visualisations with a fully-fledged back-end and delivers features like user and quality management, monitoring tools, as well as an interface to crowd-sourcing (de Castilho et al., 2014). Moreover, it offers a library of predefined schemas for various annotation tasks, and it supports different corpora formats, enabling cooperation with various existing platforms and infrastructures. Inforex 17 is a Web-based system which facilitates the management and creation of annotated corpora. Its users are able to browse and edit the content of the annotated documents as well as to pre-process them. In addition, it integrates an advanced versioning system allowing users to revert every document of their collections to a previous state.
Regarding the annotation process, it offers a number of predefined annotation schemas which can be customised according to the needs of each annotation task. The Ellogon language engineering platform 18 (Petasis et al., 2002b) provides an all-in-one desktop solution and an annotation engine, which allows the annotation of a wide range of information, ranging from information about words to complex annotation schemas involving links between aligned segments in bilingual texts (Petasis and Tsoumari, 2012b). In addition, it supports collaborative/distributed annotation through Ellogon's Collaborative Annotation Tool 19 (Katakis et al., 2016), where the annotation process can be shared among different annotators at different locations. Last but not least, it is open source software which can be customised according to the requirements of each annotation task, exploiting a customisable engine for generating different layouts and user interfaces, driven by XML annotation schemas (Petasis, 2014; Katakis et al., 2016). It has been applied to a wide range of tasks, ranging from annotation of part-of-speech tags and named entities (Petasis et al., 2003), prosodic features (Spiliotopoulos et al., 2005), semantic graphs (Fragkou et al., 2008b), document sections (Petasis and Tsoumari, 2012a), co-reference on aligned corpora (Tsoumari and Petasis, 2011), events (Petasis, 2012), and arguments (Petasis, 2014; Katakis et al., 2016), to sentiment. In addition, Ellogon's infrastructure is the only infrastructure that has implemented the casual annotation paradigm. Distributed collaborative annotation tools similar to "AnnotateIt" and "Hypothesis" (relying on the W3C Web Annotation standards) have been employed in use cases such as disinformation and "fake news" (Rehm et al., 2018), annotating PDF documents (Shindo et al., 2018), annotating complex linguistic phenomena through annotation graphs (Forbes et al., 2018) and hierarchies (Helfrich et al., 2018), or curating resources like morphological lexicons and morphosyntactic annotation of inflectional languages (Alosaimy and Atwell, 2018). Most of the presented tools and approaches are open source, e.g. Doccano (Nakayama et al., 2018), although there are also commercial approaches like TagTog 20. Comparing all the aforementioned solutions to the tool presented in this paper, the Ellogon Casual Annotation Tool introduces a novel annotation paradigm (casual annotation), going beyond what is currently available in Web-based, collaborative annotation tools, alleviating problems like browser compatibility, providing content pre-annotation, and embedding the annotation infrastructure within the casual browsing activities of the users.
Conclusions and Future Work
This paper presents a new annotation paradigm, casual annotation, along with a proposed architecture based on an HTTP proxy, and an annotation tool, the Ellogon Casual Annotation Tool, which implements this paradigm and architecture. The novel aspects of the proposed paradigm originate from the vision of tightly integrating annotation with the casual, everyday activities of the users. By supporting annotation in a less "controlled" environment and removing the bottleneck of selecting content and importing it into annotation infrastructures, casual annotation provides the ability to vastly increase the content that can be annotated and eases the annotation process through automatic pre-annotation. The proposed paradigm, architecture and reference implementation have been evaluated for more than two years on an annotation task related to sentiment analysis. Evaluation results suggest that, at least for this annotation task, there is a huge improvement in productivity after adopting casual annotation, in comparison to the more traditional annotation paradigms followed in the early stages of the annotation task.
As future work, we aim to evaluate the casual annotation paradigm on, and provide support for, more use cases, identifying additional annotation tasks that can potentially benefit from the new paradigm. In addition, the reference annotation infrastructure must be enhanced to support more annotation schemas, and possibly to improve integration with recent approaches, such as "Hypothesis".
Figure 1: The architecture of the Ellogon Casual Annotation Tool.
Figure 2: An example of Ellogon Casual Annotation Tool usage.
Figure 3: Number of annotations before and after adopting casual annotation. As can be seen from the figure, annotations increased roughly linearly over time, from around 3,000 annotations in 2013 to about 14,000 annotations in 2017.
1 https://www.ellogon.org/clarin/welcome
2 https://webanno.github.io/webanno/
3 https://brat.nlplab.org/index.html
4 https://www.ellogon.org
5 https://uima.apache.org/
8 SQUID: http://www.squid-cache.org/
9 eCAP: https://www.e-cap.org/
10 eCAP Tcl: https://github.com/petasis/ecap-tcl
12 PAC: https://en.wikipedia.org/wiki/Proxy_auto-config
15 https://brat.nlplab.org/
16 https://webanno.github.io/webanno
17 http://nlp.pwr.wroc.pl/inforex
18 https://www.ellogon.org
19 https://www.ellogon.org/clarin/welcome
20 https://www.tagtog.net/
Acknowledgments
We acknowledge support of this work by the project "APOLLONIS: Greek Infrastructure for Digital Arts, Humanities and Language Research and Innovation" (MIS 5002738), which is implemented under the Action "Reinforcement of the Research and Innovation Infrastructure", funded by the Operational Programme "Competitiveness, Entrepreneurship and Innovation" (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).
Alosaimy, A. and Atwell, E. (2018). Web-based Annotation Tool for Inflectional Language Resources. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Bontcheva, K., Cunningham, H., Roberts, I., Roberts, A., Tablan, V., Aswani, N., and Gorrell, G. (2013). GATE Teamware: a web-based, collaborative text annotation framework. Language Resources and Evaluation, 47(4):1007-1029.
de Castilho, R. E., Biemann, C., Gurevych, I., and Yimam, S. M. (2014). WebAnno: a flexible, web-based annotation tool for CLARIN. In Proceedings of the CLARIN Annual Conference (CAC) 2014.
Forbes, A., Lee, K., Hahn-Powell, G., Valenzuela-Escarcega, M. A., and Surdeanu, M. (2018). Text Annotation Graphs: Annotating Complex Natural Language Phenomena. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Fragkou, P., Petasis, G., Theodorakos, A., Karkaletsis, V., and Spyropoulos, C. (2008a). BOEMIE ontology-based text annotation tool. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Fragkou, P., Petasis, G., Theodorakos, A., Karkaletsis, V., and Spyropoulos, C. D. (2008b). BOEMIE ontology-based text annotation tool. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. European Language Resources Association.
Helfrich, P., Rieb, E., Abrami, G., Lücking, A., and Mehler, A. (2018). TreeAnnotator: Versatile Visual Annotation of Hierarchical Text Relations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Katakis, I. M., Petasis, G., and Karkaletsis, V. (2016). CLARIN-EL web-based annotation tool. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia. European Language Resources Association (ELRA).
Nakayama, H., Kubo, T., Kamura, J., Taniguchi, Y., and Liang, X. (2018). doccano: Text annotation tool for human. Software available from https://github.com/chakki-works/doccano.
Petasis, G. and Tsoumari, M. (2012a). A New Annotation Tool for Aligned Bilingual Corpora. In Text, Speech and Dialogue, volume 7499 of Lecture Notes in Computer Science, pages 95-104. Springer Berlin Heidelberg, Brno, Czech Republic.
Petasis, G. and Tsoumari, M. (2012b). A new annotation tool for aligned bilingual corpora. In Text, Speech and Dialogue, pages 95-104. Springer.
Petasis, G., Karkaletsis, V., Paliouras, G., Androutsopoulos, I., and Spyropoulos, C. D. (2002a). Ellogon: A New Text Engineering Platform. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC 2002), pages 72-78, Las Palmas, Canary Islands, Spain. European Language Resources Association.
Petasis, G., Karkaletsis, V., Paliouras, G., Androutsopoulos, I., and Spyropoulos, C. D. (2002b). Ellogon: A new text engineering platform. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), Las Palmas, Canary Islands, Spain. European Language Resources Association.
Petasis, G., Karkaletsis, V., Paliouras, G., and Spyropoulos, C. D. (2003). Using the Ellogon Natural Language Engineering Infrastructure. In Proceedings of the Workshop on Balkan Language Resources and Tools, 1st Balkan Conference in Informatics (BCI 2003), Thessaloniki, Greece.
Petasis, G., Fragkou, P., Theodorakos, A., Karkaletsis, V., and Spyropoulos, C. D. (2008). Segmenting HTML pages using visual and semantic information. In Proceedings of the 4th Web as Corpus Workshop (WAC-4), 6th Language Resources and Evaluation Conference (LREC 2008), pages 18-24, Marrakech, Morocco.
Petasis, G. (2012). The SYNC3 Collaborative Annotation Tool. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pages 363-370, Istanbul, Turkey. European Language Resources Association.
Petasis, G. (2014). Annotating arguments: The NOMAD collaborative annotation tool. In Proceedings of LREC 2014, pages 1930-1937. European Language Resources Association (ELRA).
Rehm, G., Moreno-Schneider, J., and Bourgonje, P. (2018). Automatic and Manual Web Annotations in an Infrastructure to handle Fake News and other Online Media Phenomena. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Shindo, H., Munesada, Y., and Matsumoto, Y. (2018). PDFAnno: a Web-based Linguistic Annotation Tool for PDF Documents. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Spiliotopoulos, D., Petasis, G., and Kouroupetroglou, G. (2005). Prosodically Enriched Text Annotation for High Quality Speech Synthesis. In Proceedings of the 10th International Conference on Speech and Computer (SPECOM-2005), pages 313-316, Patras, Greece.
Stenetorp, P., Pyysalo, S., Topić, G., Ohta, T., Ananiadou, S., and Tsujii, J. (2012a). BRAT: A web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12, pages 102-107, Stroudsburg, PA, USA. Association for Computational Linguistics.
Stenetorp, P., Pyysalo, S., Topić, G., Ohta, T., Ananiadou, S., and Tsujii, J. (2012b). BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107. Association for Computational Linguistics.
Tsoumari, M. and Petasis, G. (2011). Coreference Annotator - A new annotation tool for aligned bilingual corpora. In Proceedings of the Second Workshop on Annotation and Exploitation of Parallel Corpora (AEPC 2), 8th International Conference on Recent Advances in Natural Language Processing (RANLP 2011), pages 43-52.
Uren, V., Cimiano, P., Iria, J., Handschuh, S., Vargas-Vera, M., Motta, E., and Ciravegna, F. (2006). Semantic annotation for knowledge management: Requirements and a survey of the state of the art. Web Semantics, 4:14-28.
Yimam, S. M., Gurevych, I., de Castilho, R. E., and Biemann, C. (2013). WebAnno: A flexible, web-based and visually supported system for distributed annotations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (System Demonstrations) (ACL 2013), pages 1-6, Stroudsburg, PA, USA. Association for Computational Linguistics.
||
18,622,830 | UABCoRAL: A Preliminary study for Resolving the Scope of Negation | This paper describes our participation in the closed track of the *SEM 2012 Shared Task of finding the scope of negation. To perform the task, we propose a system that has three components: negation cue detection, scope of negation detection, and negated event detection. In the first phase, the system creates a lexicon of negation signals from the training data and uses the lexicon to identify the negation cues. Then, it applies machine learning approaches to detect the scope and negated event for each negation cue identified in the first phase. Using a preliminary approach, our system achieves a reasonably good accuracy in identifying the scope of negation. | [
7114916,
16423689,
15271019,
3882934,
14439287
] | UABCoRAL: A Preliminary study for Resolving the Scope of Negation
June 7-8, 2012
Binod Gyawali [email protected]
Department of Computer and Information Sciences
University of Alabama at Birmingham Birmingham
Alabama, USA
Thamar Solorio [email protected]
Department of Computer and Information Sciences
University of Alabama at Birmingham Birmingham
Alabama, USA
Coral Lab
Department of Computer and Information Sciences
University of Alabama at Birmingham Birmingham
AlabamaUSA
UABCoRAL: A Preliminary study for Resolving the Scope of Negation
First Joint Conference on Lexical and Computational Semantics (*SEM)
Montréal, Canada, June 7-8, 2012
This paper describes our participation in the closed track of the *SEM 2012 Shared Task of finding the scope of negation. To perform the task, we propose a system that has three components: negation cue detection, scope of negation detection, and negated event detection. In the first phase, the system creates a lexicon of negation signals from the training data and uses the lexicon to identify the negation cues. Then, it applies machine learning approaches to detect the scope and negated event for each negation cue identified in the first phase. Using a preliminary approach, our system achieves a reasonably good accuracy in identifying the scope of negation.
Introduction
All human language samples, either written or spoken, contain some information in negated form. In tasks such as information retrieval, we sometimes need to consider only the positive information about an event and disregard its negated information, and vice versa. For example, when searching for patients with diabetes, we should not include a patient whose clinical report says "No symptoms of diabetes were observed". Thus, finding negation and its scope is important in tasks where negated and asserted information need to be treated differently. However, most systems developed for processing natural language data do not consider the negations present in sentences. Although various works (Morante et al., 2008; Morante and Daelemans, 2009; Li et al., 2010; Councill et al., 2010; Apostolova et al., 2011) have dealt with the identification of negations and their scope in sentences, this is still a challenging task.
The first task of the *SEM 2012 Shared Task (Morante and Blanco, 2012) is concerned with finding the scope of negation. The task includes identifying: i) negation cues, ii) the scope of negation, and iii) the negated event for each negation present in the sentences. A negation cue is a word, part of a word, or a combination of words that carries the negation information. The scope of negation in a sentence is the longest group of words in the sentence that is influenced by the negation cue. A negated event is the shortest group of words that is actually affected by the negation cue. In Example (1) below, the word no is a negation cue, the discontinuous word sequences 'I gave him' and 'sign of my occupation' are the scope, and 'gave' is the negated event.
(1) I [gave] him no sign of my occupation.
In this paper, we propose a system to detect the scope of negation for the closed track of the *SEM 2012 Shared Task. Our system uses a combination of a rule-based approach and a machine learning approach. We use a rule-based approach to create a lexicon of all the negation words present in the training data, and then use this lexicon to detect the negation cues present in the test data. We perform a preliminary analysis of finding the scope of negation and the negated events by applying a machine learning approach, using basic features created from the words, lemmas, and part-of-speech (POS) tags of the words in the sentences. The F-measure scores achieved by our system are about 85% for negation cue detection, 65% for full scope identification, 48% for negated event detection, and 39% for full negation identification. Our error analysis shows that the use of a lexicon alone is not well suited to detecting the negation cues. We also describe the challenges in identifying the scope and the negated events.
Problem Description
The *SEM 2012 shared task competition provided three data sets: training, development, and test. Each sentence in each data set is split into words. The dataset contains information such as the lemma, part of speech, and other syntactic information of each word. Each sentence of the training and development data is annotated with negation cues, scopes and negated events. Using the training and the development data, the task is to identify negation cues, scopes and negated events in all unannotated sentences of the test data.

Table 1: An example of negation cue, scope and the negated event.

Sentence tokens | Negation cue | Scope | Negated event
I | - | I | -
am | - | am | -
not | not | - | -
sure | - | sure | sure
whether | - | whether | -
I | - | I | -
left | - | left | -
it | - | it | -
here | - | here | -

A sentence can contain more than one negation cue. Negation cues in the data set can be i) a single word token, such as n't or nowhere, ii) a continuous sequence of two or more words, such as no more or by no means, or iii) two or more discontinuous words, such as neither...nor. A negation cue is either a part of or the same as its corresponding negation word. This corresponding negation word is referred to as a negation signal in the remaining sections of the paper. For example, for the negation signal unnecessary, the negation cue is un, and similarly, for the negation signal needless, the negation cue is less.
The scope of a negation in a sentence can be a continuous sequence of words or a discontinuous set of words in the sentence. The scope of negation sometimes includes the negation word. A negation word may not have a negated event: the presence of a negated event in a sentence depends upon the facts described by the sentence. Non-factual sentences, such as interrogative, imperative, and conditional ones, do not contain negated events. Morante and Daelemans (2012) describe the details of the negation cue, scope, and negated event, and the annotation guidelines. An example of the task is shown in Table 1.
System Description
We decompose the system to identify the scope of negation into three tasks. They are:
1. Finding the negation cue
2. Finding the scope of negation
3. Finding the negated event
The scope detection and the negated event detection tasks are dependent on the task of finding the negation cue. But the scope detection and the negated event detection tasks are independent of each other.
We identify the negation cues present in the test data based on a lexicon of negation signals present in the training and the development data. The tasks of identifying the scope of negation and the negated event are modeled as classification problems. To identify the scope and negated event, we train classifiers with instances created from the provided training data. We create test instances from the test data annotated with the negation cues predicted by our cue detection component. Due to the use of test data annotated by our cue detection component, the false negative rate in predicting the negation cues propagates to the scope detection and negated event detection components. The details of all three components are described in the subsections below.
Identifying the negation cue
In this task, we identify all the negation cues present in the sentences. We group the negation cues into three types depending on how they appear in the data: single word cues, continuous multiword cues, and discontinuous multiword cues. All the cues present in the training and development datasets are shown in Table 2.
Table 2: Negation cues present in training and development data.

Cue types | Cues
Single word cues | absence, dis, except, fail, im, in, ir, less, n't, neglected, neither, never, no, nobody, none, nor, not, nothing, nowhere, prevent, refused, save, un, without
Continuous multiword cues | no more, rather than, by no means, nothing at all, on the contrary, not for the world
Discontinuous multiword cues | neither nor, no nor, not not

In the training and development data, multiword negation cues account for only 1.40% of the total negation cues. At this stage, we decided to focus on identifying the single word negation cues. The system first creates a lexicon that contains the pairs of negation cues and their corresponding negation signals for all the single word negation cues present in the training and the development datasets. In order to identify a negation cue in the test set, the system searches for all the words in the sentences of the test data that match the negation signals of the lexicon. For each word that matches, it assigns the corresponding cue of the signal from the lexicon as its negation cue.
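A minimal sketch of this lexicon-based lookup, assuming the training data is available as (signal, cue) pairs and test sentences as token lists, could look as follows; the function and variable names are illustrative, not taken from the system itself.

```python
def build_lexicon(training_pairs):
    """Map each negation signal seen in training to its cue (e.g. 'unnecessary' -> 'un')."""
    lexicon = {}
    for signal, cue in training_pairs:
        lexicon[signal.lower()] = cue
    return lexicon


def detect_cues(tokens, lexicon):
    """Return (token index, cue) pairs for every token that matches a known signal."""
    cues = []
    for i, token in enumerate(tokens):
        cue = lexicon.get(token.lower())
        if cue is not None:
            cues.append((i, cue))
    return cues


if __name__ == "__main__":
    lexicon = build_lexicon([("not", "not"), ("unnecessary", "un"), ("needless", "less")])
    print(detect_cues("It is not unnecessary".split(), lexicon))
    # [(2, 'not'), (3, 'un')]
```

Note that such a lookup marks every matching word as a cue, which mirrors the over-prediction behaviour discussed in the error analysis (e.g. for "no doubt" or "save").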
Identifying the scope of negation
We apply a machine learning technique to identify the scope of negation. For each negation cue present in a sentence, we create problem instances as tuples of the negation signal and each word present in the same sentence. To create the instances, we use only those sentences having at least one negation. For training, we create instances from the training data, but we consider only those words that are within a window of size 20 from the negation signal and within the sentence boundary. We restricted the words to be within the window in order to minimize the problem of imbalanced data. This window was chosen following our observation that only 1.26% of the scope tokens go beyond the 20 word window from the negation signal. Including the words beyond this window causes a major increase in the negative instances, resulting in a highly imbalanced training set. While creating test instances, we do not restrict the words by window size, so that all the words of the sentences are included in the test instances. An instance is labeled as positive if the word used to create the instance is in the scope of the negation signal; otherwise it is labeled as negative.
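As an illustration of this instance-creation step, the following sketch pairs a negation signal with the other words of its sentence, applying the 20-word training window described above. The data layout (token lists and index-based gold scope sets) and the -1/+1 labels are assumptions made for the example, not the system's actual representation.

```python
def make_scope_instances(tokens, signal_index, scope_indices, window=20, training=True):
    """Create (signal, word) classification instances for scope detection.

    `scope_indices` holds gold scope token positions; the window restriction is
    only applied when building training instances, mirroring the setup above.
    """
    instances = []
    for i, word in enumerate(tokens):
        if i == signal_index:
            continue
        if training and abs(i - signal_index) > window:
            continue  # keep the training data from becoming too imbalanced
        label = 1 if i in scope_indices else -1
        instances.append(((tokens[signal_index], word, i), label))
    return instances


if __name__ == "__main__":
    sent = "I am not sure whether I left it here".split()
    gold_scope = {0, 1, 3, 4, 5, 6, 7, 8}  # every token except the cue 'not'
    print(make_scope_instances(sent, signal_index=2, scope_indices=gold_scope))
```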
We extract 10 features to identify the scope of negation, as follows:
5. ... of the word in the tuple
6. POS tag of the word in the tuple
7. Distance between the negation signal and the word in terms of number of words
8. Position of the word from the negation signal (left, right)
9. Whether a punctuation character (',', ':', ';') exists between the word and the negation signal
10. Sequence of POS tags in between the negation signal and the word

After the classification, if an instance is predicted as positive, the word used to create the instance is considered to be in the scope of the negation signal. If a negation signal has a prefix such as 'dis', 'un', 'in', 'ir', or 'im', the scope of negation includes only the part of the word (signal) excluding the prefix. Thus, for each negation signal having these prefixes, we remove the prefix from the signal and consider the remaining part of it as the scope, regardless of whether the classifier classifies the instance pair as positive or negative.
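A sketch of how some of the listed features could be computed for one (signal, word) pair is given below. Only the features explicitly listed above (the word's POS tag, distance, relative position, intervening punctuation, and the POS-tag sequence) are included; the first features of the list are not preserved here and are therefore omitted, and the token/POS data layout is an assumption.

```python
def scope_features(tokens, pos_tags, signal_index, word_index):
    """Compute a feature dictionary for one (negation signal, word) instance."""
    lo, hi = sorted((signal_index, word_index))
    between_pos = pos_tags[lo + 1:hi]  # POS tags between the signal and the word
    return {
        "word": tokens[word_index],
        "word_pos": pos_tags[word_index],                               # feature 6
        "distance": abs(word_index - signal_index),                     # feature 7
        "position": "left" if word_index < signal_index else "right",   # feature 8
        "punct_between": any(t in {",", ":", ";"} for t in tokens[lo + 1:hi]),  # feature 9
        "pos_sequence": "_".join(between_pos),                          # feature 10
    }


if __name__ == "__main__":
    sent = "I am not sure whether I left it here".split()
    pos = ["PRP", "VBP", "RB", "JJ", "IN", "PRP", "VBD", "PRP", "RB"]
    print(scope_features(sent, pos, signal_index=2, word_index=6))
```

In the actual system such feature vectors are fed to the SVM classifier described in the experimental settings.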
Identifying the negated event
The task of identifying the negated event is similar to the task of identifying the scope of negation. The process of creating the instances for this task is almost the same as that for finding the scope of negation, except that we limit the window size to 4 words from the negation signal; 4.24% of the negated events lie beyond the 4 word window. Beyond this window, the events are very sparse, and a small increment in the window size leads to an abrupt increase in negative instances and creates an imbalance in the data. The 4 word window size was selected based on the best result obtained among various experiments performed with different window sizes greater than or equal to 4. The same rule applies while creating instances for training data as well as test data. We use only nine features in this step, excluding the 9th feature used in the scope detection. We also apply the same rule of mapping the negation signals starting with 'dis', 'un', 'in', 'ir', and 'im' to the negated event as in the previous step.
Experimental Settings
We evaluated our system only on the test data of the shared task. For the machine learning tasks, we used the SVMlight classifier (Joachims, 1999) with a 4th degree polynomial kernel and otherwise default parameters. The identification of cues, scopes, negated events, and full negation is evaluated on the basis of F-measures. We also use the 'B' variant for cues, scopes, negated events and full negation in the evaluation. The precision of the 'B' variant is calculated as the ratio of true positives to the system count. Identification of cues and negated events is measured independently of any other steps, but the identification of the scopes is measured depending upon the correct identification of cues, in three different ways, as follows:
i) scopes (cue match): the cue has to be correct for the scope to be correct;
ii) scopes (no cue match): the system must identify part of the cue for the scope to be correct;
iii) scope tokens (no cue match): a part of the system-identified cue must overlap with the gold standard cue for the scope tokens to be correct.
The F1 score of full negation detection was used to rank the participating systems. The details of the evaluation measures can be found in Morante and Blanco (2012).
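As an illustration of how these measures relate, a small sketch computing standard precision/recall/F1 together with the 'B'-variant precision (true positives divided by the total number of system predictions, as stated above) is given below; the counts used are placeholders, not shared-task data.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def negation_scores(tp, fp, fn, system_count):
    """Compute precision, recall, F1 and the 'B'-variant precision/F1."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    precision_b = tp / system_count if system_count else 0.0  # B variant: tp / system count
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1(precision, recall),
        "precision_B": precision_b,
        "f1_B": f1(precision_b, recall),
    }


if __name__ == "__main__":
    print(negation_scores(tp=80, fp=10, fn=20, system_count=95))
```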
Results Analysis
The results obtained by our system on the test data are shown in Table 3. The results obtained by each component and their analysis are described in the subsections below.
Identifying the negation cues
The system is able to achieve an 85.77% F1 score in the task of identifying the negation cues using a simple approach based on the lexicon of negation signals. Because of the system's inability to identify multiword negation cues, it could not detect multiword cues such as neither...nor, absolutely nothing, far from, and never more, which account for 3.5% of the total negation cues present in the test data.
The accuracy of the system is limited by the coverage of the lexicon. Due to the low coverage of the lexicon, the system fails to identify signals such as ceaseless, discoloured, incredulity, senseless, and unframed that are present only in the test data. These signals account for 4.5% of the total negation signals present in the test data. Some words, such as never, nothing, not, n't, no, and without, mostly occur as negation signals in the data, but these words are not always negation signals. The phrase no doubt is present nine times in the test data, but the word no is a negation signal in only four of them. This accounts for a 1.89% error in the negation cue detection. The word save is present once as a negation signal in the training data, but it is never a negation signal in the test data. Therefore, our lexicon-based system invariably predicts two occurrences of save in the test data as negation signals.
Identifying the scope of negation
Table 3: Results of the system.

The system achieves a 63.46% F1 score in identifying scopes with cue match, a 64.76% F1 score in identifying scopes with no cue match, and a 76.23% F1 score in identifying scope tokens with no cue match. The results show that our system has a higher precision than recall in identifying the scope. As mentioned earlier, the negation cues identified in the first task are used to identify the scope of negation and the negated events. Using the test data with a 15% error in negation cues as the input to this component, together with some wrong scope predictions by this component itself, led to a low recall value in scope detection.
The results show that the system works well when a negation signal has fewer scope tokens and when the scope tokens are closer to the negation signal. There are some cases where the system could not identify the scope tokens properly. It is unable to detect scope tokens that are farther away from the negation signals. The system also does not perform well in predicting discontinuous scopes. When a negation cue has a discontinuous scope, the system mostly predicts one sequence of words correctly but fails to identify the next sequence. In sentence (2) below, the underlined word sequences are the discontinuous scopes of the negation cue not. In this sentence, our system predicts only the second sequence of the scope, but not the first. In some cases, our system does not have good coverage of the scope tokens. In sentence (3), the underlined word sequence is the scope of the signal no, but our system detects only at ninety was hardship as its scope. These inabilities to detect the full scope have led to a higher accuracy in predicting partial scope tokens (76.23%) than in predicting the full scope (64.76%).
(2) the box is a half pound box of honeydew tobacco and does not help us in any way
(3) ...a thermometer at ninety was no hardship
(4) ...I cannot see anything save very vague indications
Analyzing the results, we see that the error in predicting the scope of negation is high when the scope is distributed over two different phrases. In example (2) above, does not help us in any way is a single verb phrase, and all the scope within the phrase is correctly identified by our system. Since the box is a separate phrase, the system is unable to identify it. However, in some cases, such as example (4), the system could not identify any scope tokens for the negation cue not.
Findings of previous works have shown that features related to the syntactic path are helpful in identifying the scope of negation. Li et al. (2010) used the syntactic path from the word to the negation signal and showed that this helped to improve the accuracy of scope detection. Similarly, work by Councill et al. (2010) showed that the accuracy of scope detection could be increased using features from the dependency parse tree. In our experiment, there was a good improvement in the scope detection rate when we included the sequence of POS tags between the negation signal and the word as a feature. This improvement, and its consistency with the previous works, implies that adding path-related features might help to further improve the accuracy of scope detection.
Identifying the negated event
We are able to achieve an F1 score of 48.33% in predicting the negated events, which is the lowest score among all three components. As in the scope detection task, error in negation cue detection lowered the recall of the negated event detection system. The accuracy of full negation is based on the correct identification of the negation cues, scopes and negated events of all the negations present in the sentences. The output shows that there are many cases where the negation cue and the scope are correctly identified but there is an error in identifying the negated event. The higher error in predicting the negated events reduced the score of full negation to an F1 of 39.04%.
Our system is unable to detect some negated events even though they are adjacent to the negation signal. This shows that the use of simple features extracted from words, lemmas, and POS tags is not enough to predict the negated events properly. Adding features related to the words to the left and right of the negation signal, as well as path features, may help to improve the detection of negated events.
In order to analyze the impact of error in the negation cue detection component upon the scope and negated event detection components, we performed an experiment using the gold standard negation cues to detect the scope and the negated events. The F1 scores achieved by this system are 73.1% in full scope detection, 54.87% in negated event detection, 81.46% in scope token detection, and 49.57% in full negation detection. The results show an increase of almost 10% in the F1 score of all components. Thus, having an improved cue detection component greatly helps to improve the accuracy of the scope and negated event detection components.
Discussion and Conclusion
In this paper we outline a combination of a rule-based approach and a machine learning approach to identify the negation cue, the scope of negation, and the negated event. We show that the basic approach of using a lexicon to predict the negation cues achieves considerable accuracy. However, our system is unable to reliably identify negation cues such as never, not, nothing, n't, and save, which can appear as negation signals but also in other non-negated contexts. It also cannot cover the negation cues of signals that are not present in the training data. Moreover, in order to improve the overall accuracy of scope and negated event detection, we need an accurate system to detect the negation cues, since the error in negation cue detection propagates to the next steps of identifying the scope and the negated event. It is difficult to identify scope tokens that are far from the negation signal, and detecting discontinuous scope tokens is also challenging.
As future work, we would like to extend our approach to use a machine learning method instead of the lexicon of negation signals to better predict the negation cues. The system we presented here uses a preliminary approach without any syntactic information to detect the scope and negated events; we would also incorporate syntactic information to identify the scope and negated events in future work. To improve the accuracy of identifying the scope and the negated events, adding other features related to the neighboring words of the negation signal might be helpful. In our tasks, we limit the scope and negated event instances by the window size in order to avoid the imbalanced data problem. Another interesting direction for achieving better accuracy could be to use other approaches for imbalanced dataset classification instead of limiting the training instances by the window size.
Emilia Apostolova, Noriko Tomuro, and Dina Demner-Fushman. 2011. Automatic extraction of lexico-syntactic patterns for detection of negation and speculation scopes. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 283-287, Stroudsburg, PA, USA. Association for Computational Linguistics.
Isaac G. Councill, Ryan McDonald, and Leonid Velikovich. 2010. What's great and what's not: learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, NeSp-NLP '10, pages 51-59, Stroudsburg, PA, USA. Association for Computational Linguistics.
Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In Advances in kernel methods: support vector learning, pages 169-184. MIT Press, Cambridge, MA, USA.
Junhui Li, Guodong Zhou, Hongling Wang, and Qiaoming Zhu. 2010. Learning the scope of negation via shallow semantic parsing. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 671-679, Stroudsburg, PA, USA. Association for Computational Linguistics.
Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), Montreal, Canada.
Roser Morante and Walter Daelemans. 2009. A metalearning approach to processing the scope of negation. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL '09, pages 21-29, Stroudsburg, PA, USA.
Roser Morante and Walter Daelemans. 2012. ConanDoyle-neg: Annotation of negation in Conan Doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC), Istanbul.
Roser Morante, Anthony Liekens, and Walter Daelemans. 2008. Learning the scope of negation in biomedical texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 715-724. Association for Computational Linguistics.
249,204,415 | Towards a methodology for evaluating automatic subtitling | In response to the growing interest towards automatic subtitling, the 2021 EAMT-funded project "Towards a methodology for evaluating automatic subtitling" aimed at collecting subtitle post-editing data in a real use case scenario where professional subtitlers edit automatically generated subtitles. The post-editing setting includes, for the first time, automatic generation of timestamps and segmentation, and focuses on the effect of timing and segmentation edits on the post-editing process. The collected data will serve as the basis for investigating how subtitlers interact with automatic subtitling and for devising evaluation methods geared to the multimodal nature and formal requirements of subtitling. | [
221097200,
201706782,
211296855
] | Towards a methodology for evaluating automatic subtitling
Alina Karakanta [email protected]
University of Trento
Luisa Bentivogli
University of Trento
Mauro Cettolo [email protected]
University of Trento
Matteo Negri [email protected]
University of Trento
Marco Turchi [email protected]
University of Trento
Fondazione Bruno Kessler
University of Trento
Towards a methodology for evaluating automatic subtitling
In response to the growing interest towards automatic subtitling, the 2021 EAMT-funded project "Towards a methodology for evaluating automatic subtitling" aimed at collecting subtitle post-editing data in a real use case scenario where professional subtitlers edit automatically generated subtitles. The post-editing setting includes, for the first time, automatic generation of timestamps and segmentation, and focuses on the effect of timing and segmentation edits on the post-editing process. The collected data will serve as the basis for investigating how subtitlers interact with automatic subtitling and for devising evaluation methods geared to the multimodal nature and formal requirements of subtitling.
Project overview
Automatic subtitling is the task of generating target language subtitles for a given video without any intermediate human transcription and timing of the source speech. The source speech in the video is automatically transcribed, translated and segmented into subtitles, which are synchronised with the speech -a process called automatic spotting (or auto-spotting). Automatic subtitling is becoming a task of increasing interest for the MT community, practitioners and the audiovisual industry. Despite the technological advancements, the evaluation of automatic subtitling still represents a significant research gap. Popular MT evaluation metrics consider only content-related parameters (translation quality), but not form-related parameters, such as format (length and segmentation) and timing (synchronisation with speech, reading speed), which are important features for high-quality subtitles (Carroll and Ivarsson, 1998). Moreover, the way subtitlers interact with automatically generated subtitles has not yet been explored, since the majority of works which conducted human evaluations of the post-editing effort in MT for subtitling have focused on edits in the textual content (Volk et al., 2010; Bywood et al., 2017; Matusov et al., 2019; Koponen et al., 2020).
This project seeks to investigate automatic subtitling, the factors contributing to post-editing effort and their relation to the quality of the output. This is achieved through the collection of rich, product-and process-based subtitling data in a real use case scenario where professional subtitlers edit automatically translated, spotted and segmented subtitles in a dedicated subtitling environment. The richness of the data collected during this one-year project is ideal for understanding the operations performed by subtitlers while they interact with automatic subtitling in their professional environment and for applying mixed methods approaches to:
• Investigate the correlation between the amount of text editing, adjustments in auto-spotting, and post-editing temporal/technical effort
• Explore the effect of auto-spotting edits on the total post-editing process
• Investigate the variability in subtitle segmentation decisions among subtitlers
• Propose tentative metrics for auto-spotting quality and subtitle segmentation

Data collection

Three professional subtitlers with experience in post-editing tasks (two subtitlers en→it, one en→de) were asked to post-edit 9 single-speaker TED talks from the MuST-Cinema test set, 1 the only publicly available speech subtitling corpus (Karakanta et al., 2020), amounting to one hour of video (10,000 source words) in total. The post-editing task was performed in a novel PE subtitling tool, Matesub, 2 which features automatic speech recognition, machine translation, automatic generation of timestamps and automatic segmentation of the translations into subtitles.
For each subtitler, we collected the following data: 1) original automatically-generated subtitle files and the corresponding final human post-edited subtitle files in SubRip .srt format; 2) process logs from the Matesub tool, which record the original and final subtitle, original and final timestamps, and total time spent on the subtitle; 3) keystrokes, using InputLog 3 (Leijten and Van Waes, 2013). Screen recordings were also collected to trace the translation and segmentation decisions of the subtitlers and identify possible outliers. At the end of the task, the subtitlers completed a questionnaire giving feedback on their user experience with automatic subtitling, particular problems faced, and their general impressions on automatic subtitling.
For en→it, we collected a total of 1,199 subtitles from the first subtitler (it1) and 1,208 subtitles from the second subtitler (it2), while for en→de we collected 1,198 subtitles. Based on the process logs we can define the status of each subtitle: new - a new subtitle is added by the subtitler; deleted - an automatically generated subtitle is discarded by the subtitler; or edited - any subtitle that is not new or deleted, regardless of whether it was confirmed exactly as generated by the system or changed. Table 1 shows the distribution of subtitles based on their status, with edited being the majority.
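As an illustration, this status assignment can be sketched as a simple function over the process-log entries; the field names below ("original", "final") are assumptions for illustration only, not the actual Matesub log schema.

```python
# Minimal sketch of deriving subtitle status from post-editing process logs.
# Field names ("original", "final") are illustrative assumptions, not the
# actual Matesub log schema.
from collections import Counter

def subtitle_status(entry):
    if entry.get("original") is None:   # no machine output: added by the subtitler
        return "new"
    if entry.get("final") is None:      # machine output discarded
        return "deleted"
    return "edited"                     # kept, whether changed or confirmed

def status_distribution(log_entries):
    counts = Counter(subtitle_status(e) for e in log_entries)
    total = sum(counts.values())
    return {s: (n, round(100 * n / total, 1)) for s, n in counts.items()}

if __name__ == "__main__":
    demo = [{"original": "Hallo Welt", "final": "Hallo, Welt"},
            {"original": None, "final": "Neue Zeile"},
            {"original": "Zu löschen", "final": None}]
    print(status_distribution(demo))
```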
Final remarks
This project focuses on automatic subtitling and the challenges in its evaluation due to the multi-modal nature of the source medium (video, audio) and the formal requirements of the target (format and timing of subtitles). The data collected constitute the basis for future multi-faceted analyses to explore correlations between translation quality, spotting quality, and post-editing effort, possibly leading to new metrics for automatic subtitling. The subtitling data collected will be publicly released to promote research in automatic subtitling.
Table 1: Distribution of subtitles based on their status.

Subtitler   Edited          New        Deleted
it1         1,015 (84.7%)   59 (4.9%)  125 (10.4%)
it2         953 (78.9%)     68 (5.7%)  187 (15.4%)
de          1,051 (87.7%)   59 (4.9%)  88 (7.4%)
1 https://ict.fbk.eu/must-cinema/
2 https://matesub.com/
3 https://www.inputlog.net/
Acknowledgements
This project has been partially funded by the EAMT programme "2021 Sponsorship of Activities - Students' edition". We kindly thank the subtitlers Giulia Donati, Paolo Pilati and Anastassia Friedrich for their participation in the PE task.
Bywood, Lindsay, Panayota Georgakopoulou, and Thierry Etchegoyhen. 2017. Embracing the threat: machine translation as a solution for subtitling. Perspectives, 25(3):492-508.
Carroll, Mary and Jan Ivarsson. 1998. Code of Good Subtitling Practice. Simrishamn: TransEdit.
Karakanta, Alina, Matteo Negri, and Marco Turchi. 2020. MuST-Cinema: a Speech-to-Subtitles corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3727-3734, Marseille, France. ELRA.
Koponen, Maarit, Umut Sulubacak, Kaisa Vitikainen, and Jörg Tiedemann. 2020. MT for subtitling: User evaluation of post-editing productivity. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 115-124, Lisboa, Portugal. European Association for Machine Translation.
Leijten, Mariëlle and Luuk Van Waes. 2013. Keystroke logging in writing research: Using Inputlog to analyze writing processes. Written Communication, 30:358-392.
Matusov, Evgeny, Patrick Wilken, and Yota Georgakopoulou. 2019. Customizing Neural Machine Translation for Subtitling. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 82-93, Florence, Italy. Association for Computational Linguistics.
Volk, Martin, Rico Sennrich, Christian Hardmeier, and Frida Tidström. 2010. Machine Translation of TV Subtitles for Large Scale Production. In Zhechev, Ventsislav, editor, Proceedings of the Second Joint EM+/CNGL Workshop "Bringing MT to the User: Research on Integrating MT in the Translation Industry" (JEC'10), pages 53-62, Denver.
1,844,637 | XUXEN: A Spelling Checker/Corrector for Basque Based on Two-Level Morphology | Abstract: The application of the formalism of two-level morphology to Basque and its use in the elaboration of the XUXEN spelling checker/corrector are described. This application is intended to cover a large part of the language. Because Basque is a highly inflected language, the approach to spelling checking and correction has been conceived as a by-product of a general-purpose morphological analyzer/generator. This analyzer is taken as a basic tool for current and future work on automatic processing of Basque. An extension for continuation class specifications in order to deal with long-distance dependencies is proposed. This extension consists basically of two features added to the standard formalism which allow the lexicon builder to make explicit the interdependencies of morphemes. User-lexicons can be interactively enriched with new entries, enabling the checker from then on to recognize all the possible flexions derived from them. Due to a late process of standardization of the language, writers don't always know the standard form to be used and commit errors. The treatment of these "typical errors" is made in a specific way by means of describing them using the two-level lexicon system. In this sense, XUXEN is intended as a useful tool for standardization purposes of present-day written Basque.
2209224,
1296471
] | XUXEN: A Spelling Checker/Corrector for Basque Based on Two-Level Morphology
Agirre E
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
Alegria I
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
Arregi X
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
Artola X Diaz De Ilarraza
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
A
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
Maritxalar M
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
Sarasola K
Informatika Fakultatea
DONOSTIA (Basque Country
Urkia M. U.Z.E.I. Aldapeta
DONOSTIA (Basque Country)
20P.K. 64920080, 20009Spain
XUXEN: A Spelling Checker/Corrector for Basque Based on Two-Level Morphology
Abstract: The application of the formalism of two-level morphology to Basque and its use in the elaboration of the XUXEN spelling checker/corrector are described. This application is intended to cover a large part of the language. Because Basque is a highly inflected language, the approach to spelling checking and correction has been conceived as a by-product of a general-purpose morphological analyzer/generator. This analyzer is taken as a basic tool for current and future work on automatic processing of Basque. An extension for continuation class specifications in order to deal with long-distance dependencies is proposed. This extension consists basically of two features added to the standard formalism which allow the lexicon builder to make explicit the interdependencies of morphemes. User-lexicons can be interactively enriched with new entries, enabling the checker from then on to recognize all the possible flexions derived from them. Due to a late process of standardization of the language, writers don't always know the standard form to be used and commit errors. The treatment of these "typical errors" is made in a specific way by means of describing them using the two-level lexicon system. In this sense, XUXEN is intended as a useful tool for standardization purposes of present-day written Basque.
Introduction
This paper describes the application of two-level morphology to Basque, along with its use in the elaboration of the XUXEN spelling checker/corrector. The morphological analyzer included in XUXEN has been designed with the aim of laying the foundations for further development of automatic processing of Basque. The fact that Basque is a highly inflected language makes the correction of spelling errors extremely difficult because collecting all the possible word-forms in a lexicon is an endless task.
The simplicity of English inflections made for reduced interest in research on morphological analysis by computer. In English, the most common practice is to use a lexicon of all of the inflected forms or a minimum set of morphological rules (Winograd, 83). That means that while a great many language-independent tools have been developed for syntactic and semantic analysis, the same cannot be said for morphological tools. In 1981, Kaplan and Kay (Kaplan et al., 81) made a valuable contribution in designing a formalism for phonological generation by means of rules compiled into an automaton. This idea would later be followed up by Koskenniemi (Koskenniemi, 83; 84; 85; Karttunen et al., 87) in the two-level formalism. The computational model for two-level morphology has found widespread acceptance in the following years due mostly to its general applicability, the declarativeness of its rules and the clear separation of linguistic knowledge from the program. The essential difference from generative phonology is that there are no intermediate states between lexical and surface representations. Word recognition is reduced to finding valid lexical representations which correspond to a given surface form. Inversely, generation proceeds from a known lexical representation and searches for surface representations corresponding to it. The complexity of the model is studied in depth in (Barton, 85), who, with few exceptions, agrees with Karttunen (Karttunen, 83) in feeling that the complexity of a language has no significant effect on the speed of analysis or synthesis.
There have been many implementations of the two-level model for very different languages, some of them offering full coverage of the language: Finnish, English and Arabic among others. Our implementation is intended to cope extensively with present-day Basque.
XUXEN manages user-lexicons which can be interactively enriched during correction by means of a specially designed human-machine dialogue which allows the system to acquire the internal features of each new entry (sublexicon, continuation class, and selection marks).
Moreover, XUXEN deals with errors often due to recent standardization of Basque. An additional lexicon includes alternative variants to the standard entries and additional rules model erroneous morphophonological changes; this allows a specialized treatment of "typical errors".
In what follows, we give an overview of Basque morphology and of the application of the two-level model to Basque, then describe the lexical database built as a support for this and other applications, and finally present the strategies followed in the design and implementation of the spelling checker/corrector.
Brief Description of Basque Morphology
Basque is an agglutinative language; that is, for the formation of words the dictionary entry independently takes each of the elements necessary for the different functions (syntactic case included). More specifically, the affixes corresponding to the determinant, number and declension case are taken in this order and independently of each other (deep morphological structure).
One of the principal characteristics of Basque is its declension system with numerous cases, which differentiates it from the languages of the surrounding countries. The inflections of determination, number and case appear only after the last element in the noun phrase. This last element may be the noun, but also typically an adjective or a determiner. Basque declension is unique; that is, there exists a single declension table for all flexionable entries, compared to Latin, for instance, which has 5 declension paradigms.
As prepositional functions are realized by case suffixes inside word-forms, Basque presents a relatively high power to generate inflected word-forms. For instance, from one noun entry a minimum of 135 inflected forms can be generated. Moreover, while 77 of them are simple combinations of number, determination, and case marks, not capable of further inflection, the other 58 are word-forms ending in one of the two possible genitives or in a sequence composed of a case mark and a genitive mark. If the latter is the case, then by adding the same set of morpheme combinations (135) again to each one of those 58 forms, a new, complete set of forms could be recursively generated. This kind of construction reveals a noun ellipsis inside a complex noun phrase and could theoretically be extended ad infinitum; in practice, it is not usual to find more than two levels of this kind of recursion in a word-form but, in turn, some quite frequent forms contain even three or more levels. This means that a morphological analyzer for Basque should be able to recognize 77 + 58 × (77 + 58 × (77 + 58)) = 458,683 inflected forms for each noun, taking into account only these two levels of recursion. This generation capability is similar for all parts of speech. In the case of adjectives, due to the possibility of graduation, this capability is 4 times greater.
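As an illustration of this combinatorial growth, the number of recognizable forms per noun at a given depth of genitive recursion can be computed with a short recurrence; the sketch below simply encodes the 77 closed forms and 58 genitive-bearing forms mentioned above.

```python
# Sketch: number of inflected word-forms per Basque noun, assuming 77 closed
# forms and 58 genitive-bearing forms that each re-admit the full paradigm.
CLOSED, OPEN = 77, 58

def forms(depth):
    """Forms recognizable with at most `depth` levels of genitive recursion."""
    if depth == 0:
        return CLOSED + OPEN          # the 135 first-level forms
    return CLOSED + OPEN * forms(depth - 1)

for d in range(4):
    print(d, forms(d))
# depth 2 gives 77 + 58*(77 + 58*(77 + 58)) = 458,683, as in the text
```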
The grammatical gender does not exist in Basque; there are not masculine and feminine. However, the verb system uses the difference sometimes, depending on the receiver and the grade of familiarity: this is the case of the allocutive verb forms.
Verb forms are composed of a main verb and an auxiliary finite form. The verb system in Basque is a rich one: a single finite verb form often contains morphemes corresponding to ergative, nominative and dative cases.
Derivation and composition are quite productive and they are widely used in neologism formation.
Application of Two-Level Morphology to Basque
The Rules
The correlations existing between the lexical level and the surface level due to morphophonological transformations are expressed by means of the rules. In the case of Basque, 21 two-level rules have been defined. These rules arise for the four following reasons: eminently phonological (7 rules), morphological (3 rules), orthographical (5 rules), and both phonological and morphological (6 rules). The effects of the rules are always phonological. Given that suppletion cases are rare in Basque, phonemically unrelated allomorphs of the same morpheme are included in the lexicon system as separate entries. No rules deal with these phenomena. The rules are applied to express three types of realizations: adding a character, removing a character, or the alternation of a character from the lexical to the surface level. These basic transformations can be combined.
In order to control the application of the rules, 17 selection marks are used. Since two-level rules are sensitive only to the form of the word, these marks inform on part of speech, special endings and other features needed for handling exceptions in rules.
Examples of rules (where C represents any consonant): 2 is the selection mark stated at the beginning of affixes requiring an epenthetical e, 1 is the selection mark stated at the end of those lemmas with a final au diphthong, 6 is the selection mark stated at the end of those lemmas with final hard r, 4 is the selection mark stated at the end of verb infinitives with final n, and & is the selection mark stated at the end of place names with final l or n which forces voicing of a following t. The first rule states that the selection mark 2 is realized as surface e, always and only when it is preceded either by a consonant or a selection mark 8, or a selection mark 6 realized as surface r, or a selection mark 4.
The second rule specifies the voicing of lexical t, always and only when it is preceded either by an n or l followed by the selection marks & and 2, or by an n followed by the selection mark 2.
At the moment, the translation of rules into automata required by the two-level formalism is made by hand.
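While the actual rules are compiled into finite-state automata, the licensing context of the first rule can be pictured as a small predicate over the preceding lexical and surface characters; the encoding below is a toy illustration, not the XUXEN implementation.

```python
# Illustrative sketch (not the actual XUXEN automata): checking the contexts
# in which selection mark '2' may be realized as surface 'e', following the
# verbal description of the first rule above.
CONSONANTS = set("bcdfghjklmnpqrstvwxz")

def mark2_realized_as_e(prev_lexical, prev_surface):
    """True iff the left context licenses 2:e according to the rule sketch."""
    if prev_lexical in CONSONANTS:
        return True
    if prev_lexical == "8":
        return True
    if prev_lexical == "6" and prev_surface == "r":
        return True
    if prev_lexical == "4":
        return True
    return False

print(mark2_realized_as_e("r", "r"))   # consonant context -> True
print(mark2_realized_as_e("6", "r"))   # mark 6 realized as r -> True
print(mark2_realized_as_e("1", "u"))   # other contexts -> False
```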
The Lexicon System
Among the morphological phenomena handled by our system so far, we would like to emphasize the following: whole declension system --including place and person names, special declension of pronouns, adverbs, etc.--, graduation of adjectives, relational endings and prefixes for verb forms --finite and non-finite--and some frequent and productive cases of derivation and compounding.
The lexicon system is divided into sublexicons. Lexical representation is defined by associating each entry to its sublexicon and giving it the corresponding continuation class.
a) Sublexicons: Lemmas, auxiliaries of verbs and finite verb forms, and different affixes corresponding to declension, determination, number, verb endings, and so on are distinguished. All of the entries in the sublexicons are coded with their continuation class and morphological information. At present nearly 15,000 items are completely coded in the lexicon system: 8,697 lemmas, 5,439 verb forms and 120 affixes. They are grouped into 94 different sublexicons. Within a short time, this number will be increased in order to code all the 50,000 entries present at the moment in the database supporting the lexicon. The entry code gives, when appropriate, information on part of speech, determination, number, declension case, gender (exceptional cases), relation (of subordination), part of speech transformation that a relational affix produces, type of verb, root of finite verb forms, tense-mood, grammatical person, etc., along with the specific information each entry requires.
b) Continuation class: Generalizations are not always possible. For example, while with nouns and adjectives the assignment of a single continuation class to all of the elements of each category has been possible, adverbs, pronouns and verbs have required more particularized solutions. A number of 79 continuation classes have been defined.
The system permits the unlimited accumulation and treatment of information as it extracts data from the dictionary according to the segmentation found. This feature is essential to Basque given that: a) a large amount of morpho-syntactic knowledge can be derived from a single word-form, and b) there is no set theoretical limit to the potential recursion of genitives.
Separated representation for homographs and homonyms --in the main sublexicon, with the same or different continuation classes--has been made possible. Although this distinction is not necessarily relevant to morphological analysis, future work on syntax and semantics has been taken into consideration.
Some Problems and Possible Solutions
Although the notation and concept of continuation class have been used up to now, in the authors' opinion it is the weakest point of the formalism. Especially in dealing with the Basque auxiliary verb, cases of long-distance dependencies have been found that cannot be expressed adequately. Different solutions have been proposed to solve similar problems for other languages (Trost, 90; Schiller, 90). The solution suggested below is not as elegant and concise as a word-grammar, but it seems expressive enough and even more efficient when dealing with this kind of problem. To this end, an improved continuation class mechanism is being implemented. This mechanism supports the following two extra features:
bans, which can be stated together with a continuation class; they are used to express the set of continuation classes forbidden further along the word-form (from the lexical entry defined with this restricted continuation class). Examples:
bait (PERTSONA -LA -N): this states that the morphemes allowed in the word-form following the verb prefix bait are those belonging to the continuation class PERTSONA, but also that further on in the word no morphemes belonging to the continuation classes LA or N will be accepted.
continuation class-tree: the lexicon builder has the possibility of restricting the set of allowed continuation morphemes for a given one by making these morphemes explicit through different segments in the word-form; this is done by giving a parenthesized expression representing a tree. This mechanism improves the expressiveness of the formalism, providing the additional power of specifying constraints on the set of morphemes allowed after the lexicon entry, stating in fact a continuation "path" --not restricted to the immediate morpheme--which makes that set explicit in a conditioned way. Examples:
na (KI (DAT23 (N_KE)), TZAI (DAT23 (LAT))) which specifies two alternative continuation "paths" allowed after this morpheme: the one including the morphemes in the continuation class KI and that which includes those in the continuation class TZAI. In both cases DAT23 restricts the set of morphemes potentially permitted as continuation of those in KI or TZAI, allowing only the 2nd and 3rd person dative morphemes. Without this extension of the formalism, it would be possible to do it by storing repeatedly the morpheme tzai in two or more different lexicons, but this is not very useful when the distance between dependent morphemes is longer. Similarly: ha (KI (DAT13 (N_KE)), TZA! (DAT13 (LAT))) is the way to express that ha (nominative, 2nd person) is to be combined with dative morphemes of 1st and 3rd person but not with those of 2nd. Continuation classes N_KE and LAT further restrict the morphemes allowed conditioning them in this case to the classes KI and TZAI respectively. Note that in this example two different cases of longdistance dependency are present.
The Lexical Database
The lexical database is supported permanently in a relational system. This database is intended as an independent linguistic tool. Within this framework, information about the two-level lexicon system is stored in three different relations.
Each lexicon is mainly characterized by the susceptibility of its components to be the initial morpheme in a word-form and by whether or not they are of semantic significance.
In another relation, continuation classes are defined in terms of lexicons or other continuation classes. It is possible to store examples as well.
Finally, the main component of the database is the set of lexicons with their associate entries: the two-level form of the entry is stored along with its original form, the source from which it has been obtained, examples, and in some cases (lemmas) the usage frequency. Obviously, the linguistic knowledge related to the entry is also stored in this relation.
A user friendly interface allows the lexicon builder to do the operations of addition and updating of entries, consistency checking, etc. in a comfortable way. Selection marks depending on knowledge contained in the database such as part of speech, subcategorization of nouns, special endings for certain categories, etc. may be automatically derived from the information in the base.
The production of the up-to-date run-time lexicon and continuation class definitions in the format required by the two-level system is obtained automatically from this database by means of specially designed procedures.
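As an illustration of this export step, a minimal sketch might iterate over database rows and emit sublexicon blocks; the row fields and output format below are assumptions for illustration only, not the actual database schema or run-time lexicon format.

```python
# Sketch of exporting database rows to a run-time two-level lexicon file.
# The row fields and the output format are illustrative assumptions only.
rows = [
    {"sublexicon": "NOUN", "entry": "etxe", "continuation": "IZEN", "gloss": "house"},
    {"sublexicon": "NOUN", "entry": "mendi", "continuation": "IZEN", "gloss": "mountain"},
]

def export_runtime_lexicon(rows, path):
    with open(path, "w", encoding="utf-8") as f:
        current = None
        for row in sorted(rows, key=lambda r: r["sublexicon"]):
            if row["sublexicon"] != current:
                current = row["sublexicon"]
                f.write(f"LEXICON {current}\n")
            f.write(f'{row["entry"]}\t{row["continuation"]}\t"{row["gloss"]}";\n')

export_runtime_lexicon(rows, "runtime.lex")
```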
The Spelling Checker/Corrector
The morphological analyzer-generator is an indispensable basic tool for future work in the field of automatic processing of Basque, but in addition, it is the underlying basis of the spelling checker/corrector. The spelling checker accepts as good any word which permits a correct morphological breakdown, while the mission of the morphological analyzer is to obtain all of the possible breakdowns and the corresponding information. Languages with a high level of inflection such as Basque make it impossible to store every word-form in a dictionary, even in a very compressed way; so, spelling checking cannot be resolved without adequate treatment of words from a morphological standpoint.
From the user's point of view XUXEN is a valid system to analyze documents elaborated by any word processor. It operates at a usual speed and takes up reasonable amount of space, thus allowing it to work with any microcomputer.
The Spelling Checker
The basic idea of accepting words which have a correct morphological analysis is fulfilled with classic techniques and tools for detecting spelling errors (Peterson, 80). A filter program appropriate for punctuation problems, capital letters, numbers, control characters and so on has been implemented. In addition to the mentioned problems, difficulties intrinsic to Basque, like word-composition, abbreviations, declension of foreign words, etc., have also been taken into account. Besides this filter, interactive dialogue with the user, buffers for the most frequent words (in order to improve the performance of the system), and maintenance of the user's own dictionary (following the structure of the two-level lexicon) are the essential elements to be added to the morphological analyzer for the creation of a flexible and efficient spelling checker.
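A minimal sketch of such a checking pipeline, with stand-ins for the token filter, the frequent-word buffer, the user lexicon and the analyzer, might look as follows; none of the names below are the actual XUXEN components.

```python
# Sketch of the overall checking flow: filter, frequent-word buffer,
# morphological analysis, user lexicon. All components here are stand-ins.
import re

FREQUENT = {"eta", "da", "ez", "bat"}       # buffer of most frequent word-forms
USER_LEXICON = set()                        # filled interactively during correction

def is_token_to_check(token):
    return re.fullmatch(r"[A-Za-zñÑ'-]+", token) is not None   # skip numbers, punctuation

def accept(word, analyze):
    """`analyze` is any callable returning the word's morphological breakdowns."""
    w = word.lower()
    if w in FREQUENT or w in USER_LEXICON:
        return True
    return bool(analyze(w))                 # accepted iff some breakdown exists

toy_analyzer = lambda w: ["etxe+A+N"] if w == "etxean" else []
for tok in "Etxean 123 dago".split():
    if is_token_to_check(tok):
        print(tok, accept(tok, toy_analyzer))
```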
It is very important to notice the necessity of a suitable interface for lexical knowledge acquisition when it comes to managing with precision the inclusion of new lemmas in the user's own dictionary. Without this interface, morphological and morphotactical information essential to the checker would be left unknown and, so, no flexions could be accepted. Currently, the system acquires information from the user about part of speech, subcategorization for nouns --person or place names, mainly--and some morphonological features like final hard-or-soft r distinction. So, the user, giving to the system several answers, makes possible the correct assignment of continuation class and selection marks to the new lemma. In this way, open class entries may be accepted and adequately treated. Entries belonging to other classes may also be entered but no flexions of them will be recognized. This ability of the checker to deal correctly with new lemmas requires, in turn, certain grammatical knowledge from the user.
Our prototype, running on a SUN 3/280 and using a buffer containing 4,096 of the most frequent word-forms, checks an average of 17.1 words per second in a text with a rate of misspellings and unknown words (not present in the current lexicon) of 12.7%. Considering the word-forms the system deems as erroneous, statistical tests have shown that 60% are actual misspellings, 16% would have been recognized had the general lexicon been more comprehensive, and the rest correspond to specific words (technical terms, proper nouns, etc.) which the user should include in his own dictionary.
Within a short time minor changes will provide greater performance. A PC version is also in use.
The Spelling Corrector
When a word is not recognized by the spelling checker, the user can choose, among other options, to ask the system for suggestions for replacing the erroneous word. These suggestions, logically, must be correct words which will be similar to the word-form given by the user.
To find similar words to propose, there exist two working lines: 1) Using as a guide the "sources of error" described by Peterson (Peterson, 80), errors are basically of two types:
- Errors due to lack of knowledge of the language: these errors are often not dealt with, on the assumption that they are infrequent, but Pollock and Zamora (Pollock, 84) estimate their frequency at between 10% and 15%. Moreover, because Basque is a language whose standardization for written use has begun only in recent years, a higher rate of such errors is to be expected.
- Typographical errors. According to the classic typification by Damerau (Damerau, 64), 80% of "typos" are of one of the following four types: one extra character, one missing character, a mistaken character, or the transposition of two consecutive characters. Following that, n + 26(n-1) + 26n + (n-1) possible combinations (n being the length of the word) can be generated; they must be examined to verify their validity and the most probable must be selected. For this examination it is normal to use statistical methods which, though not very reliable, are highly efficient (Pollock, 84).
2) Definition of a measure of distance between words and calculation of which words of the dictionary have the smallest distance to the erroneous word (Angell, 83; Tanaka, 87). The most frequently used measure is the Levenshtein distance. This second method, distance measurement, is slower but much more reliable than the first one, though it is not suitable for a lexicon system where the words are incomplete, as is the case here. Due chiefly to this, the chosen option has been the adaptation of the first method, taking into account the following criteria:
Handling of typical errors. A linguistic study has been carried out on typical errors, that is, errors most frequently committed due to lack of knowledge of the language itself or its latest standardization rules, or due to the use of dialectal forms. To store typical errors a parallel two-level lexicon subsystem is used. In this subsystem, each unit is an erroneous morpheme which is directly linked to the corresponding correct one. When searching for words the two-level mechanism is used together with this additional lexicon subsystem. When a word-form is not accepted by the checker the typical errors subsystem is added and the system retries the orthographical checking. If the incorrect form is now correctly analyzed --so, it contains a typical error-the correct morpheme corresponding to the erroneous one is directly obtained from the typical errors subsystem. There will also be additional two-level rules, which will reflect the erroneous, but typical morphonological alternations in dialectal utilizations or training periods.
Generating alternatives. Alternatives to typographical errors are generated using Damerau's classification.
Trigram analysis. In generating the alternatives, trigram analysis is used both for discarding some of them and for ranking them in order of probability (a sketch of these two steps is given below, after the criteria).
Spelling checking of proposals. On the basis of the three previous criteria alone, incorrect word-forms would be offered to the user. Therefore, the word-forms must be fed into the spelling checker to check whether they are valid or not.
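To make the generation and trigram criteria concrete, the sketch below produces Damerau-style candidates and filters and ranks them by character trigrams; the word list and trigram statistics are toy stand-ins, not the actual lexicon or frequency data.

```python
# Sketch: Damerau-style candidate generation with trigram filtering.
# Alphabet, word list and trigram statistics are illustrative only.
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
SAMPLE_WORDS = ["etxe", "etxea", "etxean", "mendi", "mendia"]
TRIGRAMS = Counter(w[i:i + 3] for w in SAMPLE_WORDS for i in range(len(w) - 2))

def damerau_candidates(word):
    deletes = [word[:i] + word[i + 1:] for i in range(len(word))]
    inserts = [word[:i] + c + word[i:] for i in range(len(word) + 1) for c in ALPHABET]
    replaces = [word[:i] + c + word[i + 1:] for i in range(len(word)) for c in ALPHABET]
    swaps = [word[:i] + word[i + 1] + word[i] + word[i + 2:] for i in range(len(word) - 1)]
    return set(deletes + inserts + replaces + swaps)

def trigram_score(word):
    grams = [word[i:i + 3] for i in range(len(word) - 2)]
    if any(TRIGRAMS[g] == 0 for g in grams):   # discard candidates with unseen trigrams
        return 0
    return sum(TRIGRAMS[g] for g in grams)

ranked = sorted((c for c in damerau_candidates("etxran") if trigram_score(c) > 0),
                key=trigram_score, reverse=True)
print(ranked[:3])
```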
The whole process would be especially slow, due mostly to the checking of alternatives. To speed it up, the following techniques have been used:
If during the analysis of the word considered wrong a correct morpheme has been found, the criteria of Damerau are applied only in the part unrecognized morphologically, so that the number of possibilities will be considerably lower. This criterion is applied on the basis that far fewer "typos" are committed at the beginning of a word (Yannakoudakis,83). Moreover, on entering the proposals into the checker, the analysis continues from the state it was in at the end of that last recognized morpheme. On doing trigrammatical analysis a trigram table mechanism is used, by means of which generated proposals will be composed only of correct trigrams and classified by their order of probability; thus, correction analysis (the slowest element of the process) is not carried out with erroneous trigrams and the remaining analyses will be in the order of trigrammatical probability. Besides that, the number of proposals to be checked is also limited by filtering the words containing very low frequency trigrams, and never exceeds 20 forms. At any rate, after having obtained three correct proposals, the generation process will end.
If a word is detected as a typical error, it will not be verified as a possible "typo". This requires the analysis of typical errors to take place previous to that of "typos", in spite of being less probable. The justification is that we are particularly interested in giving preferential treatment to typical errors and, what's more, these can be handled more speedily.
The average time for the generation of proposals for a misspelt word-form, on the SUN machine cited above, is 1.5 s. The best case is when three or more alternatives are found in the buffer of most frequent words, and takes less than 0.1 s. The worst case, when no correct proposals are found for a long word-form and when no correct initial morphemes were recognized during its analysis, takes around 6s.
Conclusions
The XUXEN analyzer/checker/corrector has been described as based on the two-level morphological formalism. It deals with Basque, a highly inflected language recently standardized. At the moment a prototype of the system has been implemented in C language. This implementation is a general tool for Basque useful for texts written by any word processing programme.
As is well known, in the two-level model morphemes are stored in the sublexicons without alterations, unlike in other systems. From a linguistic standpoint, the clarity and respect for the lexical unit promoted by this way of focusing morphological analysis is of great importance. However, long-distance dependencies between morphemes can not be adequately expressed by means of the continuation class mechanism. An improved continuation-class mechanism to solve this problem is suggested.
At present, the lexicon system contains nearly 15,000 items; the coding of new lemmas in order to reach 50,000 entries is now being completed. At this moment finite verb forms (approximately 2,000) are in the lexicon, although they could be seen as analyzable forms. These verb forms have been described by means of their component morphemes, taking into account the long-distance dependency problems they present. This has been done using the extension of the continuation-class formalism described in 3.3, which is currently being implemented.
With the lemmas and morphemes coded so far, XUXEN is able to recognize approximately three million different word-forms, without counting at all the forms produced by genitive recursion. Considering that most lemmas in the lexicon can take genitive suffixes, our present implementation of the spelling checker would recognize thousands of millions of word-forms.
User-lexicons can be interactively enriched with new entries enabling XUXEN to recognize from then on all the possible flexions derived from them.
An additional two-level lexicon subsystem is used in our system to store the so-called typical errors. Typical errors are due often to the recent standardization of the language and dialectal uses. This lexicon subsystem is used preferably when suggesting alternatives to the user.
... (I went to him)
joan NA TZAI T * (I went to me*)
etorri HA TZAI T (You came to me)
etorri HA TZAI N * (You came to you* (fem.))
For example:
etxe zaharreAN (etxe zaharrean: in the old house)
  etxe: noun (house)
  zahar: adjective (old)
  r and e: epenthetical elements
  A: determinate, singular
  N: inessive case
So, these inflectional elements are not repeated in each individual word of a noun phrase as in the Romance languages.
Acknowledgements
We thank Prof. Koskenniemi for his fruitful comments on an earlier version of this paper.
Agirre E., Alegria I., Arregi X., Artola X., Diaz de Ilarraza A., Sarasola K., Urkia M. Aplicación de la morfología de dos niveles al euskara. S.E.P.L.N., vol. 8, pp. 87-102, 1989.
Angell R., Freund G., Willett P. Automatic spelling correction using a trigram similarity measure. Information Processing & Management, vol. 19, no. 4, 1983.
Barton E. Computational Complexity in Two-level Morphology, 1985.
Damerau F. A technique for computer detection and correction of spelling errors. Comm. of the ACM, vol. 7, pp. 171-176, 1964.
Euskaltzaindia. Aditz laguntzaile batua. Euskaltzaindia, Bilbo, 1973.
Euskaltzaindia. Euskal Gramatika: Lehen urratsak (I eta II). Euskaltzaindia, Bilbo, 1985.
Kaplan R. M., Kay M. Phonological rules and finite-state transducers. Paper read at the annual meeting of the Linguistic Society of America in New York City, 1981.
Karttunen L. KIMMO: A two-level Morphological Analyzer. Texas Linguistic Forum, vol. 22, pp. 165-186, 1983.
Karttunen L., Koskenniemi K., Kaplan R. A Compiler for Two-Level Phonological Rules. In "Tools for Morphological Analysis", Center for the Study of Language and Information, Report No. CSLI-87-108.
Kay M. Morphological Analysis. In A. Zampolli & N. Calzolari (eds.) (1980), Proc. of the Int. Conference on Computational Linguistics (Pisa), 1973.
Koskenniemi K. Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. University of Helsinki, Department of General Linguistics, Publications no. 11, 1983.
Koskenniemi K. Compilation of Automata from Morphological Two-level Rules. Pp. 143-149, Publication no. 15, University of Helsinki, 1985.
Peterson J.L. Computer programs for detecting and correcting spelling errors. Comm. of the ACM, vol. 23, no. 12, 1980.
Pollock J., Zamora A. Automatic spelling correction in scientific and scholarly text. Comm. of the ACM, vol. 27, pp. 358-368, 1984.
Ritchie G.D., Pulman S.G., Black A.W., Russell G.J. A Computational Framework for Lexical Description. Computational Linguistics, vol. 13, numbers 3-4, 1987.
Sarasola I. Gaurko euskara idatziaren maiztasun-hiztegia (3gn. liburukia). GAK, Donostia, 1982.
Schiller A., Steffens P. A lexicon for a German two-level morphology. Paper read at Euralex 1990 (Benalmádena).
Tanaka E., Kojima Y. A high speed string correction method using a hierarchical file. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, 1987.
Trost H. The application of two-level morphology to non-concatenative German morphology. COLING-90, Helsinki, vol. 2, pp. 371-376.
Winograd T. Language as a Cognitive Process. Vol. 1: Syntax, pp. 544-549. Addison-Wesley, 1983.
Yannakoudakis E.J. The rules of spelling errors. Information Processing & Management, vol. 19, no. 2, 1983.
236,460,181 | [] | Stanford MLab at SemEval-2021 Task 1: Tree-Based Modelling of Lexical Complexity Using Word Embeddings
August 5-6, 2021
Erik Rozi [email protected]
Stanford University
Niveditha Iyer [email protected]
Stanford University
Gordon Chi
Stanford University
Enok Choe
Stanford University
Kathy Lee
Stanford University
Kevin Liu
Stanford University
Patrick Liu
Stanford University
Zander Lack
Stanford University
Jillian Tang [email protected]
Stanford University
Ethan A Chi [email protected]
Stanford University
Stanford MLab at SemEval-2021 Task 1: Tree-Based Modelling of Lexical Complexity Using Word Embeddings
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Bangkok, Thailand, August 5-6, 2021.
This paper presents our system for the single- and multi-word lexical complexity prediction tasks of SemEval Task 1: Lexical Complexity Prediction. Text comprehension depends on the reader's ability to understand the words present in it; evaluating the lexical complexity of such texts can enable readers to find an appropriate text and systems to tailor a text to an audience's needs. We present our model pipeline, which applies a combination of embedding-based and manual features to predict lexical complexity on the CompLex English dataset using various tree-based and linear models. Our method is ranked 27 / 54 on single-word prediction and 14 / 37 on multi-word prediction.
Introduction
The rapid expansion of social media and other online channels has made readable information available at an astounding rate. However, the accessibility of this information is often limited by the complexity of this information, especially among readers with low literacy levels in the language of the text and those with reading disabilities. Furthermore, even to the average reader, specialized jargon found in governmental documents and scientific fields is often difficult to decipher.
Systems to guide these users may redirect readers to more easily comprehensible sources, convert the text to simpler wording, or provide additional information about what difficult words mean. The development of such systems is benefited by the ability to evaluate the complexity of sections of the text. While there is currently a large amount of available text data, very little of it is labeled with word complexity; automating the labelling process would make much more data available to aid the development of NLP systems in tasks such as text simplification.

* Co-first authors. † Co-senior authors.
Multiple features of a word can affect lexical complexity. In addition to a word's frequency, length and syllable count, the context in which a word is found is likely to affect its understandability. The additional factor of the reader's proficiency in a language makes this task complex as many words have a highly variable complexity.
In this paper, we describe our model that predicts single- and multi-word lexical complexity scores.
Background
Task Overview
All data was provided through SemEval Task 1 (Shardlow et al., 2021). Our dataset consists of an augmented version of the CompLex Corpus (Shardlow et al., 2020), which contains English sentences from three genres of corpora: the Bible, Europarl, and biomedical writing. From each sentence, both single-and multi-word tokens were selected and annotated by approximately 7 annotators. Each token was annotated on complexity from a scale of 1-5, though for this competition, complexity was normalized to a continuous scale between 0 and 1.
Token complexity can differ based on the complexity of the token both with and without context. For example, for one instance, the token river was rated to have a complexity of 0.0, while jurisprudence had a complexity of around 0.672 for another instance. However, token complexities can also change based on the context from which it came from. For example, the token wisdom was given a complexity of 0.125 when it was associated in the sentence "The rod of correction gives wisdom, but a child left to himself causes shame to his mother." However, the same token was given a significantly higher complexity score of 0.368 when associated with the sentence "For in much wisdom is much grief; and he who increases knowledge increases sorrow."
Given that GloVe embeddings (Pennington et al., 2014) store semantic meaning of single words, we chose to use GloVe embeddings to represent both tokens and sentences. With this approach, we determine that despite contextual variation, inherent properties of the token itself are sufficient to explain much of the variance in lexical complexity.
Traditional Text Complexity Metrics
Many traditional metrics for calculating the complexity of text rely on ratios of syllables to words and words to sentences. For example, the Flesch-Kincaid Grade Level Formula (Kincaid et al., 1975) calculates the complexity of a text with the formula GL = 0.39 × (words / sentences) + 11.8 × (syllables / words) − 15.59.
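As a quick illustration, the grade-level formula can be computed directly from word, sentence and syllable counts; the syllable counter below is a rough heuristic for illustration only, not the one used by our system.

```python
# Sketch: Flesch-Kincaid Grade Level from raw counts.
# The syllable heuristic is approximate and for illustration only.
import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The rod of correction gives wisdom."), 2))
```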
Other models based on the grade level of a text, such as the Automated Readability Index and the SMOG Index (Kincaid et al., 1975), also exist. Our original hypothesis was that these indexes would be good indicators for predicting the complexity of a token. However, through empirical analysis, we found that these indicators provided no marginal benefit compared to GloVe sentence embeddings and simpler handcrafted features. As seen in Table 1, we found that the correlation coefficients of traditional complexity metrics to dataset complexity values were low. To test this, we initially included these traditional metrics in our feature space for the following models. Our model reported an R score of 0.63 with the Flesch-Kincaid Grade and SMOG Index as additional features. We removed these features after observing little benefit or worse loss scores (in comparison to Table 2). This suggests that word complexity in context may be embedded at a deeper semantic level than simple word and syllable lengths.
Table 1: Pearson correlation of traditional complexity metrics with dataset complexity values.

Model                          Pearson
Flesch-Kincaid Grade           0.07
Automated Readability Index    0.07
SMOG Index                     0.03

Pre-trained GloVe embeddings with a dimension of 300 for both the single-word token and each word in the context sentence were used. For the single-word embeddings, PCA with a final dimension of 100 was applied. Since the context sentences contained a variable number of words, we calculated the component-wise mean of all the word-vector representations in the context sentence. We found that sentence features had low mutual information, hence we decided to use a limited number of 10 PCA features to calculate the mean of the sentence features. This mean representation is concatenated with the GloVe embedding of the single-word token.
In other words, let t be the (PCA-reduced) GloVe embedding of the single-word token, and w_i be the GloVe embedding for word i in the context sentence, with n words. We calculate the sentence representation s as the component-wise mean (1/n)(w_1 + ... + w_n) of the word vectors, reduced to 10 dimensions with PCA, giving the combined features r = [t, s] with a dimensionality of 110. On top of this representation, we include handcrafted features. Through manual tuning, we created the following set of manual features:
• NUMLETTERS: the number of letters in the token
• NUMCAPITALS: the number of capital letters in the token
• NUMSYLLABLES: the number of syllables in the token
• NUMDIGITS: the number of digits in the token
• ISFIRSTCAPITAL: whether or not the first letter is capitalized (implying it is a subject or technical term)
• NUMSENTWORDS: the number of words in the context sentence
• CORPUSTYPE: the type of corpus the sentence is taken from
• POS: the part of speech of the token
• ISINNER: whether or not the token is in a named entity
The "POS" and "ISINNER" features are obtained from the Stanza NLP package (Qi et al., 2020).
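A minimal sketch of this feature assembly is given below. It assumes `glove` is a preloaded dictionary mapping words to 300-dimensional vectors, and that `token_pca` and `sent_pca` are scikit-learn PCA projections (with 100 and 10 components respectively) already fit on the training portion of the data; these names are illustrative rather than taken from the paper.

```python
import numpy as np

def embed(word, glove):
    # Fall back to a zero vector for out-of-vocabulary words.
    return glove.get(word.lower(), np.zeros(300))

def single_word_features(token, sentence, glove, token_pca, sent_pca):
    # t: 300-d token embedding reduced to 100 dimensions.
    t = token_pca.transform([embed(token, glove)])[0]
    # s: component-wise mean of the sentence's word vectors, reduced to 10 dimensions.
    mean_vec = np.mean([embed(w, glove) for w in sentence.split()], axis=0)
    s = sent_pca.transform([mean_vec])[0]
    # r = [t, s]: 110 dimensions, to which the handcrafted features are appended.
    return np.concatenate([t, s])
```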
Instead of relying on the frequencies of words in the text we were analyzing, we found that a more representative frequency metric could be obtained by counting word occurrences in all Wikipedia articles. Hence, we decided to use the frequencies of words as they appear in English Wikipedia articles as of February 2019. 2 This feature was concatenated with all of the other handcrafted features and GloVe embeddings, leading to a final feature dimensionality of 126.
Learning Models 3
Because the system primarily treats the input datapoints as sets of vectors and other numerical features, most of the models used were regression models designed for numerical data. As the baseline, we used linear regression with the GloVe embeddings for only the single-word token and obtained a baseline R of 0.7888 on the train set.
We explored the following machine learning models:
• Ridge regression is a linear least squares model with L2 regularization to reduce overfitting and variance. We use α = 0.00001 as the regularization coefficient to prevent overfitting.
• Support Vector Regression is a Support Vector Machine for a regression task that tolerates errors within a certain margin ε. We use ε = 0.02 as the distance within which no penalty is associated, and C = 0.2 as a regularization parameter to reduce overfitting.
• Decision Tree Regression creates a model as a series of decision rules. As a baseline, we created a decision tree with max depth = 6, though other models use varying depths.
• AdaBoost Regression (Freund and Schapire, 1996) sequentially applies decision trees, with each tree placing more weight on data that previous trees did not fit well to. We use DecisionTreeRegressors with max depth = 10 as the base estimator, with a total of n estimators = 20 decision trees.

2 https://github.com/IlyaSemenov/wikipedia-word-frequency
3 All models were implemented using SKLearn (Pedregosa et al., 2012) unless otherwise mentioned.
• XGBoost Regressor overcomes the inefficiency in gradient boosting of creating a single decision tree at a time by parallelizing tree building. We used max depth= 4 and λ = 2000 as a regularization parameter. As λ is responsible for L2 regularization of weights, using a higher value would make the model more conservative by encouraging smaller weights.
• LightGBM Regressor 4 (Ke et al., 2017) is a framework that uses tree-based learning algorithms for gradient boosting. Our model uses gain-based feature importance, with λ = 50, n leaves = 40, and a minimum of 100 datapoints per leaf. To avoid overfitting, we regularize with path smoothing of 1, set a maximum tree depth of 15, and train using DART boosting (a configuration sketch follows this list).
• Stacking We also tested a stack of estimators with a final Ridge regressor to get an ensemble of predictions and reduce overfitting. We stacked five AdaBoost Regressors with n estimators = 50, 100 estimators respectively, each with a base estimator of a Decision Tree Regressor with max depth varying between 5, 7, and 9. On top of this, we stacked two Support Vector Regressors with = 0.01, 0.001 and C = 0.1, 0.01 respectively. Finally, we stacked three LightGBM Regressors, each with 100, 50, and 10 leaves respectively. This method was used with the theory that combining multiple models would result in better predictive power than one model alone.
• Bagging is an ensemble method involving training copies of a base model on independent random samples of the full dataset. We used an LGBM with n leaves = 40, reg lambda = 100, path smooth = 1, max depth = 12, and feature fraction = 0.75 as our base model. We set n estimators = 10, max samples = 0.8, and max features = 0.75 in order to reduce variance of the decision tree.
• BERT We also explore context-dependent deep learning architectures: in particular, we fine-tune the pre-trained BERT model (Devlin et al., 2019). We leverage the pre-trained BERT neural network 5 by tokenizing each sentence, and providing the target word to the model as a second sentence. With 2-3 fully connected layers added on top of the pre-trained model, we fine-tuned this model to generate a numerical complexity prediction, by optimizing on the L2 Loss. All experiments were implemented using SKLearn (Pedregosa et al., 2012) and HuggingFace 6 .
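For concreteness, a sketch of the LightGBM configuration described above is given below, using the LightGBM scikit-learn interface; the feature matrix and labels here are random placeholders standing in for the 126-dimensional features and normalized complexity scores.

```python
import numpy as np
from lightgbm import LGBMRegressor

# Placeholder data standing in for the assembled features and complexity labels.
X_train, y_train = np.random.rand(500, 126), np.random.rand(500)

model = LGBMRegressor(
    boosting_type="dart",     # DART boosting
    importance_type="gain",   # gain-based feature importance
    num_leaves=40,
    min_child_samples=100,    # minimum of 100 datapoints per leaf
    reg_lambda=50,            # lambda, L2 regularization of weights
    path_smooth=1,            # path smoothing to reduce overfitting
    max_depth=15,
)
model.fit(X_train, y_train)
predicted_complexity = model.predict(X_train[:5])
```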
3.2 Multi-word Complexity Score
Data Representation and Features
Our multi-word data representation closely mirrored our single-word token representation. All handcrafted features were computed in the same way as their single-word counterparts, except for the POS and NER features, which were not included. For example, the feature NumLetters includes the number of letters from both words. The context sentence embeddings were calculated with the same methodology of applying PCA with a dimension of 10 to the mean of the GloVe embeddings.
The key difference between the two models lies in the representation of the multi-word tokens themselves. The data provided was consistent in that each multi-word token consisted of two words. Therefore, to represent these tokens, we concatenated the GloVe representation of each word in the token, as well as the difference between both GloVe vectors. From there, we applied PCA of dimension 150 to this embedding, which was determined through experimentation, and concatenated this with the other hand-crafted and context sentence features mentioned previously.
More concretely, let t 1 , t 2 be the GloVe embeddings of each word in the multi-word token. We found the new representation of a multi-word token m to be
m = [t 1 , t 2 , t 1 − t 2 ].
This was concatenated with sentence representation s and handcrafted features for a final dimensionality of 174 features.

5 We use tokenizers and pre-trained models from the HuggingFace transformers library: https://huggingface.co/transformers/model_doc/bert.html
6 https://huggingface.com
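A corresponding sketch for the multi-word representation is shown below, reusing the `embed` helper and the fitted PCA projections assumed in the single-word sketch above; `pair_pca` (150 components, fit on training pair vectors) is likewise an assumed name.

```python
import numpy as np

def multi_word_features(token1, token2, sentence, glove, pair_pca, sent_pca):
    t1, t2 = embed(token1, glove), embed(token2, glove)
    # m = [t1, t2, t1 - t2]: a 900-d pair vector reduced to 150 dimensions with PCA.
    m = pair_pca.transform([np.concatenate([t1, t2, t1 - t2])])[0]
    # Same 10-dimensional sentence representation as in the single-word case.
    mean_vec = np.mean([embed(w, glove) for w in sentence.split()], axis=0)
    s = sent_pca.transform([mean_vec])[0]
    # The handcrafted features are appended elsewhere to reach 174 dimensions.
    return np.concatenate([m, s])
```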
Learning Models
Given the similarity of the multi-word representations versus the single-word representations (the only difference being the addition of a second token's GloVe embedding), we used the LightGBM Regressor outlined in section 3.1.2, as this model performed the best in the single word token setting. This proved to be an effective way to predict multi-word complexity.
Experimental Setup
The train and validation dataset splits provided were used in our experimental setup. In addition, we used K-fold validation to reduce overfitting. Using K-fold, we split the training set into k smaller folds, train using k − 1 folds, and cross-validate with the remaining fold of the train set. This reduces leakage from the validation set into the model so that we can accurately validate our methods. Task predictions were evaluated using Pearson correlation, though Spearman correlation, mean absolute error, mean squared error, and R-squared were also reported. We compared the performance of our own models using Pearson correlation to keep one consistent evaluation metric.
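A minimal sketch of this cross-validation setup is shown below; Pearson correlation is wrapped as a custom scikit-learn scorer, and the data is a random placeholder rather than the CompLex features.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer
from sklearn.model_selection import KFold, cross_val_score

# Score each held-out fold by the Pearson correlation of predictions and labels.
pearson_scorer = make_scorer(lambda y_true, y_pred: pearsonr(y_true, y_pred)[0])

X, y = np.random.rand(200, 126), np.random.rand(200)   # placeholder features and labels
folds = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(Ridge(alpha=1e-5), X, y, cv=folds, scoring=pearson_scorer)
print(scores.mean())
```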
Results
Single Word Results
From Table 2, LGBMRegressor performs the best in terms of the Pearson metric. Therefore, we chose this model as our final model for submission.
We found that transforming the word frequencies to a logarithmic scale did not improve results across the models we tested. This is expected because tree-based regressors (AdaBoost, LGBM, XGB) are invariant to monotonic scaling. Our results on the task evaluation metrics are shown in Table 3.
We suspect Ensemble Stacking overgeneralized and did not perform effectively as a result, though other stacking methods could perform better. Surprisingly, the contextual deep learning approach of BERT did not perform well on the task, only approaching similar performance to the baseline linear regression on GloVe embeddings.
Though we scored 27th place out of 54 teams overall in the Pearson metric for single-words, the top score was only 0.03 points higher than our own evaluation score. We suspect that different methods of stacking regressors and using complex decision trees would have created a model that predicts well with the CompLex dataset. However, whether this type of model will generalize to future datasets is a subject of investigation.
Multi-word Expressions Results
We note that our multi-word expression Pearson metric, as shown in Table 4, performs better than our single word Pearson, and ranks 14th out of 37 teams. This is most likely because averaging the GloVe representations of the two tokens allows for more data points to be represented in the decision tree model.
Conclusion
In this paper we describe tree-based modelling of words in context to predict lexical complexity. We find that lexical complexity is already embedded in GloVe representations of words and that complex architectures provide some increase in predictive performance.
For future work, we suggest taking additional contextual features into account, such as the proximity of each neighboring word. We also suggest looking into newer transformer models to represent contextual embeddings.
As larger bodies of text become widely available to wide audiences for public consumption, we are hopeful that such systems will help readers identify suitable texts for their reading level and help build systems that can tailor text to varied reading levels, allowing for greater accessibility.
1 https://github.com/shivam5992/textstat
Table 2: Experimental results (single-word)

Table 3: Evaluation results (single-word)
Metric      Score     Ranking
Pearson     0.7533    (27)
Spearman    0.7044    (34)
MAE         0.0653    (25)
MSE         0.0071    (29)
R2          0.5615    (26)

Table 4: Evaluation results (multi-word)
4 https://github.com/microsoft/LightGBM
Acknowledgments
This research effort would not have been possible without the support of Stanford ACMLab. The authors thank Matthew Shardlow, Richard Evans, Gustavo Henrique Paetzold and Marcos Zampieri for organizing SemEval 2021 Task 1: Lexical Complexity Prediction. We also thank Yasmine Mitchell for helpful discussions.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Yoav Freund and Robert E. Schapire. 1996. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning, ICML'96, pages 148-156, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for Navy enlisted personnel. Institute for Simulation and Training.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2012. Scikit-learn: Machine learning in Python. CoRR, abs/1201.0490.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
Matthew Shardlow, Michael Cooper, and Marcos Zampieri. 2020. CompLex - a new corpus for lexical complexity prediction from Likert Scale data. In Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI), pages 57-62, Marseille, France. European Language Resources Association.
Matthew Shardlow, Richard Evans, Gustavo Paetzold, and Marcos Zampieri. 2021. SemEval-2021 Task 1: Lexical complexity prediction. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2021). |
||
219,309,519 | [] | APPENDIX D: SAMPLE TEXTS FROM TST2 TEST SE T
APPENDIX D: SAMPLE TEXTS FROM TST2 TEST SE T
This appendix contains a few representative texts from the main MUC-3 and MUC-4 corpus. Six examples were selected from the test set named TST2, three texts that were definitely relevant to the data extraction task, one that was marginally relevant, and two that were not relevant to the task. See appendix E for the templates corresponding to the relevant texts. See appendix A for a description of the relevance criteria.
THE MURDER OF THE JESUITS, WHAT ARE RELATION S LIKE BETWEEN THE SALVADORAN MILITARY AND THE GRINGO ADVISERS ? [PONCE] AS USUAL, THESE RELATIONS ARE VERY CORDIAL, AND THE Y CONTINUE TO WORK WITH US . AS YOU CAN SEE, TWO ADVISERS ARE HERE WIT H ME NOW AT THE TABLE PRESIDING OVER THE EVENT . THE SAME HAS BEEN THE CASE WITH ALL THE UNITS IN THE COUNTRY ' S INTERIOR . I FAIL TO
2. SAMPLE OF MARGINALLY RELEVANT TEXTS
A
POLICE DEPARTMENT SPOKESMAN CONFIRMED TODAY THAT "WE HAV E RELIABLE INFORMATION ON HIS WHEREABOUTS," ALTHOUGH HE REFRAINED FRO M GIVING FURTHER DETAILS . UNOFFICIAL SOURCES SAID THAT ESCOBAR GAVIRI A "IS SOMEWHERE IN WESTERN ANTIOQUTA DEPARTMENT" --OF WHICH MEDELLIN I S THE CAPITAL --WHERE THE SEARCH OPERATIONS ARE CONCENTRATED . COLOMBIAN AUTHORITIES HAVE ALSO ASKED FOR THE COOPERATION OF BRAZIL, ECUADOR, PANAMA, PERU, AND VENEZUELA TO STOP THE DRUG TRAFFICKERS FROM ESCAPING TO THOSE COUNTRIES . TST2-MUC4-001 1 SAN SALVADOR, 7 FEB 90 (CANAL DOCE TELEVISION) --[REPORT] [ALFRED O VILLAREAL] [TEXT] THE CHIEF OF THE ARMED FORCES JOINT CHIEFS OF STAF F [COLONEL RENE EMILIO PONCE] HAS CATEGORICALLY DENIED THAT THERE AR E ANY RIFTS BETWEEN SALVADORAN ARMY OFFICERS AND U .S . MILITARY ADVISERS , AS ASSERTED BY THE WASHINGTON POST . THE NEWSPAPER STATED THAT TH E ALLEGED RIFT BETWEEN THE MILITARY OFFICERS BEGAN ONCE IT WA S DISCOVERED WHO WAS RESPONSIBLE FOR THE DEATH OF SIX JESUIT PRIESTS, I N WHICH ONE COLONEL, TWO LIEUTENANTS, AND SIX SOLDIERS ARE BEIN G CHARGED. COLONEL PONCE SAID THAT THE RELATIONS WITH U .S . ADVISERS --ABOUT 52 IN THE COUNTRY --ARE VERY CORDIAL.
17 NOV 89 (RADIO VENCEREMOS) --[TEXT] WE HAVE STARTED A NEW DAY, THE SEVENTH, AND THE FARABUNDO MARTI NATIONAL LIBERATIO N FRONT [FMLN] IS STILL HOLDING ITS POSITIONS . WE HAVE NOT RETREATED A SINGLE STEP . IF WE MAKE ANY MOVE, IT WILL BE TO ADVANCE TOWARD THE ENEMY BARRACKS, BECAUSE WE WILL SEIZE POWER AND EXPEL THE MILITARY DICTATORSHIP THAT HAS INSTALLED ITSELF IN OUR HOMELAND . THE FMLN MAINTAINS FOR THE 7TH DAY ITS POSITIONS IN THE DEPARTMENTAL CAPITALS . WE CONTINUE TO ADVANCE IN SAN MIGUEL; WE HAVE MADE AN IMPORTAN T SEIZURE OF WEAPONS IN SAN MIGUEL . THE SITUATION COULD NOT BE BETTER IN SAN SALVADOR . LAST NIGHT W E REPORTED THAT THE ARCH OF LIBERTY EXISTS IN THE CAPITAL ; IT IS COMPRISED OF NEIGHBORHOODS AND HOUSING PROJECTS -WHERE TH E PROTAGONISTS OF THIS HISTORY ARE THE MASSES, WHICH ARE ORGANIZED FO R THE CONSTRUCTION OF TRENCHES AND BARRICADES . THE PEOPLE ARE THE MOST IMPORTANT PROTAGONISTS IN THIS HISTORIC ACTION . RADIO VENCEREMOS URGES THE PEOPLE TO MULTIPLY THEIR ORGANIZATION I N EACH NEIGHBORHOOD, RESIDENTIAL AREA, AND TOWN. POPULAR COMMITTEES MUST BE ORGANIZED ; THE PEOPLE MUST ORGANIZE THEMSELVES ; BUT IF THE ENEMY IS NEAR, THIS MUST BE DONE CLANDESTINELY . ALL THE GROUPS I N EACH TOWN, MEANING THE RELIGIOUS GROUPS, STUDENT ORGANIZATIONS , TEACHERS ORGANIZATIONS, AND COOPERATIVES, MUST ALL ORGANIZE , REGARDLESS OF THEIR POLITICAL BELIEFS . OUR UNITY MUST BECOME THE BES T TOOL TO EVICT, ONCE AND FOR ALL, THE MURDEROUS MILITARY DICTATORSHI IMPORTANT FOR THE PEOPLE TO ORGANIZE COMMITTEES IN EACH NEIGHBORHOOD, RESIDENTIAL AREA, AND TOWN, AND THE WHOLE COMMUNITY MUS T PARTICIPATE. IF THE ARMED FORCES ARE NEAR, THEN CLANDESTIN E ORGANIZATION MUST BE CONDUCTED IN SMALL TOWNS, VILLAGES, AN D NEIGHBORHOODS . THE PEOPLE MUST ORGANIZE THEMSELVE TO CONSPIRE AGAINS T DEATH, TO CONSPIRE AGAINST THE ASSASSINS, TO CONSPIRE AGAINST POVERTY , TO CONSPIRE AGAINST HUNGER, AND TO CONSPIRE AGAINST THE BOMBINGS . IT IS NECESSARY TO ORGANIZE THE PEOPLE; IT IS NECESSARY TO ORGANIZE TH E ENTIRE COUNTRY. EL SALVADOR MUST BECOME A BODY COMPRISED OF ORGANIZE D CELLS --EVERYWHERE AND IN EVERY POSSIBLE FORM --REGARDLESS OF TH E PEOPLE'S RELIGIOUS BELIEFS OR POLITICAL PARTY . WE ISSUE AN APPEAL TO THE DECEIVED ARENA [NATIONALIST REPUBLICAN ALLIANCE] RANK AND FILE, WHO VOTED BELIEVING THAT THIS GOVERNMEN T WOULD REPRESENT A CHANGE FOR THE BETTER . WE ALSO ISSUE AN APPEAL TO ALL THE PATRIOTIC CITIZENS OF THE HOMELAND, URGING THEM TO STOP TH E DESTRUCTION OF THE CAPITAL, TO STOP THE DESTRUCTION OF THE COUNTRY . WE URGE THEM TO UNITE AND STOP THE WAR ; AND THE END OF THE WAR IS NEA R --WITH THE PEOPLE'S VICTORY . [CONTINUED]3. SAMPLE OF IRRELEVANT TEXT STST2-MUC4-000 5 SANTIAGO, 20 MAR 89 (DOMESTIC SERVICE) --[SPEECH] [CHILEA N PRESIDENT AUGUSTO PINOCHET] [TEXT] FELLOW COUNTRYMEN : I AM ADDRESSING YOU AFTER OUR COUNTRY HAS BEGUN TO OVERCOME, WITH EFFORT AND TENACITY , THE FIRST OBSTACLES OF A SERIOUS CRISIS, THE FUTURE IMPLICATIONS AN D REPERCUSSIONS OF WHICH WE STILL DO NOT KNOW. I THEREFORE WANTED TO CONVEY A MESSAGE OF HOPE AND ENCOURAGEMENT TO EVERY CHILEAN, TO SHO W THEM MY FEELINGS AT THIS TIME OF DEEP REFLECTION . FROM THE VERY MOMENT I LEARNED ABOUT THIS TERRORIST ECONOMI C AGGRESSION, I ORDERED THE CREATION OF FOUR MINISTERIAL COMMISSIONS TO TACKLE THE PROBLEM . I ALSO SENT TWO MINISTERS TO THE UNITED STATES TO MEET WITH U .S . GOVERNMENT OFFICIALS . Al'1 ER TIRING SESSIONS OF TALKS , THESE MINISTERS MANAGED TO OVERCOME A SUBSTANTIAL PART OF THE GRAP E PROBLEM. 
THE PROBLEM AFFECTING OTHER FRUITS REMAINS TO BE SOLVED . THIS NEW ATTACK AGAINST THE CHILEAN NATION, AN ATTACK THAT DESTIN Y HAS PLACED IN THE NATION'S PATH OF FREEDOM, PROGRESS, AND WELL-BEING , COMPELS US TO CLOSE RANKS BEHIND THE GOVERNMENT, UNIFIED AND READY T O FIGHT --WITH COURAGE AND HIGH SPIRIT --THE PROBLEM THAT FACES US . ON THIS OCCASION THE COUNTRY HAS SEEN HOW THE COUNTLESS THREATS T O BOYCOTT THE NATIONAL ECONOMY HAVE WORKED . MOST OF THE TIME SUCH THREATS HAVE BEEN PROMOTED BY CERTAIN POLITICIANS, LABOR LEADERS, O R COMMUNIST LABOR UNIONS, BOTH AT HOME AND ABROAD . THEY ARE THE ONES WH CALL THEIR FELLOW COUNTRYMEN . THEY DO NOT CARE I F THEY CAUSE UNEMPLOYMENT, POVERTY, AND HUNGER TO 500,000 FELLOW CITIZENS WHO WORK IN THIS AGRICULTURAL ACTIVITY . HOW SHAMELESSLY THEY MAKE CLAIMS AND DEMANDS, OFFERING THEIR SUPPORT AND THEIR SOLIDARITY WHEN CHILE IS SERIOUSLY THREATENED! YET WHEN THE DANGER IS OVER, THEY CHANGE THEIR ATTITUDE AND AGAIN LEVE L CHARGES AGAINST THE GOVERNMENT AND CRITICIZE ITS ACTIONS . IN DOIN G THAT THEY FAIL TO RECALL THAT FROM ITS VERY BEGINNING THE CHILEAN NATION HAS BEEN BUILT UPON NON-NEGOTIABLE PRINCIPLES BECAUSE THEY ARE THE ESSENCE OF THE NATIONAL SOUL . GENTLEMEN : OUR WILL IS BASED ON OUR FEELING OF UNITY IN THE FACE O F ADVERSITY, WHETHER FROM THE ACTION OF NATURE OR FROM THE ACTION O F TREACHEROUS, DISTURBED MINDS THAT RECOGNIZE NO MORAL VALUES , FATHERLAND, OR LAW. CHILEANS : A FEW DAYS AGO I REPORTED ON THE STEPS FORWARD IN TH E INSTITUTIONALIZATION PROCESS AND ON THE PROGRESS THAT OUR COUNTRY HA S MADE IN ALL FIELDS . THE FACT THAT CYANIDE HAS BEEN INJECTED INTO TWO EXPORTED GRAPE S HAS CAUSED INCALCULABLE DAMAGE TO THE ECONOMY ; IT HAS ALSO BROUGHT THE SHADOW OF ANGUISH AND UNCERTAINTY TO THOUSANDS OF HOMES OF FELLO W COUNTRYMEN WHOSE JOBS AND, CONSEQUENTLY, THEIR LEGITIMATE RIGHT TO A LIVELIHOOD HAVE BEEN DRAMATICALLY JEOPARDIZED . IT IS UNFORTUNATE THA T SOME OFFICIALS OF IMPORTING COUNTRIES, WITHOUT WEIGHING THE TRU E EFFECTS OF THIS TYPE OF MEASURE, HAVE RESTRICTED THE IMPORT OF CHILEA N FRUITS, THUS PROVOKING FEAR AND DISTRUST ON THE INTERNATIONAL CONSUMER MARKET . GENTLEMEN, WE MUST REVERSE THIS SITUATION . WE SHOULD ONCE AGAIN ACT IN A UNITED MANNER TO RECOVE R INTERNATIONAL CONFIDENCE AND OUR UNDISPUTED LEADERSHIP AS EXPORTERS . I AM SURE THAT WE WILL SUCCEED WITH A NEW JOINT EFFORT . [CONTINUED ] TST2-MUC4-00 1 4 CLANDESTINE, 14 NOV 89 (RADIO VENCEREMOS) --[TEXT] FIGHTING RAGE D ALL DAY LONG IN THE AREA AROUND SAN TERESA HOSPITAL, IN ZACATELUCA, L A PAZ DEPARTMENT. A REPORT SAYS THAT A DMIFA ARMED FORCES ENGINEERS ' MILITARY DETACHMENT UNIT WAS AT SANTA TERESA HOSPITAL, AND THAT TH E ENTIRE UNIT WAS WIPED OUT . THE SOLDIERS HAD A COMMUNICATIONS RELAY STATION IN THE BUILDING. AS A RESULT OF THE ATTACK, THERE WERE 20 CASUALTIES . AMONG THE CASUALTIES IS A LIEUTENANT WHO WAS THE UNI T CHIEF, AND TWO SERGEANTS . THE COMMUNICATIONS EQUIPMENT WAS TOTALL Y DESTROYED. THE SITUATION IS NOT AS THE FASCIST CRISTIANI REPORTED IT TO BE . HE SAID THERE WAS ONLY ONE SOLDIER AT THE HOSPITAL RECEIVING MEDICA L TREATMENT . THAT IS TOTALLY FALSE . THE BUILDING HOUSED A GARRISON GOING TO CORRECT OUR REPORT ON THE RESULTS OF THE ATTACK ON THE DMIFA UNIT : 20 SOLDIERS WERE KILLED AND 10 WERE WOUNDED . IN ALL , THERE WERE 30 CASUALTIES, INCLUDING TWO SERGEANTS AND THE UNIT CHIEF . 
WE REPEAT: IN AN IMPRESSIVE ATTACK ON A MILITARY POSITION AT SANT A TERESA HOSPITAL--THE BUILDING WAS NO LONGER A HOSPITAL BUT AN ARM Y COMMUNICATIONS CENTER--OUR FIGTHTERS ANNIHILATED AND DESTROYED THA T POSITION . THE ARMY SUSTAINED 20 KILLED AND 10 WOUNDED . THE COMMUNICATIONS RELAY STATION WAS TOTALLY DESTROYED .
[BEGIN RECORDING] [PONCE] OUR RELATIONS WITH THE NORTH AMERICAN S ARE NORMAL AS USUAL. THE U .S . ADVISERS HAVE HELPED US VERY MUCH IN THE PROFESSIONALIZATION OF THE ARMED FORCES, AND THEY CONTINUE WORKIN G WITH US . [REPORTER] WHERE DO YOU THINK THESE WIRE REPORTS STATING THERE ARE RIFTS BETWEEN SALVADORAN MILITARY OFFICERS AND U .S . ADVISERS HAVE ORGINIATED ? [PONCE] I DO NOT KNOW; I CANNOT UNDERSTAND . THERE ARE ALWAYS THOSE WHO TRY TO BREAK THE STRATEGIC ALLIANCE BETWEEN THE SALVADORA N GOVERNMENT AND ARMED FORCES AND THE UNITED STATES . THIS IS A POOR COUNTRY THAT DEPENDS A GREAT DEAL ON U .S . ECONOMIC AND MILITARY AID .[REPORTER] FOLLOWINGD-2
SEE ANY DIFFERENCE IN RELATIONS BEFORE AND AFTERWARD . [END RECORDING] THE MILITARY CHIEF REPEATED THE NEED TO MAINTAIN THE STATE O F EMERGENCY, NOT ONLY DUE TO THE GUERRILLA ATTACKS, BUT ALSO TO KEEP CONTROL OVER THE ORGANIZATION FRONTS JUST PRIOR TO AN EXPECTED WAVE O F PROTESTS ON THE LATEST GOVERNMENT ECONOMY MEASURES . [BEGIN RECORDING] [PONCE] WE MUST START THINKING WHAT MAY HAPPEN T O DOMESTIC ORDER IN THE FACE OF THE ECONOMIC MEASURES THAT HAVE BEE N ISSUED OR ARE PLANNED. WE MUST THINK WHAT THE REACTION OF THE PEOPL E OR THE FMLN [FARABUNDO MARTI NATIONAL LIBERATION FRONT] ORGANIZATIO N FRONTS WILL BE, AS THEY ALWAYS USE ANY EXCUSE TO DISRUPT DOMESTI C ORDER . IT WOULD BE GOOD TO LOOK INTO WHAT IS NEEDED TO MAINTAI N DOMESTIC ORDER IN THE COUNTRY . [END RECORDING ] THE HIGH-RANKING MILITARY OFFICER GAVE THESE STATEMENTS AT TH E CEREMONY TO TURN OVER COMMAND OF THE CAPTAIN GENERAL GERARDO BARRIO S MILITARY ACADEMY. THE NEW COMMANDER IS COLONEL RICARDO ALFONS O CASANOVA SANDOVAL, WHO REPLACES COLONEL GUILLERMO ALFREDO BENAVIDE S MORENO, REMOVED FROM HIS POST AFTER BEING ACCUSED OF ORDERING TH E MURDER OF THE JESUIT PRIESTS . MEANWHILE, COLONEL HECTOR HERIBERTO HERNANDEZ HAS BEEN APPOINTE D COMMANDER OF THE 6TH INFANTRY BRIGADE BASED IN USULUTAN, AND IN L A UNION DEPARTMENT THE CHANGE OF COMMAND CEREMONIES OF THE ARMED FORCE S MILITARY TRAINING CENTER HAVE TAKEN PLACE, WHERE COLONEL FRANCISC O ARTURO LOPEZ WILL BE IN COMMAND REPLACING COLONEL CARLOS ALFRED O RIVAS, WHO HAS BEEN TRANSFERRED TO THE JOINT CHIEFS OF STAFF. BOGOTA, 27 NOV 89 (RADIO CADENA NACIONAL) --[INTERVIEW WIT H REPORTER ON THE SCENE BY STUDIO REPORTER] [TEXT] [REPORTER, I N PROGRESS] . . . IF THE COLLEAGUES AT THE CENTRAL STUDIO HAVE AN Y QUESTIONS FOR THE BOGOTA ELECTRICAL ENERGY INSTITUTE EMPLOYEE WHO WA S AT THE SITE OF THE INCIDENT. WE ARE OBSERVING A TERRIBLE SCENE O F DESOLATION AND DEATH BECAUSE ALL THE PASSANGERS GOING TO CALI, VALLE DEL CAUCA DEPARTMENT, DIED . THERE IS NO SINGLE SURVIVOR ACCORDING T O NATIONAL POLICE REPORTS . THEY ARE AWAITING CRIMINAL INSTRUCTION JUDGES . HOWEVER, THE ELECTRICITY COMPANY WORKER MAY ANSWER QUESTION S POSED BY STUDIO REPORTERS . LATER ON WE WILL HAVE MORE OFFICIA L INFORMATION . [STUDIO REPORTER] CAN YOU CONFIRM THE ARRIVAL OF THE POLICE 'S BOMB D-3 SQUAD? AUTHORITIES FEAR THE CRASH MAY HAVE BEEN CAUSED BY A N EXPLOSION RATHER THAN AN ACCIDENT INSIDE THE AIRPLANE .[REPORTER] THE F-2 JUDICIAL POLICE AND CRIMINAL STATISTICS EXPERT S ARE HERE. THEY FEEL THAT THE ACCIDENT MAY HAVE BEEN CAUSED BY A TERRORIST ACTION. THE REMARK WAS MADE BASED ON WITNESS ACCOUNTS . THEY NOTED THAT THIS MAY HAVE BEEN A TERRORIST ATTACK BECAUS E AIRPLANES DO NOT HAVE THIS KIND OF ACCIDENT UNLESS THERE ISAN EXPLOSION, SOMETHING THAT HAS VERY SERIOUS CONSEQUENCES . ACCORDING TO REPORTS BY THE NATIONAL POLICE F-2 EXPERTS, THE AIRPLANE BROKE INTO SIX PIECES . IT IS COMPLETELY DESTROYED, WHICH IS WHY THE 10 6 PASSENGERS ABOARD ARE STREWN ACROSS AN AREA 6-8 KM WIDE . ACCESS TO THE AIRPLANE ' S REMAINS IS DIFFICULT BECAUSE THIS IS A MOUNTAINOUS REGION. THE CASQUITO [WORD INDISTINCT] IS THROUGH THE ROAD THAT GOES FROM BOSA, CUNDINAMARCA DEPARTMENT, TO THE SCHOOL NO T FURTHER IDENTIFIED] . AUTHORITIES SAY THAT THIS AREA IS LOCATED VER Y CLOSE TO SLUICE NUMBER 3, MUNA 3, WHICH FEEDS WATER TO THE DAM . THE SITUATION IS VERY DIFFICULT AND DRAMATIC . 
RIGHT NOW, EXPERTS ARE COLLECTING SAMPLES AND TRYING FIND ALL THEY CAN TO LEARN FOR SURE I F THIS WAS A TERRORIST ACTION OR A SERIOUS ACCIDENT CAUSED BY A FLAW I N THE MECHANICAL SYSTEM OF THE HK-1803 AIRPLANE .TST2-MUC4-004 6 |
||
45,236,887 | Improving Automatic Alignment for Translation Memory Creation | Currently available TM systems usually include an alignment tool to create memories from existing parallel texts. However, the alignments proposed are rarely reliable enough to allow the newly created TM to be exploited without being checked by a human user. This paper will describe a series of experiments using the popular TRADOS WinAlign program with a collection of German-English parallel texts totalling roughly 80 000 words and will look at: • the accuracy of the proposed alignments • which are the misaligned segments and why • designing a misalignment checking tool | [
7126603,
14531125
] | Improving Automatic Alignment for Translation Memory Creation
30 November 2001
Kirsty Macdonald, UMIST, UK (kirsty.macdonald@sap.com)
Improving Automatic Alignment for Translation Memory Creation
Translating and the Computer
23, 30 November 2001
Currently available TM systems usually include an alignment tool to create memories from existing parallel texts. However, the alignments proposed are rarely reliable enough to allow the newly created TM to be exploited without being checked by a human user. This paper will describe a series of experiments using the popular TRADOS WinAlign program with a collection of German-English parallel texts totalling roughly 80 000 words and will look at: • the accuracy of the proposed alignments • which are the misaligned segments and why • designing a misalignment checking tool
Introduction
Translation Memory (TM) programs facilitate the exploitation of previous translations, which in repetitive domains such as technical documentation are viewed as a valuable resource. Recycling 'old' translations not only saves companies both time and money but also relieves translators of repetitive work freeing up their time for other important tasks. Translation Memory programs crucially use alignment 2 programs to enable parallel corpora (previous source language and translated texts) to be loaded into the "memory".
The most time consuming and painstaking step of the alignment process for TM creation is checking the proposed alignments for mismatches and correcting them. In this paper the problems which arise in the automatic alignment of parallel texts for the creation of translation memories are addressed. It is the aim of this paper to give suggestions as to how the work of the translator or technical support staff correcting automatic alignments could be lessened. This paper looks more closely at the causes of misalignments and goes some way to proposing methods for reducing these misalignments. Such methods are an analysis of factors which contribute to the poor alignments as well as a proposal for a possible tool which would find and highlight misalignments for the checker, thus substantially decreasing the time and concentration that the checking process entails.
A Definition of Alignment
Alignment involves matching the sections of two texts with the same content in one or more languages. Alignment is defined as: "Sentence alignment is the problem of making explicit the relations that exist between the sentences of two texts that are known to be mutual translations." (Simard & Plamondon, 1998:59) "The problem of aligning parallel text is characterized more precisely as follows:
INPUT. A bitext (C,D). Assume that C contains C passages and C is the set {1,2......C} and similarly for D.
OUTPUT. A set A of pairs (i,j), where (i,j) is a coupling designating a passage C_i in one text that corresponds to a passage D_j in the other, and 1 ≤ i ≤ C and 1 ≤ j ≤ D. (In the general case A can indicate a many-to-many map from C to D.)" (Wu in Dale et al. (Eds), 2000:416; emphasis and typography original) The text segments considered to be mutual alignments are called beads. Alignments in which beads preserve the original structure of text are known as monotonic alignments, that is, beads occurring in the same place in both passages with no crossing over between source and target language segments.
Alignments where all the matches are of the type one-to-one are called bijective alignments. In bijective alignments there are no text segments left unmatched and they are therefore total alignments. Total alignments rarely occur in real life, other than at the highest levels of granularity e.g. document or chapter level. Most frequently in real life we come across partial alignments, containing some unmatched segments, known as singletons. Many-to-many alignments are also features of real life alignments, with one segment coupled to multiple segments e.g. {1:2, 2:1, 1:3, 3:1}. Many-to-many groupings are often caused by crossing dependencies, changes in the linear order of text. Partial alignments are caused by the fact that, contrary to the underlying mathematical assumption, human translators do not always render one sentence in the source language as one sentence in the target language. The reasons for this are diverse and include syntactic, semantic and stylistic considerations. It is in the nature of localisation that not all information is relevant to all markets meaning that same text in different languages does not always contain the same semantic content.
An important consideration when investigating alignment is the notion of what constitutes an alignment: how much semantic content must overlap between a source and target language sentence pair for it to be considered a bead?
Alignment can be carried out to various document structure levels e.g. document, page, paragraph, sentence, word, etc. Hierarchical alignment is the approach to alignment whereby alignment is carried out at the highest granularity first before the nested constituents are aligned.
Alignment Methods
Theoretically, a variety of sentence alignment techniques 3 exist; they are based on sentence lengths, lexical constraints, and correlations or cognates.
The text type determines how trivial the task of alignment is. Much research has been carried out using parliamentary proceedings (Hansards), which have solid anchor points such as headings. As a result of consistent, literal translation they give rise to a high level of sentence and paragraph correspondence between source and target texts. Other text types are, however, a great deal messier, that is, they contain more noise.
The length-based approach to alignment is more easily implemented although lexical approaches tend to give slightly better results.
The first proposals of length-based alignment techniques were put forward by Gale and Church (1991) and Brown et al. (1991) with a more thorough analysis of the results in Gale and Church (1993). Length-based methods use dynamic programming to find a minimum cost alignment, that is the alignment with the highest probability of being correct.
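To illustrate the idea (with a toy cost function rather than Gale and Church's probabilistic one), the sketch below aligns two sentence lists by character length alone, using dynamic programming over the standard bead types 1:1, 1:0, 0:1, 2:1 and 1:2; function and penalty values are illustrative only.

```python
def align(src, tgt):
    """Length-based sentence alignment sketch. src and tgt are lists of sentences."""
    INF = float("inf")

    def cost(s_lens, t_lens):
        # Simplified cost: difference in total character length, plus a flat
        # penalty for any bead that is not 1:1. Stands in for a probabilistic score.
        penalty = 0 if (len(s_lens), len(t_lens)) == (1, 1) else 3
        return abs(sum(s_lens) - sum(t_lens)) / 10.0 + penalty

    s_len, t_len = [len(s) for s in src], [len(t) for t in tgt]
    n, m = len(src), len(tgt)
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    beads = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2)]   # bead types considered
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == INF:
                continue
            for di, dj in beads:
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                c = best[i][j] + cost(s_len[i:ni], t_len[j:nj])
                if c < best[ni][nj]:
                    best[ni][nj], back[ni][nj] = c, (i, j)
    # Trace back the minimum-cost path into (source indices, target indices) beads.
    alignment, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        alignment.append((list(range(pi, i)), list(range(pj, j))))
        i, j = pi, pj
    return list(reversed(alignment))
```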
Alternatively, lexical information can be used as a guide for the alignment process, creating more robust methods of alignment which are better able to cope with noisy, imperfect input. The advantage of this approach is that it still aligns sentence beads rather than offsets, as in the previously described methods. Kay and Röscheisen (1993) used lexical information in a computationally intensive model. Their approach is to give confirmation of alignments especially in cases of similar length, moving away from the lexically poor methods of Brown et al. and Gale and Church. They induce alignments from partial alignments of lexical elements. By using lexical cues they sidestep the need for prior alignment at the paragraph level.
Sentence alignment is no great problem when working with clean texts. Real-life problems such as less-than-literal translations, languages with few cognates, and languages with differing scripts pose serious difficulties. In general, methods of modelling relationships are more robust and language universal. However, these techniques are still very crude in relation to fine-grained document structure. The method to be used depends on the languages involved and the level of accuracy required.
Of particular interest are also the specific difficulties of aligning unrelated languages, e.g. English-Chinese, caused by the lack of structural markers, e.g. punctuation, in languages with non-Latin scripts, which complicates the alignment problem, c.f. Wu (1994).
Tools
WinAlign, the alignment tool component of the Freelance Version of Trados Translation Solution Edition 3, was used in this investigation. Using WinAlign segment pairs can be manipulated by dragging and dropping segment links to create the correct alignment. All alignments must be approved by the user before a project can be exported from WinAlign and imported into an empty translation memory in the Translator's Workbench or merged into an existing memory. WinAlign allows the user to edit text during the alignment process. The alignment results are exported in ASCII format and can be further manipulated by the user.
The first step in the process of preparing and carrying out an alignment project using Trados WinAlign is creating the alignment project. First, the settings used for the alignment are chosen, these settings have a direct effect on the way in which the alignment is carried out, for example the degree of granularity of the alignment. Next, the files to be aligned are imported into the project. After the alignment algorithm has run the results are checked, corrected and confirmed manually.
In the WinAlign user interface the alignment is displayed at several different structural levels. These correspond to the different levels of alignment which in turn correspond to the different levels of document structure in the source and target files. The WinAlign hierarchical display allows the user to first check the alignment at superstructure level (document and paragraph levels) and then at substructure level (translation units at the sentence level and lower).
Experiment
In this investigation documentation from the software company SAP AG was aligned. Software help documentation is a text type normally translated with the aid of TM, it is produced and translated in 'soft' format, has to comply with strict standards and guidelines on language style and formatting, and is updated at regular intervals (at least once or twice a year at SAP for each new Release).
Before alignment description and correction was carried out a framework for the evaluation and recording of data was devised. Two key factors, hierarchy of alignment and match type data, were to be investigated. They were both recorded for the raw alignment and for the corrected version of the alignment to facilitate later comparison and analysis.
It was necessary to implement a method of describing the misalignments and their knock-on effects for the hierarchy of the alignment as well as documenting the number of different match types thrown up by the alignment algorithm. Furthermore, it was necessary to record this data in parallel for both the raw and the 'hand corrected' alignment hierarchies.
The raw alignment was recorded, checked and corrected one level at a time. It was important that this process be carried out hierarchically because any misalignments at the topmost levels would mean that all segments below them were also incorrectly aligned.
First of all, the alignment was checked at the file level (structure view level 1). It is possible to align up to twenty files simultaneously using WinAlign, so mismatches do sometimes occur at this level. The next stage is the checking of any misalignments at the paragraph level (structure view level 2). If misalignments at this level are overlooked they will cause significant problems later. The third and most time-consuming stage of checking is that of misalignments at the sentence level (structure view levels 2-5). These misalignments were recorded and their knock-on effect for the rest of the alignment also noted.
Alignments at structure view levels 1 to 3 generally include just one text segment for the source language and one text segment for the target language at both the superstructure and the substructure levels. At structure view levels 4 and 5 one match pair at the superstructure level can include many match pairs at the substructure level. These substructure beads are the aligned sentence level text segments.
The alignment hierarchy was depicted graphically at levels 1 to 5 for both the source and target text. Besides which, match type data was recorded for any substructure alignments containing more than one text segment pair. These normally occurred at structure view level 4 or 5 but in some cases they even occurred at structure view level 1. The number of text segments at this, the lowest substructure level, was recorded for both the source and the target language texts. A tally was kept of the number of substructure levels which needed correction. Match type data was recorded only for alignments at the substructure level.
The process of data collection was very time consuming and demanding, as a high degree of concentration was needed to ensure no mistakes were made.
Alignment data
Hierarchy data
The full alignment hierarchies were recorded graphically in tables. The hierarchy of the alignment is divided into several different levels, as described above. Here superstructure and substructure level are discussed separately.
As the hierarchy of the raw alignment was recorded and corrected a tally was made of the number of superstructure alignment beads which governed multi-bead substructure alignments for which corrections were necessary. Just under half (49%) of such superstructure beads required correction at the level of the substructure alignments.
The alignment hierarchies show that 6 misalignments occurred at the superstructure level. In the results tables these misalignments are highlighted in blue (for the source language) and red (for the target language). These misalignments are important not only because they cause misalignment in all substructure segments they govern, but also because they cause a misalignment 'domino effect', causing knock-on misalignment of subsequent beads at the superstructure level.
Misalignments occur most frequently at the substructure level. Due to the fact that this type of misalignment occurs more often and is more complex, they are more difficult to characterise.
Individual examples of misalignments at the substructure level are the case in which a 2:1 misalignment must be corrected to give a 1:1 and a 1:0 match. Or the co-occurrence of a 2:1 misalignment followed by a 1:2 misalignment causing a domino effect of 1:1 misalignments until another 2:1 misalignment occurs. The original two misalignments must be corrected to three 1:1 alignments and the final 2:1 alignment also corrected in its context.
Match type data
Match type data was collected for the substructure alignment beads. This data is described in the following sections.
The bar chart above shows the difference between the match types which occurred in the raw and the corrected alignments. Alignments of the type 1:1 are by far the most frequent. However, the WinAlign algorithm does not always recognise them correctly. There was a tendency towards finding 0:1 and 1:0 matches which must subsequently be corrected manually. Rare alignments of the type 1:3, 3:1 and 1:4 did occur in the test corpus, but the number of such alignments was so small that they do not show up on the scale of the bar chart above. These rare alignments were not recognised correctly by WinAlign, but were found during the manual checking process. The pie charts in the following section show the breakdown of the different match types in the raw and corrected alignments in greater detail.
The pie chart below clearly shows that the most frequently occurring match type for the raw alignment is 1:1 followed by 1:0 and 0:1 type matches.
The pie chart below shows the breakdown of match types for the corrected alignment. In this case, 1:1 type matches account for a larger slice of the pie, again followed by 0:1 type matches. This time, though, 2:1 type matches make up 1% more of the pie than 0:1 type matches. In the corrected alignment 1:3 type, 3:1 type, and 1:4 type matches did occur. However, they account for a negligible proportion of the whole alignment: 1:3 type and 3:1 type matches making up 0.1% of the total each and 1:4 type matches accounting for 0.05%.
The results data show that misalignments occur at both the superstructure and substructure level of alignment. Misalignments in the alignment hierarchy can give rise to a 'domino effect' passing on false alignments down the hierarchy.
The match type alignment data shows that the largest proportion of matches are of the type 1:1, with this figure increasing further after manual checking and correction. Manual checking also finds 'rare' alignments which are missed by the algorithm.
Discussion
In this section the results of the investigation are discussed and analysed. Firstly, the implications of the volume of data studied in the investigation are looked at. Next, the results of the raw and corrected alignment, presented in the previous section, are analysed. Finally, the quality of the WinAlign alignment algorithm is discussed with reference to the factors listed above.
The alignment was carried out on two large pairs of documents. The bar chart above compares the volume of source and target language text in these documents. Interestingly, when measured in paragraphs, lines, or characters there is a greater volume of the German (source language) text. However when the volume of text is measured in words the quantity of words in the English (target language) text is greater.
This data shows there is 14% more text in the German version of the documentation than in the English. This value is gained when the text is measured in characters not including white space. However, when a comparison is made of document size measured in words the German text is smaller by 8%.
The fact that there are fewer words in the German documents than the English ones can be easily explained by linguistic factors. Compound nouns are formed in German by 'sticking' one or more words together, whereas English compound nouns are morphologically separate units. Thus, there are fewer and longer words in the German texts. For this reason it is sensible to take number of characters in a document (not counting white space) as a measure of volume of text as the degree of variance is likely to be a lot smaller between the two documents.
Alignment data
In the coming sections sources of misalignments are discussed. Both linguistic and stylistic influences are considered. This data will then be used in the conclusion to put forward proposals for an alignment checking tool.
Hierarchy data
At the level of alignment hierarchy data, sources of misalignment include: omission or insertion of information by the translator, differences in linguistic expression between the source and target languages, stylistic differences such as sentence length, and poor use of punctuation and formatting.
Analysing the alignment hierarchy data shows that the degree of complexity of misalignment and the causes of misalignments cannot easily be specified with any degree of accuracy.
Checking at the superstructure alignment level is particularly important as the knock-on effects of misalignments at this level are particularly dramatic: 49% of superstructure beads governed substructure alignments which needed manual correction. These results show the necessity of a checking tool.
It is important to note that, as with the superstructure alignments, it is difficult to characterise misalignments at the substructure level and, thus, it is difficult to pinpoint the exact causes of the misalignments.
Match type data
The results of checking and correction of match type also show the need for alignment checking. The data for the corrected alignment shows that although 1:1 alignments are the most frequent, other match types do occur.
It has already been noted above, that the German prose style differs from English insofar as long sentences occur far more frequently than in English. This fact accounts for the occurrence of 2:1 type matches. The converse could be said to account for less frequent 1:2 type matches.
The occurrence of 1:0 type matches can be explained by differences in style and formatting conventions followed in English and in German. In the German text certain set phrases occurred regularly which were not rendered at all in the English, as such information is considered to be implicit in the text. There are of course cases in which the English translator may feel the need to spell out information to the English reader which would be considered implicit by the German reader, this scenario gives rise to 0:1 type matches.
Trados WinAlign
Trados' WinAlign program is based on a robust alignment algorithm. The application did not crash once during the period of investigation. The performance of the algorithm was very good in terms of speed of alignment, even the very large files which were used as a basis of this investigation aligned quickly. However, the quality of the raw alignment itself was not perfect and post-checking was essential to ensure the resulting alignment would be of use as a Translation Memory.
It is clear that Trados have had to trade off the stability of the application, speed of alignment, and modest processor requirements against the degree of alignment accuracy.
An application to aid the task of alignment checking would be of value, saving on time and manpower invested in the alignment task and improving the overall accuracy and quality of the alignment.
Conclusion
In this section conclusions are drawn from the information gained during this investigation. Firstly, suggestions are put forward as to how measures in the documentation preparation process could ease the task of alignment. Secondly, search keys are proposed for a potential alignment checking tool. Next, this tool is described in some detail. Finally, suggestions are made for further work which could be carried out following on from this project.
Document preparation
Due to the fact that SAP documentation is written to conform to a relatively strict framework of standards and guidelines it was possible to carry out document structure level checks quickly. However, some of the inconsistencies detected at the levels of document structure and formatting between the source and target text could have led to a lower quality alignment.
Rigorous checking carried out at all stages of the authoring process would prevent certain types of misalignments from occurring. The purpose of such checks would be: firstly, to ensure that the content and structure of the source and target documents are as close to one another as possible, and secondly, to ensure that no forbidden formatting which would cause problems for the WinAlign program was used in the documents.
Checks on document content and structure would entail comparing the size of the documents being checked, bearing in mind language differences. Section headings and automatically created contents tables could be used to compare the linear order of text and to check for omission or insertion of textual material.
Suggestions for keys for a potential alignment checking tool

Basically, keys for searching for potential misalignments are the misalignment patterns and their causes described and discussed above.
Match type data is one such key: 0:1 or 1:0 type matches (resulting from insertion or omission of extra textual information), many-to-one and one-to-many type matches (caused by differences in linguistic expression between languages), and so on.
Other intuitive factors such as text segment length (taking into account language variance) and segment content (cognates and constants which should occur in both the source and target language text segment) could also be used as checking keys.
Further investigation of larger aligned corpora could be used to gain statistical weightings for such key data.
When looking at different match type combinations the statistical weightings would tell the checking algorithm which matches it should 'prefer' as possible misalignments.
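To make the idea concrete, a toy version of such a check is sketched below: it scores each aligned bead on segment emptiness (0:1 or 1:0 style matches), character-length ratio, and constants (numbers, ALL-CAPS codes) that appear on one side only. The function name, thresholds, and regular expression are illustrative assumptions, not part of the proposal above; real weightings would come from the statistical analysis of aligned corpora just described.

```python
import re

def constants(segment):
    # 'Constants' that should survive translation: numbers and ALL-CAPS codes.
    return set(re.findall(r"\b(?:\d[\d.,]*|[A-Z]{2,})\b", segment))

def flag_suspicious(beads, length_ratio_limit=1.6):
    """beads: list of (source_segment, target_segment) pairs from an exported alignment.
    Returns (index, reasons) for beads a human checker should inspect first."""
    flagged = []
    for idx, (src, tgt) in enumerate(beads):
        reasons = []
        if not src.strip() or not tgt.strip():
            # 1:0 or 0:1 style match: one side is empty.
            reasons.append("empty segment")
        else:
            ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
            if ratio > length_ratio_limit:
                # Threshold would be tuned per language pair (e.g. German-English).
                reasons.append("length ratio %.2f" % ratio)
            if constants(src) != constants(tgt):
                reasons.append("constants differ")
        if reasons:
            flagged.append((idx, reasons))
    return flagged
```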
Tool proposal
In this section a tool is proposed for the automatic checking of alignment files. This tool would be an application integrated in a workflow process for the translation Memory creation and implementation.
The checking tool will offer the user a choice of settings which dictate which keys are used for the alignment checking. At this stage the user also gives the alignment checking tool information about the project being checked e.g. source and target languages, creation and alignment tool, etc.
Next, the alignment project exported from the alignment tool (in the case of WinAlign a plain text file) would be imported into the checking tool. Now, the checking algorithm could be run. Following the alignment check the alignment would be displayed to the user in an intuitive format similar to that of WinAlign with high chance misalignments flagged for closer scrutiny from the checker.
The tool would offer the user several options as to what can be done with the flagged segments: ignore misalignment (if in fact the alignment is correct), correct misalignment as suggested (if a suggestion is given for correction), correct misalignment manually (if the suggested correction is considered incorrect by the checker, or if there is no suggestion).
The user has the option of rechecking or skimming the entire hierarchy to assure that the alignment is correct before saving and exporting the corrected alignment.
Further work
The goal of improving text alignment is an ambitious one. Further work is necessary before an alignment tool of the type described above can be developed.
It is necessary to invest more time into investigating and characterising crossing dependencies and their effects. Crossing dependencies are alignments which do not occur in the same linear order in both source and target text. If the translator decides that a text reads more intuitively in the target language in a different order to that in which it was written in the source language, these changes may cause difficulties for the alignment algorithm. The phenomenon is particularly complicated. Creating a descriptive framework for this phenomenon would aid more detailed analysis.
For statistically significant results on misalignment data this investigation should be carried out on a larger quantity of data, i.e. more documentation should be aligned and assessed.
More work is needed in describing and quantifying a larger corpus of alignment data to gain statistical data to train the alignment checking algorithm.
Summary
To summarise the conclusions of this investigation, it is suggested that more consistent authoring and translation practice would reduce the number of misalignments. To this end, quality checking tools, such as term checking and the use of a controlled language, could be integrated into the documentation and translation workflow to carry out automatic checks of document structure and formatting.
A tool for automatic alignment checking was proposed, as were keys which the algorithm behind this tool would use to search for potential misalignments. The keys for alignment checking include match type data, segment length, cognates and constants, and other statistically weighted data. The tool itself would be a discrete environment with an import/export function for the files to be checked. After the alignment checking algorithm had run, the tool would flag potential misalignments so that the user could check them more closely.
Of course, this is a very ambitious undertaking, and before the tool can be developed more groundwork must be carried out. Most important is the collection of more alignment data so that statistically significant results can be analysed, from which an algorithm could be developed and the tool trained.
1 Since 01/09/01, member of the MultiLingual Technology group at SAP AG, Germany.
2 Text alignment is used for many Natural Language Processing (NLP) tasks and applications, including bilingual lexicography and terminology work, Example-based Machine Translation (EMT), multilingual Information Retrieval (IR), corpora as an information source, and word sense disambiguation.
Two very thorough secondary sources (Wu in Dale et al. (Eds), 2000, and Manning & Schütze, 1999) deal with this subject.
Brown, B. F., Lai, J. C. & Mercer, R. L. (1991) Aligning sentences in parallel corpora. 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA, pp. 169-179.
Dale, R., Moisl, H. & Somers, H. (Eds) (2000) Handbook of Natural Language Processing. New York: Dekker, pp. 415-459.
Gale, W. A. & Church, K. W. (1991) A Program for Aligning Sentences in Bilingual Corpora. Technical Report 94, AT&T Bell Laboratories, Statistical Research.
Gale, W. A. & Church, K. W. (1993) A Program for Aligning Sentences in Bilingual Corpora. Computational Linguistics 19, 75-102.
Kay, M. & Röscheisen, M. (1993) Text-Translation Alignment. Computational Linguistics 19, 121-143.
Manning, C. D. & Schütze, H. (1999) Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press, pp. 463-494.
Simard, M. & Plamondon, P. (1998) Bilingual Sentence Alignment: Balancing Robustness and Accuracy. Machine Translation 13, 59-80.
Wu, D. (1994) Aligning a parallel English-Chinese corpus statistically with lexical criteria. 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, NM, pp. 80-87. |
259,370,596 | Large-Scale Correlation Analysis of Automated Metrics for Topic Models | Automated coherence metrics constitute an important and popular way to evaluate topic models. Previous works present a mixed picture of their presumed correlation with human judgement. In this paper, we conduct a large-scale correlation analysis of coherence metrics. We propose a novel sampling approach to mine topics for the purpose of metric evaluation, and conduct the analysis via three large corpora showing that certain automated coherence metrics are correlated. Moreover, we extend the analysis to measure topical differences between corpora. Lastly, we examine the reliability of human judgement by conducting an extensive user study, which is designed as an amalgamation of different proxy tasks to derive a finer insight into the human decision-making processes. Our findings reveal some correlation between automated coherence metrics and human judgement, especially for generic corpora. | [] | Large-Scale Correlation Analysis of Automated Metrics for Topic Models
Large-Scale Correlation Analysis of Automated Metrics for Topic Models
Jia Peng Lim, Singapore Management University
Hady W. Lauw ([email protected]), Singapore Management University
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), July 9-14, 2023
Automated coherence metrics constitute an important and popular way to evaluate topic models. Previous works present a mixed picture of their presumed correlation with human judgement. In this paper, we conduct a large-scale correlation analysis of coherence metrics. We propose a novel sampling approach to mine topics for the purpose of metric evaluation, and conduct the analysis via three large corpora showing that certain automated coherence metrics are correlated. Moreover, we extend the analysis to measure topical differences between corpora. Lastly, we examine the reliability of human judgement by conducting an extensive user study, which is designed as an amalgamation of different proxy tasks to derive a finer insight into the human decision-making processes. Our findings reveal some correlation between automated coherence metrics and human judgement, especially for generic corpora.
Introduction
Topic modelling is an important tool in the analysis and exploration of text corpora in terms of their salient topics (Blei et al., 2003). To evaluate the effectiveness of topic models, the preponderance of the topic modelling literature relies on automated coherence metrics. A key benefit is convenience, allowing researchers to sidestep expensive and time-consuming user studies. The basis for this reliance is the assumption that the coherence metrics correlate with human judgement (Mimno et al., 2011; Lau et al., 2014; Röder et al., 2015).
The presumed correlation with human judgement should not be taken for granted. Recent works challenge the assumption. Doogan and Buntine (2021) highlight the inconsistencies of automated coherence metrics via correlation analysis within each metric. Hoyle et al. (2021) claim some disagreement between human judgement and automated coherence metrics.
We postulate that the reasons behind such a mixed picture could be the differences in the topic samples as well as the underlying corpora from which the statistics were derived, resulting in localised "biases" that affect the conclusions reached by respective studies. Given their importance, we seek to conduct an extended analysis of automated coherence metrics on a larger scale than anything previously attempted. This study includes orders of magnitudes greater than the number of topics typically analysed, covering three large corpora, employing a comprehensive user study with extensive labels, across most of the widely used metrics.
There is a strong motivation for quantity. Given a vocabulary, a combinatorially large number of possible topics exists. If each topic is a vector of its scores on different metrics, the resulting curse of dimensionality (Bellman and Kalaba, 1959) necessitates a larger sample size. We argue that evaluating thousands of topics might not be sufficient, and that a larger sample size is required to approximate a diverse distribution in which the sampled topics are representative of the corpus and the metrics.
We surmise that the previous practice of using topic models to generate topics could introduce a bias in the analysis. Firstly, topic models vary in performance; Hoyle et al. (2021) compiled a lengthy list. There is also emerging debate on the performance of traditional versus neural topic models (Doogan and Buntine, 2021). Additionally, some neural models might be inconsistent, producing different topic sets in independent runs (Hoyle et al., 2022). Conversely, a topic model might be too stable and generate similar topics (Xing and Paul, 2018). To objectively evaluate whether the coherence metrics are usable, we propose to generate candidate topics independently of topic models.
In this paper, our contributions are three-fold. First, we begin by analysing the inter-metric correlations (see Section 4). We propose a novel approach to sample "topics" for the purpose of evaluating automated coherence metrics (see Section 4.1). Compared to prior works, we sample these topics free from topic model bias, and in a meaningfully diverse manner. Evaluated on three large corpora, we reaffirm that certain selected metrics do not contradict each other, and highlight the underestimated effects of ϵ (see Section 4.2).
Second, we extend our analysis to investigate inter-corpora correlations (see Section 5). We examine the understated differences of corpora statistics on the metrics by comparing the correlations across corpora. While such correlations do exist to some degree, the metrics are still dependent on each corpus. Thus, any expectation that these metrics would correlate uniformly with human judgement on all possible corpora may be misplaced.
Finally, pivotal to any interpretability research, we design and conduct a user study, which is the keystone of our work (see Section 6). Compared to prior work, its design is more complex as we seek to benchmark human judgement at a finer granularity across different random user study groups (see Section 6.1). We analyse the user study results via a few novel proxy measures, revealing that human judgement is nuanced and varies between individuals, metric correlation to human judgement is corpus-dependant, with the average participant being attuned to the generic corpora (see Section 6.2).
Our implementation and releasable resources can be found at https://github.com/PreferredAI/topic-metrics, and we hope that they will enable convenient coherence evaluation of topic models and further advance interpretability research.
Related Work
Topic models. There are many approaches to topic modelling (Blei et al., 2003), from non-neural methods (Zhao et al., 2017b; Hoffman et al., 2010) to neural methods: via autoencoders (Kingma and Welling, 2014) such as Miao et al. (2016), Srivastava and Sutton (2017), Dieng et al. (2020), Zhang and Lauw (2020), and Bianchi et al. (2021), via graph neural networks (Yang et al., 2020; Shen et al., 2021; Zhang and Lauw, 2022), and via hierarchical methods (Meng et al., 2020). A common factor is the use of automated coherence metrics to benchmark against baselines. We select several popular metrics for evaluation, as listed in Section 3. Topic models are applied in downstream tasks (Lau et al., 2017; Wang et al., 2019, 2020).
User studies in metric evaluation. Mimno et al. (2011) utilize expert annotators to independently label 148 topics, using another 10 expert annotators to evaluate the same topics via intruder word detection tasks. Röder et al. (2015) benchmark topics against different permutations of metrics, with the largest evaluation set containing 900 topics with human ratings aggregated from prior works (Aletras and Stevenson, 2013; Lau et al., 2014; Rosner et al., 2014). In Hoyle et al. (2021), a minimum of 15 crowdworkers were employed in simple rating and word intrusion tasks evaluating 40 topic-model-generated (Griffiths and Steyvers, 2004; Burkhardt and Kramer, 2019; Dieng et al., 2020) and 16 synthetic random topics. In Doogan and Buntine (2021), the largest user study required 4 subject matter experts creating 3,120 labels across 390 topics generated via topic models (Blei et al., 2003; Zhao et al., 2017a). In comparison, our study has both large quantities of topics and of study participants, annotating 800 unbiased topics split between 40 study participants with at least an undergraduate level of education, generating 180K word-pair labels. Our automated experiments deal with hundreds of thousands of unique topics.
Human involvement. There is much interesting research that examines linguistic problems through the human lens. Card et al. (2020) investigate the number of annotators required to achieve significant statistical power. Plank (2022) examines the variation in human labels. Ethayarajh and Jurafsky (2022) question the authenticity of annotators. Clark et al. (2021) test the human ability to learn how to differentiate between machine-generated and human-generated texts. Human-in-the-loop systems and processes, such as Li et al. (2022), are also being actively explored.
Preliminaries
In this section, we define the automated coherence metrics that we will be using, and describe the corpora we use to obtain the word probabilities.
Coherence Metrics
We follow the definition styles of Röder et al. (2015), where a direct confirmation measure m is a function of a word-pair statistic. A direct coherence metric is defined as a mean aggregation of m between word-pairs (Equation 1), where t is a topic, a k-sized set of words. For our evaluations, we set k = 10. Within t, the words are arranged based on P(w|t) in descending order. Since our approach does not produce P(w|t), we can locally optimize the word positions within a topic to obtain the best possible score for the position-sensitive metrics C_UMass and C_P (see Appendix B). We use subscript s to denote alphabetical order and subscript o to denote optimized positions. Let p = |t|(|t| − 1)/2, the number of word-pairs in a topic.
C(t, m) = \frac{1}{p} \sum_{w_i \in t} \sum_{\substack{w_j \in t \\ i > j}} m(w_i, w_j)    (1)
C_NPMI (Equation 2) is the mean aggregation of m_nlr, the Normalised Pointwise Mutual Information (NPMI) value (Bouma, 2009), between word-pair statistics in a topic. We exclude C_UCI as it uses Pointwise Mutual Information (Church and Hanks, 1990; Lau et al., 2014), which is correlated to NPMI.
C_{NPMI}(t) = \frac{1}{p} \sum_{w_i \in t} \sum_{\substack{w_j \in t \\ i > j}} m_{nlr}(w_i, w_j)    (2)

m_{nlr}(w_i, w_j) = \frac{\log \frac{P(w_i, w_j) + \epsilon}{P(w_i) \cdot P(w_j)}}{-\log (P(w_i, w_j) + \epsilon)}    (3)
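As a concrete illustration, the following minimal sketch computes C_NPMI from raw document statistics; doc_count, pair_count, and n_docs are assumed, hypothetical count structures (word and word-pair document frequencies), and this is not the released implementation.

from itertools import combinations
from math import log

def m_nlr(wi, wj, doc_count, pair_count, n_docs, eps=1e-12):
    """NPMI of a word pair from (co-)occurrence document frequencies."""
    p_i = doc_count.get(wi, 0) / n_docs
    p_j = doc_count.get(wj, 0) / n_docs
    key = (wi, wj) if (wi, wj) in pair_count else (wj, wi)
    p_ij = pair_count.get(key, 0) / n_docs
    if p_i == 0.0 or p_j == 0.0 or (eps == 0.0 and p_ij == 0.0):
        return 0.0  # undefined pairs default to 0, as done for the eps = 0 variant
    return log((p_ij + eps) / (p_i * p_j)) / -log(p_ij + eps)

def c_npmi(topic, doc_count, pair_count, n_docs, eps=1e-12):
    """Mean NPMI over all word pairs of a k-word topic (Equations 2 and 3)."""
    pairs = list(combinations(topic, 2))
    return sum(m_nlr(a, b, doc_count, pair_count, n_docs, eps) for a, b in pairs) / len(pairs)

Setting eps to 0 in this sketch corresponds to the neutral treatment of unobserved word-pairs discussed later in Section 4.2.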
C UMass is the mean ordinal aggregation of m lc (Mimno et al., 2011), which measures the log conditional probability between ordered word-pair in a topic:
C_{UMass}(t) = \frac{1}{p} \sum_{w_i \in t} \sum_{\substack{w_j \in t \\ i > j}} m_{lc}(w_i, w_j)    (4)

m_{lc}(w_i, w_j) = \log \frac{P(w_i, w_j) + \epsilon}{P(w_j)}    (5)
C P is the mean ordinal aggregation of m f , Fitelson's coherence (Fitelson, 2003), interpreted as the degree to which w i supports w j , between ordered word-pairs in a topic:
C_{P}(t) = \frac{1}{p} \sum_{w_i \in t} \sum_{\substack{w_j \in t \\ i > j}} m_{f}(w_i, w_j)    (6)

m_{f}(w_i, w_j) = \frac{P(w_i \mid w_j) - P(w_i \mid \neg w_j)}{P(w_i \mid w_j) + P(w_i \mid \neg w_j)}    (7)
C_V (Equation 8) is the final metric that we are using. C_V is considered an indirect coherence metric, as it uses word-group relations as opposed to the word-pair relations of the aforementioned direct coherence metrics. Intuitively, it measures the mean cosine similarity (Equation 9) between each word's feature vector and the topic's feature vector, represented as the sum of all of its words' feature vectors (Equation 10).
C_{V}(t, \gamma) = \frac{1}{|t|} \sum_{w_i \in t} s_{cos}\big(\vec{v}(w_i, t, \gamma), \vec{v}(t, \gamma)\big)    (8)

s_{cos}(\vec{v}_i, \vec{v}_j) = \frac{\vec{v}_i \cdot \vec{v}_j}{\lVert \vec{v}_i \rVert_2 \cdot \lVert \vec{v}_j \rVert_2}    (9)

\vec{v}(t, \gamma) = \sum_{w_j \in t} \vec{v}(w_j, t, \gamma)    (10)

\vec{v}(w, t, \gamma) = \{ m_{nlr}(w, w_j)^{\gamma} \;\; \forall w_j \in t \}    (11)
For the indirect confirmation measure, instead of directly using word-word probabilities, C_V uses m to create a vector of features v (Aletras and Stevenson, 2013) that represents a word w relative to the topic t it belongs to, distorted by hyper-parameter γ (Equation 11). We will evaluate γ at 1 and 2.
Corpora
Wiki. We use the English-Wikipedia dump (dumps.wikimedia.org) of August'22, processed using Attardi (2015). We consider the content of an article as a document. To check for correctness, we also use the popular benchmark Palmetto (Röder et al., 2015), which uses a subset of Wikipedia'11.
For each corpus, we apply processing steps suggested in Hoyle et al. (2021), retaining up to 40K frequently occurring words. Moreover, we generate a lemmatized (denoted with the suffix -lemma) and unlemmatized variant (original) for further analysis. More information on common vocabulary between corpora can be found in Table 14, Appendix C.
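For illustration, the corpus statistics that the metrics rely on could be collected as follows; this sketch assumes document-level boolean co-occurrence (the actual probability estimation, e.g. a boolean sliding window, may differ) and uses illustrative names rather than the released preprocessing pipeline.

from collections import Counter
from itertools import combinations

def count_corpus(documents, vocabulary):
    """documents: iterable of token lists; vocabulary: set of retained words."""
    doc_count, pair_count = Counter(), Counter()
    n_docs = 0
    for tokens in documents:
        n_docs += 1
        present = sorted(set(tokens) & vocabulary)   # boolean presence per document
        doc_count.update(present)
        pair_count.update(combinations(present, 2))  # sorted -> one canonical key per pair
    return doc_count, pair_count, n_docs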
Examining Inter-Metric Correlations
Intuitively, if two different metrics are to correlate with human judgement, we would expect the scores of these metrics to correlate. However, it is claimed in Doogan and Buntine (2021) that these metrics do not correlate well. For reasons described in Section 1, we propose a new non-topic modelling approach to sample topics to evaluate these metrics.
Approach: Balanced Sampling
There are a few tested methods to generate topics: from topic models (Aletras and Stevenson, 2013; Lau et al., 2014), beam search optimized on coherence (Rosner et al., 2014), and random sampling of words (Hoyle et al., 2021). Considering only optimized topics, or completely random topics (mostly bad), would generate a skewed distribution. In contrast, we seek to mine topics that emulate a balanced distribution for a meaningful comparison. We also desire uniqueness among topics, which avoids repetition and is representative of the corpus. Figure 1 illustrates an overview of our approach.
Mining topics of k words can be framed as the classical k-clique listing problem (Chiba and Nishizeki, 1985; Danisch et al., 2018). To generate meaningful topics, we can map the corpus-level information as a graph, treating each word from its vocabulary set V as a vertex. Each word shares an edge with every other word. We choose m_nlr to determine the value of the edges between two vertices, as its normalised range is intuitive, allowing us to easily identify the range of values for sub-graph generation. In contrast, using m_lc or m_f increases the sampling's complexity, as they are order-dependent, resulting in bi-directional edges in the sub-graph. Sampling using any m, not only m_nlr, might introduce bias, which our approach seeks to mitigate.
Table 2: Average quantity of topics mined by our balanced sampling approach, by segment per corpus, from the 5 independent sampling runs. Quantities for lemmatized variants are similar, with the exception of the ext segment, where it has half the numbers.
The initial graph will be a complete graph of |V| vertices. A topic of k words would be a k-sized sub-graph. Combinatorially, there are |V| choose k possible unique topics. It is practically infeasible and unnecessary to list all k-cliques. For a more tractable approach, we modify the routine from Yuan et al. (2022) (pseudo-code in Appendix A) to include:
Sub-graphs of varying quality. This routine seeks to generate smaller graphs from the original complete graph to cover the spectrum of topic quality. We eliminate edges conditionally via their value, and the remaining edges and connected vertices constitute the new sub-graph. We generate three different kinds of sub-graphs: pos, where edge-values are above a given lower-bound; mid, where edge-values are between threshold values; and neg, where edge-values are below an upper-bound.
Topic extraction. Inspired by Perozzi et al. (2014), instead of iterating through all the neighbouring nodes or searching for the next best node, we randomly select a neighbour that has an edge with all explored nodes, and explore it. We extract the explored k-path as our sampled topic.
Topic uniqueness. To attain a variety of topics, we remove all edges in a mined clique, making it impossible to sample a similar topic from the same sub-graph. Figure 2 illustrates this feature.
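A simplified sketch of the sampling step is shown below: it grows one k-clique by random-neighbour exploration and then deletes the clique's edges so the same topic cannot be re-sampled. It assumes the thresholded sub-graph is given as an adjacency dictionary and is an illustrative approximation, not the exact modified routine listed in Appendix A.

import random

def sample_topic(adj, k, rng=random):
    """Grow one k-clique by random-neighbour exploration, then delete its edges."""
    starts = [w for w, nbrs in adj.items() if len(nbrs) >= k - 1]
    if not starts:
        return None
    clique = [rng.choice(starts)]
    candidates = set(adj[clique[0]])
    while len(clique) < k and candidates:
        nxt = rng.choice(sorted(candidates))  # random neighbour connected to all explored nodes
        clique.append(nxt)
        candidates &= adj[nxt]                # keep only words adjacent to every word so far
    if len(clique) < k:
        return None                           # dead end; the caller may simply retry
    for wi in clique:                         # enforce topic uniqueness
        for wj in clique:
            if wi != wj:
                adj[wi].discard(wj)
    return clique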
Balanced distribution of topics. For a given corpus, we further introduce common topics sampled from different corpora, which differ in their word distributions. We refer to this segment of external topics as ext. Lastly, random is a segment comprising groups of random words, included to represent topics that might not have been covered via the other segments. Table 2 shows the result of this mining approach. The total set is thus more balanced, comprising topics of varying scores along the spectrum.
Evaluation: Metric Correlations Analysis
We evaluate the correlation (Pearson's r) between different automated metrics measured on Wiki (see Table 3), Pubmed, and ArXiv (see Table 10, Appendix C). We expect a high positive correlation score between metrics if they are both purportedly measuring coherence. Our first inter-metric analysis (see Table 3a), with metrics calculated at ϵ = 1e−12, shows the poor correlation of the C_V metrics against other metrics. Theoretically, C_V relies on m_nlr as its features; given an unrelated topic, word-pairs scored with m_nlr at ϵ = 1e−12 produce similarly negative m_nlr vectors, which score highly on C_V. This phenomenon of high cosine similarity between the equally negative m_nlr vectors results in contradicting scores between C_V and other metrics.
              C_V^γ=1   C_V^γ=2   C_NPMI   C_P,o   C_UMass,o
C_V^γ=1          -        0.09      0.69     0.64      0.11
C_V^γ=2         0.09       -       -0.59    -0.63     -0.72
C_NPMI          0.69     -0.59       -       0.91      0.58
C_P,o           0.64     -0.63      0.91      -        0.71
C_UMass,o       0.11     -0.72      0.58     0.71       -
(a) Correlation scores with ϵ = 1e−12.
(b) Correlation scores with ϵ = 0.
Hence, for our second inter-metric analysis (see Table 3b) we evaluate the metrics at ϵ = 0, denoted with subscript ̸ϵ. For the resulting undefined calculations, we default to 0. Intuitively, the purpose of setting ϵ = 1e−12 is to prevent and to penalise word-pairs that produce undefined calculations. In contrast, ϵ = 0 treats these word-pairs neutrally. Comparing the new results in Table 3b to the previous results in Table 3a, we note that the correlation scores between the C_V metrics and other automated coherence metrics improved greatly, suggesting alleviation of the contradicting factor. Additionally, we note that for C_P and C_UMass, ϵ is essential. We then examine these metrics with their better ϵ mode (see Table 4a), and most metrics (except C_UMass) have a decent correlation with other metrics, implying that they do not contradict each other.
There could be a concern that the neg and random sampled segments would have an outsized influence in the previous analysis. In this ablation, we restrict the same analysis to only topics where C_NPMI > 0. Comparing to the previous results (see Table 4a), we derive a similar interpretation from these constrained results (see Table 4b), suggesting that our balanced sampling approach is effective, as the behaviour of the full set of data is similar to that of its smaller subset.
Examining Inter-Corpus Correlations
A natural extension after inter-metrics comparison, is to compare metrics measured on different corpora. It is a common expectation that research works would employ multiple corpora, with the differences between corpora quantified superficially (such as in Section 3.2). We propose an alternative approach to quantify the differences, at a topical level, using common topics measured using automated coherence metrics. If the corpora are thematically similar, we would expect a high correlation. Analysis. Using the common topics from the paired corpora, we conduct a correlation analysis on the scores measured on each corpus per metric. Table 5 shows decent correlations between each corpus. However, even as they are positive, these correlations do not imply identical statistics in various corpora. Assuming that human judgement is constant for a given topic, we posit that variance in scores measured on different corpora could result in a lower correlation due to the missing themes (c) Selected topics, measured on the unlemmatized corpus, are compared to its lemmatized variants, which are measured on the lemmatized corpus. Table 6: Pearson's r (mean from 5 independently sampled sets of size |T |) of automated coherence metric measured on different scenarios. Each selected topic will have two variants that will produce two scores for each metric. We compare the correlation of the two set of scores for a set of topics. Error bars omitted as S.D ≤ 0.01. See Table 12 and Table 11, Appendix C for additional quantitative data on the topics.
Corpus |T | C γ=1 V,̸ e C γ=2 V,̸ e C NPMI,̸ e C NPMI C P,o C UMass,
within the shared vocabulary space in either corpus.
We conduct a control analysis on pairs of similar corpora differing in lemmatization, originating from the same documents, in Table 6a. These corpora would be thematically similar whilst being superficially different. Our previous analysis in Table 5, compared to the control analysis in Table 6a, shows lower correlation scores, suggesting some topical difference between the various corpora. This difference highlights the metrics' strong dependency on the corpus used, with a subset of common topics disagreeing on the scores, revealing that these metrics are not a one-size-fits-all solution for coherence evaluation.
Ablations. While we know how lemmatization affects topic modelling (Schofield and Mimno, 2016), its effect on evaluation is unclear. We carried out two additional ablations simulating lemmatizing topics post-training. For the first ablation, we shortlist topics that contain at least one unlemmatized word, where if lemmatized, the lemmatized word can be found in the same unlemmatized corpus. We compare the correlation of the original and lemmatized topic, with their scores measured on the same unlemmatized corpus. Their scores have a strong correlation (see Table 6b), suggesting that the difference between lemmatized topics and unlemmatized topics is small. For the second ablation, the shortlisting process is similar, however, with lemmatized topics measured on the lemmatized corpus. Our results (see Table 6c) show a strong correlation across the various metrics and imply that post-processing topics for evaluation is a viable option.
User Study
Previous works measure human judgement through simple evaluation tasks such as rating the coherence of a topic on a few-point ordinal scale (Mimno et al., 2011; Aletras and Stevenson, 2013), identifying the intruder word that was introduced into the topic (Chang et al., 2009), or both (Lau et al., 2014; Hoyle et al., 2021). For word intrusion, the detection of outliers signals the cohesiveness of the topic, which is similar to rating topics on an ordinal scale. However, for both tasks, qualitative gaps might exist. In word intrusion, study participants are restricted to just one outlier per topic; assuming perfect coding, this results in an exponential drop in scoring, i.e. 100% detection for a perfect topic, 50% for a topic with a clear outlier, and so forth. For topic ratings, topics of differing qualities might get the same score, e.g. a perfect topic and a topic with a clear outlier might both receive the same rating.
Additionally, while the decisions between human annotators might be equivalent, it is not evident if their thought processes are similar. The key reason for this line of inquiry stems from the observation that everyone is different in some aspects, such as knowledge, culture, and experiences. Assuming our understanding of words is influenced by our prior beliefs, what and how we perceive similarity and coherence might differ from person to person.
For these reasons, we decided to design a user study that combines both word intrusion and topic rating tasks, but measured at a finer granularity such that we can quantify the decision-making process. Users are tasked to cluster words into groups, which indicate coherent and outlier word-groups. We then examine the relationships between automated coherence metrics and different proxy tasks derived from the user study.
User Study Design
For our study S, we recruit 8 user study groups U, S = {U_1, ..., U_8}, with 5 study participants per group. The majority of the participants recruited have at least a graduate degree or are undergraduates. For each study group, we prepared 8 unique question sets Q = {T_1, ..., T_8}, each containing 100 10-word topics, where T_i = {t_{1,i}, ..., t_{100,i}} and t = {w_{1,j,i}, ..., w_{10,j,i}}. For each participant u ∈ U_i, we present each t_{j,i} ∈ T_i individually, sorted alphabetically.
Figure 3: Format of the question presented to study participants. Each word is to be assigned to only one group whose members are deemed coherent together. The topic displayed in this example is manually created to serve as a verification question and is not included in the evaluation. Refer to Appendix D for a sample of actual examples used.
We ask participants to cluster words in t_{j,i} that they deem similar to form coherent word groups g, where their response R_{u,j,i} to t_{j,i} is a set of unique g. We constrain each word to belong to only one coherent word group to limit the task complexity. Additionally, a word considered to be unrelated may form its own group of one. We use a Likert matrix (with no scaling) as the response format (see Figure 3), mandating a response for each word w_{k,j,i} ∈ t_{j,i}. Actual instructions are shown in Appendix E.
Topic selection. We construct an initial pool of 1000 topics. To achieve comparability between corpora, we randomly sample 400 common topics from Wiki, ArXiv, and Pubmed. To represent non-scientific topics, we randomly sample 200 topics from Wiki that do not appear in ArXiv/Pubmed. For ArXiv/Pubmed exclusive topics, we randomly sample 200 topics each, with these topics also appearing in Wiki. We sample in a 7:1:1:1 ratio of pos/mid/neg/random segments of the corpus, seeking to emulate a uniform score distribution. To account for word familiarity, we select lemmatized topics with words found in the 20K most frequently used words of the Corpus of Contemporary American English. For each user study, we randomly sampled 100 topics from the pool without replacement. Topics not found in ArXiv or Pubmed are excluded when evaluating on those corpora.
Proxy Tasks. Representing coherence as word-clusters allows us to derive a deeper insight into what we perceive as human judgement. From our user study task, we further decompose this study into a few proxy tasks, where we measure the correlation (Spearman's ρ) of its results to automated coherence metrics. We propose three topic-level human coherence measures. Using density of human agreement, we define P_1 as the mean agreement of U_i on all possible word-pairs of any topic t_{j,i}:
P_1(t_{j,i}) = \frac{1}{|U_i|} \sum_{u \in U_i} \frac{\sum_{g \in R_u} |g|(|g| - 1)}{|t_{j,i}|(|t_{j,i}| - 1)}    (12)
If t j,i has perfect agreement on coherence, we expect P 1 (t j,i ) to have a value of 1, and for incoherence, a value of 0. Subsequently, we consider the largest selected word group within t j,i , and define P 2 as the mean of this measure amongst U i :
P_2(t_{j,i}) = \frac{1}{|U_i|} \sum_{u \in U_i} \max(\{ |g| : g \in R_u \})    (13)
A value of 1 suggests that the words in t_{j,i} have no relation to each other, and a value of |t_{j,i}| suggests perfect agreement on coherence. Lastly, we define P_3 as the mean number of annotated word groups amongst U_i:
P_3(t_{j,i}) = \frac{1}{|U_i|} \sum_{u \in U_i} |R_u|    (14)
The interpretation of P_3 is the inverse of P_2. While these group-wise measures might seem similar, they measure different nuances of the human-annotated data. P_1 evaluates the sizes of multi-word groups, weighted towards larger groups. P_2 only accounts for the largest word group, ignoring the properties of the remaining groups. P_3 ignores group sizes to a certain extent and includes single-word "outlier" groups. We evaluate these measures' correlation against various C(t_{j,i}).
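A sketch of how the three measures could be computed from raw responses is given below, assuming each response is a list of word groups (lists of words) from one study participant; the helper names are ours, not the paper's released code.

def p1(responses, topic_size):
    """Density of pair-wise agreement: 1 = fully coherent, 0 = all singletons."""
    total_pairs = topic_size * (topic_size - 1) / 2
    agree = sum(len(g) * (len(g) - 1) / 2 for r in responses for g in r)
    return agree / (len(responses) * total_pairs)

def p2(responses):
    """Mean size of the largest coherent group per participant."""
    return sum(max(len(g) for g in r) for r in responses) / len(responses)

def p3(responses):
    """Mean number of annotated groups per participant (inverse notion of P2)."""
    return sum(len(r) for r in responses) / len(responses)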
User Study Results
We find that the three different proxy tasks produce similar results, shown in Tables 7a, 7b, and 7c respectively, indicating correlations between human judgement and some automated coherence metrics. (We use Spearman's ρ instead of Pearson's r, as we generally obtain a better r than the ρ shown through distortion of scores; to ensure parity, we use ρ instead. These results include outlier U3, whose negative results differ radically from the other groups; individual results are detailed in Appendix C.) Since most of our study participants have some science-related background, we are surprised by ArXiv's lower correlation scores relative to Wiki in each proxy task. These results imply that our perception of coherence might be biased towards the word distribution of a generic corpus such as Wiki. Lastly, in each proxy task, the higher variances in ArXiv's and Pubmed's correlation scores compared to Wiki's might imply increased subjectivity.
Inter-rater reliability (IRR). There are many factors that will affect the variation in IRR (Belur et al., 2021). For our user study, we attempted to mitigate some of these factors. In terms of framing and education, study participants were given a short introductory primer as well as some example questions prior to starting the tasks (Appendix E). To mitigate fatigue effects, we allowed the study participants a week to work on the task, pausing and continuing at their own pace. We were not concerned about learning effects, as our presented topics span a plethora of themes and the correctness of the task is subjective to each participant's own preference. As our objective is to poll for their beliefs, with many possible valid answers, there is no need to review and enforce consistency between study participants.
We use Krippendorff's α (Krippendorff, 2011), defining pair-wise rater similarity as a Jaccard distance measuring common answers between raters. We treat each w_{k,j,i} ∈ t_{j,i} as a multi-classification question, comprising the other words (in t_{j,i}) and "not related" as categories, producing boolean vector representations. The mean ᾱ is 0.366 with a standard deviation of 0.04, with the lowest α at 0.325 and the highest α at 0.464 (see Table 15, Appendix C). A completely random study response would have an α of 0.12, significantly less than the study's α, giving us some confidence about the reliability of the responses. Overall, considering that there are many possible combinations for each topic response, the α reported suggests some degree of similarity between different responses.
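A sketch of the word-level encoding and the Jaccard-based difference function is shown below, again assuming responses are lists of word groups; the full Krippendorff's α bookkeeping is omitted and the names are illustrative.

def word_answer(groups, word):
    """Categories selected for a word: its co-grouped words, or 'not_related'."""
    group = next(g for g in groups if word in g)
    others = {w for w in group if w != word}
    return others if others else {"not_related"}

def jaccard_distance(ans_a, ans_b):
    """Difference between two raters' answers for the same word."""
    union = ans_a | ans_b
    if not union:
        return 0.0
    return 1.0 - len(ans_a & ans_b) / len(union)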
Table 8: Average Spearman's ρ between automated coherence metrics and the pair-wise proxy measure, similar in evaluation and interpretation to Table 7. This table shows the difference in correlation results between the sorted (s) and optimal (o) position-dependent metrics. Full breakdown in Table 19, Appendix C.
             ArXiv           Pubmed          Wiki
C_P,s        0.115 ± 0.062   0.139 ± 0.043   0.285 ± 0.091
C_P,o        0.201 ± 0.066   0.269 ± 0.036   0.447 ± 0.072
C_UMass,s    0.119 ± 0.057   0.072 ± 0.039   0.128 ± 0.043
C_UMass,o    0.185 ± 0.068   0.101 ± 0.037   0.209 ± 0.037
User study ablations. We examine whether positioning affects position-dependent automated coherence metrics via the human pair-wise agreement proxy task P_4. We detail our optimizing approach in Appendix B. We define P_4 as the percentage of agreement on any word-pair w_a and w_b from t_{j,i} in T_i, evaluated by its corresponding U_i:
P_4(w_a, w_b) = \frac{1}{|U_i|} \sum_{u \in U_i} \sum_{g \in R_u} \mathbb{1}[w_a \in g \wedge w_b \in g]    (15)
We measure the correlation of P_4(w_a, w_b) in a group to its pair-wise automated coherence metric score via m(w_a, w_b) under different orderings. Our results in Table 8 show some non-significant differences in correlation at the pair-wise level. However, that difference disappears when we evaluate the topics as a group, with the sorted and optimized variants achieving similar correlations (see Table 7). Furthermore, this difference of coherence at the pair-wise and group-wise levels suggests that the presence of other words in the topic influences the human perception of word-pair coherence. Finally, we replicate most experiments with the corpus statistics from Palmetto (Röder et al., 2015), which produced similar correlation results to Wiki.
Conclusion
Our large-scale analysis reaffirms that these automated coherence metrics are still meaningful. We are confident in using these metrics measured on a generic corpus such as Wiki, and on the specialised corpora ArXiv and Pubmed for more niche tasks. Our user study empirically supports this conclusion, as our participants' collective response correlates well to metrics measured on Wiki, with weaker but still meaningful correlation on the specialized corpora. This work shows that the popular automated coherence metrics C_NPMI, C_V, and C_P are alive and well, and work regardless of lemmatization. Furthermore, we stress that the selection of the reference corpus is just as important as the selection of the metric, with Wiki being the best reference corpus that correlates with human perception of coherence. Moving forward, when evaluating for coherence aligned towards human interpretability, we recommend that future topic models be evaluated against Wiki variants. We also recommend calculating C_V with ϵ = 0, to avoid the confusion arising from its contradiction of other metrics at ϵ = 1e−12.
Limitations
User Study. Most, if not all, of the participants are pursuing or have obtained at least a university degree. While we attempted to recruit widely, the majority of our participants' educational backgrounds are science-related, with strong leanings towards technology. Furthermore, we assume that our participants are proficient in English, given their education level and the fact that they are based in a city that uses English as the common language. It is possible that there are some unknown common biases, such as culture or knowledge, that might affect the results. The tie-breaking constraint in our study, where study participants are required to assign each word to its most coherent group, might affect the correlation scores for the user study.
Corpora. The selected corpora are constructed from documents that are formal in prose, with the purpose of being informative and instructional. We do not know if the user study results are applicable to a corpus with documents that are informal in prose, such as that of a conversational nature. However, one can always evaluate topics on a large external generic corpus to determine coherence relative to human judgement.
Ethics Statement
User Study. Prior to carrying out our user study, the survey methodology was reviewed and approved by our Institutional Review Board for ethical compliance. While unlikely, we examined each question for its appropriateness. To ensure participants' anonymity, the responses are anonymized and aggregated, and it is extremely unlikely that a participant can be identified via their response. In terms of fair compensation, we paid S$15 for each complete response of 100 questions, assuming an hour's worth of work, it is higher than our institution's prevailing rate for undergraduate student work. To ensure their well-being, study participants are allowed up to a week to complete the tasks, at their own preferred pace and place.
Corpora. We select corpora that have open licensing agreements that allows for non-profit academic use, and the permissions allowing us to transform and re-distribute the processed corpora as word-pair counts. Table 9: Hyper-parameter threshold for different subgraphs. Multiple thresholds are indicative of multiple runs. random and ext are not hyper-parameter dependant. When possible, hyper-parameters were chosen to produce to control sub-graph density.
A Sampling Algorithm Pseudo-code
The pre-processing steps to reduce complexity, Algorithm 1 and Algorithm 2, remain unchanged from Yuan et al. (2022). These steps can be skipped when the graph is large and dense, such as during neg sub-graph generation. Our modifications in Algorithm 3 and Algorithm 4 introduce randomness via permutations and early stopping: when a k-clique is found in Algorithm 3, and when the desired number of k-cliques is found in Algorithm 4. The sub-graph reduction is implemented in Algorithm 3.
Algorithm 1 PRE-CORE(G, k)
Input: a graph G and a positive integer k
Prune vertices with less than k edges from G:
  Q ← ∅, F ← ∅
  for u ∈ G do
    if d_u < k − 1 then
      Q.push(u); F ← F ∪ {u}
    end if
  end for
  while Q ≠ ∅ do
    u ← Q.pop()
    for node v ∈ neighbours N_u do
      d_v ← d_v − 1
      if d_v < k − 1 ∧ v ∉ F then
        F ← F ∪ {v}; Q.push(v)
      end if
    end for
  end while
Algorithm 2 PRE-LIST(G, k)
Find exact k-cliques and remove them from G:
  for each connected component C ∈ G do
    m_c ← |E(C)|, n_c ← |V(C)|
    if m_c = (n_c − 1)n_c then
      remove C from G
      output k-cliques C
    end if
  end for

Algorithm 3 SDegreeList(k, R, C, ⃗G)
  for u ∈ Permutate(C) do
    if |C| ≤ l − 2 then continue end if
    if k < 2 then return ∅ end if
    Ĉ ← N⁺_u ∩ C
    if k = 2 ∧ |Ĉ| > 0 then
      O ← R ∪ {u}
      remove (u_i, u_j) from ⃗G ∀ u_i, u_j ∈ O
      return O
    end if
    if |Ĉ| > l − 2 then
      return SDegreeList(k − 1, R, Ĉ, ⃗G)
    end if
  end for

A set of connected components refers to a set of nodes where each node shares an edge with all other nodes in the set. Finding the next connected components Ĉ requires a set intersection operation between all possible neighbours of the randomly selected node u, denoted N⁺_u, and the current connected components C.
Algorithm 4 Main(G, k, target)
  G ← PRE-CORE(G, k)
  G ← PRE-LIST(G, k)
  Generate DAG ⃗G
  O ← ∅
  for u ∈ Permutate(⃗G) do
    r ← SDegreeList(k − 1, {u}, N⁺_u, ⃗G)
    if |r| = k then
      O ← O ∪ {r}
    end if
    if target = |O| then
      return O
    end if
  end for
The main algorithm gets invoked once per sub-graph; we can generate multiple sub-graphs by selecting the set of words that neighbour a randomly chosen word. We then truncate the edges that do not fulfil the edge conditions.
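For completeness, a sketch of how a thresholded sub-graph (pos, mid, or neg) could be assembled from pair-wise NPMI values before clique sampling is given below; the threshold values in the usage comment are illustrative only (the values actually used are listed in Table 9).

def build_subgraph(vocab, npmi, lower=None, upper=None):
    """npmi: dict {(wi, wj): value}; keep edges whose value passes the bounds."""
    adj = {w: set() for w in vocab}
    for (wi, wj), value in npmi.items():
        if wi not in adj or wj not in adj:
            continue
        if lower is not None and value < lower:
            continue
        if upper is not None and value > upper:
            continue
        adj[wi].add(wj)
        adj[wj].add(wi)
    return adj

# Illustrative thresholds only:
# pos_graph = build_subgraph(vocab, npmi, lower=0.2)
# mid_graph = build_subgraph(vocab, npmi, lower=-0.1, upper=0.1)
# neg_graph = build_subgraph(vocab, npmi, upper=-0.2)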
B Optimizing Position-Based Scoring
Given a set of k words as a topic, our goal is to optimize the position-based score. We can reduce this problem to a weighted activity selection problem, which is equivalent to finding a max-weight independent set in an interval graph and can be solved in polynomial time (Bar-Noy et al., 2001).
Consider a word w at the j-th position, with indexing starting from 0. We can visualize the ordering as having j incoming edges, indicating the precedence of other words, and k − (j + 1) outgoing edges, indicating w's precedence over the ensuing words. An activity is defined by its start-time (position) and its preceding and ensuing activities. Each activity has an equal interval, and the weight of the activity is determined by the difference of outgoing and incoming edges to all other words, scored via m. We can transform the activities into an interval graph, with a combinatorial number of possible instances for each word per time slot in the schedule.
Our transformation will result in an interval graph of k disjoint graphs. While the number of activities might seem combinatorially explosive, selecting the first activity at T = 0 only involves k activities, and upon selection prunes multiple branches, resulting in k − j choices at T = j. Hence, we are only required to select the best activity within each disjoint graph, conditioned on availability (the word not having been selected before).
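For the small topic sizes considered here, a brute-force search over word orderings illustrates what the optimized (o) variants of the position-sensitive metrics compute; this is only an illustrative check, not the polynomial-time reduction described above.

from itertools import permutations

def ordinal_score(order, m):
    """Mean of m(w_i, w_j) over ordered pairs with i > j, as in C_UMass and C_P."""
    pairs = [(order[i], order[j]) for i in range(len(order)) for j in range(i)]
    return sum(m(wi, wj) for wi, wj in pairs) / len(pairs)

def best_ordered_score(topic, m, max_k=8):
    """Exhaustively search word orderings; only intended for small k."""
    if len(topic) > max_k:
        raise ValueError("brute force is only intended for small topics")
    return max(ordinal_score(order, m) for order in permutations(topic))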
C Supplementary Tables
This section lists tables with quantitative supplementary information.
Table 10 details the results for the ArXiv and Pubmed corpora for the inter-metric correlation analysis in Section 4.2.
Table 11 provides additional information on the similarity between control and treated topics for the lemmatization effect ablation in Section 5. Table 12 provides a detailed breakdown of the sub-graph segments that are shortlisted for the lemmatization effect ablation in Section 5. Table 13 details the full results for the inter-corpus correlation analysis; its partial table can be found in Table 5, Section 5. Table 14 has additional quantitative information regarding the quantity of common topics in the corpus-pairs used in the inter-corpus experiments of Section 5. Table 15 has the individual Krippendorff's α for each user study group U for the user study in Section 6.
Tables 16, 17, 18, and 19 have the individual correlation scores of each user study group U to the various coherence metrics for Proxy Tasks I, II, III, and the pair-wise ablation respectively. Their averages are tabled in Tables 7a, 7b, 7c, and 8 in Section 6.
Table 18: Detailed breakdown of Proxy Task III; values are Spearman's ρ between the mean of group counts and coherence scores. C_UMass,s and C_P,s are omitted as they are almost identical to their o variants. For this task, a stronger negative value is better, as a completely coherent topic has a group count of 1 and an incoherent topic has a group count of 10; hence, the proxy measure is inversely related to the coherence metric score, where a larger score indicates coherence.
D Topic Examples (User Study)
This set of 100 topics belongs to T_1 and was shown to U_1:
E User Study Instructions
E.1 Primer on Task
Evaluating the relations between words from a computational lens serves to further the research and understanding of artificial intelligence linguistic research.
A group of words can be considered coherent if they share a similar theme. For example, the group "apples banana coconut durian" can be considered coherent as most people would identify "fruit", "food" or "tree" as the common theme or link.
However, some group of words might be more ambiguous and the common theme might not be as straightforward. For example, "trees ore corn hydrogen" might be considered incoherent to some, while others might identify the common theme as "resources".
Ultimately, it is up to one's personal preferences and experiences to decide on whether a group of words are coherent.
E.2 Task Instructions
You will be presented with 10 English words. These words belongs to the 20,000 most frequently used words, so it is unlikely that you will encounter strange words. If you do encounter words that you have never seen before, you are free to use a dictionary or search engine (e.g. Google).
You will then be asked to assign each word to groups, where each group contains words that you think are coherent when grouped together.
Given an example:
alcohol athlete breakfast drink eat habit intake meal obesity sleep
Some might divide the words into two groups, identifying Group 1 as "alcoholic"-themed and Group 2 as "healthy"-themed. We want to emphasise that there are no right or wrong answers for the tasks; we wish to capture your beliefs on what you think is "correct". We understand that at times you might encounter words that belong to multiple groups; however, to simplify the tasks, we ask that you be the tiebreaker and assign it to the word-group with the strongest similarity.
Figure 1: Illustration of our Balanced Sampling approach.
Figure 2: Illustration of the process of sampling a topic from a sub-graph.
Correlation scores measured on Pubmed with ϵ = 1e-12.
Correlation scores measured on Pubmed with ϵ = 0.
Table 15: Krippendorff's α for each user study group: 0.463, 0.391, 0.323, 0.376, 0.325, 0.366, 0.333, 0.347; mean 0.366 (S.D. 0.04).
Table 1: Numerical descriptions of the corpora used (#Docs., Mean Doc. Size, Vocab. Size). Lemmatized variants are similar, with the exception of ArXiv-lemma, where its vocabulary size is 22K.

Table 3: Pearson's r scores (mean of 5 independently sampled sets of topics) between coherence metrics measured on Wiki. Bold indicates the better value across both sub-tables. Error bars omitted as S.D. ≤ 0.02.

Table 4: Comparing correlations (mean of 5 independently sampled sets of topics) between selected automated coherence metrics with their better mode of ϵ, measured on Wiki. Error bars omitted as S.D. ≤ 0.02. The results on ArXiv and Pubmed are similar.
(a) Correlation scores of metrics measured on Wiki; combined results of Table 3 on selected metrics.
             C_V^γ=1,̸ϵ  C_V^γ=2,̸ϵ  C_NPMI,̸ϵ  C_NPMI  C_P,o  C_UMass,o
C_V^γ=1,̸ϵ       -          0.87        0.95      0.74    0.81     0.33
C_V^γ=2,̸ϵ      0.87         -          0.94      0.56    0.66     0.24
C_NPMI,̸ϵ       0.95        0.94         -        0.63    0.73     0.25
C_NPMI          0.74        0.56        0.63       -      0.91     0.58
C_P,o           0.81        0.66        0.73      0.91     -       0.71
C_UMass,o       0.33        0.24        0.25      0.58    0.71      -
(b) Correlation scores of metrics on the subsection of data used in Table 4a where C_NPMI > 0.
             C_V^γ=1,̸ϵ  C_V^γ=2,̸ϵ  C_NPMI,̸ϵ  C_NPMI  C_P,o  C_UMass,o
C_V^γ=1,̸ϵ       -          0.92        0.98      0.95    0.99    -0.14
C_V^γ=2,̸ϵ      0.92         -          0.95      0.94    0.90    -0.02
C_NPMI,̸ϵ       0.98        0.95         -        0.98    0.98    -0.14
C_NPMI          0.95        0.94        0.98       -      0.95    -0.09
C_P,o           0.99        0.90        0.98      0.95     -      -0.20
C_UMass,o      -0.14       -0.02       -0.14     -0.09   -0.20      -

Table 5: Pearson's r between each automated coherence metric measured on different corpus-pairs (independent samples aggregated, totalling |T| topics). See Table 13, Appendix C for complete results.
corpus-pairs    |T|    C_V^γ=1,̸ϵ  C_V^γ=2,̸ϵ  C_NPMI,̸ϵ  C_NPMI  C_P,o  C_UMass,o
ArXiv/Pubmed   267K      0.55        0.55       0.63      0.77    0.66     0.63
ArXiv/Wiki     338K      0.58        0.55       0.60      0.73    0.63     0.49
Pubmed/Wiki    341K      0.67        0.65       0.62      0.74    0.75     0.70
Table 7: Average Spearman's ρ between automated coherence metrics and the respective proxy measure. The values shown are the mean correlation scores from the 8 study groups with error bars. The lemmatized versions of the corpora are omitted as their values are similar to the originals; C_UMass,s and C_P,s are omitted as they are almost identical to their o variants.
(a) Proxy Task I: Density of agreement among study participants. Full breakdown in Table 16, Appendix C.
             ArXiv           Pubmed          Wiki
C_V^γ=1,̸ϵ   0.319 ± 0.152   0.516 ± 0.067   0.651 ± 0.099
C_V^γ=2,̸ϵ   0.356 ± 0.146   0.510 ± 0.095   0.652 ± 0.119
C_NPMI,̸ϵ    0.366 ± 0.136   0.521 ± 0.064   0.664 ± 0.094
C_NPMI       0.304 ± 0.169   0.428 ± 0.111   0.624 ± 0.087
C_P,o        0.266 ± 0.178   0.459 ± 0.093   0.634 ± 0.091
C_UMass,o    0.243 ± 0.176   0.183 ± 0.161   0.329 ± 0.066
(b) Proxy Task II: Mean of maximum coherent group between study participants. Full breakdown in Table 17, Appendix C.
             ArXiv           Pubmed          Wiki
C_V^γ=1,̸ϵ   0.316 ± 0.159   0.511 ± 0.053   0.643 ± 0.110
C_V^γ=2,̸ϵ   0.355 ± 0.153   0.507 ± 0.080   0.648 ± 0.130
C_NPMI,̸ϵ    0.369 ± 0.135   0.517 ± 0.049   0.654 ± 0.104
C_NPMI       0.303 ± 0.175   0.421 ± 0.094   0.615 ± 0.090
C_P,o        0.260 ± 0.182   0.454 ± 0.081   0.624 ± 0.103
C_UMass,o    0.232 ± 0.182   0.170 ± 0.152   0.320 ± 0.060
(c) Proxy Task III: Mean of coherent group counts between study participants. For this task, a stronger negative score is better, as a completely coherent topic gets P3(t) = 1 and an incoherent topic gets P3(t) = 10; hence, this proxy measure is inversely related to the coherence metric score, where a larger score indicates coherence. Full breakdown in Table 18, Appendix C.
             ArXiv            Pubmed           Wiki
C_V^γ=1,̸ϵ   -0.382 ± 0.164   -0.547 ± 0.109   -0.645 ± 0.085
C_V^γ=2,̸ϵ   -0.415 ± 0.168   -0.541 ± 0.135   -0.648 ± 0.100
C_NPMI,̸ϵ    -0.434 ± 0.171   -0.549 ± 0.118   -0.660 ± 0.084
C_NPMI       -0.342 ± 0.195   -0.453 ± 0.118   -0.627 ± 0.085
C_P,o        -0.320 ± 0.200   -0.484 ± 0.107   -0.631 ± 0.082
C_UMass,o    -0.277 ± 0.172   -0.202 ± 0.126   -0.354 ± 0.053
References
Nikolaos Aletras and Mark Stevenson. 2013. Evaluating topic coherence using distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) - Long Papers, pages 13-22, Potsdam, Germany. Association for Computational Linguistics.
Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, and Elisabetta Fersini. 2021. Cross-lingual contextualized topic models with zero-shot learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1676-1683, Online. Association for Computational Linguistics.
Giuseppe Attardi. 2015. Wikiextractor. https://github.com/attardi/wikiextractor.
Amotz Bar-Noy, Reuven Bar-Yehuda, Ari Freund,
Joseph (Seffi) Naor, and Baruch Schieber. 2001. A
unified approach to approximating resource alloca-
tion and scheduling. J. ACM, 48(5):1069-1090.
Richard Bellman and Robert Kalaba. 1959. A mathe-
matical theory of adaptive control processes. Pro-
ceedings of the National Academy of Sciences,
45(8):1288-1290.
Jyoti Belur, Lisa Tompson, Amy Thornton, and Miranda
Simon. 2021. Interrater reliability in systematic
review methodology: Exploring variation in coder
decision-making. Sociological Methods & Research,
50(2):837-865.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan.
2003. Latent dirichlet allocation. J. Mach. Learn.
Res., 3:993-1022.
Gerlof Bouma. 2009. Normalized (pointwise) mutual
information in collocation extraction. Proceedings
of the Biennial GSCL Conference 2009.
Sophie Burkhardt and Stefan Kramer. 2019. Decoupling
sparsity and smoothness in the dirichlet variational
autoencoder topic model. Journal of Machine Learn-
ing Research, 20(131):1-27.
Dallas Card, Peter Henderson, Urvashi Khandelwal,
Robin Jia, Kyle Mahowald, and Dan Jurafsky. 2020.
With little power comes great responsibility. In
Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 9263-9274, Online. Association for Computa-
tional Linguistics.
Jonathan Chang, Sean Gerrish, Chong Wang, Jordan
Boyd-graber, and David Blei. 2009. Reading tea
leaves: How humans interpret topic models. In Ad-
vances in Neural Information Processing Systems,
volume 22. Curran Associates, Inc.
Norishige Chiba and Takao Nishizeki. 1985. Arboricity
and subgraph listing algorithms. SIAM J. Comput.,
14:210-223.
Kenneth Ward Church and Patrick Hanks. 1990. Word
association norms, mutual information, and lexicog-
raphy. Computational Linguistics, 16(1):22-29.
Elizabeth Clark, Tal August, Sofia Serrano, Nikita
Haduong, Suchin Gururangan, and Noah A. Smith.
2021. All that's 'human' is not gold: Evaluating
human evaluation of generated text. In Proceedings
of the 59th Annual Meeting of the Association for
Computational Linguistics and the 11th International
Joint Conference on Natural Language Processing
(Volume 1: Long Papers), pages 7282-7296, Online.
Association for Computational Linguistics.
Maximilien Danisch, Oana Balalau, and Mauro Sozio.
2018. Listing k-cliques in sparse real-world graphs*.
In Proceedings of the 2018 World Wide Web Con-
ference, WWW '18, page 589-598, Republic and
Canton of Geneva, CHE. International World Wide
Web Conferences Steering Committee.
Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei.
2020. Topic modeling in embedding spaces. Trans-
actions of the Association for Computational Linguis-
tics, 8:439-453.
Caitlin Doogan and Wray Buntine. 2021. Topic model
or topic twaddle? re-evaluating semantic inter-
pretability measures. In Proceedings of the 2021
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, pages 3824-3848, Online.
Association for Computational Linguistics.
Kawin Ethayarajh and Dan Jurafsky. 2022. The authen-
ticity gap in human evaluation. In Proceedings of
the 2022 Conference on Empirical Methods in Nat-
ural Language Processing, page 6056-6070, Abu
Dhabi, United Arab Emirates. Association for Com-
putational Linguistics.
Branden Fitelson. 2003. A probabilistic theory of co-
herence. Analysis, 63(3):194-199.
Thomas Griffiths and Mark Steyvers. 2004. Finding sci-
entific topics. Proceedings of the National Academy
of Sciences of the United States of America, 101
Suppl 1:5228-35.
Matthew Hoffman, Francis Bach, and David Blei. 2010.
Online learning for latent dirichlet allocation. In
Advances in Neural Information Processing Systems,
volume 23.
Alexander Hoyle, Pranav Goel, Denis Peskov, An-
drew Hian-Cheong, Jordan Boyd-Graber, and Philip
Resnik. 2021. Is automated topic model evaluation
broken?: The incoherence of coherence. In Neural
Information Processing Systems.
Alexander Hoyle, Pranav Goel, Rupak Sarkar, and
Philip Resnik. 2022. Are neural topic models bro-
ken? In Findings of the Association for Computa-
tional Linguistics: EMNLP 2022, page 5321-5344,
Abu Dhabi, United Arab Emirates. Association for
Computational Linguistics.
Diederik P. Kingma and Max Welling. 2014. Auto-
Encoding Variational Bayes. In 2nd International
Conference on Learning Representations, ICLR 2014,
Banff, AB, Canada, April 14-16, 2014, Conference
Track Proceedings.
K. Krippendorff. 2011. Computing krippendorff's
alpha-reliability.
Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017.
Topically driven neural language model. In Proceed-
ings of the 55th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 355-365, Vancouver, Canada. Association for
Computational Linguistics.
Jey Han Lau, David Newman, and Timothy Baldwin.
2014. Machine reading tea leaves: Automatically
evaluating topic coherence and topic model quality.
In Proceedings of the 14th Conference of the Euro-
pean Chapter of the Association for Computational
Linguistics, pages 530-539, Gothenburg, Sweden.
Association for Computational Linguistics.
Raymond Li, Wen Xiao, Linzi Xing, Lanjun Wang,
Gabriel Murray, and Giuseppe Carenini. 2022. Hu-
man guided exploitation of interpretable attention
patterns in summarization and topic segmentation.
In Proceedings of the 2022 Conference on Empiri-
cal Methods in Natural Language Processing, pages
10189-10204, Abu Dhabi, United Arab Emirates. As-
sociation for Computational Linguistics.
Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Chao
Zhang, and Jiawei Han. 2020. Hierarchical topic
mining via joint spherical tree and text embedding. In
Proceedings of the 26th ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining,
KDD '20, page 1908-1917, New York, NY, USA.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neu-
ral variational inference for text processing. In Pro-
ceedings of The 33rd International Conference on
Machine Learning, volume 48 of Proceedings of Ma-
chine Learning Research, pages 1727-1736, New
York, New York, USA.
David Mimno, Hanna Wallach, Edmund Talley, Miriam
Leenders, and Andrew McCallum. 2011. Optimizing
semantic coherence in topic models. In Proceedings
of the 2011 Conference on Empirical Methods in
Natural Language Processing, pages 262-272, Edin-
burgh, Scotland, UK. Association for Computational
Linguistics.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014.
Deepwalk: Online learning of social representations.
In Proceedings of the 20th ACM SIGKDD Interna-
tional Conference on Knowledge Discovery and Data
Mining, KDD '14, pages 701-710, New York, NY,
USA. ACM.
Barbara Plank. 2022. The 'problem' of human label
variation: On ground truth in data, modeling and
evaluation. In Proceedings of the 2022 Conference
on Empirical Methods in Natural Language Process-
ing, page 10671-10682, Abu Dhabi, United Arab
Emirates. Association for Computational Linguistics.
Frank Rosner, Alexander Hinneburg, Michael Röder,
Martin Nettling, and Andreas Both. 2014. Evaluating
topic coherence measures.
Michael Röder, Andreas Both, and Alexander Hinneb-
urg. 2015. Exploring the space of topic coherence
measures. In WSDM, pages 399-408.
Alexandra Schofield and David Mimno. 2016. Com-
paring apples to apple: The effects of stemmers on
topic models. Transactions of the Association for
Computational Linguistics, 4:287-300.
Dazhong Shen, Chuan Qin, Chao Wang, Zheng Dong,
Hengshu Zhu, and Hui Xiong. 2021. Topic model-
ing revisited: A document graph-based neural net-
work perspective. In Advances in Neural Information
Processing Systems 34 -35th Conference on Neural
Information Processing Systems, NeurIPS 2021, Ad-
vances in Neural Information Processing Systems,
pages 14681-14693. Neural information processing
systems foundation.
Akash Srivastava and Charles Sutton. 2017. Autoencod-
ing variational inference for topic models. In ICLR
(Poster).
Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang,
Guoyin Wang, Dinghan Shen, Changyou Chen, and
Lawrence Carin. 2019. Topic-guided variational
auto-encoder for text generation. In Proceedings
of the 2019 Conference of the North American Chap-
ter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long and
Short Papers), pages 166-177, Minneapolis, Min-
nesota. Association for Computational Linguistics.
Zhengjue Wang, Zhibin Duan, Hao Zhang, Chaojie
Wang, Long Tian, Bo Chen, and Mingyuan Zhou.
2020. Friendly topic assistant for transformer based
abstractive summarization. In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 485-497, On-
line. Association for Computational Linguistics.
Linzi Xing and Michael Paul. 2018. Diagnosing and im-
proving topic models by analyzing posterior variabil-
ity. Proceedings of the AAAI Conference on Artificial
Intelligence, 32(1).
Liang Yang, Fan Wu, Junhua Gu, Chuan Wang, Xi-
aochun Cao, Di Jin, and Yuanfang Guo. 2020. Graph
attention topic modeling network. In Proceedings
of The Web Conference 2020, WWW '20, page
144-154, New York, NY, USA. Association for Com-
puting Machinery.
Zhirong Yuan, You Peng, Peng Cheng, Li Han, Xuemin
Lin, Lei Chen, and Wenjie Zhang. 2022. Efficient
k − clique listing with set intersection speedup. In
2022 IEEE 38th International Conference on Data
Engineering (ICDE), pages 1955-1968.
Ce Zhang and Hady W Lauw. 2020. Topic modeling
on document networks with adjacent-encoder. In
Proceedings of the AAAI Conference on Artificial
Intelligence, volume 34, pages 6737-6745.
Delvin Ce Zhang and Hady W Lauw. 2022. Variational
graph author topic modeling. In Proceedings of the
28th ACM SIGKDD Conference on Knowledge Dis-
covery and Data Mining, pages 2429-2438.
He Zhao, Lan Du, Wray Buntine, and Gang Liu. 2017a.
Metalda: A topic model that efficiently incorporates
meta information. In 2017 IEEE International Con-
ference on Data Mining (ICDM), pages 635-644.
Renbo Zhao, Vincent Tan, and Huan Xu. 2017b. Online
Nonnegative Matrix Factorization with General Di-
vergences. In Proceedings of the 20th International
Conference on Artificial Intelligence and Statistics,
volume 54 of Proceedings of Machine Learning Re-
search, pages 37-45.
Table 10: Pearson's r scores (mean of 5 independently sampled sets of topics) between automated coherence metrics within ArXiv/Pubmed corpus. Bold indicates better correlation score across both tables. Error bars omitted as S.D. ≤ 0.02.
Table 11: Accompanying statistics for respective lemmatization effect ablation experiments (see Section 5). Value indicates mean number of similar words per topic. While the variants contain similar words, we note that the word probabilities differ and reflect the composition of lemmatized and base words in the vocabulary.
Table 12: Quantity of segmentation of sampled topics for respective lemmatization effect ablation experiments (see Section 5).
corpus-pairs         |T|    C^γ=1_V,̸e   C^γ=2_V,̸e   C_NPMI,̸e   C_NPMI   C_P,o   C_UMass,o
ArXiv/Pubmed         267K      0.55         0.55         0.63       0.77     0.66     0.63
ArXiv/Wiki           338K      0.58         0.55         0.60       0.73     0.63     0.49
ArXiv/Palmetto       114K      0.51         0.54         0.57       0.50     0.44     0.44
Pubmed/Wiki          341K      0.67         0.65         0.62       0.74     0.75     0.70
Pubmed/Palmetto      130K      0.67         0.67         0.65       0.69     0.69     0.55
Wiki/Palmetto        447K      0.98         0.98         0.98       0.98     0.95     0.84
Wiki-l/ArXiv-l       114K      0.54         0.55         0.60       0.60     0.47     0.70
Pubmed-l/ArXiv-l     101K      0.59         0.57         0.70       0.76     0.59     0.78
Pubmed-l/Wiki-l      125K      0.70         0.68         0.71       0.78     0.74     0.78
Pubmed-l/Palmetto    125K      0.70         0.67         0.69       0.77     0.74     0.59
ArXiv-l/Palmetto     114K      0.54         0.55         0.58       0.58     0.49     0.49
Wiki-l/Palmetto      447K      0.99         0.99         0.99       0.99     0.97     0.91

Table 13: Pearson's r (independent samples were aggregated) between exact automated coherence metrics measured on different corpus-pairs. Suffix -l. short form for -lemma.
corpus      ArXiv    ArXiv-l   Pubmed   Pubmed-l   Wiki     Wiki-l   Palmetto
Total       26,620   22,184    38,829   39,997     40,003   40,009   16,567
ArXiv       -        19,637    13,138   10,527     12,955   10,230   6,827
ArXiv-l     19,637   -         9,636    11,015     9,563    10,504   7,130
Pubmed      13,138   9,636     -        23,328     15,459   12,565   8,006
Pubmed-l    10,527   11,015    23,328   -          12,637   14,112   8,932
Wiki        12,955   9,563     15,459   12,637     -        31,047   13,136
Wiki-l      10,230   10,504    12,565   14,112     31,047   -        14,392
Palmetto    6,827    7,130     8,006    8,932      13,136   14,392   -

Table 14: Quantity of common vocabularies between corpora. Suffix -l. short form for -lemma. Palmetto was re-constructed using the 20K most frequent words excluding stop words.
Table 15: Detailed Krippendorff's α for each user study.

Table 16: Detailed breakdown of Proxy Task I; values are Spearman's ρ of density of agreement and coherence scores. C_UMass,s and C_P,s omitted as they are almost identical to their o variant.

Table 17: Detailed breakdown of Proxy Task II; values are Spearman's ρ of mean of maximum group counts and coherence scores. C_UMass,s and C_P,s omitted as they are almost identical to their o variant.

Table 19: Detailed breakdown of Pair-wise Proxy Task. Values are Spearman's ρ.
In another example given: atom calcium component material reduction temperature titanium typical weight yield. Some might group most of the words as "chemistry"-themed. If you believe that certain word(s) do not belong in any group, select the "Not Related" option in the last column. There can be multiple words that are not related to each other. For example: animal bed carrot fungible great osmosis paradise star telcommunication water.

[Annotation form layout: each word of a topic is listed in its own row, with radio-button columns for Group 1, Group 2, Group 3, Group 4, and Not Related. The three example topics shown are: (alcohol, athlete, breakfast, drink, eat, habit, intake, meal, obesity, sleep), (atom, calcium, component, material, reduction, temperature, titanium, typical, weight, yield), and (animal, bed, carrot, fungible, great, osmosis, paradise, star, telcommunication, water).]
Each question has 45 possible combinations of word pairs; each label is binary, denoting coherence relations.
Prior to version 0.1.4 (released Sep 21, 2022), Palmetto's (Röder et al., 2015) γ was set to 2.
4 Kaggle - Cornell-University/ArXiv
5 ncbi.nlm.nih.gov/pmc/tools/openftlist
Hyper-parameters listed in Table 9, Appendix A.
Based on reasons provided in Doogan and Buntine (2021), with the main argument that datasets (scores) are continuous and have a bi-variate normal distribution.
We use word co-occurrence statistics obtained from three large corpora:

ArXiv. We use the ArXiv abstracts dataset 4, where we consider each abstract as a document. These abstracts mainly comprise research work related to non-medical science disciplines.

Pubmed. We use the PubMed Central (PMC) Open Access Subset 5, which contains journal articles and pre-prints related to medical research and information. We consider each article body as a document and remove citations within it.

Acknowledgments
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-020). Hady W. Lauw gratefully acknowledges the support by the Lee Kong Chian Fellowship awarded by Singapore Management University. We extend our gratitude to our user study participants for their efforts, as well as our reviewers for their kind feedback. |
259,370,854 | What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric | Moral rhetoric influences our judgement. Although social scientists recognize moral expression as domain specific, there are no systematic methods for analyzing whether a text classifier learns the domain-specific expression of moral language or not. We propose Tomea, a method to compare a supervised classifier's representation of moral rhetoric across domains. Tomea enables quantitative and qualitative comparisons of moral rhetoric via an interpretable exploration of similarities and differences across moral concepts and domains. We apply Tomea on moral narratives in thirty-five thousand tweets from seven domains. We extensively evaluate the method via a crowd study, a series of cross-domain moral classification comparisons, and a qualitative analysis of cross-domain moral expression. | [
248094722,
17859685,
248496871,
250390668,
220968818,
12603509,
215828184,
12245213
] | What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric
Enrico Liscio
TU Delft
Delftthe Netherlands
Oscar Araque
Universidad Politécnica de Madrid
MadridSpain
Lorenzo Gatti
University of Twente
Enschedethe Netherlands
Ionut Constantinescu
ETH Zürich
ZürichSwitzerland
Catholijn M Jonker
TU Delft
Delftthe Netherlands
Leiden University
Leidenthe Netherlands
Kyriaki Kalimeri
ISI Foundation
TurinItaly
Pradeep K Murukannaiah
TU Delft
Delftthe Netherlands
What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), July 9-14, 2023
Moral rhetoric influences our judgement. Although social scientists recognize moral expression as domain specific, there are no systematic methods for analyzing whether a text classifier learns the domain-specific expression of moral language or not. We propose Tomea, a method to compare a supervised classifier's representation of moral rhetoric across domains. Tomea enables quantitative and qualitative comparisons of moral rhetoric via an interpretable exploration of similarities and differences across moral concepts and domains. We apply Tomea on moral narratives in thirty-five thousand tweets from seven domains. We extensively evaluate the method via a crowd study, a series of cross-domain moral classification comparisons, and a qualitative analysis of cross-domain moral expression.
Introduction
Moral narratives play a fundamental role in the stances taken on controversial social issues (Fulgoni et al., 2016). Recognizing moral narratives helps understand the argumentation around important topics such as vaccine hesitancy (Kalimeri et al., 2019b), violent protests (Mooijman et al., 2018), and climate change (Dickinson et al., 2016).
Language reveals deep psychological constructs, including moral values (Graham et al., 2013). Thus, language is an important avenue for analyzing moral expression. In particular, supervised text classification models have been showing promising results on morality prediction (Lourie et al., 2021;Hendrycks et al., 2021;. These models leverage the wisdom of crowds (via annotations of moral expression) to attain a descriptive understanding of morality. However, the supervised learning paradigm can lead to black-box models (Danilevsky et al., 2020). Understanding what these models learn is crucial, especially for the morality classification task, which is likely to be used in sensitive applications like healthcare (Wen et al., 2019;Carriere et al., 2021).
Moral expression is context dependent (Hill and Lapsley, 2009;Brännmark, 2015;Kola et al., 2022), where context refers to factors such as actors, actions, judges, and values (Schein, 2020). For a text classifier, the domain from which the training data is sourced represents the context. For example, in the context of recent Iranian protests, tweets tagged #mahsaamini can form the training domain. We expect this domain to have a different moral expression than the training domain of #prolife tweets, representing a different context.
Recent works (Liscio et al., 2022a;Huang et al., 2022) analyze the out-of-domain performance of morality classifiers. However, what leads classifiers to perform differently across domains has not been systematically explored. Such an insight is essential for understanding whether classifiers can learn a domain-specific representation of morality.
We propose Tomea (from the Greek τομέα, meaning "domain") to compare a text classifier's representation of morality across domains. Tomea employs the SHAP method (Lundberg and Lee, 2017) to compile domain-specific moral lexicons, composed of the lemmas that the classifier deems most predictive of a moral concept in a domain, for each moral concept and domain. Through such moral lexicons, Tomea enables a direct comparison of the linguistic cues that a classification model prioritizes for morality prediction across domains.
We employ Tomea to compare moral rhetoric across the seven social domains in the Moral Foundation Twitter Corpus (MFTC) (Hoover et al., 2020). Then, we perform a crowdsourced evaluation to assess the agreement between the human intuition and the automatically obtained results of Tomea. We show that this agreement is consistent across domains but varies across moral concepts. Further, we find a strong correlation between the results of Tomea and the out-of-domain performance of the models used for obtaining the moral lexicons. In addition, we perform qualitative analyses of the moral impact of specific lemmas, unveiling insightful differences in moral concepts and domains.
Tomea allows to inspect and compare the extent to which a supervised classifier can learn domain-specific moral rhetoric from crowdsourced annotations. Tomea can guide computer scientists and practitioners (e.g., social scientists or policymakers) in the responsible use of transfer learning approaches. In transfer learning, large datasets are used to pre-train language models, which are then finetuned with data collected in the domain of interest. Such pre-training typically helps in improving performance in the finetuning domain. However, increased performance may come at the cost of critical mistakes which may hinder the usage of the model, especially when the finetuning domain concerns minority groups (Nadeem et al., 2021). Tomea can assist in the qualitative comparison of pre-training and finetuning domains by unveiling potential critical differences and guiding practitioners in judging the appropriateness of using a morality prediction model in an application.
Related Works
We introduce the theoretical background and review related works in morality classification in text, domain dependency in NLP models, and explainability in NLP.
Moral Theories
The expression of morality in language has been explored via constructs such as rules-of-thumb on acceptable social behavior (Forbes et al., 2020), moral norms (Lourie et al., 2021;Emelin et al., 2021), and ethical judgements (Hendrycks et al., 2021). However, these constructs are too abstract for our purpose of understanding the domain-specific expression of morality.
We base our work on models of human values, which represent morality in the form of innate moral elements. Two well-known models of human values are the Moral Foundation Theory (MFT) (Graham et al., 2013) and the Schwartz Theory of Basic Human Values (Schwartz, 2012).
In this work, we explore the domain-specific expression of moral elements of the MFT. The MFT consists of five foundations, each consisting of a vice-virtue duality, resulting in 10 moral elements, as shown in Table 1. We choose the MFT because of the availability of the Moral Foundation Twitter Corpus (MFTC) (Hoover et al., 2020), a corpus of seven datasets corresponding to seven domains (Section 4.1), enabling cross-domain analyses.

Morality Classification
Classification of moral elements in text has been approached via moral lexicons, lists of words depictive of moral elements. Lexicons are generated manually (Graham et al., 2009; Schwartz, 2012), via semi-automated methods (Wilson et al., 2018; Araque et al., 2020), or expanding a seed list with NLP techniques (Ponizovskiy et al., 2020; Araque et al., 2022). The lexicons are then used to classify morality using text similarity (Bahgat et al., 2020; Pavan et al., 2020). Moral elements have also been described as knowledge graphs to perform zero-shot classification (Asprino et al., 2022).
More recent methods adopt instead supervised machine learning (Qiu et al., 2022;Kiesel et al., 2022;Liscio et al., 2022a;Huang et al., 2022;Lan and Paraboni, 2022). A textual dataset is annotated with the moral elements, and the resulting labels are used to train a supervised model. This approach represents the starting point for our analysis in this paper.
Domain Dependency
Domain dependency is a well-known issue in sentiment analysis (Al-Moslmi et al., 2017), where it is often addressed through domain adaptation, the challenge of adapting a lexicon or a machine learning algorithm to a novel domain (Hamilton et al., 2016; Wu and Huang, 2016; Wilson and Cook, 2020; Mohamad Beigi and Moattar, 2021). Our main goal in this paper is to analyze the differences in morality across domains, not to adapt a lexicon or a model to novel domains.
Explainability
Explainable AI (XAI) has been used extensively in NLP (Danilevsky et al., 2020).
We do not contribute a new method to XAI, but our work is a novel application of an XAI method.
A key distinction is whether an XAI method generates local or global explanations. Local explanations expose the rationale behind an individual prediction, e.g., by highlighting the most important words in a sentence (Ribeiro et al., 2016;Lundberg and Lee, 2017). Global explanations expose the rationale behind the whole decision-making of the model, e.g., by inducing taxonomies of words that are predictive of the classified labels (Pryzant et al., 2018;Liu et al., 2018). In our analysis, we induce lexicons to explain the decision-making of the models, as they provide an intuitive global explanation.
The Tomea Method
Tomea 1 is a method for comparing a text classifier's representation of morality across domains. Tomea takes as input two dataset, classifier pairs, where, in each pair, the classifier is trained on the corresponding dataset. Since Tomea intends to compare moral expressions across domains, the two datasets input to it are assumed to be collected in different domains. Tomea's output is a qualitative and quantitative representation of the differences in moral expressions between the two input domains. Figure 1 shows the two key steps in the method. First, we generate moral lexicons capturing the classifiers' interpretable representations of the moral elements specific to their domains. Then, we compare the moral lexicons in two ways. (1) We compare the moral lexicons generated for the same moral elements in different domains. (2) We combine the moral lexicons generated for the same domains and provide a single measure of moral rhetoric similarity between two domains.
Moral and Domain Lexicons
A moral lexicon represents how a morality classifier interprets the expression of a moral element in a domain. We represent the expression of morality by determining the impact that each word has toward the classification of a moral element in a domain. Thus, a moral lexicon consists of (w, i) pairs, where w in each pair is a word that the classifier considers relevant for predicting the examined moral element in the domain under analysis and i is its impact. This way, we generate a lexicon for each moral element in each domain. We refer to the union of the moral lexicons generated for all moral elements in a domain as the domain lexicon.

1 https://github.com/enricoliscio/tomea
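Concretely, the two lexicon types can be held as plain mappings from lemma to aggregated impact, and from moral element to moral lexicon. A minimal sketch follows; the variable names and impact values are illustrative only, not taken from the paper's lexicons:

```python
# A moral lexicon: lemma -> aggregated impact for one moral element in one domain.
care_lexicon_alm = {"protect": 12.4, "compassion": 9.7, "neglect": -2.1}

# A domain lexicon: moral element -> moral lexicon (union over the ten MFT elements).
domain_lexicon_alm = {
    "care": care_lexicon_alm,
    "fairness": {"equal": 11.3, "justice": 8.0},
    # ... remaining eight moral elements
}
```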
Lexicon Generation
We use Shapley Additive Explanations (SHAP) (Lundberg and Lee, 2017) to generate the lexicons. SHAP uses Shapley values to quantify the extent to which an input component (a word) contributes toward predicting a label (a moral element).
The impact of a word is computed as the marginal contribution of the word toward a label prediction. Intuitively, the marginal contribution of the word is calculated by removing the word from the sentence and evaluating the difference between the sentence with and without the word. All combinations of words in the sentence (i.e., the power set of features) are created to compute the impact of each word. The resulting impact is positive (if the likelihood of predicting a certain label increases when the word is present) or negative (if the likelihood decreases). We aggregate the local explanations to obtain a global ranking of word impact for each moral element. This can be done by adding the local impact of words for each entry of the dataset due to the additive nature of SHAP.
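For reference, the word-level impact is the standard Shapley value computed over the power set of the words in a tweet; the formulation below is standard SHAP background, not reproduced from the paper:

$$\phi_j = \sum_{S \subseteq F \setminus \{j\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\bigl[f(S \cup \{j\}) - f(S)\bigr],$$

where $F$ is the set of words in the tweet, $f(S)$ is the classifier's predicted score for the moral element when only the words in $S$ are present, and $\phi_j$ is the impact of word $j$.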
Tomea executes the following steps to obtain moral lexicons from a dataset and a model. (1) Execute SHAP on each entry of the dataset with the related model, resulting in a (w, i) pair for each word that appears in the dataset.
(2) Replace each word w with its lemma, if one can be found using NLTK's WordNet-based lemmatizer (Bird et al., 2009).
(3) Combine words that share the same lemma by adding their impact i together.
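A minimal Python sketch of these three steps is given below. The `predict_proba` wrapper, the tokenizer, and the exact SHAP calls are assumptions for illustration, not the authors' released implementation:

```python
from collections import defaultdict

import shap
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def build_moral_lexicon(texts, predict_proba, tokenizer, element_idx):
    """Aggregate SHAP word impacts for one moral element into a lemma -> impact lexicon."""
    explainer = shap.Explainer(predict_proba, shap.maskers.Text(tokenizer))
    explanations = explainer(texts)  # step (1): local explanations, one per tweet

    lexicon = defaultdict(float)
    for tokens, values in zip(explanations.data, explanations.values):
        for token, impact in zip(tokens, values[:, element_idx]):
            lemma = lemmatizer.lemmatize(token.strip().lower())  # step (2): lemmatize
            lexicon[lemma] += float(impact)                      # step (3): sum impacts
    return dict(lexicon)
```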
Lexicon Comparison
Tomea enables the comparisons of (1) moral lexicons across domains, and (2) domain lexicons.
Moral Lexicons
First, Tomea normalizes each moral lexicon by substituting each word's impact with its z-score (Triola, 2017), based on the distribution of the impact scores of all words in a moral lexicon. Then, Tomea computes an m-distance (moral element distance) to compare the lexicons of a moral element generated in different domains. Let $W = \{w_1, \dots, w_n\}$ be the set of $n$ common words between the moral lexicons of a moral element $M_i$ (one of the ten in MFT) in the two domains $D_A$ and $D_B$ (in practice, all words that appear in both lexicons). Then, let the two vectors

$$i^{(D_A, M_i)} = [i^{(D_A)}_1, \dots, i^{(D_A)}_n] \quad \text{and} \quad i^{(D_B, M_i)} = [i^{(D_B)}_1, \dots, i^{(D_B)}_n]$$

represent the impacts of the words belonging to $W$ on $M_i$ in domains $D_A$ and $D_B$, respectively. Then, the m-distance compares the impacts that the same set of words has in the two domains $D_A$ and $D_B$ for the moral element $M_i$ as:

$$\text{m-distance}^{(D_A, D_B)}_{M_i} = d(i^{(D_A, M_i)}, i^{(D_B, M_i)}) / n, \quad (1)$$

where $d$ is the Euclidean distance. The common set of words $W$ offers a common reference point for measuring the distance between lexicons; however, we employ the full domain vocabulary to perform qualitative comparisons between domains (Section 5.4). We normalize the distance by $n$ to reward domains with larger sets of common words. For a domain pair we compute ten m-distances, one for each $M_i$.
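A direct NumPy rendering of Equation 1, assuming the two lexicons are dictionaries of z-scored impacts, could look as follows (an illustrative helper, not the released implementation):

```python
import numpy as np

def m_distance(lexicon_a, lexicon_b):
    """Normalized Euclidean distance over the impacts of the shared vocabulary (Eq. 1)."""
    shared = sorted(set(lexicon_a) & set(lexicon_b))  # W, the common words
    if not shared:
        return float("inf")
    impacts_a = np.array([lexicon_a[w] for w in shared])
    impacts_b = np.array([lexicon_b[w] for w in shared])
    return np.linalg.norm(impacts_a - impacts_b) / len(shared)
```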
Domain Lexicons
To compare two domain lexicons, Tomea computes a d-distance. The d-distance between two domains $D_A$ and $D_B$ is the Euclidean norm of the vector of all m-distances computed between the two domains. Intuitively, the Euclidean norm represents the length of the vector of m-distances; the larger the m-distances between two domains, the larger the d-distance. For MFT, with ten moral elements, the d-distance is:

$$\text{d-distance}^{(D_A, D_B)} = \sqrt{\sum_{i=1}^{10} \left(\text{m-distance}^{(D_A, D_B)}_{M_i}\right)^2} \quad (2)$$
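Equation 2 then reduces to a norm over the ten per-element m-distances; a short sketch continuing the helper above:

```python
import numpy as np  # reuses m_distance() from the previous sketch

def d_distance(domain_lexicon_a, domain_lexicon_b, elements):
    """Euclidean norm of the per-element m-distances between two domain lexicons (Eq. 2)."""
    dists = [m_distance(domain_lexicon_a[e], domain_lexicon_b[e]) for e in elements]
    return float(np.linalg.norm(dists))
```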
Experiment Design
We evaluate Tomea on MFTC (Hoover et al., 2020). Using Tomea, we generate moral and domain lexicons for the seven MFTC domains and perform pairwise comparisons, obtaining 10 m-distances and one d-distance per comparison. The m-distances and d-distances are intended to compare the classifiers' representation of moral rhetoric across domains. We perform two types of evaluation to inspect the extent to which these distances capture the differences in moral expression across domains. We also perform a qualitative analysis to find fine-grained differences across domains.
Dataset
MFTC consists of 35,108 tweets, divided into seven datasets, each corresponding to a different subject: All Lives Matter (ALM), Baltimore protests (BLT), Black Lives Matter (BLM), hate speech and offensive language (DAV) (Davidson et al., 2017), 2016 presidential election (ELE), MeToo movement (MT), and hurricane Sandy (SND). Since MFTC consists of datasets from different domains but annotated with the same moral theory, we can perform cross-domain comparisons on the corpus.
Each tweet is labeled with one or more of the 10 moral elements of MFT or a nonmoral label. Thus, a tweet can have 11 possible labels. To compensate for the subjectivity of morality annotation, each tweet is annotated by multiple annotators (ranging from 3 to 8). The authors of MFTC apply a majority vote to select the definitive label(s) of each tweet, and tweets with no majority label are labeled as nonmoral. Table 2 shows the distribution of labels and the MeanIR, a measure of label imbalance (Charte et al., 2015) for MFTC. The imbalance is high for some domains, which turns out to be an important factor in the cross-domain comparisons.
Model Training
We treat morality classification as a multi-class multi-label classification task with BERT (Devlin et al., 2019), similar to the recent approaches (Liscio et al., 2022a; Kiesel et al., 2022; Huang et al., 2022). We create seven models (one per domain) using the sequential training paradigm (Lourie et al., 2021). That is, for each domain, the model is first pre-trained on the other six domains, and then training continues on the seventh. We choose this paradigm since: (1) it is shown to offer the best performance in transfer learning (Lourie et al., 2021; Liscio et al., 2022a), and (2) it represents a realistic scenario, where it is fair to assume that several annotated datasets are available when a novel dataset is collected. Appendix A includes additional details on training.
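As a rough illustration of the multi-label setup (11 sigmoid outputs trained with binary cross-entropy) using the Hugging Face transformers API; the checkpoint name and label indexing here are assumptions, and the hyperparameters are placeholders rather than those in Table A1:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=11,  # ten MFT elements + nonmoral
    problem_type="multi_label_classification",  # sigmoid outputs + BCEWithLogitsLoss
)

batch = tokenizer(["please donate to the relief fund"],
                  return_tensors="pt", truncation=True, padding=True)
labels = torch.zeros((1, 11))
labels[0, 0] = 1.0  # e.g., mark 'care' as present (index mapping is illustrative)
loss = model(**batch, labels=labels).loss
loss.backward()
```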
Pairwise Comparisons
We employ Tomea to perform pairwise comparisons across the seven domains. First, we generate a moral lexicon for each of the ten moral elements in each of the seven domains (we neglect the nonmoral label as it does not expose moral rhetoric). This yields 70 moral lexicons. For each moral element, we perform pairwise comparisons across the seven domains, resulting in 21 m-distances per element. Finally, we perform pairwise comparisons of the seven domain lexicons to obtain 21 d-distances.
Evaluation
We evaluate the extent to which m-distances and d-distances are predictive of differences in moral expression across domains. First, we perform a crowd evaluation to compare moral lexicons and their related m-distances. Then, we evaluate domain lexicons and d-distances by correlating them to the out-of-domain performances of the models.
Crowd Evaluation
We recruited human annotators on the crowdsourcing platform Prolific 2 to evaluate the comparisons of moral lexicons generated for the same moral element across domains (i.e., the m-distances). We designed our annotation task with the covfee annotation tool (Vargas Quiros et al., 2022). The Ethics Committee of the Delft University of Technology approved this study, and we received an informed consent from each subject. Tomea provides m-distances that indicate the distance between domains for each moral element. We evaluate whether humans reach the same conclusions of domain similarity given the moral lexicons generated by Tomea. However, directly providing a distance or similarity between two domains is a challenging task for humans since it lacks a reference point for comparison. Thus, we re-frame the task as a simpler comparative evaluation.
Crowd task We represent each moral lexicon through a word bubble plot, where the 10 most impactful words are depicted inside bubbles scaled by word impact (Figure 2 shows an example). A crowd worker is shown three word bubbles, generated for the same moral element in three domains, D A , D B , and D C . We ask the worker to indicate on a 6-point Likert scale whether D A is more similar to D B or D C based on the shown word bubbles. Appendix B shows a visual example of the task. We fix one domain as D A and choose all possible combinations of the other six domains as D B and D C , leading to (6 * 5)/2 = 15 combinations. We employ each of the seven domains as D A , leading to 105 combinations. We generate these combinations for each of the ten moral elements, resulting in 1050 unique tasks. To account for the subjectivity in the annotation, we ensure that each task is performed by three annotators, pushing the total number of required annotations to 3150. Each annotator performed 20 tasks, resulting in a total of 159 annotators. We included four control tasks in each annotator's assignment. Appendix B provides additional details on the crowd study.
Evaluation To compare the results of Tomea and the crowd annotations, we compute the correlation between m-distances and crowd answers. Since the Shapiro test showed that the crowd answers are not normally distributed, we choose Spearman correlation in which only the rank order matters.
In the crowd task, workers choose domain similarity on a six-point Likert scale. Given a domain triple (D A , D B , D C ), we represent the three choices indicating D A to be more similar to D B than D C as [−2.5, −1.5, −0.5], and D A to be more similar to D C than D B as [0.5, 1.5, 2.5]. For each annotation task, we average the answers received by the three annotators that performed it.
In contrast, Tomea computes scores for a domain pair. To compare Tomea's output with the output of the crowd workers, we transform the results of Tomea into the same triples evaluated in the crowd task. To do so, for a domain triple (D A , D B , D C ) and a moral element M i , we compute:
$$S = \text{m-distance}^{(D_A, D_B)}_{M_i} - \text{m-distance}^{(D_A, D_C)}_{M_i}$$
As m-distances reflect distance between domains, a negative S indicates that D A is more similar to D B than D C and a positive S indicates that D A is more similar to D C than D B . We correlate S and crowd answers for all 1050 annotated combinations.
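A small sketch of this comparison with SciPy, assuming `m_dist[(a, b)][element]` holds the m-distances and `crowd[(a, b, c)][element]` the averaged Likert answers (both data structures are illustrative):

```python
from scipy.stats import spearmanr

def correlate_with_crowd(m_dist, crowd):
    """Spearman correlation between Tomea's S scores and averaged crowd answers."""
    tomea_scores, crowd_scores = [], []
    for (d_a, d_b, d_c), answers in crowd.items():
        for element, avg_answer in answers.items():
            s = m_dist[(d_a, d_b)][element] - m_dist[(d_a, d_c)][element]
            tomea_scores.append(s)        # negative: A closer to B; positive: A closer to C
            crowd_scores.append(avg_answer)
    rho, p_value = spearmanr(tomea_scores, crowd_scores)
    return rho, p_value
```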
Out-of-Domain Performance
The d-distances computed by Tomea indicate the similarity between two domains. The more similar the two domains are, the better we expect the out-of-domain performance to be. That is, if domains D A and D B are similar, we expect a model trained on D A to have good classification performance on D B , and vice versa. Thus, we evaluate the d-distances by correlating them to the out-of-domain performances of the models, computed by evaluating each model on the remaining six domains.
Results and Discussion
First, we describe the pairwise comparisons resulting from Tomea. Then, we describe the results from the evaluations. Finally, we perform a qualitative analysis to provide fine-grained insights.
Cross-Domain Comparisons
For each moral element we perform pairwise comparisons across the seven domains, resulting in 21 m-distances per element. We aggregate the moral lexicons obtained for the ten moral elements to attain seven domain lexicons. We perform pairwise comparisons across the seven domain lexicons to obtain 21 d-distances, which we display in Figure 3. First, we observe that the d-distances have a small magnitude and variation. This is due to the normalization in Equation 1 (the length of the shared vocabulary, n, is in the order of thousands).
Second, we intuitively expect the moral rhetoric in the domains ALM and BLM to be relatively similar compared to other domain pairs involving ALM or BLM. The d-distances support this intuition.
Third, the BLT and DAV domains have the largest overall distances from the other domains. This can be explained by their label distribution (Table 2), which leads to poor accuracy in predicting moral elements (Liscio et al., 2022a;Huang et al., 2022). As these two domains contain fewer tweets labeled with moral elements, the moral lexicons inferred in these domains are of low quality. This may explain why BLM and BLT, both domains involving protests, do not have a low d-distance.
Finally, we caution that the d-distances in Table 3 are aggregated across moral elements. Although the d-distances provide some intuition, the underlying m-distances provide more fine-grained information (Section 5.4 and Appendix C).
Crowd Evaluation
Recall that the crowd evaluation consisted of 1050 domain triples and each triple was annotated by three annotators. The resulting Intra-Class Correlation (ICC) between the annotators, an inter-rater reliability (IRR) metric for ordinal data, was 0.66, which can be considered good but not excellent (Hallgren, 2012). This shows that crowd workers did not annotate randomly, but can interpret the moral elements differently. Such subjectivity is inevitable when annotating constructs such as morality (Hoover et al., 2020;Liscio et al., 2022b).
We compute the Spearman's rank correlation (ρ) between the crowd annotations and the m-distances as described in Section 4.4.1; Table 4 shows the resulting correlations. We make two observations. First, despite the subjectivity and complexity in comparing moral lexicons, Tomea's results are positively and moderately correlated with human judgment. This shows that Tomea can quantify the differences in how moral elements are represented across domains.
Second, although the agreement between Tomea and humans is consistent across domains, there are large variations across moral elements-spanning strong (e.g., fairness), weak (e.g., authority), and negligible (e.g., purity) correlations. Although the lack of annotations for some moral elements in the corpus has likely influenced these results, such variations cannot be solely explained by the label imbalance. In fact, there is only a weak correlation (ρ = 0.24) between the average number of annotations of a moral element across domains (Table 2) and the results in Table 4b. Thus, we conjecture that other factors influence these variations. On the one hand, some moral elements could be more difficult to identify in text than others (Araque et al., 2020; Kennedy et al., 2021). On the other hand, a strong correlation for a moral element could suggest clear differences in representing that element across domains, which both humans and Tomea recognize. Instead, a weak correlation indicates that the agreement between Tomea and humans is almost random, which could suggest that the differences across domains are small or hard to identify.
Out-of-Domain Performance
To compare the domain lexicons, we compare the d-distances to the out-of-domain performance of the models (Section 4.4.2). We notice that no single domain stands out as the best source for all targets. Thus, the choice of the source domain influences a model's out-ofdomain performance in a target domain. Hence, we investigate whether the distances Tomea computes are indicative of the out-of-domain performances.
We find a strong negative correlation (ρ = −0.79) between the d-distances in Table 3 and the out-of-domain F 1 -scores in Table 5. Thus, the smaller the d-distance between domains, the higher the out-of-domain performance. This demonstrates that Tomea can provide valuable insights on the out-of-domain performance of a model. To scrutinize this result further, we group the correlations by domain in Table 6. There is a moderate to strong negative correlation in all domains except BLT and DAV. We believe that these exceptions are because of the label imbalance and poor model performance in these two domains mentioned in Section 5.1.
Qualitative Analysis
In addition to quantitative analyses, Tomea enables deep qualitative analyses of the moral expression across domains. In this section, we show examples of (1) words that have high impact on the same moral element across domains, (2) words that have largely different impact on the same moral element across domains, and (3) words that have relatively high impact on two different moral elements in two different domains. Then, we show an example procedure for analyzing the differences between two domains. All lexicon values indicated in these analyses are normalized using the z-score.
First, Tomea can detect words that have a high impact on a moral element across domains. For example, the word 'equality' has high impact on fairness in both ALM (21.9) and BLM (27.7) domains; similarly, the word 'fraudulent' has high impact on cheating in both domains (22.6 for ALM and 16.0 for BLM). Such consistencies with a large number of words shared between the domains show a consistent moral rhetoric across the domains.
Second, Tomea can detect words whose impact on a moral element largely varies across domains. This information offers a qualitative perspective on the domain dependency of moral elements. For example, ALM and BLM are two of the most similar domains (Table 3). Yet, Tomea indicates that the word 'treason' has a relatively low impact on the moral element of betrayal in ALM (2.6) but a considerably higher impact in BLM (24.6); similarly, the word 'brotherhood' has a high impact on purity in ALM (26.9) but a comparably lower impact in BLM (8.3). Another interesting comparison can be found between the SND and BLT domains, where the word 'embarrassing' has negligible impact on degradation in SND (-0.1) but a high impact in BLT (27.2). These differences can be explained by anecdotal knowledge-that is, the word 'embarrassing' is not relevant for degradation in the Hurricane Sandy relief domain, but it is more relevant in the domain of the Baltimore protests.
Third, Tomea can indicate how a word's impact can vary across moral elements, depending on the domain. For example, the word 'crook' has comparable impacts on cheating in the ELE domain (3.1) and on degradation in the MT domain (3.9); similarly, the word 'looting' has a significant impact on harm in ALM (3.5) and on cheating in ELE (6.4). These examples demonstrate why domain is crucial in interpreting the moral meaning of a word.
Finally, Tomea facilitates fine-grained comparisons among specific domains of interest. Take ALM and BLM, two very similar domains according to Table 3, for instance. Generally, the m-distances of the moral elements are low for these two domains, as shown in Table 7. However, the m-distances for authority and subversion are relatively higher than others. We can inspect this further using the moral lexicons generated by Tomea. For example, in subversion, words such as 'overthrow' and 'mayhem' have a high impact in ALM, whereas words such as 'encourage' and 'defiance' have a high impact in BLM. This is in line with our intuition that subversion has different connotations in the two domains: whereas subversion is negative in ALM, it is instead encouraged in BLM. The analyses above are not meant to be exhaustive. We pick examples of moral elements, domains, and words to demonstrate the fine-grained analyses Tomea can facilitate. Our observations, considering that we only analyzed a few examples, may not be significant in themselves. Further, these observations may change with more (or other) data.
Conclusions and Directions
Tomea is a novel method for comparing a text classifier's representation of morality across domains. Tomea offers quantitative measures of similarity in moral rhetoric across moral elements and domains. Further, being an interpretable method, Tomea supports a fine-grained exploration of moral lexicons. Tomea is generalizable over a variety of classification models, domains, and moral constructs.
The similarities computed by Tomea positively correlate with human annotations as well as the out-of-domain performance of morality prediction models. Importantly, Tomea can shed light on how domain-specific language conveys morality, e.g., the word 'brotherhood' has a high impact on moral elements in the ALM domain, whereas the word 'treason' has a high impact in the BLM domain.
Tomea can be a valuable tool for researchers and practitioners. It can be used to study how a text classifier represents moral rhetoric across personal, situational, and temporal dimensions, and across different types of moral values (Pommeranz et al., 2012;Liscio et al., 2022b). Tomea can support societal applications such as modeling stakeholders' preferences on societal issues (Mouter et al., 2021;Siebert et al., 2022;Liscio et al., 2023), analyzing the impact of events like the COVID-19 pandemic (van de Poel et al., 2022), and predicting violent protests (Mooijman et al., 2018). Finally, Tomea can assist NLP researchers in generating morally aligned text (Ammanabrolu et al., 2022;Bakker et al., 2022) that is domain specific.
A key direction to improve Tomea is incorporating refined explanations, e.g., by rule-based inferences (Zhou et al., 2022). Additional distance metrics and normalization procedures may also provide a more accurate lexicon comparison. Finally, the qualitative analysis that we performed could be systematized as a methodology for analysts.
Ethical Considerations and Limitations
There is a growing interest in investigating human morality in text (Russell et al., 2015;Gabriel, 2020). However, like most technologies, morality classification can be misused, especially targeting sensitive features including ethnicity and political orientation (Kalimeri et al., 2019a;Talat et al., 2022). For instance, authorities in non-liberal countries could use Tomea to identify repressed minorities by detecting moral language that diverges from the expected moral rhetoric. Ongoing research is investigating such issues, e.g., by creating methods that mitigate bias and unfairness by design (Dinan et al., 2020;Vargas and Cotterell, 2020).
We discuss three main limitations of our analyses related to the corpus we use (MFTC). First, MFTC is composed of English tweets, and we employ a version of BERT that was pre-trained on large-scale English data. Our experiments show that Tomea produces insightful results under these conditions. However, the performance of Tomea with models pre-trained on smaller datasets, e.g., datasets for morphologically richer languages, remains to be investigated. Further, the scalability of Tomea to longer text formats (e.g., news articles) and different mediums of communication (e.g., surveys) is yet to be explored.
Second, the tweets in the MFTC were collected using the Twitter API, which only yields public posts. Thus, following Twitter's Terms of Service, deleted content will not be available (limiting the reproducibility of any Twitter-based study). Further, the demographic and cultural distribution of Twitter users may not be representative of the general population. In addition, we required the crowd workers involved in the evaluation to be fluent in English, and their demographic distribution (Appendix B.3) is skewed towards Europe. These factors could possibly lead to the perpetuation of Western values and biases (Mehrabi et al., 2021) in our analyses. Additional experiments are needed to investigate whether Tomea would produce insightful results when applied on a dataset collected on a more extensive slice of the population, with a broader set of linguistic expressions.
Third, the MFTC is focused on US-centric topics. However, when recruiting annotators for our crowd evaluation, we did not require familiarity with such topics. Even though the annotators were not exposed to the original tweets but to a processed version of the dataset (i.e., the output of Tomea, see Section 4.4.1), the potential lack of familiarity may have influenced the evaluation results.
Finally, we remind that Tomea's d-distances measure how (dis-)similar two domains are, and are thus not a (binary) judgment of (dis-)similarity. Further, two corpora collected in the same domain (e.g., two datasets on BLM protests) will likely not have a d-distance of 0. It is left to the user to judge the similarity of the two corpora, supported by Tomea's quantitative and qualitative metrics.
A Experimental Details
We provide here all the information needed for reproducing our experimental results. Code and the complete set of results are provided as supplemental material. The models cannot be shared due to the upload size limit and thus will be shared at publication.
A.1 Data Preprocessing
We preprocess the tweets by removing URLs, emails, usernames and mentions. Next, we employ the Ekphrasis package 3 to correct common spelling mistakes and unpack contractions. Finally, emojis are transformed into their respective words using the Python Emoji package 4 .
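A rough equivalent of this cleanup with regular expressions and the emoji package is sketched below; the ekphrasis spell-correction and contraction-unpacking step is omitted here, since the exact options the authors used are not specified:

```python
import re

import emoji

URL_RE = re.compile(r"https?://\S+|www\.\S+")
EMAIL_RE = re.compile(r"\S+@\S+")
MENTION_RE = re.compile(r"@\w+")

def preprocess(tweet: str) -> str:
    """Remove URLs, emails, and mentions; turn emojis into their word descriptions."""
    tweet = URL_RE.sub("", tweet)
    tweet = EMAIL_RE.sub("", tweet)
    tweet = MENTION_RE.sub("", tweet)
    tweet = emoji.demojize(tweet, delimiters=(" ", " ")).replace("_", " ")
    return " ".join(tweet.split())
```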
A.2 Hyperparameters
To select the hyperparameters, we trained and evaluated the model on the entire MFTC corpus with 10-fold cross-validation. Table A1 shows the hyperparameters that were compared in this setting, highlighting in bold the best performing option that we then used in the experiments described in the paper. If a parameter is not present in the table, the default value supplied by the framework was used.
Hyperparameters Options
Model name bert-base-
A.3 Model Training
As introduced in Section 4.2, we trained seven models on the seven domains of the MFTC, respectively. Each model was first trained on the remaining six domains, and then training continued on the domain under analysis. The training on the seventh domain was performed on 90% of the domain, leaving 10% out for evaluation. Table A2 shows the performances of the models on the domain portions left out for evaluation.

Table A2: Model performance (macro F1-score).
A.4 Computing Infrastructure
The following are the main libraries and computing environment used in our experiments. We spent 7 GPU hours to train the seven models used in the experiments. We spent 70 CPU hours to generate the moral lexicons.
A.5 Random Seeds
In our experiments, to control for randomness, we fixed the random seeds in the following libraries (a combined sketch follows the list):
• Python (random.seed)
• NumPy (numpy.random.seed)
• PyTorch (torch.manual_seed)
• CUDA (torch.cuda. manual_seed_all)
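Combined, this amounts to a helper along these lines (the seed value is a placeholder):

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix all sources of randomness used in the experiments."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```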
A.6 Artifacts Usage
We have mainly used three artifacts in this research: the MFTC (Hoover et al., 2020), SHAP (Lundberg and Lee, 2017), and BERT (Devlin et al., 2019). The MFTC was collected with the intent of facilitating NLP research on morality. It can be downloaded 5 and used under the Creative Commons Attribution 4.0 license.
SHAP was intended to explain the output of any machine learning model. Thus, we are using it as originally intended, under its MIT license 6 .
BERT was created with the intent of performing, among others, text classification. Thus, we are using it as originally intended, under its Apache 2.0 distribution license 7 .
B Crowd Evaluation
Section 4.4.1 introduces the crowd experiment. We first opened a pilot annotation job on Prolific for nine users with an expected completion time of 25 minutes. The average completion time was 21 minutes and the average ICC 0.61. These results encouraged us to proceed with the rest of the experiment. Ultimately, the average time spent by a crowd worker on a job was 22 minutes (± 12 minutes SD). Each worker was paid £3.75 (at the rate of £9/h as per Prolific suggestion of fair retribution).
B.1 Annotation Job Layout
Upon taking the annotation job on Prolific, workers were redirected to a web application hosted on our servers. Here, after accepting the informed consent form, they were asked demographic questions and then were given a brief introduction to the annotation tasks and the moral elements involved. Informed consent form, instructions, and all word bubbles are provided as supplemental material. Figure B2 shows an example of an annotation task. In each individual task, annotators needed to indicate whether the word bubble describing domain D A was more similar to the one describing domain D B or D C . The annotators were given the following six options on a Likert scale:
1. A is clearly more similar to B (than to C)
2. A is more similar to B (than to C)
3. A is slightly more similar to B (than to C)
4. A is slightly more similar to C (than to B)
5. A is more similar to C (than to B)
6. A is clearly more similar to C (than to B)
After the initial instructions, each annotator was guided through four sections. Each section contained five tasks where all word bubbles were generated for the same moral element (but multiple different domains), plus one control task (as described in Section B.2). Before each section, the annotator was introduced to the moral element concerned in the following section. Thus, each annotator was introduced to four different moral elements. These elements were chosen from two different moral foundations, for a total of two moral foundations per annotator. For instance, one annotation job could be composed of four annotation sections corresponding to the moral elements of care, harm, authority, and subversion, resulting in 24 annotation tasks (including four control tasks).
B.2 Quality Control
The crowd workers were required to be fluent in English and have submitted at least 100 Prolific jobs with at least 95% acceptance rate. We included four control tasks, one per section. In each, the word bubbles describing D A and D B were identical, and different from the word bubble describing D C .
A total of 186 workers completed the job. Using the Likert options enumeration introduced in Section B.1, we included a worker's job in our analysis only if (1) all four control tasks were answered with options 1, 2, or 3; and (2) at least two control tasks were answered with options 1 or 2. These criteria were set before any analysis of crowd work was done. Of the 186 workers, 159 satisfied the criteria above.
B.3 User demographics
Upon giving informed consent, workers were asked the following demographic information:
• What is your age?
• What gender do you identify as?
• Where is your home located?
• What is the highest degree or level of education you have completed?

Figure B1 shows the demographics of the 159 users whose submissions were considered in the study.

Figure B2: Example annotation task. The annotator reads "The following word bubbles describe the moral concept of care. Please indicate whether the word bubble A is more similar to the word bubble B or C. Please make sure to read all the words in the bubbles." and then takes a choice on a 6-point Likert scale, ranging from "A is clearly more similar to B (than to C)" to "A is clearly more similar to C (than to B)", based on the shown word bubbles.
C Extended Results
C.1 m-distances
In Table 3 we show the d-distances describing the distance between domains. In Tables C1a to C1j we display the m-distances describing the distance between domains for each moral element. For readability, we show the scores multiplied by 100. The most apparent observation is that moral expression similarity is not consistent across domains, but rather depends on the moral element under analysis. In Section 5.4 we provide examples of how to explore such fine-grained differences across domains. Beyond the cases explored there, another insightful example involves two domains that rank as comparatively distant overall, ALM and SND. Nevertheless, the two domains rank as relatively similar on the care element. Let us inspect closely the moral lexicons generated for care for ALM and SND. At first, we notice some differences, such as the words 'rescue' and 'donation' that are specific to the SND domain, being especially relevant in a hurricane relief domain. However, we also notice many similarities, such as the words 'protect' and 'compassion', typical for describing in-group care.
C.2 Correlation by Domain and Element
Table C2 shows the Spearman correlation (ρ) by moral element and domain. We notice that ρ is generally consistent across moral elements: for instance, the elements of fairness and betrayal have the highest ρ, while purity has the lowest. However, there are some exceptions. SND has a comparatively low ρ for harm, and MT for subversion, despite having a large number of annotations (Table 2). A possible reason is that the expression of these elements in these domains is less domain-specific than in other domains, leading to lower ρ with crowd intuition. Instead, DAV has a high ρ for harm and betrayal. This can be explained by the nature of the domain (hate speech), which would lead to highly specific lexicons for these elements.
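As an illustration of how such a correlation can be computed, the sketch below applies SciPy's Spearman correlation to hypothetical vectors of m-distances and crowd-derived similarity judgments; the numbers are invented for the example and do not correspond to the reported results.

```python
# Hypothetical example: correlate m-distances with crowd similarity judgments.
from scipy.stats import spearmanr

# One value per compared domain pair for a given moral element (invented numbers).
m_distances = [1.43, 1.62, 2.28, 1.51, 1.66]
crowd_scores = [2.1, 2.6, 4.8, 2.3, 3.0]  # e.g., aggregated Likert-based dissimilarity

rho, p_value = spearmanr(m_distances, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```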
C.3 Qualitative Analysis
In Section 5.4 we suggest methods for qualitatively comparing moral rhetoric across domains. In particular, we show similarities and differences between two domains, ALM and BLM. These are among the most similar domains for the moral elements of fairness (Table C1c) and cheating (Table C1d). For both domains, the words 'equality' and 'fraud' are among the most impactful words for the two elements, respectively. In Table C3 we show examples of tweets where these words are used, in order to provide additional context on their usage. On the other hand, ALM and BLM differ in the moral element of subversion (Table C1h). Here, words such as 'overthrow' and 'mayhem' have high impact in ALM, whereas words such as 'encourage' and 'defiance' have high impact in BLM. In Table C4 we show examples of tweets where these words are used, in order to provide additional context on their usage.
A2. Did you discuss any potential risks of your work?
Section 7
A3. Do the abstract and introduction summarize the paper's main claims?
Abstract and Section 1
A4. Have you used AI writing assistants when working on this paper?
Left blank.
B Did you use or create scientific artifacts?
Section 4 and Appendix A
B1. Did you cite the creators of artifacts you used?
Sections 2, 3, 4, Appendix A
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Appendix A6
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Appendix A6
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The data was collected by Hoover et al. (2020), see Section 4.1. In their paper they discuss the anonymization and filtering process. We further process the tweets by removing URLs, emails, usernames and mentions, as described in Appendix A1.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Details of the artifacts we use are provided by the original authors of MFTC (Hoover et al., 2020) and BERT (Devlin et al., 2019).
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.2 and Appendix A3
(The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.)
C Did you run computational experiments?
Section 4 and Appendix A
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Appendix A2 and A4
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Appendix A2
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A3
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Section 3.2 and Appendix A
D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Section 4, 5, Appendix B
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Appendix B1 and supplemental material (data)
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
Section 4.4.1 and Appendix B
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Appendix B and supplemental material (data)
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Section 4.4.1 and supplemental material (data)
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Appendix B3
Figure 1: Tomea takes as input two (dataset, model) pairs (where the datasets are collected in different domains) and returns the distance in moral expressions across moral elements and domains.

Figure 2: Word bubble plot used in the crowd evaluation for the moral element betrayal in the BLT domain.

     ALM    BLT    BLM    DAV    ELE    MT     SND
ρ   -1.0   0.43  -0.89   0.31  -0.71  -0.83  -0.54

Figure B1: Demographics.
Table 1: The moral elements (virtue/vice) of MFT.

Element       ALM   BLT   BLM   DAV   ELE   MT    SND
Care           456   171   321     9   398   206   992
Harm           735   244  1037   138   588   433   793
Fairness       515   133   522     4   560   391   179
Cheating       505   519   876    62   620   685   459
Loyalty        244   373   523    41   207   322   415
Betrayal        40   621   169    41   128   366   146
Authority      244    17   276    20   169   415   443
Subversion      91   257   303     7   165   874   451
Purity          81    40   108     5   409   173    56
Degradation    122    28   186    67   138   941    91
Nonmoral      1744  3826  1583  4509  2501  1565  1313
Total         4424  5593  5257  5358  4961  4591  4891
MeanIR        11.5  51.3   5.4 344.8   9.6   4.0   6.4

Table 2: Labels distribution per domain of the MFTC.
       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    6.24  4.64  6.84  5.29  5.38  5.55
BLT    6.24   -    6.23  6.09  5.37  5.50  5.56
BLM    4.64  6.23   -    6.27  4.68  5.14  5.25
DAV    6.84  6.09  6.27   -    5.96  6.54  6.80
ELE    5.29  5.37  4.68  5.96   -    4.72  4.62
MT     5.38  5.50  5.14  6.54  4.72   -    4.96
SND    5.55  5.56  5.25  6.80  4.62  4.96   -

Table 3: d-distances with moral rhetoric distance between domains, shown as a 7x7 symmetric matrix. For readability, we show the scores multiplied by 100. Darker color depicts smaller distance.
Table 4 groups the correlations by domains and moral elements. The mean correlation (without any grouping) is 0.4.

(a) Correlation by domain.
Domain    ρ
ALM       0.38
BLT       0.31
BLM       0.43
DAV       0.50
ELE       0.39
MT        0.42
SND       0.31
Average   0.39 ± 0.07

(b) Correlation by element.
Moral Element   ρ
Care            0.34
Harm            0.57
Fairness        0.74
Cheating        0.23
Loyalty         0.52
Betrayal        0.63
Authority       0.20
Subversion      0.51
Purity         -0.05
Degradation     0.35
Average         0.4 ± 0.24

Table 4: Correlation between crowd annotations and m-distances, divided by domain and moral element.
Table 5 shows the out-of-domain macro F1-scores of the models. The rows indicate the domain on which the model was trained, and the columns indicate the domain on which the model was evaluated. For each target domain (i.e., each column) we highlight in bold the source domain that performed best.

Target →   ALM   BLT   BLM   DAV   ELE   MT    SND
Source ↓
ALM         -    48.2  83.7  11.0  68.6  61.9  61.2
BLT        58.5   -    71.6  10.7  56.2  52.2  52.7
BLM        74.0  49.9   -    12.8  75.5  64.3  64.9
DAV        49.3  31.7  64.5   -    37.9  40.4  37.1
ELE        73.9  53.6  87.6  11.9   -    67.0  67.5
MT         71.5  56.2  84.4  11.5  72.9   -    72.3
SND        73.4  51.6  88.0  12.7  72.1  67.7   -

Table 5: Macro F1-scores of models trained on the source domain and evaluated on the target domain.
Table 6: Correlation between Tomea results and out-of-domain performance of the models, divided by domain.

Table 7: The m-distances between ALM and BLM.
Milad Alshomary, Roxanne El Baff, Timon Gurcke, and Henning Wachsmuth. 2022. The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL '22, pages 8782-8797, Dublin, Ireland. Association for Computational Linguistics.
Prithviraj Ammanabrolu, Liwei Jiang, Maarten Sap, Hannaneh Hajishirzi, and Yejin Choi. 2022. Aligning to Social Norms and Values in Interactive Narratives. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL '22, pages 5994-6017, Seattle, USA. Association for Computational Linguistics.
Oscar Araque, Lorenzo Gatti, and Kyriaki Kalimeri. 2020. MoralStrength: Exploiting a moral lexicon and embedding similarity for moral foundations prediction. Knowledge-Based Systems, 191:1-11.
Oscar Araque, Lorenzo Gatti, and Kyriaki Kalimeri. 2022. LibertyMFD: A Lexicon to Assess the Moral Foundation of Liberty. In Proceedings of the 2022 ACM Conference on Information Technology for Social Good, GoodIT '22, pages 154-160, New York, NY, USA. Association for Computing Machinery.
Luigi Asprino, Luana Bulla, Stefano De Giorgis, Aldo Gangemi, Ludovica Marinucci, and Misael Mongiovi. 2022. Uncovering values: Detecting latent moral content from natural language with explainable and non-trained methods. In Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO '22, pages 33-41, Dublin, Ireland and Online. Association for Computational Linguistics.
Mohamed Bahgat, Steven R. Wilson, and Walid Magdy. 2020. Towards Using Word Embedding Vector Space for Better Cohort Analysis. In Proceedings of the International AAAI Conference on Web and Social Media, ICWSM '20, pages 919-923, Atlanta, Georgia. AAAI Press.
Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, and Christopher Summerfield. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. In Advances in Neural Information Processing Systems, NeurIPS '22, pages 38176-38189. Curran Associates, Inc.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.
Johan Brännmark. 2015. Moral disunitarianism. The Philosophical Quarterly, 66(264):481-499.
Jay Carriere, Hareem Shafi, Katelyn Brehon, Kiran Pohar Manhas, Katie Churchill, Chester Ho, and Mahdi Tavakoli. 2021. Case Report: Utilizing AI and NLP to Assist with Healthcare and Rehabilitation During the COVID-19 Pandemic. Frontiers in Artificial Intelligence, 4(2):1-7.
Francisco Charte, Antonio J. Rivera, María J. del Jesus, and Francisco Herrera. 2015. Addressing imbalance in multilabel classification: Measures and random resampling algorithms. Neurocomputing, 163:3-16.
Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL '20, pages 447-459, Suzhou, China.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of the 11th International Conference on Web and Social Media, ICWSM '17, pages 512-515.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '19, pages 4171-4186.
Janis L. Dickinson, Poppy McLeod, Robert Bloomfield, and Shorna Allred. 2016. Which moral foundations predict willingness to make lifestyle changes to avert climate change in the USA? PLoS ONE, 11(10):1-11.
Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, and Adina Williams. 2020. Multi-Dimensional Gender Bias Classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20, pages 314-331.
Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP '21, pages 698-718, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social Chemistry 101: Learning to Reason about Social and Moral Norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20, pages 653-670, Online. Association for Computational Linguistics.
Table A1: Hyperparameters tested and selected.
       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    1.66  1.62  2.28  1.72  1.51  1.43
BLT    1.66   -    1.68  1.13  1.70  1.62  1.53
BLM    1.62  1.68   -    1.28  1.41  1.98  1.80
DAV    2.28  1.13  1.28   -    1.67  1.96  2.26
ELE    1.72  1.70  1.41  1.67   -    1.82  1.64
MT     1.51  1.62  1.98  1.96  1.82   -    1.61
SND    1.43  1.53  1.80  2.26  1.64  1.61   -
(a) m-distances for the care element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    1.45  1.15  2.48  1.26  1.23  1.12
BLT    1.45   -    1.44  1.85  1.34  1.33  1.38
BLM    1.15  1.44   -    2.19  1.17  1.14  1.06
DAV    2.48  1.85  2.19   -    1.69  2.15  2.11
ELE    1.26  1.34  1.17  1.69   -    1.11  1.11
MT     1.23  1.33  1.14  2.15  1.11   -    1.02
SND    1.12  1.38  1.06  2.11  1.11  1.02   -
(b) m-distances for the harm element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    2.17  1.49  2.21  1.65  1.66  1.86
BLT    2.17   -    2.34  2.24  1.96  1.98  2.09
BLM    1.49  2.34   -    2.22  1.67  1.82  1.93
DAV    2.21  2.24  2.22   -    2.14  2.17  2.49
ELE    1.65  1.96  1.67  2.14   -    1.58  1.66
MT     1.66  1.98  1.82  2.17  1.58   -    1.73
SND    1.86  2.09  1.93  2.49  1.66  1.73   -
(c) m-distances for the fairness element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    1.82  1.30  2.06  1.34  1.60  1.62
BLT    1.82   -    1.84  1.79  1.63  1.62  1.75
BLM    1.30  1.84   -    2.09  1.24  1.35  1.44
DAV    2.06  1.79  2.09   -    2.06  1.98  2.31
ELE    1.34  1.63  1.24  2.06   -    1.23  1.35
MT     1.60  1.62  1.35  1.98  1.23   -    1.47
SND    1.62  1.75  1.44  2.31  1.35  1.47   -
(d) m-distances for the cheating element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    1.58  1.54  2.46  1.93  2.01  1.96
BLT    1.58   -    1.82  1.36  1.65  1.91  1.73
BLM    1.54  1.82   -    2.35  1.60  1.55  1.99
DAV    2.46  1.36  2.35   -    2.40  2.40  2.75
ELE    1.93  1.65  1.60  2.40   -    1.30  1.68
MT     2.01  1.91  1.55  2.40  1.30   -    1.59
SND    1.96  1.73  1.99  2.75  1.68  1.59   -
(e) m-distances for the loyalty element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    2.02  1.34  1.75  1.19  1.21  1.13
BLT    2.02   -    1.92  2.04  1.56  1.84  1.73
BLM    1.34  1.92   -    1.69  0.85  1.12  0.90
DAV    1.75  2.04  1.69   -    1.56  1.73  1.61
ELE    1.19  1.56  0.85  1.56   -    1.05  0.87
MT     1.21  1.84  1.12  1.73  1.05   -    0.88
SND    1.13  1.73  0.90  1.61  0.87  0.88   -
(f) m-distances for the betrayal element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    2.18  1.80  2.21  2.02  1.87  2.00
BLT    2.18   -    2.20  2.31  1.67  1.75  1.65
BLM    1.80  2.20   -    1.81  1.80  1.62  1.79
DAV    2.21  2.31  1.81   -    1.61  2.06  1.82
ELE    2.02  1.67  1.80  1.61   -    1.77  1.63
MT     1.87  1.75  1.62  2.06  1.77   -    1.58
SND    2.00  1.65  1.79  1.82  1.63  1.58   -
(g) m-distances for the authority element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    2.10  1.85  2.48  1.84  2.17  2.30
BLT    2.10   -    1.98  2.12  1.87  1.78  1.66
BLM    1.85  1.98   -    2.30  1.61  2.05  2.05
DAV    2.48  2.12  2.30   -    2.11  2.00  2.35
ELE    1.84  1.87  1.61  2.11   -    1.72  1.63
MT     2.17  1.78  2.05  2.00  1.72   -    1.84
SND    2.30  1.66  2.05  2.35  1.63  1.84   -
(h) m-distances for the subversion element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    2.86  1.10  1.85  2.14  1.56  2.44
BLT    2.86   -    2.78  2.29  2.24  1.98  2.40
BLM    1.10  2.78   -    1.75  1.79  1.72  1.94
DAV    1.85  2.29  1.75   -    1.61  1.71  2.00
ELE    2.14  2.24  1.79  1.61   -    1.51  1.67
MT     1.56  1.98  1.72  1.71  1.51   -    1.87
SND    2.44  2.40  1.94  2.00  1.67  1.87   -
(i) m-distances for the purity element.

       ALM   BLT   BLM   DAV   ELE   MT    SND
ALM     -    1.44  1.30  1.65  1.34  1.94  1.03
BLT    1.44   -    1.27  1.77  1.11  1.47  1.40
BLM    1.30  1.27   -    1.89  1.38  1.61  1.21
DAV    1.65  1.77  1.89   -    1.77  2.40  1.44
ELE    1.34  1.11  1.38  1.77   -    1.60  1.09
MT     1.94  1.47  1.61  2.40  1.60   -    1.76
SND    1.03  1.40  1.21  1.44  1.09  1.76   -
(j) m-distances for the degradation element.
Table C1: m-distances for the ten moral elements. Darker color indicates smaller distance between domains.

        Care  Harm  Fairness  Cheating  Loyalty  Betrayal  Authority  Subversion  Purity  Degradation
ALM     0.49  0.53    0.65      0.34      0.49     0.63      0.11       0.47       0.03      0.25
BLT     0.10  0.46    0.73      0.15      0.17     0.59      0.38       0.37      -0.01      0.29
BLM     0.20  0.54    0.66      0.27      0.60     0.67      0.27       0.61       0.16      0.36
DAV     0.43  0.84    0.80      0.18      0.63     0.75      0.39       0.65      -0.26      0.45
ELE     0.41  0.58    0.69      0.43      0.48     0.55     -0.11       0.70      -0.19      0.42
MT      0.36  0.50    0.76      0.24      0.51     0.53      0.25       0.30       0.08      0.44
SND     0.37  0.25    0.73      0.05      0.58     0.69     -0.01       0.47      -0.13      0.21

Table C2: Spearman correlation (ρ) between m-distances and crowd results, divided by domain and moral element. Darker color indicates higher correlation.
Tweet                                                                                                Domain  Label
Equality is key. #AllLivesMatter pray over everyone. Cherish your life cause today you never know   ALM     fairness
Praying for Justice and equality                                                                    BLM     fairness
Of course #AllLivesMatter Shep, you self righteous, dangerously politically correct fraud posing as a fair journalist.   ALM   cheating
Shaun King is/was a fraud and a liar and deserved to be outed as such. #BlackLivesMatter deserves better.   BLM   cheating

Table C3: Examples of tweets with similar moral rhetoric in the ALM and BLM domains.
Tweet                                                                                                Domain  Label
I am a proponent of civil disobedience and logic driven protest only; not non irrational violence, pillage & mayhem!   ALM   subversion
For those who try to confuse acts of defiance with deliberate acts of racist terrorism, we pray     BLM     subversion

Table C4: Examples of tweets with different moral rhetoric in the ALM and BLM domains.

ACL 2023 Responsible NLP Checklist
A For every submission:
A1. Did you describe the limitations of your work?
Section 7
www.prolific.co
https://osf.io/k5n7y/
https://github.com/slundberg/shap/blob/master/LICENSE
https://github.com/google-research/bert/blob/master/LICENSE
Acknowledgments
This research was partially supported by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organization for Scientific Research. Oscar Araque acknowledges the funding by the European Union's Horizon 2020 research and innovation program under grant agreement 962547 (PARTICIPATION).
Approaches to Cross-Domain Sentiment Analysis: A Systematic Literature Review. Tareq Al-Moslmi, Nazlia Omar, Salwani Abdullah, 10.1109/ACCESS.2017.2690342IEEE Access. 5Tareq Al-Moslmi, Nazlia Omar, Salwani Abdullah, and Mohammed Albared. 2017. Approaches to Cross- Domain Sentiment Analysis: A Systematic Litera- ture Review. IEEE Access, 5:16173-16192.
An empirical exploration of moral foundations theory in partisan news sources. Milad Alshomary, Roxanne El Baff, Timon Gurcke, Henning Wachsmuth ; Jordan Carpenter, Lyle Ungar, Daniel Preoţiuc-Pietro, Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC '16. the Tenth International Conference on Language Resources and Evaluation, LREC '16The Moral Debater: A Study on the Computational Generation Dean FulgoniMilad Alshomary, Roxanne El Baff, Timon Gurcke, and Henning Wachsmuth. 2022. The Moral De- bater: A Study on the Computational Generation Dean Fulgoni, Jordan Carpenter, Lyle Ungar, and Daniel Preoţiuc-Pietro. 2016. An empirical explo- ration of moral foundations theory in partisan news sources. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC '16, pages 3730-3736.
Artificial Intelligence, Values, and Alignment. Minds and Machines. Iason Gabriel, 10.1007/s11023-020-09539-230Iason Gabriel. 2020. Artificial Intelligence, Values, and Alignment. Minds and Machines, 30(3):411- 437.
Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism. Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, Peter H Ditto, 10.1016/B978-0-12-407236-7.00002-4Advances in Experimental Social Psychology. Amsterdam, the NetherlandsElsevier47Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P. Wojcik, and Peter H. Ditto. 2013. Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism. In Advances in Experi- mental Social Psychology, volume 47, pages 55-130. Elsevier, Amsterdam, the Netherlands.
Liberals and Conservatives Rely on Different Sets of Moral Foundations. Jesse Graham, Jonathan Haidt, Brian A Nosek, 10.1037/a0015141Journal of Personality and Social Psychology. 965Jesse Graham, Jonathan Haidt, and Brian A. Nosek. 2009. Liberals and Conservatives Rely on Different Sets of Moral Foundations. Journal of Personality and Social Psychology, 96(5):1029-1046.
Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial. Kevin A Hallgren, 10.1080/11035896009449194Tutor Quant Methods Psychol. 81Kevin A. Hallgren. 2012. Computing Inter-Rater Re- liability for Observational Data: An Overview and Tutorial. Tutor Quant Methods Psychol, 8(1):23-34.
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora. William L Hamilton, Kevin Clark, Jure Leskovec, Dan Jurafsky, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP '16. the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP '16Austin, Texas, USAWilliam L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing Domain-Specific Sen- timent Lexicons from Unlabeled Corpora. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, EMNLP '16, pages 595-605, Austin, Texas, USA.
Aligning AI With Shared Human Values. Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, Jacob Steinhardt, Proceedings of the 2021 International Conference on Learning Representations, ICLR '21. the 2021 International Conference on Learning Representations, ICLR '21Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2021. Aligning AI With Shared Human Values. In Proceedings of the 2021 International Conference on Learning Representations, ICLR '21, pages 1- 29.
Persons and situations in the moral domain. L Patrick, Daniel K Hill, Lapsley, 10.1016/j.jrp.2008.12.034Journal of Research in Personality. 432Patrick L. Hill and Daniel K. Lapsley. 2009. Persons and situations in the moral domain. Journal of Re- search in Personality, 43(2):245-246.
Jun Yen Leung, Arineh Mirinjian, and Morteza Dehghani. 2020. Moral Foundations Twitter Corpus: A Collection of 35k Tweets Annotated for Moral Sentiment. Joe Hoover, Gwenyth Portillo-Wightman, Leigh Yeh, Shreya Havaldar, Aida Mostafazadeh Davani, Ying Lin, Brendan Kennedy, Mohammad Atari, Zahra Kamel, Madelyn Mendlen, Gabriela Moreno, Christina Park, Tingyee E Chang, Jenna Chin, Christian Leong, 10.1177/1948550619876629Social Psychological and Personality Science. 118Joe Hoover, Gwenyth Portillo-Wightman, Leigh Yeh, Shreya Havaldar, Aida Mostafazadeh Davani, Ying Lin, Brendan Kennedy, Mohammad Atari, Zahra Kamel, Madelyn Mendlen, Gabriela Moreno, Christina Park, Tingyee E. Chang, Jenna Chin, Christian Leong, Jun Yen Leung, Arineh Mirinjian, and Morteza Dehghani. 2020. Moral Foundations Twitter Corpus: A Collection of 35k Tweets An- notated for Moral Sentiment. Social Psychological and Personality Science, 11(8):1057-1071.
Learning to Adapt Domain Shifts of Moral Values via Instance Weighting. Xiaolei Huang, Alexandra Wormley, Adam Cohen, 10.1145/3511095.3531269Proceedings of the 33rd ACM Conference on Hypertext and Social Media, HT '22. the 33rd ACM Conference on Hypertext and Social Media, HT '22Association for Computing MachineryXiaolei Huang, Alexandra Wormley, and Adam Co- hen. 2022. Learning to Adapt Domain Shifts of Moral Values via Instance Weighting. In Proceed- ings of the 33rd ACM Conference on Hypertext and Social Media, HT '22, pages 121-131. Association for Computing Machinery.
Predicting demographics, moral foundations, and human values from digital behaviours. Kyriaki Kalimeri, Mariano G Beiró, Matteo Delfino, Robert Raleigh, Ciro Cattuto, 10.1016/j.chb.2018.11.024Computers in Human Behavior. 92Kyriaki Kalimeri, Mariano G. Beiró, Matteo Delfino, Robert Raleigh, and Ciro Cattuto. 2019a. Predicting demographics, moral foundations, and human val- ues from digital behaviours. Computers in Human Behavior, 92:428-445.
Human values and attitudes towards vaccination in social media. Kyriaki Kalimeri, Mariano G Beiró, Alessandra Urbinati, Andrea Bonanomi, Alessandro Rosina, Ciro Cattuto, 10.1145/3308560.3316489Companion Proceedings of The 2019 World Wide Web Conference, WWW '19. Kyriaki Kalimeri, Mariano G. Beiró, Alessandra Urbinati, Andrea Bonanomi, Alessandro Rosina, and Ciro Cattuto. 2019b. Human values and atti- tudes towards vaccination in social media. In Com- panion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 248-254.
Brendan Kennedy, Mohammad Atari, Aida Mostafazadeh Davani, Joe Hoover, Ali Omrani, Jesse Graham, Morteza Dehghani, 10.1016/j.cognition.2021.104696Moral Concerns are Differentially Observable in Language. Cognition. 212104696Brendan Kennedy, Mohammad Atari, Aida Mostafazadeh Davani, Joe Hoover, Ali Om- rani, Jesse Graham, and Morteza Dehghani. 2021. Moral Concerns are Differentially Observable in Language. Cognition, 212:104696.
Identifying the Human Values behind Arguments. Johannes Kiesel, Milad Alshomary, Nicolas Handke, Xiaoni Cai, Henning Wachsmuth, Benno Stein, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL '22. the 60th Annual Meeting of the Association for Computational Linguistics, ACL '22Dublin, IrelandAssociation for Computational LinguisticsJohannes Kiesel, Milad Alshomary, Nicolas Handke, Xiaoni Cai, Henning Wachsmuth, and Benno Stein. 2022. Identifying the Human Values behind Argu- ments. In Proceedings of the 60th Annual Meet- ing of the Association for Computational Linguistics, ACL '22, pages 4459-4471, Dublin, Ireland. Asso- ciation for Computational Linguistics.
Does Personalization Help? Predicting How Social Situations Affect Personal Values. Ilir Kola, Ralvi Isufaj, Catholijn M Jonker, 10.3233/FAIA220196HHAI2022: Augmenting Human Intellect. Ilir Kola, Ralvi Isufaj, and Catholijn M. Jonker. 2022. Does Personalization Help? Predicting How Social Situations Affect Personal Values. In HHAI2022: Augmenting Human Intellect, pages 157-169.
Textand author-dependent moral foundations classification. Alex Gwo, Jen Lan, Ivandré Paraboni, 10.1080/13614568.2022.2092655New Review of Hypermedia and Multimedia. 0Alex Gwo Jen Lan and Ivandré Paraboni. 2022. Text- and author-dependent moral foundations classifica- tion. New Review of Hypermedia and Multimedia, 0(0):1-21.
Cross-Domain Classification of Moral Values. Enrico Liscio, Alin E Dondera, Andrei Geadau, Catholijn M Jonker, Pradeep K Murukannaiah, Findings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '22. Seattle, USAAssociation for Computational LinguisticsEnrico Liscio, Alin E. Dondera, Andrei Geadau, Catholijn M. Jonker, and Pradeep K. Murukannaiah. 2022a. Cross-Domain Classification of Moral Val- ues. In Findings of the 2022 Conference of the North American Chapter of the Association for Computa- tional Linguistics, NAACL '22, pages 2727-2745, Seattle, USA. Association for Computational Lin- guistics.
Value inference in sociotechnical systems: Blue sky ideas track. Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, I J Roel, Catholijn M Dobbe, Maite Jonker, Juan A Lopez-Sanchez, Pradeep K Rodriguez-Aguilar, Murukannaiah, Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS '23. the 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS '23London, United KingdomIFAA-MASEnrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I.J. Dobbe, Catholijn M. Jonker, Maite Lopez- Sanchez, Juan A. Rodriguez-Aguilar, and Pradeep K. Murukannaiah. 2023. Value inference in sociotech- nical systems: Blue sky ideas track. In Proceed- ings of the 22nd International Conference on Au- tonomous Agents and Multiagent Systems, AAMAS '23, pages 1-7, London, United Kingdom. IFAA- MAS.
What Values Should an Agent Align With?. Enrico Liscio, Van Der, Luciano C Meer, Catholijn M Siebert, Pradeep K Jonker, Murukannaiah, 10.1007/s10458-022-09550-0Autonomous Agents and Multi-Agent Systems. 362332Enrico Liscio, Michiel van der Meer, Luciano C. Siebert, Catholijn M. Jonker, and Pradeep K. Mu- rukannaiah. 2022b. What Values Should an Agent Align With? Autonomous Agents and Multi-Agent Systems, 36(23):32.
On interpretation of network embedding via taxonomy induction. Ninghao Liu, Xiao Huang, Jundong Li, Xia Hu, 10.1145/3219819.3220001Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18. the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18ACMNinghao Liu, Xiao Huang, Jundong Li, and Xia Hu. 2018. On interpretation of network embedding via taxonomy induction. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '18, pages 1812- 1820. ACM.
UNICORN on RAIN-BOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark. Nicholas Lourie, Le Ronan, Chandra Bras, Yejin Bhagavatula, Choi, Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI '21. the Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI '21Nicholas Lourie, Ronan Le Bras, Chandra Bhagavat- ula, and Yejin Choi. 2021. UNICORN on RAIN- BOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intel- ligence, AAAI '21, pages 13480-13488.
A Unified Approach to Interpreting Model Predictions. M Scott, Su-In Lundberg, Lee, https:/dl.acm.org/doi/pdf/10.5555/3295222.3295230booktitle = Advances in Neural Information Processing Systems, NeurIPS '17. Long Beach, CA, USAScott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In booktitle = Advances in Neural Information Process- ing Systems,, NeurIPS '17, pages 1208-1217, Long Beach, CA, USA.
A Survey on Bias and Fairness in Machine Learning. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan, 10.1145/3457607ACM Computing Surveys. 654Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6).
Automatic construction of domain-specific sentiment lexicon for unsupervised domain adaptation and sentiment classification. Knowledge-Based Systems. 10.1016/j.knosys.2020.106423Omid Mohamad Beigi and Mohammad H. Moattar213106423Omid Mohamad Beigi and Mohammad H. Moattar. 2021. Automatic construction of domain-specific sentiment lexicon for unsupervised domain adapta- tion and sentiment classification. Knowledge-Based Systems, 213:106423.
Moralization in social networks and the emergence of violence during protests. Marlon Mooijman, Joe Hoover, Ying Lin, Ji Heng, Morteza Dehghani, 10.1038/s41562-018-0353-0Nature Human Behaviour. 26Marlon Mooijman, Joe Hoover, Ying Lin, Heng Ji, and Morteza Dehghani. 2018. Moralization in social net- works and the emergence of violence during protests. Nature Human Behaviour, 2(6):389-396.
Public Participation in Crisis Policymaking. How 30,000 Dutch Citizens Advised Their Government on Relaxing COVID-19 Lockdown Measures. Niek Mouter, Jose Ignacio Hernandez, Anatol Valerian Itten, 10.1371/journal.pone.0250614PLoS ONE. 165Niek Mouter, Jose Ignacio Hernandez, and Anatol Va- lerian Itten. 2021. Public Participation in Crisis Policymaking. How 30,000 Dutch Citizens Advised Their Government on Relaxing COVID-19 Lock- down Measures. PLoS ONE, 16(5):1-42.
StereoSet: Measuring stereotypical bias in pretrained language models. Moin Nadeem, Anna Bethke, Siva Reddy, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, ACL '21. the 59th Annual Meeting of the Association for Computational Linguistics, ACL '21Online. Association for Computational LinguisticsMoin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pre- trained language models. In Proceedings of the 59th Annual Meeting of the Association for Com- putational Linguistics, ACL '21, pages 5356-5371, Online. Association for Computational Linguistics.
C Matheus, Pavan, G Vitor, Alex G J Santos, Joao Lan, Wesley Ramos Martins, Caio Santos, Pablo B Deutsch, Fernando C Costa, Ivandre Hsieh, Paraboni, 10.1109/taffc.2020.3034050Morality Classification in Natural Language Text. IEEE Transactions on Affective Computing. 3045Matheus C. Pavan, Vitor G. Santos, Alex G. J. Lan, Joao Martins, Wesley Ramos Santos, Caio Deutsch, Pablo B. Costa, Fernando C. Hsieh, and Ivandre Paraboni. 2020. Morality Classification in Natu- ral Language Text. IEEE Transactions on Affective Computing, 3045(c):1-8.
Elicitation of Situated Values: Need for Tools to Help Stakeholders and Designers to Reflect and Communicate. Alina Pommeranz, Christian Detweiler, Pascal Wiggers, Catholijn M Jonker, 10.1007/s10676-011-9282-6Ethics and Information Technology. 144Alina Pommeranz, Christian Detweiler, Pascal Wig- gers, and Catholijn M. Jonker. 2012. Elicitation of Situated Values: Need for Tools to Help Stake- holders and Designers to Reflect and Communicate. Ethics and Information Technology, 14(4):285-303.
Development and Validation of the Personal Values Dictionary: A Theory-Driven Tool for Investigating References to Basic Human Values in Text. Vladimir Ponizovskiy, Murat Ardag, Lusine Grigoryan, Ryan Boyd, Henrik Dobewall, Peter Holtz, 10.1002/per.2294European Journal of Personality. 345Vladimir Ponizovskiy, Murat Ardag, Lusine Grigoryan, Ryan Boyd, Henrik Dobewall, and Peter Holtz. 2020. Development and Validation of the Personal Values Dictionary: A Theory-Driven Tool for Investigating References to Basic Human Values in Text. Euro- pean Journal of Personality, 34(5):885-902.
Deconfounded Lexicon Induction for Interpretable Social Science. Reid Pryzant, Kelly Shen, Dan Jurafsky, Stefan Wager, 10.18653/v1/n18-1146Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '18. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '18New Orleans, Louisiana, USAReid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wager. 2018. Deconfounded Lexicon Induction for Interpretable Social Science. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguis- tics, NAACL '18, pages 1615-1625, New Orleans, Louisiana, USA.
Val-ueNet: A New Dataset for Human Value Driven Dialogue System. Liang Qiu, Yizhou Zhao, Jinchao Li, Pan Lu, Baolin Peng, Jianfeng Gao, Song-Chun Zhu, 10.1609/aaai.v36i10.21368Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI '22. the 36th AAAI Conference on Artificial Intelligence, AAAI '22Liang Qiu, Yizhou Zhao, Jinchao Li, Pan Lu, Baolin Peng, Jianfeng Gao, and Song-Chun Zhu. 2022. Val- ueNet: A New Dataset for Human Value Driven Dia- logue System. In Proceedings of the 36th AAAI Con- ference on Artificial Intelligence, AAAI '22, pages 11183-11191.
Why Should I Trust You?": Explaining the Predictions of Any Classifier. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, 10.1145/2939672.2939778Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16. the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Ex- plaining the Predictions of Any Classifier. In Pro- ceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, KDD '16, pages 1135-1144.
. J Stuart, Daniel Russell, Max Dewey, Tegmark, 10.1609/aimag.v36i4.2577Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine. 364Stuart J. Russell, Daniel Dewey, and Max Tegmark. 2015. Research Priorities for Robust and Benefi- cial Artificial Intelligence. AI Magazine, 36(4):105- 114.
The Importance of Context in Moral Judgments. Chelsea Schein, 10.1177/1745691620904083Perspectives on Psychological Science. 152Chelsea Schein. 2020. The Importance of Context in Moral Judgments. Perspectives on Psychological Science, 15(2):207-215.
An Overview of the Schwartz Theory of Basic Values. Shalom H Schwartz, 10.9707/2307-0919.1116Online readings in Psychology and Culture. 2Shalom H. Schwartz. 2012. An Overview of the Schwartz Theory of Basic Values. Online readings in Psychology and Culture, 2(1):1-20.
C Luciano, Enrico Siebert, Pradeep K Liscio, Lionel Murukannaiah, Shannon L Kaptein, Spruit, 10.3233/FAIA220193Jeroen van den Hoven, and Catholijn M. Jonker. 2022. Estimating Value Preferences in a Hybrid Participatory System. Amsterdam, the NetherlandsIOS PressHHAI2022: Augmenting Human IntellectLuciano C. Siebert, Enrico Liscio, Pradeep K. Mu- rukannaiah, Lionel Kaptein, Shannon L. Spruit, Jeroen van den Hoven, and Catholijn M. Jonker. 2022. Estimating Value Preferences in a Hybrid Participatory System. In HHAI2022: Augmenting Human Intellect, pages 114-127, Amsterdam, the Netherlands. IOS Press.
On the Machine Learning of Ethical Judgments from Natural Language. Zeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, Adina Williams, 10.18653/v1/2022.naacl-main.56Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL '22. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL '22Seattle, USAZeerak Talat, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. 2022. On the Machine Learning of Ethical Judgments from Natural Language. In Proceedings of the 2022 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, NAACL '22, pages 769-779, Seattle, USA.
Mario Triola, Elementary Statistics. 13th edition. PearsonsMario Triola. 2017. Elementary Statistics, 13th edition. Pearsons.
COVID-19 and Changing Values. Tristan Ibo Van De Poel, Dyami De Wildt, Van Kooten, Pássaro, https:/link.springer.com/chapter/10.1007/978-3-031-08424-9_2Values for a Post-Pandemic Future. Springer International PublishingIbo van de Poel, Tristan de Wildt, and Dyami van Kooten Pássaro. 2022. COVID-19 and Changing Values. In Values for a Post-Pandemic Future, pages 23-58. Springer International Publishing.
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation. Francisco Vargas, Ryan Cotterell, 10.18653/v1/2020.emnlp-main.232Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20. the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20Francisco Vargas and Ryan Cotterell. 2020. Exploring the Linear Subspace Hypothesis in Gender Bias Mit- igation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP '20, pages 2902-2913.
Covfee: an extensible web framework for 14124 continuous-time annotation of human behavior. Jose Vargas Quiros, Stephanie Tan, Chirag Raman, Laura Cabrera-Quiros, Hayley Hung, PMLRUnderstanding Social Behavior in Dyadic and Small Group Interactions, Proceedings of Machine Learning Research. Jose Vargas Quiros, Stephanie Tan, Chirag Ra- man, Laura Cabrera-Quiros, and Hayley Hung. 2022. Covfee: an extensible web framework for 14124 continuous-time annotation of human behavior. In Understanding Social Behavior in Dyadic and Small Group Interactions, Proceedings of Machine Learn- ing Research, pages 265-293. PMLR.
Desiderata for delivering NLP to accelerate healthcare AI advancement and a Mayo Clinic NLP-as-a-service implementation. Andrew Wen, Sunyang Fu, Sungrim Moon, Mohamed El Wazir, Andrew Rosenbaum, Vinod C Kaggal, Sijia Liu, Sunghwan Sohn, Hongfang Liu, Jungwei Fan, 10.1038/s41746-019-0208-8npj Digital Medicine. 2130Andrew Wen, Sunyang Fu, Sungrim Moon, Mohamed El Wazir, Andrew Rosenbaum, Vinod C. Kaggal, Si- jia Liu, Sunghwan Sohn, Hongfang Liu, and Jung- wei Fan. 2019. Desiderata for delivering NLP to accelerate healthcare AI advancement and a Mayo Clinic NLP-as-a-service implementation. npj Digi- tal Medicine, 2(130):1-7.
A Survey of Unsupervised Deep Domain Adaptation. Garrett Wilson, Diane J Cook, 10.1145/3400066ACM Transactions on Intelligent Systems and Technology. 511Garrett Wilson and Diane J. Cook. 2020. A Survey of Unsupervised Deep Domain Adaptation. ACM Transactions on Intelligent Systems and Technology, 11(5).
Building and Validating Hierarchical Lexicons with a Case Study on Personal Values. R Steven, Yiting Wilson, Rada Shen, Mihalcea, 10.1007/978-3-030-01129-1{_}28Proceedings of the 10th International Conference on Social Informatics, SocInfo '18. the 10th International Conference on Social Informatics, SocInfo '18RussiaSpringerSt. PetersburgSteven R. Wilson, Yiting Shen, and Rada Mihalcea. 2018. Building and Validating Hierarchical Lexi- cons with a Case Study on Personal Values. In Pro- ceedings of the 10th International Conference on So- cial Informatics, SocInfo '18, pages 455-470, St. Pe- tersburg, Russia. Springer.
Sentiment domain adaptation with multiple sources. Fangzhao Wu, Yongfeng Huang, 10.18653/v1/p16-1029Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL '16. the 54th Annual Meeting of the Association for Computational Linguistics, ACL '16Berlin, GermanyAssociation for Computational LinguisticsFangzhao Wu and Yongfeng Huang. 2016. Sentiment domain adaptation with multiple sources. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics, ACL '16, pages 301-310, Berlin, Germany. Association for Compu- tational Linguistics.
Exsum: From local explanations to model understanding. Yilun Zhou, Marco Tulio Ribeiro, Julie Shah, 10.18653/v1/2022.naacl-main.392Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL '22. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL '22Seattle, USAAssociation for Computational LinguisticsYilun Zhou, Marco Tulio Ribeiro, and Julie Shah. 2022. Exsum: From local explanations to model under- standing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL '22, pages 5359-5378, Seattle, USA. Association for Computational Linguistics. |
202,767,521 | Who Is Speaking to Whom? Learning to Identify Utterance Addressee in Multi-Party Conversations | Previous research on dialogue systems generally focuses on the conversation between two participants, yet multi-party conversations which involve more than two participants within one session bring up a more complicated but realistic scenario. In real multiparty conversations, we can observe who is speaking, but the addressee information is not always explicit. In this paper, we aim to tackle the challenge of identifying all the missing addressees in a conversation session. To this end, we introduce a novel who-to-whom (W2W) model which models users and utterances in the session jointly in an interactive way. We conduct experiments on the benchmark Ubuntu Multi-Party Conversation Corpus and the experimental results demonstrate that our model outperforms baselines with consistent improvements. | [
30215041,
9464771,
7356547,
5590763,
16735788,
2867243,
196186961,
16537814,
7287895,
1957433,
2955580
] | Who Is Speaking to Whom? Learning to Identify Utterance Addressee in Multi-Party Conversations
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 3-7, 2019. 2019
Ran Le
Wenpeng Hu [email protected]
School of Mathematical Sciences
Peking University
BeijingChina
Mingyue Shang [email protected]
Center for Data Science
AAIS
Peking University
BeijingChina
Zhenjun You [email protected]
School of Mathematical Sciences
Peking University
BeijingChina
Lidong Bing [email protected]
Machine Intelligence Technology
Alibaba DAMO Academy
R&D Center Singapore
Dongyan Zhao [email protected]
Rui Yan [email protected]
Wangxuan Institute of Computer Technology
Peking University
BeijingChina
Who Is Speaking to Whom? Learning to Identify Utterance Addressee in Multi-Party Conversations
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. November 3-7, 2019. 1909
Previous research on dialogue systems generally focuses on the conversation between two participants, yet multi-party conversations which involve more than two participants within one session bring up a more complicated but realistic scenario. In real multiparty conversations, we can observe who is speaking, but the addressee information is not always explicit. In this paper, we aim to tackle the challenge of identifying all the missing addressees in a conversation session. To this end, we introduce a novel who-to-whom (W2W) model which models users and utterances in the session jointly in an interactive way. We conduct experiments on the benchmark Ubuntu Multi-Party Conversation Corpus and the experimental results demonstrate that our model outperforms baselines with consistent improvements.
Introduction
As an essential aspect of artificial intelligence, dialogue systems have attracted extensive attention in recent studies (Vinyals and Le, 2015; Serban et al., 2016). Researchers have paid great efforts to understand conversations between two participants, either single-turn (Li et al., 2016a; Shang et al., 2015; Vinyals and Le, 2015) or multi-turn (Zhou et al., 2016; Tao et al., 2019a,b), and achieved encouraging results. A more general and challenging scenario is that a conversation may involve more than two interlocutors conversing among each other (Uthus and Aha, 2013; Hu et al., 2019), which is known as multi-party conversation. The Ubuntu Internet Relay Chat channel (IRC) is a multi-party conversation scenario, as shown in Table 1. Generally, each utterance is associated with a speaker and one or more addressees in the conversation.
* Equal contribution. † Corresponding author.
[Table 1: example Ubuntu IRC multi-party conversation with speakers, addressees, and utterances such as "Good point, tmux is the thing I miss.", "Cool thanks for ur help." @User 4, "Ahha, you r using something like cpanel.", "Yeah 1.4.0 exactly." @User 2, and "my pleasure :)".]
Such a characteristic leads to complex speaker-addressee interactions. As a result, the speaker and addressee roles associated with utterances are constantly changing among multiple users across different turns. Such speaker and addressee information could be essential in many multi-party conversation scenarios, including group meetings, debating, and forum discussions. Therefore, compared to two-party conversations, a unique issue of multi-party conversations is to understand who is speaking to whom. In real scenarios of multi-party conversations, an interesting phenomenon is that speakers do not usually designate an addressee explicitly. This phenomenon also accords with our statistical analysis of the IRC dataset: we found that around 66% of utterances are missing explicit addressee information. That means when modeling such multi-party conversations, one may have to infer who is speaking to whom in order to understand the utterance correspondence as well as the stream structure of multi-party conversations.
Given a multi-party conversation where part of the addressees are unknown, previous work mainly focuses on predicting the addressee of only the last utterance. Ouchi and Tsuboi (2016) proposed to scan the conversation session and track the speaker's state based on the utterance content at each step. On this basis, Zhang et al. (2017) introduced a speaker interaction model that tracks all users' states according to their roles in the session. They both fused the representations of the last speaker and utterance as a query, and a matching network is utilized to calculate the matching degree between the query and each listener. The listener with the highest matching score is selected as the predicted addressee.
However, in practice, it is more helpful to predict all the missing addressees rather than only the last one for understanding the whole conversation. It also benefits both building group-based chatbots and clustering users based on what they have said. Therefore, we propose a new task of identifying the addressees of all the missing utterances given a multi-party conversation session where part of the addressees are unspecified. To this end, we propose a novel Who-to-Whom (W2W) model which jointly models users and utterances in the multi-party conversation and predicts all the missing addressees in a uniform framework. 1 Our contributions are as follows:
• We introduce a new task of understanding who speaks to whom given an entire conversation session as well as a benchmark system.
• To capture the correlation within users and utterances in multi-party conversations, we propose an interactive representation learning approach to jointly learn the representations of users and utterances and enhance them mutually.
• The proposed approach (W2W) considers both previous and subsequent information in the session while incorporating the correlation within users and utterances. For conversations with complex structures, W2W models them in a uniform way and can handle any situation, even when all the addressee information is missing.
Related Work
In this section, we briefly review recent work and progress on multi-party conversations.
Multi-party conversations, as a general case of multi-turn conversations (Li et al., 2017, 2016c; Serban et al., 2016), involve more than two participants. In addition to the representation of learning for utterances, another key issue is to model multiple participants in the conversations. It is intuitive to introduce multiple user embeddings for multi-party conversations, either as persona-dependent embeddings (Li et al., 2016b), or as persona-independent embeddings (Ouchi and Tsuboi, 2016; Zhang et al., 2017; Meng et al., 2017). Recently, some researchers utilized users' information based on different roles in conversations, such as senders and recipients (Luan et al., 2016).
In multi-party conversations, identifying the relationship among users is also an important task. It can be categorized into two topics: 1) predicting who will be the next speaker (Meng et al., 2017) and 2) who is the addressee (Ouchi and Tsuboi, 2016; Zhang et al., 2017). For the first topic, Meng et al. (2017) investigated a temporal-based and a content-based method to jointly model the users and context. For the second topic, which is closely related to ours, Ouchi and Tsuboi (2016) proposed to predict the addressee and utterance given a context with all available information. Later, Zhang et al. (2017) proposed a speaker-interactive model, which takes users' role information into consideration and implements a role-sensitive state tracking process.
In our task, the addressee identification problem is quite different from (Ouchi and Tsuboi, 2016) and (Zhang et al., 2017). Both of their studies aimed to predict whom the last speaker is addressing. In this paper, by contrast, we focus on the whole session and aim to identify all the missing addressees. Our task is a more challenging scenario since it relies on the correlation within all users and utterances to identify the speaker-addressee structure of the entire session.
Overview
Problem Formulation
Given an entire multi-party conversation $S$ with length $T$, the sequence of utterances in it is defined as $\{u_t\}_{t=1}^{T}$. Each utterance is associated with a speaker $a^{SPR}_t$ and an addressee $a^{ADR}_t$. $a^{SPR}_t$ is observable across the entire session, while $a^{ADR}_t$ is mostly unspecified, as shown in Table 1. Our task is to identify the addressees of all utterances within the conversation session. The predicted addressee is denoted as $\hat{a}^{ADR}_t$. Formally, we have the following formulation:
$$\textsc{Query}: \{(a^{SPR}_t, u_t)\}_{t=1}^{T} \qquad \textsc{Predictions}: \{\hat{a}^{ADR}_t\}_{t=1}^{T} \tag{1}$$
Let $A(S)$ denote the user set in the session $S$; thus $A(S)\backslash\{a^{SPR}_t\}$ denotes the listeners at the $t$-th turn ($a^{LSR_j}_t$ denotes the $j$-th listener). The listeners are also referred to as candidate addressees for each turn, and the identified addressee $\hat{a}^{ADR}_t$ should be one of them.

Concretely, the representation learning module is designed to jointly learn the representations of users and utterances in an interactive way after initializing them separately. The representations of users (also denoted as user states) and the utterance embeddings are mutually enhanced. With the representations of users and utterances, a network is utilized to fuse them into a query representation. In this way, we jointly capture who is speaking what at each step.
After the representations of users and utterances are learned, we feed them into a matching module. In this module, a matching network is learned to score the matching degree between the query and each candidate. According to the matching scores, the model ranks all addressee candidates in $A(S)\backslash\{a^{SPR}_t\}$ and selects the one with the highest matching score as the identified addressee. For each utterance in the multi-party conversation, we repeat the above steps until the addressees of all utterances are identified.
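A minimal sketch of this scoring-and-selection step is given below. The bilinear form of the matching function, the tensor names, and the dimensions are assumptions made for illustration, not the paper's exact parameterization.

```python
# Hypothetical sketch of addressee selection by matching scores (PyTorch).
import torch
import torch.nn as nn

class AddresseeMatcher(nn.Module):
    def __init__(self, query_dim, user_dim):
        super().__init__()
        # A simple bilinear matching function; the actual network may differ.
        self.bilinear = nn.Bilinear(query_dim, user_dim, 1)

    def forward(self, query, candidate_states):
        # query: (query_dim,) fused speaker+utterance representation at turn t
        # candidate_states: (num_candidates, user_dim) states of A(S)\{speaker}
        q = query.unsqueeze(0).expand(candidate_states.size(0), -1)
        return self.bilinear(q, candidate_states).squeeze(-1)  # matching scores

matcher = AddresseeMatcher(query_dim=256, user_dim=128)
query = torch.randn(256)
candidates = torch.randn(4, 128)                    # four listeners at this turn
scores = matcher(query, candidates)
predicted_addressee = torch.argmax(scores).item()   # index of the selected listener
```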
Our W2W Model
In this section, we first describe each part of the W2W model in detail: (1) initialization of utterance and user representations; (2) interactive representation learning of users and utterances; (3) the matching procedure for identifying the addressee. We finally describe the training procedure of the W2W model.
Initialization
W2W models utterance and user embeddings separately before the interactive representation learning, and uses the resulting representation of each utterance and user as initialization.
Utterance Initialization Encoder
Consider a conversation session $S$ with $T$ utterances denoted as $\{u_1, u_2, \ldots, u_T\}$. An utterance $u_t$ that contains $n$ tokens is denoted as $\{w_1, w_2, \ldots, w_n\}$, where $w_i$ is the word embedding 3 of the $i$-th token. We first utilize a word-level bi-directional RNN with Gated Recurrent Units (GRUs) (Cho et al., 2014) to encode each utterance and take the concatenation of the last hidden states from both directions as the sentence embedding. Then, a sentence-level bidirectional GRU is applied over the sentence embeddings to obtain the global context of the session. The utterance representation $u_t$ is the concatenation of the hidden states from both directions at the $t$-th time step. 4
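A minimal PyTorch-style sketch of this two-level initialization encoder is given below; the class name, tensor shapes, and hidden size are our assumptions for illustration only.

```python
import torch
import torch.nn as nn

class UtteranceInitEncoder(nn.Module):
    """Sketch of the two-level utterance initialization: a word-level BiGRU
    encodes each utterance, and a sentence-level BiGRU runs over the resulting
    sentence embeddings to add session-level context."""
    def __init__(self, emb_dim: int, hidden: int):
        super().__init__()
        self.word_gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sent_gru = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)

    def forward(self, session_tokens):  # list of (n_tokens, emb_dim) tensors, one per utterance
        sent_embs = []
        for tokens in session_tokens:
            _, h = self.word_gru(tokens.unsqueeze(0))         # h: (2, 1, hidden)
            sent_embs.append(torch.cat([h[0], h[1]], dim=-1))  # concat both directions
        sent_seq = torch.stack(sent_embs, dim=1)               # (1, T, 2*hidden)
        out, _ = self.sent_gru(sent_seq)                        # (1, T, 2*hidden)
        return out.squeeze(0)                                   # u_t for t = 1..T
```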
Position-Based User Initialization
In multi-party conversation, the position information of different participants in the session is crucial for the addressee identification task. For example, a speaker is more likely to address the directly preceding or subsequent speaker. On this basis, we define the initialization user matrix $A^{(0)}$ based on the speaking order of users in the session (Ouchi and Tsuboi, 2016). Concretely, all users in a session are sorted in descending order according to the first time they speak, and the $i$-th user is assigned the $i$-th row of $A^{(0)}$, denoted $a^{(0)}_i$. The user matrix $A^{(0)}$ is trained as parameters along with the other weight matrices in the neural network.
Users with the same speaking order in different sessions share the same initialization user embedding. Note that the user representations are independent of each personality (unique user). Such a strategy guarantees that the initialization user embeddings carry position information, and it also handles new users unseen in the training data during addressee identification.
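The following sketch illustrates this position-based initialization: a trainable matrix indexed by first-appearance order and shared across sessions. The module and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class PositionUserInit(nn.Module):
    """Sketch of position-based user initialization: row i of a trainable
    matrix A(0) is shared by the i-th distinct speaker of every session."""
    def __init__(self, max_users: int, dim: int):
        super().__init__()
        self.A0 = nn.Embedding(max_users, dim)   # trained jointly with the rest of the model

    def forward(self, speakers):                  # speakers: list of user ids, one per turn
        order = []
        for spk in speakers:                      # first-appearance order within the session
            if spk not in order:
                order.append(spk)
        idx = torch.arange(len(order))
        return order, self.A0(idx)                # users in speaking order and their initial embeddings
```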
Figure 2: Interactive representation learning in W2W. At each utterance turn, SGRU tracks the speaker's state, LGRU tracks the listeners' states, and UGRU fuses users' states into utterances. W2W scans the conversation session from two directions, and the representation of each user and utterance is the concatenation of both sides.
Interactive Representation Learning
To better capture who speaks what at each turn throughout the whole session, we propose to interactively learn the representations of utterances and users. This differs from prior studies (Ouchi and Tsuboi, 2016; Zhang et al., 2017), which only track the users' states but neglect the users' impact on the utterances. The W2W model learns user and utterance representations interactively by tracking users' states with utterance embeddings as well as fusing users' states into the utterance embeddings.
Users Representation Learning
Role-sensitive User State Tracking. As suggested by Zhang et al. (2017), an utterance can have different degrees of impact on the states of the corresponding speaker and listeners. In order to capture the users' role information, we utilize two kinds of GRU-based cells, a Speaker-GRU (SGRU) and a Listener-GRU (LGRU), to track the states of the speaker and the listeners, respectively, at each turn of the session. 5 (Footnote 5: we denote a user embedding tracked until the $t$-th time step as $a_{(t)}$, with $a^{SPR}_{(t)}$ as the representation of the speaker at the $t$-th turn and $a^{LSR_j}_{(t)}$ as the representation of the $j$-th listener at the $t$-th turn.) At the $t$-th transition step, the SGRU tracks the speaker representation $a^{SPR}_{(t)}$ from the speaker's former state $a^{SPR}_{(t-1)}$, the utterance representation $u_t$, and a pseudo addressee representation $a^{PADR}_{(t-1)}$ calculated via PAM (Person Attention Mechanism), which is a weighted sum of all the listeners' representations; details of PAM are elaborated in the next part. The state tracking procedure for the $i$-th step is formulated as Eq. (2). The main idea of SGRU is to incorporate two reset gates, $r_i$ and $p_i$, which control the information fusion from the previous speaker state and the pseudo addressee, respectively. $W$, $U$ and $V$ are learnable parameters.
$$\begin{aligned}
r_i &= \sigma(W_r u_i + U_r a^{SPR}_{(i-1)} + V_r a^{PADR}_{(i-1)}) \\
p_i &= \sigma(W_p u_i + U_p a^{SPR}_{(i-1)} + V_p a^{PADR}_{(i-1)}) \\
z_i &= \sigma(W_z u_i + U_z a^{SPR}_{(i-1)} + V_z a^{PADR}_{(i-1)}) \\
\tilde{a}^{SPR}_{(i)} &= \tanh\big(W u_i + U(r_i \odot a^{SPR}_{(i-1)}) + V(p_i \odot a^{PADR}_{(i-1)})\big) \\
a^{SPR}_{(i)} &= z_i \odot a^{SPR}_{(i-1)} + (1 - z_i) \odot \tilde{a}^{SPR}_{(i)}
\end{aligned} \qquad (2)$$
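A minimal PyTorch-style sketch of the SGRU update in Eq. (2) follows; LGRU is symmetric but with separate weights. The class name and dimension handling are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SGRUCell(nn.Module):
    """Sketch of the speaker-tracking cell in Eq. (2): a GRU-like update with
    two reset gates, one gating the previous speaker state and one gating the
    pseudo addressee produced by PAM."""
    def __init__(self, dim: int):
        super().__init__()
        self.Wr, self.Ur, self.Vr = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.Wp, self.Up, self.Vp = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.Wz, self.Uz, self.Vz = (nn.Linear(dim, dim, bias=False) for _ in range(3))
        self.W, self.U, self.V = (nn.Linear(dim, dim, bias=False) for _ in range(3))

    def forward(self, u, a_spr, a_padr):
        r = torch.sigmoid(self.Wr(u) + self.Ur(a_spr) + self.Vr(a_padr))
        p = torch.sigmoid(self.Wp(u) + self.Up(a_spr) + self.Vp(a_padr))
        z = torch.sigmoid(self.Wz(u) + self.Uz(a_spr) + self.Vz(a_padr))
        cand = torch.tanh(self.W(u) + self.U(r * a_spr) + self.V(p * a_padr))
        return z * a_spr + (1 - z) * cand
```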
Symmetrically, LGRU takes the embedding of a given listener, together with a pseudo speaker and a pseudo utterance representation (also calculated via PAM), as inputs and tracks the state of each listener. SGRU and LGRU have updating functions symmetric to Eq. (2), differing only in the pseudo representations incorporated in the cell. 6 The parameters of SGRU and LGRU are not shared, which allows W2W to learn role-dependent features in the user state tracking procedure. The structures of SGRU and LGRU are illustrated in Figure 3.

Person Attention Mechanism. We propose a person attention mechanism (PAM), formulated in Eq. (3), which is applied at each step of the state tracking process. Each element $\beta^j_i$ measures how likely the model estimates the $j$-th listener to be the addressee of the $i$-th turn, based on the user representations tracked until turn $i$. $W_p$ is a learnable parameter.
$$\beta^j_i = \sigma\big(a^{LSR_j}_{(i-1)} \, W_p \, [a^{SPR}_{(i-1)}; u_i]^T\big) \qquad (3)$$
Then, for each turn $i$, a pseudo addressee $a^{PADR}_{(i-1)}$ is generated as the weighted sum of all listener representations tracked until step $i$, as in Eq. (4). Intuitively, a listener with a higher matching score is more likely to be the addressee at the current step. The pseudo addressee $a^{PADR}_{(i-1)}$ is incorporated into the state tracking of the speaker as in Eq. (2).
$$a^{PADR}_{(i-1)} = \frac{\sum_j \beta^j_i \cdot a^{LSR_j}_{(i-1)}}{\sum_j \beta^j_i} \qquad (4)$$
$$a^{PSPR_j}_{(i-1)} = \beta^j_i \cdot a^{SPR}_{(i-1)} \qquad (5)$$
$$u^{P_j}_i = \beta^j_i \cdot u_i \qquad (6)$$
Symmetrically, the pseudo speaker $a^{PSPR_j}_{(i-1)}$ and pseudo utterance $u^{P_j}_i$ are generated through Eq. (5) and Eq. (6) for each listener $j$ at the $i$-th turn of the conversation.
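A minimal sketch of PAM (Eqs. (3)-(6)) in PyTorch is shown below; the class name, the placement of $W_p$ as a linear map over the concatenated query, and tensor shapes are our assumptions.

```python
import torch
import torch.nn as nn

class PersonAttention(nn.Module):
    """Sketch of PAM: score each listener against the current speaker/utterance
    pair, then form the pseudo addressee as a normalized weighted sum of listener
    states, and per-listener pseudo speaker / pseudo utterance vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.Wp = nn.Linear(2 * dim, dim, bias=False)   # maps [a_spr; u] into listener space

    def forward(self, u, a_spr, a_lsr):                  # a_lsr: (n_listeners, dim)
        query = self.Wp(torch.cat([a_spr, u], dim=-1))   # W_p [a_spr; u]^T, Eq. (3)
        beta = torch.sigmoid(a_lsr @ query)              # (n_listeners,)
        a_padr = (beta.unsqueeze(-1) * a_lsr).sum(0) / beta.sum()   # Eq. (4)
        a_pspr = beta.unsqueeze(-1) * a_spr              # Eq. (5), one row per listener
        u_p = beta.unsqueeze(-1) * u                      # Eq. (6), one row per listener
        return beta, a_padr, a_pspr, u_p
```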
Utterances Representation Learning
We design a UGRU (Utterance-GRU) cell, 7 which has the same structure as SGRU/LGRU, to fuse the utterance embedding, the current speaker embedding and a user-summary vector into an enhanced utterance embedding. (Footnote 7: although UGRU has the same structure as SGRU/LGRU, it acts on each utterance only once instead of tracking recurrently.) The user matrix initialized with $A^{(0)}$ (as described in 4.2.1) and tracked until step $t-1$ is denoted as $A^{(t-1)}$. The user-summary vector is calculated through max-pooling over the users in $A^{(t-1)}$, as a summary of all users' current states:
$$h_s = \text{Max-Pool}(A^{(t-1)}), \qquad \tilde{u}_t = \text{UGRU}(u_t, a^{SPR}_{(t-1)}, h_s) \qquad (7)$$
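A short sketch of the fusion in Eq. (7); `ugru` is assumed to be a cell with the same structure as the SGRUCell sketch above, and the function name is illustrative.

```python
import torch

def fuse_utterance(ugru, u_t, a_spr_prev, A_prev):
    """Sketch of Eq. (7): summarize all tracked user states with max-pooling and
    fuse them, together with the current speaker state, into the utterance."""
    h_s, _ = torch.max(A_prev, dim=0)           # user-summary vector, max over users
    return ugru(u_t, a_spr_prev, h_s)            # enhanced utterance embedding
```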
Forward-and-Backward Scanning.
Considering that the addressee of an utterance can be the speaker of a preceding utterance or of a subsequent one, it is important to capture the dependencies from both sides for users and utterances. We propose a forward-and-backward scanning schema to enhance the interactive representation learning. For the forward pass, the W2W model outputs the forward user matrix of the last time step, denoted as $\overrightarrow{A}^{(T)}$, as well as all the forward-enhanced utterance embeddings $\{\overrightarrow{u}_t\}_{t=1}^{T}$, as illustrated in Algorithm 1. The backward pass initializes users and utterances in the same way as the forward pass and scans the conversation session in the reversed order. The representations from both sides are concatenated correspondingly as the final representations, as in Eq. (8).
$$u_i = [\overrightarrow{u}_i; \overleftarrow{u}_i], \qquad a_j = [\overrightarrow{a}^{(T)}_j; \overleftarrow{a}^{(T)}_j] \qquad (8)$$
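A minimal sketch of the two-direction scanning in Eq. (8); `scan` stands for Algorithm 1 (forward pass) and is an assumed callable, so this is not a definitive implementation.

```python
import torch

def bidirectional_scan(scan, utterances, A0):
    """Sketch of forward-and-backward scanning: run the interactive representation
    learning once over the session and once over its reverse, then concatenate."""
    A_fwd, U_fwd = scan(utterances, A0)
    A_bwd, U_bwd = scan(list(reversed(utterances)), A0)
    U_bwd = list(reversed(U_bwd))                            # re-align backward outputs
    users = torch.cat([A_fwd, A_bwd], dim=-1)                # a_j = [fwd; bwd]
    utts = [torch.cat([f, b], dim=-1) for f, b in zip(U_fwd, U_bwd)]
    return users, utts
```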
Matching
Matching Network. We first fuse the speaker embedding and the utterance embedding into a query representation $q$, and then measure the embedding similarity $s$ between the query and each listener:
$$q = \tanh(W_s a^{SPR} + W_u \tilde{u}), \qquad s = \sigma(a^{LSR} W_m q^T) \qquad (9)$$
where $W_s$, $W_u$ and $W_m$ denote weight matrices. For simplicity, we use the shorthand $Match(\cdot)$ to denote Eq. (9) when there is no ambiguity.
Addressee Identification. For each turn in the session, we score the matching degree between the query and each listener and select the best-matched candidate as the addressee prediction, as in Eq. (10), where $s^j_i$ denotes the matching score between the $j$-th listener $a^{LSR_j}$ and the query of the $i$-th turn:
$$s^j_i = Match(a^{SPR}_i, a^{LSR_j}_i, u_i), \qquad a^{ADR}_i = \arg\max_j(\{s^j_i\}) \qquad (10)$$
For the entire conversation, we repeat the above steps until the addressee of each utterance is identified.
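A minimal PyTorch-style sketch of the matching network and the argmax selection in Eqs. (9)-(10); the class name and the assumption of equal speaker/utterance/listener dimensions are ours.

```python
import torch
import torch.nn as nn

class MatchingNetwork(nn.Module):
    """Sketch of Eqs. (9)-(10): fuse the speaker and utterance into a query,
    score every listener against it, and pick the argmax as the addressee."""
    def __init__(self, dim: int):
        super().__init__()
        self.Ws = nn.Linear(dim, dim, bias=False)
        self.Wu = nn.Linear(dim, dim, bias=False)
        self.Wm = nn.Linear(dim, dim, bias=False)

    def forward(self, a_spr, u, a_lsr):              # a_lsr: (n_listeners, dim)
        q = torch.tanh(self.Ws(a_spr) + self.Wu(u))
        s = torch.sigmoid(a_lsr @ self.Wm(q))         # matching score per listener
        return s, int(torch.argmax(s))                # scores and predicted listener index
```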
Learning
We utilize the cross-entropy loss to train our model (Ouchi and Tsuboi, 2016). The objective is to minimize the loss as follows:
$$loss = -\sum_k \sum_i \big(\log(s^+_i) + \log(1 - s^-_i)\big) + \frac{\lambda}{2}\,\|\theta\|^2 \qquad (11)$$
Here, the subscript $k$ denotes a session, and the subscript $i$ ranges over the utterances that have ground-truth addressee information. $s^+_i$ denotes the matching score between the query and the ground-truth addressee, and $s^-_i$ denotes the score of the negative matching, where the candidate addressee is negatively sampled. All parameters of the W2W model are jointly trained via back-propagation (BP) (Rumelhart et al., 1986).
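A short sketch of this training loss in PyTorch; the function name is ours, and we assume the L2 weight-decay term is handled by the optimizer (e.g., the `weight_decay` argument of Adam) rather than written explicitly.

```python
import torch

def w2w_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor, eps: float = 1e-8):
    """Sketch of Eq. (11): push the ground-truth addressee's score up and a
    negatively sampled candidate's score down, summed over labeled utterances."""
    return -(torch.log(pos_scores + eps) + torch.log(1 - neg_scores + eps)).sum()
```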
Experimental Setups
Dataset. We run experiments using the benchmark Ubuntu dataset released by Ouchi and Tsuboi (2016). The corpus consists of a huge amount of records including response utterances, user IDs and their posting time. We organize the dataset as samples of conversation sessions.
We filter out the sessions without a single addressee ground truth, which means no label is available in these sessions. We also filter out session samples with one or more blank utterance. Moreover, we separate the conversation sessions into three categories according to the session length. Len-5 indicates the sessions with 5 turns and it is similar for Len-10 and Len-15. Such a splitting strategy is adopted in related studies as (Ouchi and Tsuboi, 2016;Zhang et al., 2017). The dataset is split into train-dev-test sets and the statistics are shown in Table 2. Comparison Methods.
We utilize several algorithms including heuristic and state-of-the-art methods as baselines. As there is no existing method that can perform the new task, we have to adapt baselines below into our scenario.
• Preceding: The addressee is designated as the preceding speaker of the current speaker.
• Subsequent: The addressee is designated as the next speaker.
• Dynamic RNN (DRNN): The model is originally designed to predict the addressee of the last utterance given the whole context available (Ouchi and Tsuboi, 2016). We adapt it to our scenario which is to identify addressees for all utterances in the conversation session. Concretely, the representations of users and context are learned in the same way as DRNN. While during the matching procedure, the representations of speaker and context are utilized to calculate the matching degree with candidate addressees at each turn.
• Speaker Interactive RNN (SIRNN): SIRNN is an extension of DRNN that is more interaction-sensitive (Zhang et al., 2017). Since all addressee information is unknown in our scenario, we adapt the model to use only the speaker role and observer role. User states are tracked recurrently according to their roles at each turn, i.e., two distinct networks (IGRU_S and IGRU_O) are utilized to track the status of the speaker and the observers at each turn. Since no addressee role is observable through the session, we also make some adaptations to the updating cells: at each turn, IGRU_S updates the speaker embedding from the previous speaker embedding and the utterance embedding, and IGRU_O updates each observer embedding from the previous observer embedding and the utterance embedding. During the matching procedure, we make a prediction at each turn instead of only predicting the addressee of the last turn.
Implementation and Parameters. For fair comparison, we choose the hyper-parameters specified by Ouchi and Tsuboi (2016) and Zhang et al. (2017). We represent the words with 300-dimensional GloVe vectors, which are fixed during training. The dimensions of speaker embeddings and hidden states are set to 50. The joint cross-entropy loss function with L2 weight decay of 0.001 is minimized by Adam (Kingma and Ba, 2014) with a batch size of 128.
Evaluation Metrics. To examine the effectiveness of our model on the addressee identification task, we compare it with the baselines in terms of precision@n (i.e., p@n) (Yan et al., 2017). For predicting the addressee of an utterance, our model actually provides a ranking list over all candidate addressees. 8
We also evaluate performance at the session level: we mark a session as a positive sample if and only if all ground-truth labels are correctly identified at the top-1 position of the rankings, and calculate this ratio as the accuracy. (Footnote 8: intuitively, p@1 is the precision at the highest-ranked position and is the most natural indicator of performance; we also provide p@2 and p@3 to illustrate the potential of different systems to place the correct addressee near the top of the lists.)
As we discussed before, only a part of the utterances in multi-party conversations have an explicit addressee, which limits the completeness of the automatic evaluation metrics. In order to evaluate the performance on unlabeled utterances, we leverage human inference results and calculate the consistency between the model predictions and the human results. Due to the labor cost limit, we randomly sample 100 sessions from the test sets of Len-5, Len-10 and Len-15, respectively, and recruit three volunteers to annotate the addressee label for unlabeled utterances by reasoning over the content and the available addressee information. We leave blank the utterances where the three annotators give different inference results. Eventually, around 81.4% of the unlabeled utterances receive the same annotation from two or more annotators. With the human inference results and model predictions, we use the overlapping rate 9 as the consistency metric.
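For concreteness, a minimal sketch of the two automatic metrics is given below; the data structures (a ranked candidate list per turn, with `None` marking unlabeled turns) are our assumptions.

```python
def precision_at_n(rankings, gold, n=1):
    """p@n: fraction of labeled turns whose gold addressee appears in the top n."""
    hits = [g in r[:n] for r, g in zip(rankings, gold) if g is not None]
    return sum(hits) / max(len(hits), 1)

def session_accuracy(sessions):
    """Session-level accuracy: a session counts only if every labeled turn is
    correct at rank 1. `sessions` is a list of (rankings, gold) pairs."""
    ok = [all(g is None or r[0] == g for r, g in zip(rk, gd)) for rk, gd in sessions]
    return sum(ok) / max(len(ok), 1)
```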
Results and Discussion
We first give an overall comparison between W2W and baselines followed by ablation experiments to demonstrate the effectiveness of each part in W2W model. We then confirm the robustness of W2W with several factors including the numbers of users, the position of the utterance. Furthermore, we evaluate how W2W and baseline models perform on both labeled and unlabeled utterance.
Overall Performance. For the automatic evaluation shown in Table 3, end-to-end deep learning approaches outperform heuristic ones, which indicates that simple intuition is far from satisfactory and further confirms the value of this work. Among the deep-learning-based approaches, our W2W model outperforms the state-of-the-art models on all evaluation metrics. Directly adapting approaches designed to identify the last addressee of a session does not work well in our scenario.
As shown in Figure 4, the performance of all methods drops as the context length increases (from Len-5 to Len-15), since the task becomes more difficult with more context information to encode and more candidate addressees to rank. However, the improvement of our W2W model is more obvious with longer context lengths. In particular, for the dataset Len-15, W2W improves 5% on p@1 and 10% on session accuracy over SIRNN, which indicates the robustness of the W2W model in complex scenarios.
Figure 4: Comparison between the W2W model and two state-of-the-art baselines on p@1.
Table 4 shows the consistency between human inference and model predictions. W2W also outperforms the baselines by a larger margin in longer conversation scenarios, which is consistent with the automatic evaluation. The advantage of our W2W model on unlabeled data demonstrates its superiority for detecting the latent speaker-addressee structure left unspecified in the conversation stream, and suggests that it can help uncover the relationships between and across users in the session.
Ablation Test. Table 5 shows the results of ablation test. First, we replace the bi-directional scanning schema with the forward scanning one. The result shows that the bi-directional scanning schema captures the information in long conversations more sufficiently. Besides, we investigate the effectiveness of PAM by replacing it with the simple mean-pooling approach as Eq (12):
$$a^{PADR}_{(i-1)} = \frac{1}{n} \sum_j a^{LSR_j}_{(i-1)}, \qquad a^{PSPR_j}_{(i-1)} = \frac{1}{n} \cdot a^{SPR}_{(i-1)}, \qquad u^{P_j}_i = \frac{1}{n} \cdot u_i \qquad (12)$$
The result shows that it is more beneficial to capture the correlation between the user-utterance pair and each listener and implement the state tracking correspondingly at each turn with our PAM mechanism.
To investigate the effectiveness of the interactive representation learning module, we first remove the UGRU cell and fix the utterance representations in the state tracking procedure (referred to as w/o Utterance Interaction in Table 5). Symmetrically, we fix the user representations in the session by removing the SGRU and LGRU cells and maintain only the interaction effect from the users to the utterances (referred to as w/o User Interaction in Table 5). We also conduct an experiment that removes the whole interactive representation module, i.e., UGRU, SGRU and LGRU, so that addressee identification depends entirely on the initial representations of users and utterances. The results demonstrate that each of these parts makes an important contribution to our W2W model, especially the interaction effect on users.
Number of Participants.
The task becomes more difficult as the number of participants involved in the conversation increases since more participants correspond to more complicated speaker-addressee relationship. We investigate how the performance correlates with the number of speakers in the dataset Len-15. The results in terms of p@1 are shown in Figure 5. In conversations with few participants, all methods have rather high performance. As the participant number increases, W2W constantly outperforms the baselines. The performance gap becomes larger especially when there are 6 users and more, which indicates the capability of our W2W model in handling complex conversation situations.
Position of Addressee-to-Predict. As mentioned above in 4.1.2, position information of utterances is a crucial factor in identifying addressees for multi-party conversation. We investigate how the system performance correlates with the position of the addressee to be predicted. In Figure 6, we show the p@1 performance of the W2W and baselines when predicting the addressee of u i at the i th turn. Again, W2W shows consistently better performance than the other baselines no matter where the turn to predict addressee is.
We can observe that all the methods perform relatively poorly at the beginning or the end. Compared with the middle part of a long conversation, the beginning and the end contain less context information, which makes the addressees of these parts more difficult to predict. The results in Figure 6 show that the gap between W2W and the other methods is even larger when the addressee-to-predict is at the beginning or the end, which indicates that W2W is better at capturing key information and is more robust in difficult scenarios.
Variance of Matching Scores. In real multi-party conversations, the utterances without addressee information can be divided into two cases. Sometimes an utterance has an explicit addressee but the speaker does not specify whom he/she is speaking to; we denote these cases as NP (Null-Positive). In the other case, the utterance does not address any particular user in the conversation (denoted as NN, Null-Negative), such as 'Hi, everyone!' and 'Can anyone here help me?' In the Ubuntu IRC dataset, unlabeled utterances of the NN and NP cases are mixed and are difficult to distinguish without manual annotation.
Meanwhile, our W2W model and the baseline approaches predict matching scores for each listener of every utterance, whether or not it has addressee information. For each utterance, the variance of the matching scores over all listeners represents how certain the model is about its addressee identification decision: a larger variance corresponds to a more confident prediction. Table 6 shows the variance comparison on labeled and unlabeled cases in the test set Len-15. On utterances with addressee labels, the variance of our W2W model is significantly larger than that of the state-of-the-art baseline, which indicates that W2W has a higher degree of certainty about its own predictions when the conversation content refers to someone explicitly. For utterances without addressee labels, the difference in variance between W2W and SIRNN is significantly reduced. Considering that the unlabeled set consists of NN cases as well as NP cases, on which W2W should have much larger variance than SIRNN just as in the labeled scenario, we can infer that W2W has much lower variance than SIRNN on the NN cases. This phenomenon reflects that our W2W model does not make reckless predictions when there is no clear addressee. Therefore, the variance of the matching scores over all listeners in our W2W model, to some extent, provides a signal of whether the utterance has an explicit addressee, even though we do not provide any supervision on this aspect during training.
Conclusion
In this paper, we aim at learning to identify utterance addressees in multi-party conversations by predicting who is speaking to whom, which is a new task. To perform this task, we propose the W2W model, which learns the user and utterance representations interactively. To handle the uncertainty in the conversation session, we design PAM, which captures the matching degree between the current query and each candidate addressee. The experimental results show prominent and consistent improvements over heuristic and state-of-the-art baselines.
In the future, we will further investigate better utterance models associated with additional information such as topics or knowledge. With the help of the learned addressee structure, we can build a general structural dialogue system for complex multi-party conversations.
Figure 1: Overview of the proposed W2W model, which consists of a representation learning module and a matching module.
Figure 3: Structures of SGRU and LGRU.
Figure 6: p@1 performance vs. positions of addressees within a session. Results are tested on Len-15.
Table 1: An example of a multi-party conversation in the IRC dataset. Not all the addressees are specified. (Columns: Speaker, Utterance, Addressee.)
Algorithm 1: Interactive Representation Learning Algorithm (Forward Pass).
Input: initial representations of utterances $\{u_t\}_{t=1}^{T}$; initial user matrix $A^{(0)}$
1: for $i = 0$; $i < T$; $i{+}{+}$ do
2:   calculate the current matching scores through PAM, Eq. (3);
3:   generate the pseudo embeddings using Eqs. (4)-(6);
4:   track the speaker state through SGRU using Eq. (2);
5:   track each listener's state with LGRU using Eq. (2);
6:   fuse the users' information into the utterance representation using Eq. (7);
7: end
8: return 1) the user matrix of the last turn $\overrightarrow{A}^{(T)}$; 2) the enhanced utterance embeddings $\{\overrightarrow{u}_t\}_{t=1}^{T}$
Table 2: Data statistics: sample sizes of datasets with different session lengths (i.e., 5, 10, 15) from the Ubuntu dataset (Ouchi and Tsuboi, 2016).
Dataset   Train     Dev      Test
Len-5     461,120   28,570   32,668
Len-10    495,226   30,974   35,638
Len-15    489,812   30,815   35,385
Table 3: Addressee identification results (in %) on the datasets Len-5, Len-10 and Len-15; the W2W results are significant with p-value < 0.01 against all the baselines.
                 Len-5                          Len-10                         Len-15
Model        p@1    p@2    p@3    Acc.     p@1    p@2    p@3    Acc.     p@1    p@2    p@3    Acc.
Preceding    63.50  90.05  98.83  40.46    56.84  80.15  91.86  21.06    54.97  77.19  88.75  13.08
Subsequent   61.03  88.86  98.54  40.25    54.57  73.60  87.26  20.26    53.07  69.85  81.93  12.79
DRNN         72.75  93.21  99.24  58.18    65.58  85.85  94.92  34.47    62.60  82.68  92.14  22.58
SIRNN        75.98  94.49  99.39  62.06    70.88  89.14  96.10  40.66    68.13  85.82  93.52  28.05
W2W          77.55  95.11  99.57  63.81    73.52  90.33  96.64  44.14    73.42  89.44  95.51  34.23
Table 4: Consistency comparison between human inference and model predictions in terms of overlapping rate (%); the W2W results are significant with p-value < 0.01 against all the baselines.
Model    Len-5    Len-10   Len-15
DRNN     75.60    67.54    63.06
SIRNN    78.94    71.59    67.22
W2W      80.86    74.05    71.14
Table 5: Ablation test on the effectiveness of W2W model parts on dataset Len-15.
Model                            p@1     p@2     p@3     Acc.
W2W w/ Forward Scanning Only     71.60   87.99   94.80   31.39
W2W w/o PAM                      72.56   88.83   95.21   32.78
W2W w/o Utterance Interaction    72.94   88.89   95.28   33.04
W2W w/o User Interaction         49.18   72.38   86.81   18.24
W2W w/o Interaction              46.39   71.66   86.14   15.15
W2W                              73.42   89.41   95.50   33.89
Table 6: Variance of matching scores on labeled and unlabeled utterances in the set Len-15.
Model    Labeled   Unlabeled
W2W      0.111     0.080
SIRNN    0.088     0.077
To make the model practical in learning, we assume that one utterance is associated with only one addressee.
In this paper, we denote vectors with bold lower-case (like $u_i$) and matrices with bold upper-case (like $U$ and $W$).
3 We use GloVe (Pennington et al., 2014), but it can be any word embeddings (Mikolov et al., 2013; Hu et al., 2016). 4 Such a hierarchical framework (Serban et al., 2016) takes into account the context of all previous and subsequent sentences in the whole session, which enables the model to learn a strong representation.
6 SGRU incorporates the pseudo addressee $a^{PADR}_{(i-1)}$ for speaker state tracking; LGRU incorporates the pseudo speaker $a^{PSPR_j}_{(i-1)}$ and the pseudo utterance $u^{P_j}_i$ to track the $j$-th listener.
9 The calculation formula of the overlapping rate is described in the Appendix.
Acknowledgments
We would like to thank the reviewers for their constructive comments. We would also like to thank Bing Liu and Zhangming Chan from Peking University for their suggestions and help on this paper. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001) and the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058). Rui Yan and Wenpeng Hu were sponsored by an Alibaba Innovative Research (AIR) Grant.
Dynamic time-aware attention to speaker roles and contexts for spoken language understanding. Po-Chun Chen, Ta-Chung Chi, Shang-Yu Su, Yun-Nung Chen, Automatic Speech Recognition and Understanding Workshop. Okinawa JapanIEEEPo-Chun Chen, Ta-Chung Chi, Shang-Yu Su, and Yun- Nung Chen. 2017. Dynamic time-aware attention to speaker roles and contexts for spoken language understanding. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 554-560, Okinawa Japan. IEEE.
Speaker role contextual modeling for language understanding and dialogue policy learning. Ta Chung Chi, Po Chun Chen, Shang-Yu Su, Yun-Nung Chen, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei Taiwan. IJCNLP2Ta Chung Chi, Po Chun Chen, Shang-Yu Su, and Yun- Nung Chen. 2017. Speaker role contextual model- ing for language understanding and dialogue policy learning. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 163-168, Taipei Taiwan. IJCNLP.
Learning phrase representations using rnn encoder-decoder for statistical machine translation. Kyunghyun Cho, Caglar Bart Van Merrienboer, Dzmitry Gulcehre, Fethi Bahdanau, Holger Bougares, Yoshua Schwenk, Bengio, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha Qatar. EMNLPKyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha Qatar. EMNLP.
Gsn: A graph-structured network for multi-party dialogues. Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, Rui Yan, 10.24963/ijcai.2019/696Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19Wenpeng Hu, Zhangming Chan, Bing Liu, Dongyan Zhao, Jinwen Ma, and Rui Yan. 2019. Gsn: A graph-structured network for multi-party dialogues. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI- 19, pages 5010-5016. International Joint Confer- ences on Artificial Intelligence Organization.
Different contexts lead to different word embeddings. Wenpeng Hu, Jiajun Zhang, Nan Zheng, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeWenpeng Hu, Jiajun Zhang, and Nan Zheng. 2016. Different contexts lead to different word embed- dings. In Proceedings of COLING 2016, the 26th International Conference on Computational Lin- guistics: Technical Papers, pages 762-771, Osaka, Japan. The COLING 2016 Organizing Committee.
Adam: A method for stochastic optimization. Diederik P. Kingma, Jimmy Ba. arXiv preprint. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
A diversity-promoting objective function for neural conversation models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesBerlin Germany. ACLJiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting ob- jective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 110-119, Berlin Germany. ACL.
A persona-based neural conversation model. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, Bill Dolan, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin Germany. ACL1Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics, volume 1, pages 994-1003, Berlin Germany. ACL.
Deep reinforcement learning for dialogue generation. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, Jianfeng Gao, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingBerlin Germany. ACLJiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep rein- forcement learning for dialogue generation. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1192- 1202, Berlin Germany. ACL.
Adversarial learning for neural dialogue generation. Jiwei Li, Will Monroe, Tianlin Shi, Sėbastien Jean, Alan Ritter, Dan Jurafsky, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen Denmark. EMNLPJiwei Li, Will Monroe, Tianlin Shi, Sėbastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157-2169, Copenhagen Denmark. EMNLP.
Yi Luan, Yangfeng Ji, Mari Ostendorf, arXiv:1603.09457Lstm based conversation models. 1arXiv preprintYi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. Lstm based conversation models. arXiv preprint arXiv:1603.09457, 1.
Towards neural speaker modeling in multi-party conversation. Zhao Meng, Lili Mou, Zhi Jin, arXiv:1708.03152The task, dataset, and models. 1arXiv preprintZhao Meng, Lili Mou, and Zhi Jin. 2017. Towards neural speaker modeling in multi-party conversa- tion: The task, dataset, and models. arXiv preprint arXiv:1708.03152, 1.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Addressee and response selection for multi-party conversation. Hiroki Ouchi, Yuta Tsuboi, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin Texas USA. EMNLPHiroki Ouchi and Yuta Tsuboi. 2016. Addressee and response selection for multi-party conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2133-2143, Austin Texas USA. EMNLP.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, Proceedings of the 2014 conference on empirical methods in natural language processing. the 2014 conference on empirical methods in natural language processingDoha Qutar. EMNLPJeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing, pages 1532-1543, Doha Qutar. EMNLP.
Learning representations by backpropagating errors. Geoffrey E David E Rumelhart, Ronald J Hinton, Williams, nature. 3236088533David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1986. Learning representations by back- propagating errors. nature, 323(6088):533.
Building end-to-end dialogue systems using generative hierarchical neural network models. Alessandro Iulian V Serban, Yoshua Sordoni, Aaron Bengio, Joelle Courville, Pineau, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. the Thirtieth AAAI Conference on Artificial IntelligencePhoenix Arizona USA; AAAIAAAI PressIulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Build- ing end-to-end dialogue systems using generative hi- erarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intel- ligence, pages 3776-3783, Phoenix Arizona USA. AAAI Press, AAAI.
Neural responding machine for short-text conversation. Lifeng Shang, Zhengdong Lu, Hang Li, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing China. ACL1Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 1577-1586, Beijing China. ACL.
Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan, 10.1145/3289600.3290985Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19. the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19New York, NY, USAACMChongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019a. Multi- representation fusion network for multi-turn re- sponse selection in retrieval-based chatbots. In Pro- ceedings of the Twelfth ACM International Confer- ence on Web Search and Data Mining, WSDM '19, pages 267-275, New York, NY, USA. ACM.
One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019b. One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1-11.
The ubuntu chat corpus for multiparticipant chat analysis. C David, David W Uthus, Aha, Proceedings of the 27th AAAI Conference on Artificial Intelligence. the 27th AAAI Conference on Artificial IntelligenceBellevue Washington USAAAAIDavid C Uthus and David W Aha. 2013. The ubuntu chat corpus for multiparticipant chat analysis. In Proceedings of the 27th AAAI Conference on Artifi- cial Intelligence, Bellevue Washington USA. AAAI.
Oriol Vinyals, Quoc Le, arXiv:1506.05869A neural conversational model. 1arXiv preprintOriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869, 1.
Learning to respond with deep neural networks for retrievalbased human-computer conversation system. Rui Yan, Yiping Song, Hua Wu, Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. the 39th International ACM SIGIR conference on Research and Development in Information RetrievalPisa Tuscany ItalyACMRui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval- based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Infor- mation Retrieval, pages 55-64, Pisa Tuscany Italy. ACM, SIGIR.
Joint learning of response ranking and next utterance suggestion in human-computer conversation system. Rui Yan, Dongyan Zhao, Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. the 40th International ACM SIGIR Conference on Research and Development in Information RetrievalTokyo JapanSI-GIRRui Yan, Dongyan Zhao, et al. 2017. Joint learning of response ranking and next utterance suggestion in human-computer conversation system. In Proceed- ings of the 40th International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, pages 685-694, Tokyo Japan. ACM, SI- GIR.
Addressee and response selection in multi-party conversations with speaker interaction rnns. Rui Zhang, Honglak Lee, arXiv:1709.040051arXiv preprintLazaros Polymenakos, and Dragomir RadevRui Zhang, Honglak Lee, Lazaros Polymenakos, and Dragomir Radev. 2017. Addressee and response se- lection in multi-party conversations with speaker in- teraction rnns. arXiv preprint arXiv:1709.04005, 1.
Multi-view response selection for humancomputer conversation. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, Rui Yan, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin Texas USA. EMNLPXiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human- computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 372-381, Austin Texas USA. EMNLP. |
202,784,158 | Multi-Input Multi-Output Sequence Labeling for Joint Extraction of Fact and Condition Tuples from Scientific Text | Condition is essential in scientific statement. Without the conditions (e.g., equipment, environment) that were precisely specified, facts (e.g., observations) in the statements may no longer be valid. Existing ScienceIE methods, which aim at extracting factual tuples from scientific text, do not consider the conditions. In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences. The framework has (1) a multi-output module to generate one or multiple tuples and (2) a multi-input module to feed in multiple types of signals as sequences. It improves F1 score relatively by 4.2% on BioNLP2013 and by 6.2% on a new bio-text dataset for tuple extraction. | [
10822819,
6015236,
40100965,
52118895,
14584850,
306227,
52010258,
44145304,
1957433,
115914779,
44163645
] | Multi-Input Multi-Output Sequence Labeling for Joint Extraction of Fact and Condition Tuples from Scientific Text
November 3-7, 2019
Tianwen Jiang [email protected]
Harbin Institute of Technology
Harbin, HeilongjiangChina
University of Notre Dame
Notre Dame
IndianaUSA
Tong Zhao
University of Notre Dame
Notre Dame
IndianaUSA
Bing Qin
Harbin Institute of Technology
Harbin, HeilongjiangChina
Ting Liu [email protected]†tzhao2
Harbin Institute of Technology
Harbin, HeilongjiangChina
Nitesh V Chawla
University of Notre Dame
Notre Dame
IndianaUSA
Meng Jiang
University of Notre Dame
Notre Dame
IndianaUSA
Multi-Input Multi-Output Sequence Labeling for Joint Extraction of Fact and Condition Tuples from Scientific Text
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaNovember 3-7, 2019302
Condition is essential in scientific statement. Without the conditions (e.g., equipment, environment) that were precisely specified, facts (e.g., observations) in the statements may no longer be valid. Existing ScienceIE methods, which aim at extracting factual tuples from scientific text, do not consider the conditions. In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences. The framework has (1) a multi-output module to generate one or multiple tuples and (2) a multi-input module to feed in multiple types of signals as sequences. It improves F1 score relatively by 4.2% on BioNLP2013 and by 6.2% on a new bio-text dataset for tuple extraction.
Introduction
Conditions such as environment and equipment provide validation supports for facts, while the facts focus on scientific observation and hypothesis in scientific literature (Miller, 1947). Existing ScienceIE methods, which extract (subject, relational phrase, object)-tuples from scientific text, do not distinguish the roles of fact and condition. Simply adding a tuple classification module has two weak points: (1) one tuple may have different roles in different sentences; (2) the tuples in one sentence have high dependencies with each other, for example, given a statement sentence in a biochemistry paper (Tomilin et al., 2016): "We observed that ... alkaline pH increases the activity of TRPV5/V6 channels in Jurkat T cells."
an existing system (Stanovsky et al., 2018) would return one tuple as below: (alkaline pH, increases, activity of TRPV5/V6 channels in Jurkat T cells), where (a) the object should just be the channel's activity and (b) the condition tuple (TRPV5/V6 channels, in, Jurkat T cells) was not found. Note that the term "TRPV5/V6 channels" is not only the concept in the fact tuple's object but also the condition tuple's subject. 1 (Footnote 1: This work was done when the first author was visiting the University of Notre Dame.)
In this work, we define the joint tuple extraction task as a multi-output sequence labeling problem. First, we create a new tag schema: Non-"O" tags are formatted as "B/I-XYZ", where • X ∈ {fact, condition};
• Y ∈ {1: subject; 2: relation; 3: object};
• Z ∈ {concept (c), attribute (a), relational phrase (p)}.
Note that if Y = "2" then Z = "p", so the number of non-"O" tags is 20 (a minimal enumeration sketch follows below). Now each fact/condition tuple can be represented as a tag sequence. Moreover, this is the first sequence labeling work in which concepts and attributes are separated. The fact tuple in the example would ideally be: (alkaline pH, increases, {TRPV5/V6 channels : activity}). Figure 1 shows our framework. Multiple tag sequences are generated after the LSTMd decoder, each of which represents a fact or condition tuple. This multi-output module has two layers: one is a relation name tagging layer that predicts the tags of relational phrases and determines the number of output sequences; the other is a tuple completion tagging layer that generates the tag sequences that complete the fact and condition tuples.
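The sketch below enumerates the tag set described above; the restriction of Z to {c, a} for subject and object slots is inferred from the stated count of 20 non-"O" tags, and the helper name is ours.

```python
from itertools import product

def build_tagset():
    """Sketch of the "B/I-XYZ" schema: X in {f, c}, Y in {1, 2, 3}, and Z = "p"
    only when Y = "2" (relation), otherwise Z in {c, a}."""
    yz = [("1", "c"), ("1", "a"), ("2", "p"), ("3", "c"), ("3", "a")]
    tags = [f"{bi}-{x}{y}{z}"
            for bi, x, (y, z) in product(["B", "I"], ["f", "c"], yz)]
    return tags + ["O"]          # 20 non-"O" tags plus the outside tag

assert len(build_tagset()) == 21
```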
To address the challenge of modeling the complex tag schema, besides the language model, we incorporate as much information as possible from upstream tools such as Part-of-Speech tagging (POS), Concept detection, Attribute name extraction, and Phrase mining (CAP), and we transform their outputs into tag sequences used as model inputs. We observe strong dependencies between a token's POS/CAP tags and its target tags. The high accuracy of these existing techniques makes the multi-input sequences readily available for new datasets.
The multi-input multi-output sequence labeling framework is named as MIMO. Experiments demonstrate that it improves F1 score relatively by 6.2% over state-of-the-art models for tuple extraction on a new bio-text dataset we will introduce in the later section. When transferred to the BioNLP2013 dataset without additional training, it improves F1 score relatively by 4.2%. We apply MIMO to a large set of 15.5M MEDLINE papers and construct a knowledge graph: An example can be found in Figure 4.
A New Dataset
We built a system with a GUI (Figure 2) to collect a new dataset for the joint tuple extraction purpose, named Biomedical Conditional Fact Extraction (BioCFE). Three participants (experts in the biomedical domain) manually annotated the fact and condition tuples in statement sentences from 31 paper abstracts in the MEDLINE database.
Figure 2: Annotation GUI. Steps: ... (2) make slots for a new tuple; (3) drag spans into the slots; (4) save annotations.
The annotation procedure took over 30 minutes on average for each paper. Here is a brief guide to the system. First, the users merged the token(s) into a span. Second, they gave a proper number of fact and/or condition tuple(s); the proper number is not fixed but depends on the concrete sentence. Each tuple has five slots (subject's concept, subject's attribute, relation phrase, object's concept, and object's attribute). Third, they dragged the spans into the slots. If the three annotations were inconsistent, we filtered out the case. Eventually, we have 756 fact tuples and 654 condition tuples from 336 annotated sentences. It is common for one sentence to have multiple facts and/or conditions; indeed, 61%/52% of the statement sentences have more than one fact/condition tuple.
The Proposed Approach
Our approach has two modules: (1) a multi-input module that harnesses recent NLP developments to process the text into input sequences from multiple tasks and feeds them into a multi-head encoder-decoder model with multi-input gates;
(2) a multi-output module that generates multiple tuple tag sequences for fact and condition tuples, which consists of a relation name tagging layer and a tuple completion tagging layer, as shown in Figure 1.
The Multi-Input Module
Preprocessing for input sequences: The following fundamental NLP techniques have achieved high accuracy and require no additional training with labeled data: Language Model (LM) (Howard and Ruder, 2018), POS (Labeau et al., 2015), and CAP (Luan et al., 2018; Jiang et al., 2017; Shang et al., 2018; Wang et al., 2018a). For any given input sentence, we tokenize it and represent each token by its word embedding (pre-trained GloVe vectors in this paper). We then obtain another three input sequences from the input sentence using the above three techniques.
(1) A pre-trained LSTM-based language model takes the sentence as input and returns semantic embedding sequence, where the dependencies between a token and its predecessors in distant contexts are preserved.
(2) We employ NLTK tool to generate the POS tag sequence for the given sentence. The POS tag sequence indicates syntactic patterns of the words in a sentence, that is the dependencies between POS tags and output tags, like verbs (e.g., VBD) and predicates (e.g., B-f2p).
(3) Multiple complementary IE techniques are used to detect concepts, attributes and phrases in the given sentences; their outputs are merged into a CAP sequence. We make tags in the format "B/I-c/a/p" for the tokens of concepts, attributes, and phrases.
Each sequence encodes a specific type of dependencies. A combination of multi-type dependencies learns the complicated dependencies on the 21 tuple tags better than any sole type. LM learns the dependencies between a token and its predecessors in distant contexts, which helps predict the position of subject, relation, and object. POS encodes the syntactic features of words. Dependencies between the POS tag and tuple tag (e.g., "VBD" and "B-f2p") can be modeled. We also spot high dependencies between the CAP tag and tuple tag. For example, the tokens of "B/I-c" (concept) and "B/I-a" (attribute) tags have high probability of being labeled as "B/I-XYc" and "B/I-XYa" in the output sequences, respectively.
Multi-head Encoder-Decoder: We investigate two neural models as the encoder: one is a bidirectional LSTM (BiLSTM); the other is the renowned bidirectional encoder representations from Transformers (BERT). We adopt an LSTM structure as the decoding layer (LSTMd) (Zheng et al., 2017). We observe that the input sequences may have different tag predictability on different sentences: for short sentences, POS and CAP are more useful (modeling local dependencies); for long sentences, LM is more effective (modeling distant dependencies). In order to secure the model's robustness on massive data, we apply a multi-head mechanism to the encoder-decoder model. Each head of the encoder-decoder is fed with one type of input sequence, and they are combined at the end of the decoder layer. Thus, the tag prediction becomes more stable than with a simple encoder-decoder without the multi-head mechanism. Multi-input gates: We adopt the multi-input gates from ResNet (He et al., 2016) to make the most use of the multi-input sequences. We add the gates to the input of the BiLSTM or BERT encoder, the input of the LSTMd decoder, and the multi-output module.
Multi-Output Module
We propose to generate multiple output sequences. As annotating multiple tuples from one sentence is common, a token may have different expected tags in the tuples. On BioCFE, we observe that 93.8% statement sentences make multiple tuples: 21.7% of the sentences have at least one token that appears in at least one fact tuple and at least one condition tuple, expecting tags "B/I-fYZ" and "B/I-cYZ"; 18.1% of the sentences have at least one token that appears in one condition tuple as a part of subject and in another condition tuple as a part of object, expecting tags "B/I-c1Z" and "B/I-c3Z". Therefore, we extend the typical one-output sequence labeling to a multi-output design.
Then, what should the number of output sequences be? We observe the significant role of relation names in forming tuples: if we tag the relation names out, then for each relation name (whose tags begin with "B-f2p" for a fact and "B-c2p" for a condition) the module generates one output sequence, and we extract all possible tuples, whose relation has been specified, from each output sequence. Two observations on the annotated data support this idea. We transform each of the 1,410 tuples into a tag sequence; for the same sentence, if the tuples' relation names are the same, we merge their tag sequences into one, and then use the matching function in (Stanovsky et al., 2018) to recover the tuples. First, no token has conflicting tags among the 240 merged sequences. Second, the recovery has no missing or wrong tuples. So, generating one output sequence and completing the tuples per relation name is practical.
The multi-output module has two layers: one is a relation name tagging layer and the other is a tuple completion tagging layer.
Relation name tagging (RNT) layer: It consists of feed-forward neural networks (FFNs) and softmax layers. The decoded vectors are fed into the FFNs, and the softmax layers predict the probability distributions of tags for fact and condition, respectively:
$$p^f_i = \text{softmax}\big(\text{FFN}^f_{RNT}(d_i)\big), \qquad (1)$$
$$p^c_i = \text{softmax}\big(\text{FFN}^c_{RNT}(d_i)\big), \qquad (2)$$
where $f$ is for fact, $c$ is for condition, and $d_i$ denotes the $i$-th token's vector given by the LSTMd.
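A minimal PyTorch-style sketch of the RNT layer in Eqs. (1)-(2); the class name and the use of a single linear layer as the FFN are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelationNameTagger(nn.Module):
    """Sketch of the RNT layer: two FFN + softmax heads over the decoder output,
    one predicting fact tags and one predicting condition tags per token."""
    def __init__(self, dec_dim: int, n_tags: int):
        super().__init__()
        self.ffn_f = nn.Linear(dec_dim, n_tags)
        self.ffn_c = nn.Linear(dec_dim, n_tags)

    def forward(self, d):                          # d: (seq_len, dec_dim) decoded vectors
        p_f = torch.softmax(self.ffn_f(d), dim=-1)
        p_c = torch.softmax(self.ffn_c(d), dim=-1)
        return p_f, p_c
```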
Now we have two tag sequences, one for fact and the other for condition. As argued above for the one-output case, extracting tuples from these two output sequences cannot resolve the tag conflicts either. Here we extract only the relation names: $\{r^f_1, r^f_2, \cdots, r^f_n\}$ denotes the $n$ relation names (beginning with the "B-f2p" tag) in fact tuples, and $\{r^c_1, r^c_2, \cdots, r^c_m\}$ denotes the $m$ relation names (beginning with the "B-c2p" tag) in condition tuples.
Tuple completion tagging (TCT) layer: This layer predicts $n$ fact tag sequences and $m$ condition tag sequences. Each sequence is generated by an FFN and a softmax layer. The FFN obtains the relation name from the RNT layer, and its input also includes the token vectors from the encoder-decoder model of the multi-input module.
Here we take condition sequences as an example to describe the details of the method. When predicting the j-th tag sequence, we define the position embedding of the i-th token as follows, representing the relative position to the j-th relation name's tag "B-c2p":
$$v^c_{i,j} = g_{emb}(r^c_j, i) \qquad (3)$$
Thus, the tag probability distributions of the i-th token in the condition tag sequences are:
$$p^{(r^c_1)}_i = \text{softmax}\big(\text{FFN}^c_{TCT}(v^c_{i,1} + d_i)\big), \quad p^{(r^c_2)}_i = \text{softmax}\big(\text{FFN}^c_{TCT}(v^c_{i,2} + d_i)\big), \quad \cdots, \quad p^{(r^c_m)}_i = \text{softmax}\big(\text{FFN}^c_{TCT}(v^c_{i,m} + d_i)\big) \qquad (4)$$
Similarly, we have the following tag distributions for the i-th token in the fact tag sequences:
$$p^{(r^f_1)}_i = \text{softmax}\big(\text{FFN}^f_{TCT}(v^f_{i,1} + d_i)\big), \quad p^{(r^f_2)}_i = \text{softmax}\big(\text{FFN}^f_{TCT}(v^f_{i,2} + d_i)\big), \quad \cdots, \quad p^{(r^f_n)}_i = \text{softmax}\big(\text{FFN}^f_{TCT}(v^f_{i,n} + d_i)\big) \qquad (5)$$
where $v^f_{i,j}$ is the position embedding of the $i$-th token in the $j$-th fact sequence, representing the relative position to the relation name's tag "B-f2p".
Finally, we apply the matching function in (Stanovsky et al., 2018) to complete and extract the tuples (i.e., the concepts and/or attributes in the subjects and objects) for each output sequence.
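A minimal PyTorch-style sketch of the TCT layer is given below; the relative-position embedding, the clipping distance, and the class name are our assumptions for illustration.

```python
import torch
import torch.nn as nn

class TupleCompletionTagger(nn.Module):
    """Sketch of the TCT layer (Eqs. (3)-(5)): one output tag sequence per detected
    relation name, conditioned on a relative-position embedding with respect to
    that relation's "B-f2p"/"B-c2p" token."""
    def __init__(self, dec_dim: int, n_tags: int, max_dist: int = 200):
        super().__init__()
        self.pos_emb = nn.Embedding(2 * max_dist + 1, dec_dim)   # plays the role of g_emb
        self.ffn = nn.Linear(dec_dim, n_tags)
        self.max_dist = max_dist

    def forward(self, d, relation_positions):       # d: (seq_len, dec_dim)
        seq_len = d.size(0)
        outputs = []
        for r in relation_positions:                 # one tag sequence per relation name
            rel = torch.arange(seq_len) - r
            rel = rel.clamp(-self.max_dist, self.max_dist) + self.max_dist
            v = self.pos_emb(rel)                     # v_{i,j}, relative position embedding
            outputs.append(torch.softmax(self.ffn(v + d), dim=-1))
        return outputs
```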
Loss Function and Training
Given a sentence s, the loss function of the relation name tagging layer can be written as below:
$$\ell^s_{RNT} = -\sum_{i=1}^{N_s}\Big(\log\big(p^f_{i,\,y^f_i}\big) + \log\big(p^c_{i,\,y^c_i}\big)\Big) \qquad (6)$$
where $p^f_{i,y}$ and $p^c_{i,y}$ are the probabilities of predicting $y$ as the tag of the $i$-th token in the fact and condition tag sequences, respectively; $y^f_i$ and $y^c_i$ are the observed tags of the $i$-th token in the fact and condition tuples, respectively; and $N_s$ is the length of the sentence $s$. The loss function of the tuple completion tagging layer consists of two parts, a loss on fact tuples and a loss on condition tuples:
$$\ell^s_{TCT} = \ell^s_{fact} + \ell^s_{cond.}, \qquad \ell^s_{fact} = -\sum_{i=1}^{N_s}\sum_{j=1}^{n} \log\big(p^{(r^f_j)}_{i,\,y^f_{i,j}}\big), \qquad \ell^s_{cond.} = -\sum_{i=1}^{N_s}\sum_{j=1}^{m} \log\big(p^{(r^c_j)}_{i,\,y^c_{i,j}}\big) \qquad (7)$$
where n and m are the number of fact and condition tag sequences for the sentence s, respectively. The overall loss function for optimization is:
$$\ell = \ell_{RNT} + \ell_{TCT} = \sum_{s \in S}\big(\ell^s_{RNT} + \ell^s_{TCT}\big) \qquad (8)$$
where S is the set of statement sentences.
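A short sketch of the per-sentence objective in Eqs. (6)-(8); the function signature and tensor conventions (per-token probability rows, integer gold-tag indices) are assumptions for illustration.

```python
import torch

def sentence_loss(p_f, p_c, y_f, y_c, tct_probs, tct_gold):
    """Sketch of the joint loss for one sentence: token-level NLL over the two RNT
    heads (Eq. (6)) plus NLL over every per-relation TCT sequence (Eq. (7));
    summing over sentences gives Eq. (8)."""
    nll = lambda p, y: -torch.log(p.gather(-1, y.unsqueeze(-1)).squeeze(-1)).sum()
    loss = nll(p_f, y_f) + nll(p_c, y_c)
    for p_seq, y_seq in zip(tct_probs, tct_gold):
        loss = loss + nll(p_seq, y_seq)
    return loss
```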
Training details: On one hand, Equations (6) and (7) show that the error signal can be propagated from the RNT/TCT layers back to the encoder-decoder model. On the other hand, the RNT layer specifies the relation names, i.e., the tokens with tags "B/I-f2p" and "B/I-c2p", for each tag sequence in the TCT layer, so we cannot obtain smooth gradients for back-propagation from the TCT layer to the RNT layer. Therefore, to achieve good learning effectiveness, the quality of relation name prediction has to be secured beforehand: we pre-train the RNT layer with the multi-input module until the relation name tag prediction achieves a higher-than-0.8 F1 score. Then we plug the TCT layer onto the RNT layer and train the entire framework to generate the multi-output tag sequences.
Experiments
We evaluate the performance of condition/fact tag prediction and tuple extraction for the proposed MIMO model, its variants, and state-of-the-art models on the newly annotated BioCFE dataset, and in a transfer setting on the BioNLP2013 dataset.
Experimental Setup
Datasets: Statistics of BioCFE were given in Section 2. Additionally, attribute-related tags account for 11.7% and 9.4% of the non-"O" tags in fact and condition tuples, respectively, so it is important to distinguish concepts from attributes. To the best of our knowledge, this is the first time that conditional information has been carefully annotated on biomedical literature. We use the system in Figure 2 to annotate a subset of the BioNLP2013 Cancer Genetics (CG) task dataset (Nédellec et al., 2013), yielding 197 fact tuples and 173 condition tuples. We use this BioNLP dataset as an extra test set for fact and condition tuple extraction; the model is not trained on it. Validation: The training:validation:test ratio is 60:8:32. For BioCFE, the evaluation set has 242 fact tuples and 209 condition tuples (on average, from 108 sentences). We repeat the split five times, evaluate the performance, and report average results. Evaluation metrics: For tag prediction, we use the standard metrics precision, recall, and F1 score. We observe similar trends for Micro F1 as for Macro F1, so we report Macro F1 only. For evaluating tuple extraction, we use pair-wise comparison to match the extracted and ground-truth tuples and evaluate the correctness of the tuple's five slots using the same metrics. Baselines: We compare with statistical sequence labeling methods, Structured Support Vector Machine (SVM) (Tsochantaridis et al., 2005) and Conditional Random Field (CRF) (Lafferty et al., 2001), and with a neural sequence labeling method, BiLSTM-LSTMd (Zheng et al., 2017), whose encoder we also replace with BERT (Devlin et al., 2018) to obtain a more competitive baseline. We further compare against two renowned OpenIE systems, Stanford OpenIE (Angeli et al., 2015) and AllenNLP OpenIE (Stanovsky et al., 2018), each followed by a condition/fact classification.
For fairness, we enhance the statistical sequence labeling models with the multi-input signals and train them for fact tuple and condition tuple extraction separately. In the neural baselines (BiLSTM-LSTMd and BERT-LSTMd), fact extraction and condition extraction share the encoder-decoder model and use separate parameters in the linear-softmax layer. Hyperparameters: The multi-input module has a BiLSTM/BERT encoder and an LSTM decoder. The word embeddings were obtained from GloVe (Pennington et al., 2014) with dimension $d_{WE} = 50$. The language model dimension is $d_{LM} = 200$, the POS tag embedding size is $d_{POS} = 6$, and the CAP tag embedding size is $d_{CAP} = 3$. The number of LSTM units in the encoding layer is 300, and the number of transformer units in the BERT encoding layer is 768.
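The listed dimensions suggest a per-token encoder input that combines word, language-model, POS, and concept/attribute/phrase (CAP) signals. The sketch below uses simple concatenation only to make the dimensions concrete; the actual model combines these signals through multi-input gates (cf. Figure 1), and the class and argument names here are our own.

```python
import torch
import torch.nn as nn

class MultiInputEmbedder(nn.Module):
    """Concatenates the per-token input signals before the BiLSTM/BERT encoder (illustrative)."""

    def __init__(self, vocab_size, n_pos_tags, n_cap_tags,
                 d_we=50, d_lm=200, d_pos=6, d_cap=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_we)   # GloVe-initialized in the paper
        self.pos_emb = nn.Embedding(n_pos_tags, d_pos)
        self.cap_emb = nn.Embedding(n_cap_tags, d_cap)
        self.out_dim = d_we + d_lm + d_pos + d_cap

    def forward(self, word_ids, lm_states, pos_ids, cap_ids):
        # word_ids/pos_ids/cap_ids: (seq_len,) index tensors
        # lm_states: (seq_len, d_lm) hidden states from a pretrained language model
        return torch.cat([self.word_emb(word_ids), lm_states,
                          self.pos_emb(pos_ids), self.cap_emb(cap_ids)], dim=-1)
```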
Results on BioCFE
In this section, we present the overall performance, an ablation study, an error analysis, and efficiency results.

Overall Performance

Table 1 shows that the proposed multi-input multi-output sequence labeling model with a BERT encoder consistently performs best over all the baselines on tag prediction and tuple extraction. Compared to BiLSTM-LSTMd, the BiLSTM-based MIMO improves the F1 score relatively by 7.1% on tag prediction and by 8.8% on tuple extraction; compared to BERT-LSTMd, the BERT-based MIMO improves F1 by 4.7% and 6.2% on the two tasks, respectively. The BERT encoder clearly brings a significant improvement (by 16.9-17.2% on tag prediction and 7.7-10.3% on tuple extraction), and the MIMO design further improves on it. Neural sequence labeling models perform better than the OpenIE systems and the statistical methods, as they adapt more readily to learning structures with the new tag schema; an OpenIE method followed by a condition/fact classification is not effective.
Compared to BERT-LSTMd, the BERT-based MIMO improves precision and recall relatively by 8.3% and 1.3% on tag prediction, and by 3.1% and 9.3% on tuple extraction, respectively. When the tags are predicted more precisely, the tuple's five slots are filled more accurately, and we obtain more complete tuples.
We also observe that the improvements on condition tags/tuples are consistently larger than the improvements on fact tags/tuples, which demonstrates that the MIMO design better recognizes the role of conditions in the statement sentences.

Ablation Study

Table 2 compares variants of the proposed model to evaluate the effectiveness of the following components: (1) the multi-input sequences, using none, one (LM, POS, or CAP), a double combination, or the triple combination; (2) the multi-input encoder model, BiLSTM or BERT; and (3) the multi-output module, with the RNT layer only (generating one fact tag sequence and one condition tag sequence) or a combination of the RNT and TCT layers (generating multiple sequences for each tuple type).
Multi-input sequences: When the choices of the encoder model and multi-output layers are fixed, we observe that the triple combination of input sequences performs better than the double combinations, and the double combinations outperform a single input. An additional sequence yields a relative F1 improvement of 1.0-2.4%, and the triple combination improves F1 relatively by 3.2-4.1%. This demonstrates that the three types of input sequences encode complementary information for learning dependencies in the proposed tag schema. First, the language model learns the dependencies between a token and its predecessors in distant contexts; having the LM sequence helps recognize subjects and objects relative to the relation names and reduces false positives for "B/I-X1Z" and "B/I-X3Z". Second, the POS tag encodes the token's syntactic features, and having the POS sequence improves the precision of tag prediction. For example, verbs and prepositions (e.g., "in", "during") often act as the relation name of facts and conditions, respectively; conjunction words (e.g., "that", "which") indicate subordinate clauses, so the noun phrase before the conjunction word is likely to be the subject of the tuple given by the clause. Third, the previously detected concepts, attribute names, and phrases are highly useful for tagging the subject and object slots; in other words, the tags "B/I-c" and "B/I-a" in the CAP sequence are strongly associated with the target tags "B/I-XYc" and "B/I-XYa", respectively. Encoder in the multi-input module: Comparing the middle three columns (BiLSTM-based encoder) and the right-hand three columns (BERT-based encoder), one can easily see the significant improvement brought by the BERT model. Layers in the multi-output module: When the multi-output module has both RNT and TCT layers, the F1 score is relatively 1.4-5.0% higher than for the models with the RNT layer only, and the recall improves relatively by 1.5-9.0%. The TCT layer, which generates multiple tag sequences for each tuple type (i.e., fact and condition), thus plays a very important role in recognizing multiple tuples in one statement sentence.

Error Analysis

Figure 3 presents the confusion matrices produced by the BERT-based MIMO when predicting non-"O" tags for facts and conditions, respectively. The columns are predicted tags and the rows are actual ones; a perfect result would be a diagonal matrix.
We observe that the numbers on the diagonal are consistently larger than the other numbers in the corresponding row and column. The accuracy scores are 0.905 for predicting fact tags and 0.908 for predicting condition tags. Of the 182 actual "B-f2p" tokens, the model predicted 175 as "B-f2p"; of the 186 actual "B-c2p" tokens, it mispredicted only one as "I-c1c" and one as "I-c3c". This demonstrates the high accuracy (0.961 and 0.989) of extracting relation names for multi-output generation. The ovals in each confusion matrix mark the most significant type of error: for a small set of actual subjects, the model predicted them as objects, and vice versa, although the fact/condition role and the concept role were predicted correctly.
The dashed circles show the second most frequent type of error. Of the actual "I-f2p" tokens, the model predicted 7 as "B-f2p"; of the actual "I-c2p" tokens, it predicted 6 as "B-c2p". This was mainly caused by missing the beginning word of the relational phrases. Of the actual "B-f3a" tokens, the model predicted 6 as "I-f2p". Future work will aim at improving the prediction of the boundaries of long relational phrases.
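The per-tag figures quoted above (e.g., 175/182 ≈ 0.96 for "B-f2p" and 184/186 ≈ 0.99 for "B-c2p") follow directly from the rows of the confusion matrix. A small helper illustrating the computation (written by us for clarity, not part of the paper's code):

```python
def per_tag_recall(confusion, tag_index):
    """Fraction of tokens with a given gold tag that were predicted correctly.

    confusion[i][j] counts tokens whose actual tag is i and whose predicted tag is j.
    """
    row = confusion[tag_index]
    total = sum(row)
    return confusion[tag_index][tag_index] / total if total else 0.0
    # e.g., a row with 175 correct predictions out of 182 tokens gives ~0.96
```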
Efficiency
All the experiments were conducted on 16 graphics cards (GeForce GTX 1080 Ti), with each individual model using only 1 GPU. Each model was trained for 1,000 epochs. For the BiLSTM-LSTMd MIMOs, pre-training took 2.4 hours and re-training (the TCT layer) took 0.4 hours. For the best-performing BERT-LSTMd MIMOs, pre-training took 3.5 hours and re-training took 0.9 hours. It took 5.7 hours to extract fact and condition tuples from 141 million sentences in the MEDLINE text data, which is comparable with existing approaches in terms of scalability.
Results on BioNLP2013
As shown in Table 3, the BERT-LSTMd MIMO model achieves an F1 score of 0.790 on tuple extraction from BioNLP2013. Note that the model was trained on BioCFE that has no overlapping sentence with BioNLP2013. This score is comparable with the testing F1 score on the BioCFE (0.808), which demonstrates the effectiveness and reliability of the proposed model.
Our model improves the F1 score relatively by 4.2% over the best baseline, BERT-LSTMd. The improvement on recall is more substantial: it improves recall relatively by 5.8%. This is due to the design of the multi-output module: the TCT layer generates multiple tag sequences based on the relation names predicted by the RNT layer, and a token in a statement sentence may play different roles in different tuples of the same type (fact or condition). For example, given the following statement sentence:
"Immunohistochemical staining of the tumors demonstrated a decreased number of blood vessels in the treatment group versus the controls."
The proposed model is able to find one fact tuple and two condition tuples precisely:
- Fact 1: ({tumors: immunohistochemical staining}, demonstrated, {blood vessels: decreased number})
- Condition 1: (blood vessels, in, treatment group)
- Condition 2: (treatment group, versus, controls)
Note that the concept "treatment group" acts as the object of Condition Tuple 1 (with tags "B/I-c3c") and as the subject of Condition Tuple 2 (with tags "B/I-c1c"). The multi-output design handles this case while the other models could not. Compared with BioCFE: On BioCFE, the F1 score on condition tuple extraction is slightly higher than that on fact tuple extraction (81.64 vs. 79.94), whereas on BioNLP2013 we observe the opposite (78.58 vs. 79.42). The scores are still comparable, but looking at the error cases, we find that most of the false condition-tuple predictions come from long sentences (more than 30 words); 35% of the sentences in BioNLP2013 are long, while only 5% in BioCFE are. Modeling long dependencies is always challenging for IE, especially for condition extraction, and we will study it in future work.
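Represented as simple data structures, the tuples extracted for this sentence would look roughly as follows (an illustrative encoding, not the system's internal format). It makes explicit how the span "treatment group" fills the object slot of one condition and the subject slot of the other:

```python
fact_tuples = [
    {"subject": {"concept": "tumors", "attribute": "immunohistochemical staining"},
     "relation": "demonstrated",
     "object": {"concept": "blood vessels", "attribute": "decreased number"}},
]
condition_tuples = [
    {"subject": "blood vessels", "relation": "in", "object": "treatment group"},
    {"subject": "treatment group", "relation": "versus", "object": "controls"},
]

# The same span therefore carries different tags in different output sequences,
# e.g. "B/I-c3c" in the first condition sequence and "B/I-c1c" in the second.
shared = {c["object"] for c in condition_tuples} & {c["subject"] for c in condition_tuples}
print(shared)  # {'treatment group'}
```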
A Visualized Case Study
A scientific knowledge graph enables effective search and exploration, and it is important to represent in the graph the conditions under which the corresponding fact is valid. Having applied our model to the large MEDLINE dataset, Figure 4 visualizes the fact and condition tuples extracted from four statement sentences about "cell proliferation". On the left side, we find that (1) "VPA treatment" and the "incubation" of "HDLs" increased cell proliferation, while (2) "Chlorin e6-PDT" and the "inhibition" of "MiR-199a-5p" decreased cell proliferation. On the right, we see the conditions of the factual claims: they describe the methodology of the observation (e.g., "using", "in combination with") or its context (e.g., "in" a specific disease or "from" specific animals). In other cases, we find that temperature and pH values are detected as the conditions of observations.
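As an illustration of how such tuples can be assembled into a graph snapshot like Figure 4, the following sketch groups simplified three-slot tuples around a shared concept. The tuples and grouping scheme are hypothetical and only meant to show the idea; the full schema has five slots per tuple.

```python
from collections import defaultdict

def build_graph(fact_tuples, condition_tuples):
    """Group fact edges by their object concept and condition edges by their subject concept."""
    graph = defaultdict(lambda: {"facts": [], "conditions": []})
    for subj, rel, obj in fact_tuples:
        graph[obj]["facts"].append((subj, rel, obj))
    for subj, rel, obj in condition_tuples:
        graph[subj]["conditions"].append((subj, rel, obj))
    return graph

# Hypothetical tuples in the spirit of Figure 4:
facts = [("VPA treatment", "increased", "cell proliferation"),
         ("Chlorin e6-PDT", "decreased", "cell proliferation")]
conditions = [("cell proliferation", "in", "a specific disease")]
graph = build_graph(facts, conditions)
print(len(graph["cell proliferation"]["facts"]))  # 2
```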
Related Work

Scientific Information Extraction
Information extraction in scientific literature, e.g., in computer science, biology, and chemistry, has been receiving much attention in recent years. ScienceIE in computer science focuses on concept recognition and factual relation extraction (Luan et al., 2017; Gábor et al., 2018; Luan et al., 2018). ScienceIE in biological literature aims at identifying the relationships between biological concepts (i.e., proteins, diseases, drugs, and genes) (Kang et al., 2012; Xu et al., 2018). Rule-based approaches were used in early studies (Rindflesch and Fiszman, 2003; Kang et al., 2012). Recently, a wide range of neural network models have been proposed and have outperformed traditional methods (Wang et al., 2018b; Xu et al., 2018; Liu et al., 2018). Wang et al. (2018b) investigated different kinds of word embeddings on different NLP tasks in the biological domain. Liu et al. (2018) employed attention-based neural networks to extract chemical-protein relations. Xu et al. (2018) used a BiLSTM model to recognize drug interactions. In our work, we extract biological relational facts as well as their conditions; the condition tuples are essential to interpreting the factual claims.
Open-Domain IE
Open IE refers to the extraction of (subject, relation, object)-triples from plain text (Angeli et al., 2015;Stanovsky et al., 2018;Saha et al., 2018;Wang et al., 2018a). The schema for the relations does not need to be specified in advance. Distant supervision has been widely used because the size of the benchmark data is often limited (Banko et al., 2007;Wu and Weld, 2010). Stanovsky et al. (2018) proposed supervised neural methods for OpenIE. The idea was to transform annotated tuples into tags and learn via sequence tagging. We create a new tag schema and propose a novel sequence labeling framework.
Sequence Labeling for IE
Statistical models have been studied for a long time, including Hidden Markov Models (HMM), Support Vector Machines (SVM), and Conditional Random Fields (CRF) (Lafferty et al., 2001; Tsochantaridis et al., 2005; Passos et al., 2014; Luo et al., 2015; Li et al., 2018). However, these methods rely heavily on hand-crafted features. Neural network models have since become popular and obtain more promising performance than traditional statistical methods (Yang and Mitchell, 2017; Zheng et al., 2017; Wang et al., 2019; Yu et al., 2019), so we use them as strong baselines.
Conclusions
We present a new problem of finding conditional information in scientific statements. We created a new tag schema for jointly extracting condition and fact tuples from scientific text, and proposed a multi-input multi-output sequence labeling model that utilizes results from well-established related tasks and extracts a variable number of facts/conditions. Our model yields improvements over all the baselines on a newly annotated dataset, BioCFE, and a public dataset, BioNLP2013. We argue that structured representations of knowledge, such as fact/condition tuples, for scientific statements will enable more intelligent downstream applications. In future work, we will explore the use of the structured tuples to bridge the gap between text content and knowledge-based applications, such as knowledge-based scientific literature search.
Figure 1: Our framework has two modules: (1) a multi-input module (bottom) based on a multi-head encoder-decoder model with multi-input gates; (2) a multi-output module (top) with a relation name tagging layer and a tuple completion tagging layer.
Figure 2: Annotation by four steps: (1) merge token(s) into a span; ...
$y^f_{i,j}$ and $y^c_{i,j}$ denote the tag of the $i$-th token in the $j$-th fact and condition tag sequence, respectively.
Table 2: The proposed MIMO that employs (a) multi-input Language Models, POS tags, and Concept-Attribute-Phrase sequences, (b) multi-output tag sequences, and (c) a BERT-based encoder performs the best on tuple extraction.
Figure 3: Confusion matrices on predicting fact tags (top) and condition tags (bottom) in the BioCFE data.
Figure 4: Structuring tuples detected from four statement sentences that mention "cell proliferation" into a snapshot of a scientific knowledge graph, with fact tuples on the left and condition tuples on the right.
Table 1: The proposed MIMO outperforms existing methods on tag prediction and tuple extraction in the BioCFE dataset. The MIMO with a BERT-based encoder performs the best. Higher scores are better. Each cell reports Prec. / Rec. / F1 (F1 Fact, F1 Cond.).

Methods | Tag Prediction (%) | Tuple Extraction (%)
Allennlp OpenIE (Stanovsky et al., 2018) | - / - / - | 42.60 / 38.22 / 40.29 (-, -)
Stanford OpenIE (Angeli et al., 2015) | - / - / - | 47.11 / 41.62 / 44.19 (-, -)
Structured SVM (Tsochantaridis et al., 2005) | 32.68 / 25.80 / 28.83 (32.76, 24.71) | 47.62 / 46.15 / 46.87 (45.01, 48.72)
CRF (Lafferty et al., 2001) | 60.07 / 41.92 / 49.37 (56.23, 41.87) | 65.19 / 62.44 / 63.78 (64.07, 63.44)
BiLSTM-LSTMd (Zheng et al., 2017) | 61.00 / 56.26 / 58.53 (65.16, 51.78) | 71.57 / 66.55 / 68.97 (69.51, 68.41)
BERT-LSTMd | 70.07 / 70.19 / 70.13 (74.30, 65.88) | 78.64 / 73.67 / 76.08 (76.14, 75.99)
MIMO (BiLSTM based) | 67.80 / 58.24 / 62.66 (66.67, 58.58) | 75.35 / 74.67 / 75.01 (74.91, 75.10)
MIMO (BERT based) | 75.91 / 71.08 / 73.41 (76.01, 70.75) | 81.06 / 80.53 / 80.79 (79.94, 81.64)
[Figure 3 data omitted: confusion matrices over the fact tags (B/I-f1c, B/I-f1a, B/I-f2p, B/I-f3c, B/I-f3a) and the condition tags (B/I-c1c, B/I-c1a, B/I-c2p, B/I-c3c, B/I-c3a), with predicted tags as columns and actual tags as rows; panel (a) shows fact tags and panel (b) shows condition tags.]
Table 3: The BERT-LSTMd MIMO model performs the best on tuple extraction in BioNLP2013.
[Table 2 data omitted: for each combination of the input sequences (LM, POS, CAP) and multi-output setting (MO), precision, recall, and F1 (F1 Fact, F1 Cond.) on tuple extraction are reported for the BiLSTM-based encoder (%) and the BERT-based encoder (%); the full MIMO configuration reaches 75.01 and 80.79 F1 with the two encoders, respectively.]
References

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344-354.

Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI, volume 7, pages 2670-2676.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Kata Gábor, Davide Buscaldi, Anne-Kathrin Schumann, Behrang QasemiZadeh, Haifa Zargayouna, and Thierry Charnois. 2018. SemEval-2018 task 7: Semantic relation extraction and classification in scientific papers. In SemEval, pages 679-688.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR, pages 770-778.

Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339.

Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M. Kaplan, Timothy P. Hanratty, and Jiawei Han. 2017. MetaPAD: Meta pattern discovery from massive text corpora. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 877-886. ACM.

Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, and Meng Jiang. 2019. The role of "condition": A novel scientific knowledge graph representation and construction model. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1634-1642. ACM.

Ning Kang, Bharat Singh, Zubair Afzal, Erik M. van Mulligen, and Jan A. Kors. 2012. Using rule-based natural language processing to improve disease normalization in biomedical text. Journal of the American Medical Informatics Association, 20(5):876-881.

Matthieu Labeau, Kevin Löser, and Alexandre Allauzen. 2015. Non-lexical neural architecture for fine-grained POS tagging. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 232-237.

John Lafferty, Andrew McCallum, and Fernando C.N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.

Qi Li, Meng Jiang, Xikun Zhang, Meng Qu, Timothy P. Hanratty, Jing Gao, and Jiawei Han. 2018. TruePIE: Discovering reliable patterns in pattern-based information extraction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1675-1684. ACM.

Sijia Liu, Feichen Shen, Ravikumar Komandur Elayavilli, Yanshan Wang, Majid Rastegar-Mojarad, Vipin Chaudhary, and Hongfang Liu. 2018. Extracting chemical-protein relations using attention-based neural networks. Database, 2018:bay102.

Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Yi Luan, Mari Ostendorf, and Hannaneh Hajishirzi. 2017. Scientific information extraction with semi-supervised neural tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2641-2651, Copenhagen, Denmark. Association for Computational Linguistics.

Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 879-888.

David L. Miller. 1947. The nature of scientific statements. Philosophy of Science, 14(3):219-223.

Claire Nédellec, Robert Bossy, Jin-Dong Kim, Jung-Jae Kim, Tomoko Ohta, Sampo Pyysalo, and Pierre Zweigenbaum. 2013. Overview of BioNLP shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 1-7. Association for Computational Linguistics.

Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. arXiv preprint arXiv:1404.5367.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.

Thomas C. Rindflesch and Marcelo Fiszman. 2003. The interaction of domain knowledge and linguistic structure in natural language processing: interpreting hypernymic propositions in biomedical text. Journal of Biomedical Informatics, 36(6):462-477.

Swarnadeep Saha et al. 2018. Open information extraction from conjunctive sentences. In COLING, pages 2288-2299.

Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Transactions on Knowledge and Data Engineering, 30(10):1825-1837.

Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885-895.

Victor N. Tomilin, Alena L. Cherezova, Yuri A. Negulyaev, and Svetlana B. Semenova. 2016. TRPV5/V6 channels mediate Ca2+ influx in Jurkat T cells under the control of extracellular pH. Journal of Cellular Biochemistry, 117(1):197-206.

Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(Sep):1453-1484.

Xuan Wang, Yu Zhang, Qi Li, Yinyin Chen, and Jiawei Han. 2018a. Open information extraction with meta-pattern discovery in biomedical literature. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pages 291-300. ACM.

Xueying Wang, Haiqiao Zhang, Qi Li, Yiyu Shi, and Meng Jiang. 2019. A novel unsupervised approach for precise temporal slot filling from incomplete and noisy temporal contexts. In The World Wide Web Conference, pages 3328-3334. ACM.

Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul Kingsbury, and Hongfang Liu. 2018b. A comparison of word embeddings for the biomedical natural language processing. Journal of Biomedical Informatics, 87:12-20.

Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118-127. Association for Computational Linguistics.

Bo Xu, Xiufeng Shi, Zhehuan Zhao, and Wei Zheng. 2018. Leveraging biomedical resources in Bi-LSTM for drug-drug interaction extraction. IEEE Access, 6:33432-33439.

Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in LSTMs for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436-1446.

Wenhao Yu, Zongze Li, Qingkai Zeng, and Meng Jiang. 2019. Tablepedia: Automating PDF table reading in an experimental evidence exploration and analytic system. In The World Wide Web Conference, pages 3615-3619. ACM.

Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227-1236. |
262,464,409 | Building Conversational Agents with Basilica | Basilica is an event-driven software architecture for creating conversational agents as a collection of reusable components. Software engineers and computer scientists can use this general architecture to create increasingly sophisticated conversational agents. We have developed agents based on Basilica that have been used in various application scenarios and foresee that agents build on Basilica can cater to a wider variety of interactive situations as we continue to add functionality to our architecture. | [] | Building Conversational Agents with Basilica
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 2009. 2009
Rohit Kumar [email protected]
Language Technologies Institute Carnegie Mellon University Pittsburgh
15213PAUSA
Carolyn P Rosé [email protected]
Language Technologies Institute Carnegie Mellon University Pittsburgh
15213PAUSA
Building Conversational Agents with Basilica
Proceedings of NAACL HLT 2009: Demonstrations
NAACL HLT 2009: DemonstrationsBoulder, ColoradoAssociation for Computational LinguisticsJune 2009. 2009
Basilica is an event-driven software architecture for creating conversational agents as a collection of reusable components. Software engineers and computer scientists can use this general architecture to create increasingly sophisticated conversational agents. We have developed agents based on Basilica that have been used in various application scenarios and foresee that agents build on Basilica can cater to a wider variety of interactive situations as we continue to add functionality to our architecture.
Introduction
Conversational Interfaces apply the metaphor of agent to an interface which allows the user to conversationally interact with the machine using natural language through speech or text. The current state of the art in the area of conversational interfaces is largely dominated by spoken dialog systems (SDS). These SDS are most often used for the purpose of accessing information from a database over the telephone. Other common applications of conversational agents include computer aided instruction (CAI) and human-robot interaction (HRI).
Conversational Agents in most of today's SDS, CAI and HRI are designed to work within the scope of specific task domains which allows the scientists and engineers working on such systems to ensure satisfactory and relevant interaction with the user most of the time. Within the task domain, such agents can display intelligent interactive behavior like helping the user use the interface, ask-ing remedial questions (Bohus and Rudnicky, 2005), shaping the user behavior (Tomko and Rosenfeld, 2004) by using alternative phrasing of utterances, responding to user affect (D'Mello et al., 2008) through text, voice and gesture, engaging the user through the display of presence via backchannels (Ward, 1996) and embodiment (Cassell et al., 1999).
As more and more of these intelligent interactive agents get built for the many task domains (Raux et al., 2005; Bohus et al., 2007; Gockley et al., 2005; Amtrak Julie; ...) that surround our everyday life, we observe a gradual transition in the use of conversational agent technology towards a form of situated interaction. One of the characteristic requirements of this transition towards ubiquity of such interactive agents is the capability to sense and trigger behavior in a context-sensitive way.
In most conversational interfaces today, the only trigger used by the agents is that of initiation of conversation usually by sensing user presence through a telephone call, proximity detection or user login into a virtual environment. The initiation event is followed by a scripted task-oriented conversation with the agent. These scripts could be fairly complex depending on the representational formalism underlying the script. Most of the common software architectures/platforms used to create conversational agents like TellMe Studio, Voxeo Prophecy, Olympus (Bohus et al., 2007), DIPPER (Bos and Oka, 2003), etc. use one or more of these presence sensing techniques and one of the many existing scripting languages including VoiceXML, SALT, TuTalk (Jordan et al., 2007) and Ravenclaw (Bohus and Rudnicky, 2003) task specification language among others.
However, in our recent work on building conversational agents situated in collaborative learning environments, we have discovered the need for a software architecture for creating agents that persist in an interactive environment in which human users interact with these agents as well as with each other. In this situation, the agents need to be able to sense many kinds of triggers at many points of time and choose to respond to some of those triggers through a variety of modalities including conversation. This observation was the motivation for creating Basilica which is our architecture for building conversational agents. In section 2, we talk more about the intricacies of Basilica and agents built on this architecture. Section 3 describes some of application scenarios in which we are using Conversational Agents based on Basilica.
Basilica Architecture
In order to meet the need for an architecture that enables development of Conversational Agents as a collection of behavioral components that can sense triggers and respond to those appropriately, we created the Basilica architecture.
In this architecture, we model sensing and responding as two types of components that make up conversational agents. The sensing components, referred to as Filters, observe stimuli from various kinds of input sources and other components; they can also generate stimuli for other components. On the other hand, Actor components generate responsive behavior that may be observed by the user(s) and other components. Basilica provides the software elements required to tie Filters and Actors together through Connections that carry Events over them. We think that many of the state-of-the-art intelligent behaviors listed in section 1 can be implemented as dyads of filter and actor components.
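To make the component model concrete, the following is a minimal sketch of the event-driven Filter/Actor pattern. It is an illustrative Python analogue only; the actual Basilica framework is implemented in Java, and all class, method, and event names below are our own.

```python
class Event:
    """A typed message carried over a Connection, e.g. a chat message or a detected topic."""
    def __init__(self, kind, payload):
        self.kind = kind
        self.payload = payload

class Component:
    def __init__(self):
        self.listeners = []          # Connections to downstream components

    def connect(self, other):
        self.listeners.append(other)

    def emit(self, event):
        for listener in self.listeners:
            listener.receive(event)

    def receive(self, event):        # overridden by concrete Filters and Actors
        pass

class TopicFilter(Component):
    """Filter: observes chat stimuli and emits an event when a tutorable topic is detected."""
    def __init__(self, keywords):
        super().__init__()
        self.keywords = keywords

    def receive(self, event):
        if event.kind == "chat" and any(k in event.payload.lower() for k in self.keywords):
            self.emit(Event("topic_detected", event.payload))

class HintActor(Component):
    """Actor: responds to detected topics with a hint visible to the user(s)."""
    def receive(self, event):
        if event.kind == "topic_detected":
            print(f"[agent] I can help with that -- ask me about: {event.payload}")

# Wiring a filter and an actor into a tiny agent:
topic_filter = TopicFilter(["rankine", "boiler", "turbine"])
topic_filter.connect(HintActor())
topic_filter.receive(Event("chat", "Should we raise the boiler pressure?"))
```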
The minimal set of behavioral component classes listed above can easily be extended. For example, certain agent designs may need memory components and coordination components which bridge across multiple actors or filters that do not necessarily share events with each other. Timer components may be used to generate regulated stimuli. Besides belonging to one of these classes of components, certain components may act as wrappers to external systems. For example, we use wrapper components to integrate the TuTalk dialog management system (Jordan et al., 2007) for some of the instructive behavior exhibited by our agents. Also, certain components act as wrappers to the environment in which the agent is present. These wrappers help in easily integrating the same agent with multiple environments without having to change any underlying components except the wrappers to the environment.
We believe that fairly intelligent conversational agents can be built for situated interaction applications by incrementally building a large number of behavioral components. Each of these components represent a decomposition of the agent's perceptive and cognitive capabilities. Among the agents we have built using Basilica, we observe that some of these capabilities are common across agents. Hence the corresponding behavioral components get re-used in many cases. Some instances of component re-use are mentioned in Section 3.
Note that recently there has been other work on modeling conversational agents as a decomposition of components. Jaspis (Turunen and Hakulinen, 2003) models the agent as a collection of managers, agents and evaluators which synchronize with each other through transactions. RIME (Nakano et al., 2008) distributes cognitive capabilities across a collection of experts of two types. However, evaluators and agents are configured as a pile of components, whereas our filters and actors are configured as a network; hence, designing conversational agents with Basilica gives the flexibility to change the network topology. Also, while Jaspis agents are stateless, actors in our architecture need not be stateless. In other work on event-based multi-layered architectures (Raux and Eskenazi, 2007), events are used for communication between layers as a means to provide higher reactivity compared to pipeline architectures. While we share this motivation, the definition of events is extended here, as events are used for all kinds of communication, coordination and control in Basilica.
Current Application Scenarios
In 2008, we built three conversational agents to support learners in collaborative learning environments. Also, we are currently using Basilica to develop a cross-lingual assistive agent to support non-Spanish speaking 911 dispatchers in the southern states of the US. In this section, we will discuss these four conversational agents briefly.
CycleTalk is an intelligent tutoring system that helps college sophomores studying Thermodynamics learn about principles of designing Steam cycles. In our recent experiments, we have studied the effectiveness of conversational agents in this intelligent tutoring system (Kumar et al., 2007; Chaudhuri et al., 2008). Students use the system both individually and in pairs. The conversational agent monitors student interaction in a chat room as the students work on solving a design problem. The tutor provides the students with hints to help touch upon all the underlying concepts while the students work on the design exercise. The agent also brings up reflective dialogs when it detects a relevant topic in the students' conversation. One of the problems we observed over the years with the use of instructional dialogs in collaborative environments is that the students tend to ignore the tutoring agent if it interrupts them when they are talking to each other. Basilica helped us in resolving this problem by implementing a component that tells the students that help is available on the topic they are talking about and that they can ask for the dialog support when they are ready. Basilica gives the flexibility to change the intervention strategy used by the agent when it is speaking with more than one student.
In another version of this system, the tutoring agent prompted the students with some motivational prompts occasionally as we observed that many of the students found the design exercise very demanding to complete in the time permitted for this lab exercise. We found that the use of motivational prompts improved the student's attitude towards the automated agent.
We developed another agent to help college-level mathematics students working on problem solving. This agent operates in a collaborative environment which includes a whiteboard. As in the case of the CycleTalk agent, the agent used here also helps the students with hints and dialogs. The components required for those behaviors were reused as-is, with modifications only to their configuration files. Besides these behaviors, the agent coordinates the problem solving sessions for the team by presenting the team with problems as images placed on the whiteboard and helping the students stay on track by answering questions about the amount of time left in the problem solving session.
Recently, we modified the environment wrapper components of our CycleTalk agent and integrated them with a SecondLife application (Weusijana et al., 2008). This integration helps developers of conversational agents create interactive agents in the SecondLife virtual environment.
Finally, in a currently ongoing project, we are building an agent that would interpret Spanish utterances from a distressed 9-1-1 caller and work with a human dispatcher who does not know Spanish to attend to the call. We model the agent in this scenario after a human translator who does not just translate the caller's input to English and vice versa; instead, the translator partners with the dispatcher to provide service to the caller. Partnering conversational agents with a human user to help another human user in a different role is a novel application of interactive agents.

Building Agents using Basilica

Building conversational agents using Basilica involves representing the desired agent as a decomposition of components. Figure 1 shows the components that make up the CycleTalk conversational agent mentioned in Section 3. The rectangles represent Filters and the parallelograms represent Actors. Connections are shown as solid lines; in a detailed design, these lines are annotated with the events they carry.
Once an agent is designed, the actors and filters required for its implementation can either be re-used from the pre-existing components of Basilica or implemented as Java objects that extend the corresponding component class. Often the programming task is limited to implementing handlers and generators for the events received and sent out by the component. Theoretically, the validity of a component can be verified if it can handle and generate all the events specified in the design diagram.
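In practice, adding a new behavior therefore mostly amounts to subclassing a component and filling in its event handler. Reusing the illustrative Component/Event sketch from Section 2 (again a Python analogue of the Java classes, with a hypothetical event type and prompt):

```python
class MotivationalPromptActor(Component):
    """Hypothetical actor that reacts to 'student_frustrated' events with an encouraging prompt."""
    def receive(self, event):
        if event.kind == "student_frustrated":
            self.emit(Event("say", "You're making good progress -- keep going!"))
```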
As we continue to develop more conversational agents on this architecture, we intend to create development tools which would easily translate a design like Figure 1 to the implementation and facilitate validation and debugging of the agent.
Demonstration Outline
The demonstration of our architecture will give the audience an opportunity to interact with the agents we have described in section 3 and discuss how we can design such agents using Basilica. We will have a poster to aid the discussion along with ability to probe into the code underlying the design of these agents. Attendees will be able to understand the process involved in building agents with Basilica and assess the effort required. Additionally, if we have any specialized development tools to automatically map agent design as described in Section 4 to Java code, we will demonstrate those tools. Up to date information about Basilica can be found at http://basilica.rohitkumar.net/wiki/
Figure 1: Components of the CycleTalk Agent
Acknowledgements

This work is supported by NSF REESE/REC grant number 0723580.

References |
258,557,740 | MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset | Relation extraction (RE) is a fundamental task in information extraction, whose extension to multilingual settings has been hindered by the lack of supervised resources comparable in size to large English datasets such as TACRED (Zhang et al., 2017). To address this gap, we introduce the MultiTACRED dataset, covering 12 typologically diverse languages from 9 language families, which is created by machine-translating TACRED instances and automatically projecting their entity annotations. We analyze translation and annotation projection quality, identify error categories, and experimentally evaluate fine-tuned pretrained mono-and multilingual language models in common transfer learning scenarios. Our analyses show that machine translation is a viable strategy to transfer RE instances, with native speakers judging more than 83% of the translated instances to be linguistically and semantically acceptable. We find monolingual RE model performance to be comparable to the English original for many of the target languages, and that multilingual models trained on a combination of English and target language data can outperform their monolingual counterparts. However, we also observe a variety of translation and annotation projection errors, both due to the MT systems and linguistic features of the target languages, such as pronoun-dropping, compounding and inflection, that degrade dataset quality and RE model performance. | [
9206785,
202769250,
202774148,
216869183,
13877296,
233189585,
233189525,
207852344,
239024671
] | MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset
Volume 1: Long Papers. Copyright July 9-14, 2023.
Leonhard Hennig [email protected]
German Research Center for Artificial Intelligence (DFKI) Speech and Language Technology Lab
Philippe Thomas [email protected]
German Research Center for Artificial Intelligence (DFKI) Speech and Language Technology Lab
Sebastian Möller [email protected]
German Research Center for Artificial Intelligence (DFKI) Speech and Language Technology Lab
MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), July 9-14, 2023.
Relation extraction (RE) is a fundamental task in information extraction, whose extension to multilingual settings has been hindered by the lack of supervised resources comparable in size to large English datasets such as TACRED (Zhang et al., 2017). To address this gap, we introduce the MultiTACRED dataset, covering 12 typologically diverse languages from 9 language families, which is created by machine-translating TACRED instances and automatically projecting their entity annotations. We analyze translation and annotation projection quality, identify error categories, and experimentally evaluate fine-tuned pretrained mono-and multilingual language models in common transfer learning scenarios. Our analyses show that machine translation is a viable strategy to transfer RE instances, with native speakers judging more than 83% of the translated instances to be linguistically and semantically acceptable. We find monolingual RE model performance to be comparable to the English original for many of the target languages, and that multilingual models trained on a combination of English and target language data can outperform their monolingual counterparts. However, we also observe a variety of translation and annotation projection errors, both due to the MT systems and linguistic features of the target languages, such as pronoun-dropping, compounding and inflection, that degrade dataset quality and RE model performance.
Introduction
Relation extraction (RE), defined as the task of identifying and classifying semantic relationships between entities from text (cf. Figure 1), is a fundamental task in information extraction (Doddington et al., 2004). Extending RE to multilingual settings has recently received increased interest (Zou et al., 2018;Nag et al., 2021;Chen et al., 2022c), both to address the urgent need for more inclusive NLP systems that cover more languages than just English (Ruder et al., 2019;Hu et al., 2020), as well as to investigate language-specific phenomena and challenges relevant to this task. The main bottleneck for multilingual RE is the lack of supervised resources, comparable in size to large English datasets (Riedel et al., 2010;Zhang et al., 2017), as annotation for new languages is very costly. Most of the few existing multilingual RE datasets are distantly supervised (Köksal and Özgür, 2020;Seganti et al., 2021;Bhartiya et al., 2022), and hence suffer from noisy labels that may reduce the prediction quality of models (Riedel et al., 2010;Xie et al., 2021). Available fully-supervised datasets are small, and cover either very few domain-specific relation types (Arviv et al., 2021;Khaldi et al., 2022), or only a small set of languages (Nag et al., 2021).
To address this gap, and to incentivize research on supervised multilingual RE, we introduce a multilingual version of one of the most prominent supervised RE datasets, TACRED (Zhang et al., 2017). MultiTACRED is created by machine-translating TACRED instances and automatically projecting their entity annotations. Machine translation is a popular approach for generating data in cross-lingual learning (Hu et al., 2020; Nag et al., 2021). Although the quality of machine-translated data may be lower due to translation and alignment errors (Yarmohammadi et al., 2021), it has been shown to be beneficial for classification and structured prediction tasks (Hu et al., 2020; Ozaki et al., 2021; Yarmohammadi et al., 2021).
The MultiTACRED dataset we present in this work covers 12 languages from 9 language families. 1 We select typologically diverse languages which span a large set of linguistic phenomena such as compounding, inflection and pronoun-drop, and for which a monolingual pretrained language model is available. We automatically and manually analyze translation and annotation projection quality in all target languages, both in general terms and with respect to the RE task, and identify typical error categories for alignment and translation that may affect model performance. We find that overall translation quality is judged to be quite good with respect to the RE task, but that e.g. pronoun-dropping, coordination and compounding may cause alignment and semantic errors that result in erroneous instances. In addition, we experimentally evaluate fine-tuned pretrained monoand multilingual language models (PLM) in common training scenarios, using source language (English), target language, or a mixture of both as training data. We also evaluate an English data fine-tuned model on back-translated test instances to estimate the effect of noise introduced by the MT system on model performance. Our results show that in-language training works well, given a suitable PLM. Cross-lingual zero-shot transfer is acceptable for languages well-represented in the multilingual PLM, and combining English and target language data for training considerably improves performance across the board.
To summarize, our work aims to answer the following research questions: Can we reaffirm the usefulness of MT and cross-lingual annotation projection for creating large-scale, high-quality multilingual RE datasets? How do pretrained mono- and multilingual encoders compare to each other, in within-language as well as cross-lingual evaluation scenarios? Answers to these questions can provide insights for understanding language-specific challenges in RE, and further research in cross-lingual representation and transfer learning. The contributions of this paper are:
• We introduce MultiTACRED, a translation of the widely used, large-scale TACRED dataset into 12 typologically diverse target languages: Arabic, German, Spanish, French, Finnish, Hindi, Hungarian, Japanese, Polish, Russian, Turkish, and Chinese.
• We present an evaluation of monolingual, cross-lingual, and multilingual models, measuring target language performance for all 12 languages.
• We present insights into the quality of machine translation for RE, analyzing alignment as well as language-specific errors.
Translating TACRED
We first briefly introduce the original TACRED dataset, and then describe the language selection and automatic translation process. We wrap up with a description of the analyses we conduct to verify the translation quality.
The TACRED dataset
The TAC Relation Extraction Dataset 2 , introduced by Zhang et al. (2017), is a fully supervised dataset of sentence-level binary relation mentions. It consists of 106k sentences with entity mention pairs collected from the TAC KBP 3 evaluations 2009-2014, with the years 2009 to 2012 used for training, 2013 for development, and 2014 for testing. Each sentence is annotated with a head and a tail entity mention, and labeled with one of 41 person- and organization-oriented relation types, e.g. per:title, org:founded, or the label no_relation for negative instances. About 79.5% of the examples are labeled as no_relation. 4 All relation labels were obtained by crowdsourcing, using Amazon Mechanical Turk. Recent work by Alt et al. (2020) and Stoica et al. (2021) improved upon the label quality of the crowd annotations by re-annotating large parts of the dataset.
Automatic Translation
We translate the complete train, dev and test splits of TACRED into the target languages, and in addition back-translate the test split into English to generate machine-translated English test data. Each instance in the original TACRED dataset is a list of tokens, with the head and tail entity arguments of the potential relation specified via token offsets. For translation, we concatenate tokens with whitespace and convert head and tail entity offsets into XML-style markers to denote the arguments' boundaries, as shown in Figure 1. We use the commercial services of DeepL 5 and Google 6 , since both offer the functionality to preserve XML tag markup. Since API costs are similar, we use DeepL for most languages, and only switch to Google for languages not supported by DeepL (at the time we were running the MT). We validate the translated text by checking the syntactic correctness of the XML tag markup, and discard translations with invalid tag structure, e.g. missing or invalid head or tail tag pairs. After translation, we tokenize the translated text using language-specific tokenizers. 7 Finally, we store the translated instances in the same JSON format as the original TACRED English dataset, with fields for tokens, entity types and offsets, label and instance id. We can then easily apply the label corrections provided by e.g. Alt et al. (2020) or Stoica et al. (2021) to any target language dataset by applying the respective patch files.
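To make the annotation projection step concrete, the sketch below shows one way to wrap the head and tail token spans with the XML-style markers before translation, and to read the marked spans back into token offsets afterwards. The helper names, the whitespace tokenizer and the example sentence are ours, not taken from the released code; the markup validation of Section 2.3 is only hinted at by returning None.

```python
import re

def insert_markers(tokens, head, tail):
    """Wrap the head and tail token spans (inclusive start/end indices)
    with XML-style markers, as in Figure 1."""
    out = []
    for i, tok in enumerate(tokens):
        if i == head[0]:
            out.append("<H>")
        if i == tail[0]:
            out.append("<T>")
        out.append(tok)
        if i == head[1]:
            out.append("</H>")
        if i == tail[1]:
            out.append("</T>")
    return " ".join(out)

def project_offsets(translated, tokenize):
    """Read a marked-up MT output back into tokens plus head/tail offsets.
    Returns None if the tag structure is broken (the validation step)."""
    for tag in ("<H>", "</H>", "<T>", "</T>"):
        if translated.count(tag) != 1:
            return None  # missing or duplicated marker -> discard instance
    tokens, head, tail, open_tag = [], [], [], None
    # Split the sentence into markers and text chunks, tracking the open span.
    for piece in re.split(r"(</?[HT]>)", translated):
        if piece in ("<H>", "<T>"):
            open_tag = piece
        elif piece in ("</H>", "</T>"):
            open_tag = None
        elif piece.strip():
            toks = tokenize(piece)
            span = range(len(tokens), len(tokens) + len(toks))
            tokens.extend(toks)
            if open_tag == "<H>":
                head.extend(span)
            elif open_tag == "<T>":
                tail.extend(span)
    if not head or not tail:
        return None
    return tokens, (head[0], head[-1]), (tail[0], tail[-1])

# Example (whitespace tokenizer just for illustration):
tokens = "Douglas Flint will become chairman of HSBC".split()
marked = insert_markers(tokens, head=(0, 1), tail=(6, 6))
print(marked)                              # <H> Douglas Flint </H> will become chairman of <T> HSBC </T>
print(project_offsets(marked, str.split))  # tokens plus projected head/tail offsets
```

In practice the target-language side would of course use the language-specific tokenizers described in Appendix A rather than whitespace splitting.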
We select target languages to cover a wide set of interesting linguistic phenomena, such as compounding (e.g., German), inflection/derivation (e.g., German, Turkish, Russian), pronoun-dropping (e.g., Spanish, Finnish, Polish), and varying degrees of synthesis (e.g., Turkish, Hungarian vs. Chinese). We also try to ensure that there is a monolingual pretrained language model available for each language, which is the case for all languages except Hungarian. The final set of languages in MultiTACRED is: German, Finnish, Hungarian, French, Spanish, Arabic, Hindi, Japanese, Chinese, Polish, Russian, and Turkish. Table 6 in Appendix A lists key statistics per language.
Translation Quality Analysis
To verify the overall quality of the machine-translated data, we also manually inspect translations. For each language, we randomly sample 100 instances from the train split. For each sampled instance, we display the source (English) text with entity markup (see Figure 1 for the format), the target language text with entity markup, and the relation label.
We then ask native speakers to judge the translations by answering two questions: (Q1) Does the translated text meaningfully preserve the semantic relation of the English original, regardless of minor translation errors? 8 (Q2) Is the overall translation linguistically acceptable for a native speaker? Human judges are instructed to read both the English source and the translation carefully, and then to answer the two questions with either yes or no. They may also add free-text comments, e.g. to explain their judgements or to describe translation errors. The samples of each language are judged by a single native speaker. Appendix B gives additional details.
In addition, we conduct a manual analysis of the automatically discarded translations, using a similar-sized random sample from the German, Russian and Turkish train splits, to identify possible reasons and error categories. These analyses are performed by a single trained linguist per language, who is also a native speaker of that language, with joint discussions to synthesize observations. Results of both analyses are presented in Section 4.1.
Experiments
In this section, we describe the experiments we conduct to answer the research questions "How does the performance of language-specific models compare to the English original?", "How does the performance of language-specific models compare to multilingual models such as mBERT trained on the English source data?", and "How does the performance change when including target-language data for training?". We first introduce the training scenarios, and then give details on the choice of models and hyperparameters, as well as the training process.
Training scenarios
We evaluate the usefulness of the translated datasets by following the most prevalent approach of framing RE as a sentence-level supervised multi-class classification task. Formally, given a relation set R and a text x = [x_1, x_2, ..., x_n] (where x_1, ..., x_n are tokens) with two disjoint spans e_h = [x_i, ..., x_j] and e_t = [x_k, ..., x_l] denoting the head and tail entity mentions, RE aims to predict the relation r ∈ R between e_h and e_t, or assign the no_relation class if no relation in R holds. Similar to prior work (e.g., Nag et al. (2021)), we evaluate relation extraction models in several different transfer learning setups, which are described next.
Monolingual: We evaluate the performance of language-specific PLMs for each of the 12 target languages, plus English, where the PLM is fine-tuned in a supervised manner on the train split of the respective language.

Cross-lingual: We evaluate the performance of a multilingual mBERT model on the test split of each of the 12 target languages, plus English, after training on the English train split.

Mixed / Multilingual: We evaluate the performance of a multilingual mBERT model on the test split of each of the 12 target languages, after training on the complete English train split and a variable portion of the train split of the target language, as suggested e.g. by Nag et al. (2021). We vary the amount of target language data in {5%, 10%, 20%, 30%, 40%, 50%, 100%} of the available training data. When using 100%, we effectively double the size of the training set, "duplicating" each training instance (once in English, once in the target language). A sketch of how such a mixed training set can be assembled follows below.

Back-translation: Finally, we also evaluate the performance of a BERT model fine-tuned on the original (untranslated) English train split on the test sets obtained by back-translating from each target language.
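The mixed/multilingual training sets can be assembled by a simple sampling step. The sketch below assumes the splits are stored in TACRED's JSON format; all file names and the random seed are illustrative.

```python
import json
import random

def load_instances(path):
    """Each (Multi)TACRED split is assumed to be stored as one JSON list of instances."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def build_mixed_train(en_train_path, tgt_train_path, fraction, seed=42):
    """Union of the full English train split and a random fraction of the
    target-language train split (fraction in {0.05, 0.1, ..., 1.0})."""
    en = load_instances(en_train_path)
    tgt = load_instances(tgt_train_path)
    rng = random.Random(seed)
    sampled = rng.sample(tgt, k=int(len(tgt) * fraction))
    mixed = en + sampled
    rng.shuffle(mixed)
    return mixed

# e.g. English + 20% of the German training data
train_data = build_mixed_train("train_en.json", "train_de.json", fraction=0.2)
```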
Training Details and Hyperparameters
We implement our experiments using the Hugging Face (HF) Transformers library (Wolf et al., 2020), Hydra (Yadan, 2019) and PyTorch (Paszke et al., 2019). 9 Due to the availability of pretrained models for many languages and to keep things simple, we use BERT as the base PLM (Devlin et al., 2019). We follow Baldini Soares et al. (2019) and enclose the subject and object entity mentions with special token pairs, modifying the input to become "[HEAD_START] subject [HEAD_END] . . . [TAIL_START] object [TAIL_END]". In addition, we append the entity types of subject and object to the input text as special tokens, after a separator token: ". . . [SEP] [HEAD=type] [SEP] [TAIL=type]", where type is the entity type of the respective argument. We use the final hidden state representation of the [CLS] token as the fixed length representation of the input sequence that is fed into the classification layer.
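A minimal sketch of this input construction and model setup is given below. It uses the Auto* classes rather than the BertForSequenceClassification class mentioned in Appendix C (equivalent for BERT checkpoints); the example sentence, the reduced set of typed entity tokens and num_labels=42 (41 relations plus no_relation) reflect our reading of the setup, not the released code.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-cased"   # stand-in; the per-language models are listed in Table 7
NUM_LABELS = 42                  # 41 relation types + no_relation

def mark_example(tokens, head, tail, head_type, tail_type):
    """Insert argument boundary markers and build the typed-entity suffix."""
    out = []
    for i, tok in enumerate(tokens):
        if i == head[0]:
            out.append("[HEAD_START]")
        if i == tail[0]:
            out.append("[TAIL_START]")
        out.append(tok)
        if i == head[1]:
            out.append("[HEAD_END]")
        if i == tail[1]:
            out.append("[TAIL_END]")
    return " ".join(out), f"[HEAD={head_type}] [SEP] [TAIL={tail_type}]"

text, types = mark_example(
    "Douglas Flint will become chairman of HSBC".split(),
    head=(0, 1), tail=(6, 6), head_type="PERSON", tail_type="ORGANIZATION")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Boundary markers and typed entity tokens must be added to the vocabulary,
# otherwise they are split into subwords (only the tokens used here are added).
tokenizer.add_special_tokens({"additional_special_tokens": [
    "[HEAD_START]", "[HEAD_END]", "[TAIL_START]", "[TAIL_END]",
    "[HEAD=PERSON]", "[TAIL=ORGANIZATION]"]})

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)
model.resize_token_embeddings(len(tokenizer))

# Passing two sequences lets the tokenizer insert the [SEP] between text and types;
# the classification head is applied on top of the [CLS] representation.
enc = tokenizer(text, types, truncation=True, max_length=128, return_tensors="pt")
logits = model(**enc).logits
```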
We train with a batch size of 8 for 5 epochs, and optimize for cross-entropy. The maximum sequence length is 128 for all models. We use AdamW with a scenario-specific learning rate, no warmup, β1 = 0.9, β2 = 0.999, ε = 1e-8, and linear decay of the learning rate. Other hyperparameter values, as well as scenario-specific learning rates and HF model identifiers for the pretrained BERT models, are listed in Appendix C.
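Put together, the fine-tuning loop with these settings can be sketched as follows; it continues the sketch above and assumes a PyTorch DataLoader over the encoded instances. Any setting not stated in the paper is left at the library defaults here.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def train(model, train_loader, lr, epochs=5, device="cpu"):
    """Fine-tune with the settings described above: AdamW, no warmup,
    linear learning-rate decay, cross-entropy loss."""
    model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  betas=(0.9, 0.999), eps=1e-8)
    total_steps = epochs * len(train_loader)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0, num_training_steps=total_steps)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for batch in train_loader:   # batch size 8, max sequence length 128
            optimizer.zero_grad()
            logits = model(input_ids=batch["input_ids"].to(device),
                           attention_mask=batch["attention_mask"].to(device)).logits
            loss = loss_fn(logits, batch["labels"].to(device))
            loss.backward()
            optimizer.step()
            scheduler.step()
    return model
```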
We use micro-F1 as the evaluation metric, and report the median result of 5 runs with different, fixed random seeds. For all experiments, we use the revised version of TACRED presented by Alt et al. (2020), which fixes a large portion of the dev and test labels. 10 We report scores on the test set in the respective target language, denoted as test_L. Due to the automatic translation and validation, training and test sets differ slightly across languages, and absolute scores are thus not directly comparable across languages. We therefore also report scores on the intersection test set of instances available in all languages (test_∩). This test set contains 11,874 instances, i.e. 76.6% of the original test set (see also Table 6).
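As a reference for how the reported numbers can be computed, the sketch below takes the median micro-F1 over runs with different seeds. We assume here, as is conventional for TACRED, that micro-F1 is computed over the positive relation classes only, i.e. excluding no_relation; the label set and the two toy runs are illustrative.

```python
from statistics import median
from sklearn.metrics import f1_score

RELATIONS = ["per:title", "org:founded", "no_relation"]     # truncated label set
POSITIVE = [r for r in RELATIONS if r != "no_relation"]

def micro_f1(y_true, y_pred):
    # TACRED-style micro-F1 over the positive relations only (our assumption).
    return f1_score(y_true, y_pred, labels=POSITIVE, average="micro")

# One (gold, predicted) label list pair per run with a different fixed random seed.
runs = [
    (["per:title", "no_relation", "org:founded"], ["per:title", "no_relation", "no_relation"]),
    (["per:title", "no_relation", "org:founded"], ["per:title", "org:founded", "org:founded"]),
    # ... three more runs in the actual setup (5 seeds in total)
]
print("median micro-F1:", median(micro_f1(t, p) for t, p in runs))
```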
Results and Discussion

We first present some insights into translation quality, and then discuss the performance of models for the different training scenarios.

Translation Quality
Automatic validation As described in Section 2.2, we validate the target language translation by checking whether the entity mention tag markup was correctly transferred. On average, 2.3% of the instances were considered invalid after translation. By far the largest numbers of such errors occurred when translating to Japanese (9.6% of translated instances), followed by Chinese (4.5%) and Spanish (3.8%). Table 6 in Appendix A gives more details, and shows the number of valid translations for each language, per split and also for the back-translation of the test split. Back-translation incurred only half as many additional errors as compared to the initial translation of the test split into the target language, presumably due to the fact that 'hard' examples had already been filtered out during the first translation step.
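A minimal version of this markup check could look as follows; the function name and the German example strings are illustrative, not from the released code.

```python
import re

TAG = re.compile(r"</?[HT]>")

def valid_markup(translated: str) -> bool:
    """Check that the translation contains exactly one well-nested <H>...</H>
    and one <T>...</T> pair (instances failing this check are discarded)."""
    counts = {"<H>": 0, "</H>": 0, "<T>": 0, "</T>": 0}
    open_tags = []
    for m in TAG.finditer(translated):
        tag = m.group(0)
        counts[tag] += 1
        if not tag.startswith("</"):
            open_tags.append(tag)
        elif not open_tags or open_tags.pop() != tag.replace("/", ""):
            return False          # closing tag without matching opening tag
    return not open_tags and all(c == 1 for c in counts.values())

print(valid_markup("<H> Er </H> leitete den <T> Verfassungsrat </T> des Landes ."))  # True
print(valid_markup("Präsidierte den <T> Verfassungsrat </T> des Landes ."))          # False: head span missing
```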
The validation basically detects two types of alignment errors - missing and additional alignments. An alignment may be missing in the case of pro-drop languages, where the argument is not realized in the translation (e.g. Spanish, Chinese), or in compound noun constructions in translations (e.g. in German). In other cases, the aligner produces multiple, disjoint spans for one of the arguments, e.g. in the case of coordinated conjunctions or compound constructions with different word order in the target language (e.g. in Spanish, French, Russian). Table 8 in Appendix D lists more examples for the most frequent error categories we observed.

Manual Validation Table 1 shows the results of the manual analysis of translations. With regards to Q1, on average 87.5% of the translations are considered to meaningfully express the relation, i.e. as in the original text. Overall translation quality (Q2) is judged to be good for 83.7% of the sampled instances on average across languages. The most frequent error types noted by the annotators are again alignment errors, such as aligning a random (neighboring) token from the sentence with an English pronoun argument in pronoun-dropping languages (e.g. Polish, Chinese), and non-matching spans (inclusion/exclusion of tokens in the aligned span). Similar errors have also been observed in a recent study by Chen et al. (2022b). In highly inflecting languages such as Finnish or Turkish, the aligned entity often changes morphologically (e.g. possessive/case suffixes). 11 Other typical errors are uncommon/wrong word choices (e.g. due to missing or wrongly interpreted sentence context), and the omission of parts of the original sentence. Less frequent errors include atypical input which was not translated correctly (e.g. sentences consisting of a list of sports results), and non-English source text (approx. 1% of the data, see also Stoica et al. (2021)). Table 8 also lists examples for these error categories.

11 Inflection and compounding both could ideally be solved by introducing alignment/argument span boundaries at the morpheme level, but this in turn may raise issues with e.g. PLM tokenization and entity masking.
Model Performance
Monolingual Table 2 shows the results for the monolingual setting. The English BERT model achieves a reference median micro-F1 score of 77.1, which is in line with similar results for fine-tuned PLMs (Alt et al., 2020; Chen et al., 2022a; Zhou and Chen, 2022). Micro-F1 scores for the other languages range from 71.8 (Hungarian) to 76.4 (Finnish), with the notable exception of Hindi, where the fine-tuned BERT model only achieves a micro-F1 score of 65.1. 12 As discussed in Section 3.2, results are not directly comparable across languages. However, the results in Table 2 show that language-specific models perform reasonably well for many of the evaluated languages. 13 Their lower performance may be due to several reasons: translation errors, smaller train and test splits because of the automatic validation step, the quality of the pre-trained BERT model, as well as language-specific model errors.

12 See also Appendix C for an additional discussion of Hindi performance issues.
13 However, as various researchers have pointed out, model performance may be over-estimated, since the models may be affected by "translationese" (Riley et al., 2020; Graham et al., 2020).
Results on the intersection test set test_∩ are slightly higher on average, as compared to test_L. Relative differences to English, and the overall 'ranking' of language-specific results, remain approximately the same. This reaffirms the performance differences between languages observed on test_L. It also suggests that the intersection test set contains fewer challenging instances. For Hindi, these results, in combination with the low manual evaluation score of 67% correct translations, suggest that the translation quality is the main reason for the performance loss.
We conclude that for the monolingual scenario, machine translation is a viable strategy to generate supervised data for relation extraction for most of the evaluated languages. Fine-tuning a language-specific PLM on the translated data yields reasonable results that are not much lower than those of the English model for many tested languages.

Cross-lingual In the cross-lingual setting, micro-F1 scores are lower than in the monolingual setting for many languages (see Table 3). The micro-F1 scores for languages well-represented in mBERT's pretraining data (e.g., English, German, Chinese) are close to their monolingual counterparts, whereas for languages like Arabic, Hungarian, Japanese, or Turkish, we observe a loss of 4.7 to 9.7 F1 points. This is mainly due to a much lower recall; for example, the median recall for Japanese is only 51.3. The micro-F1 scores are highly correlated with the pretraining data size of each language in mBERT: the Spearman rank correlation coefficient of the micro-F1 scores on test_L with the WikiSize reported in Wu and Dredze (2020) is r_s = 0.82, and the Pearson correlation coefficient is r_p = 0.78. Hence, languages which are less well represented in mBERT's pretraining data exhibit worse relation extraction performance, as they don't benefit as much from the pretraining.
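For readers who want to reproduce such a correlation analysis, the following sketch shows the computation with SciPy; the per-language F1 and WikiSize values below are placeholders, not the actual numbers from Table 3 or Wu and Dredze (2020).

```python
from scipy.stats import pearsonr, spearmanr

# Placeholder values only: each position pairs one language's cross-lingual
# micro-F1 with its mBERT pretraining WikiSize from Wu and Dredze (2020).
wiki_size = [11, 9, 8, 7, 6, 5]
micro_f1 = [77.0, 74.5, 73.0, 70.1, 68.9, 62.1]

r_s, _ = spearmanr(micro_f1, wiki_size)
r_p, _ = pearsonr(micro_f1, wiki_size)
print(f"Spearman r_s = {r_s:.2f}, Pearson r_p = {r_p:.2f}")
```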
Precision, Recall and F1 on the intersection test set test_∩ are again slightly better on average than the scores on test_L. For Hindi, our results reaffirm the observations made by Nag et al. (2021) for cross-lingual training using only English training data. Our results for RE also confirm prior work on the effectiveness of cross-lingual transfer learning for other tasks (e.g., Conneau et al. (2020); Hu et al. (2020)). While results are lower than in the monolingual setting, they are still very reasonable for well-resourced languages such as German or Spanish, with the benefit of requiring no translation at all for training. However, for languages that are less well-represented in mBERT, using a language-specific PLM in combination with in-language training data produces far better results.

Mixed/Multilingual Table 4 shows the results obtained when training on both English and varying amounts of target language data. We can observe a considerable increase of mBERT's performance for languages that are not well represented in mBERT's pretraining data, such as e.g. Hungarian. These languages benefit especially from adding in-language training data, in some cases even surpassing the performance of their respective monolingual model. For example, mBERT trained on the union of the English and the complete Japanese train splits achieves a micro-F1 score of 73.3, 11.2 points better than the cross-lingual score of 62.1 and 1.5 points better than the 71.8 obtained by the monolingual model on the same test data. Languages like German, Spanish, and French don't really benefit from adding small amounts of in-language training data in our evaluation, but show some improvements when adding 100% of the target language training data (last row), i.e. when essentially doubling the size of the training data. Other languages, like Finnish or Turkish, show improvements over the cross-lingual baseline, but don't reach the performance of their monolingual counterpart. Languages with less pretraining data in mBERT suffer a larger performance loss. Our results confirm observations made by Nag et al. (2021), who also find improvements when training on a mixture of gold source language data and projected silver target language data. For the related task of event extraction, Yarmohammadi et al. (2021) also observe that the combination of data projection via machine translation and multilingual PLMs can lead to better performance than any one cross-lingual strategy on its own.

Back-translation Finally, Table 5 shows the performance of the English model evaluated on the back-translated test splits of all target languages. Micro-F1 scores range from 69.6 to 76.1, and are somewhat lower than the score of 77.1 achieved by the same model on the original test set. For languages like German, Spanish, and French, scores are very close to the original, while for Arabic and Hungarian, we observe a loss of approximately 7 percentage points. These differences may be due to the different quality of the MT systems per language pair, but can also indicate that the model cannot always handle the linguistic variance introduced by the back-translation.
Related Work
Multilingual RE Datasets Prior work has primarily focused on the creation of distantly supervised datasets. DiS-ReX (Bhartiya et al., 2022) and RelX-Distant (Köksal and Özgür, 2020) are large, Wikipedia-based datasets, but cover only 4 resp. 5 European languages. SMiLER (Seganti et al., 2021) covers 14 European languages, but is very imbalanced, both in terms of relation coverage in the different languages and training data per language (Chen et al., 2022c). Manually supervised datasets include BizRel (Khaldi et al., 2022), consisting of 25.5K sentences labeled with 5 business-oriented relation types, in French, English, Spanish and Chinese, and the IndoRE dataset of 32.6K sentences covering 51 Wikidata relations, in Bengali, Hindi, Telugu and English (Nag et al., 2021). The IndoRE dataset uses MT to transfer manually labeled examples from English to the three other languages, but implements a heuristic to project entity annotations, without any verification step. Other datasets are very small: the RelX dataset contains a manually translated parallel test set of 502 sentences (Köksal and Özgür, 2020). Arviv et al. (2021) create a small parallel RE dataset of 533 sentences by sampling from TACRED and translating into Russian and Korean. For the related task of event extraction, datasets worth mentioning are the multilingual ACE 2005 dataset (Walker et al., 2006), the TAC multilingual event extraction dataset (Ellis et al., 2016), and the work of Yarmohammadi et al. (2021).

Machine Translation for Cross-lingual Learning MT is a popular approach to address the lack of data in cross-lingual learning (Hu et al., 2020; Nag et al., 2021). There are two basic options: translating target language data to a well-resourced source language at inference time and applying a model trained in the source language (Asai et al., 2018; Cui et al., 2019; Hu et al., 2020), or translating source language training data to the target language, while also projecting any annotations required for training, and then training a model in the target language (Khalil et al., 2019; Yarmohammadi et al., 2021; Kolluru et al., 2022). Both approaches depend on the quality of the MT system, with translated data potentially suffering from translation or alignment errors (Aminian et al., 2017; Ozaki et al., 2021; Yarmohammadi et al., 2021). With very few exceptions, using MT for multilingual RE remains underexplored (Faruqui and Kumar, 2015; Zou et al., 2018; Nag et al., 2021).

Multilingual RE Previous work in cross- and multilingual RE has explored a variety of approaches. Kim et al. (2014) proposed cross-lingual annotation projection, while Faruqui and Kumar (2015) machine-translate non-English sentences to English, and then project the relation phrase back to the source language for the task of Open RE. Verga et al. (2016) use multilingual word embeddings to extract relations from Spanish text without using Spanish training data. In a related approach, Ni and Florian (2019) describe an approach for cross-lingual RE that is based on bilingual word embedding mapping. Lin et al. (2017) employ convolutional networks to extract relation embeddings from texts, and propose cross-lingual attention between relation embeddings to model cross-lingual information consistency. Chen et al. (2022c) introduce a prompt-based model, which requires only the translation of prompt verbalizers. Their approach thus is especially useful in few- and zero-shot scenarios.
Conclusion
We introduced a multilingual version of the large-scale TACRED relation extraction dataset, obtained via machine translation and automatic annotation projection. Baseline experiments with in-language as well as cross-lingual transfer learning models showed that MT is a viable strategy to transfer sentence-level RE instances and span-level entity annotations to typologically diverse target languages, with target language RE performance comparable to the English original for many languages. However, we observe that a variety of errors may affect the translations and annotation alignments, both due to the MT system and the linguistic features of the target languages (e.g., compounding, high level of synthesis). MultiTACRED can thus serve as a starting point for deeper analyses of annotation projection and RE challenges in these languages. For example, we would like to improve our understanding of RE annotation projection for highly inflectional/synthetic languages, where token-level annotations are an inadequate solution. In addition, constructing original-language test sets to measure the effects of translationese remains an open challenge.
We plan to publish the translated dataset for the research community, depending on LDC requirements for the original TACRED and the underlying TAC corpus. We will also make publicly available the code for the automatic translation, annotation projection, and our experiments.
Limitations
A key limitation of this work is the dependence on a machine translation system to get highquality translations and annotation projections of the dataset. Depending on the availability of language resources and the MT model quality for a given language pair, the translations we use for training and evaluation may be inaccurate, or be affected by translationese, possibly leading to overly optimistic estimates of model performance. In addition, since the annotation projection for relation arguments is completely automatic, any alignment errors of the MT system will yield inaccurate instances. Alignment is at the token-level, rendering it inadequate for e.g. compounding or highly inflectional languages. Due to the significant resource requirements of constructing adequately-sized test sets, another limitation is the lack of evaluation on original-language test instances. While we manually validate and analyze sample translations in each target language (Section 4.1) for an initial exploration of MT effects, these efforts should be extended to larger samples or the complete test sets. Finally, we limited this work to a single dataset, which was constructed with a specific set of target relations (person-and organization-related), from news and web text sources. These text types and the corresponding relation expressions may be well reflected in the training data of current MT systems, and thus easier to translate than relation extraction datasets from other domains (e.g., biomedical), or other text types (e.g., social media). The translated examples also reflect the source language's view of the world, not how the relations would necessarily be formulated in the target language (e.g., use of metaphors, or ignorance of cultural differences).
Ethics Statement
We use the data of the original TACRED dataset "as is". Our translations thus reflect any biases of the original dataset and its construction process, as well as biases of the MT models (e.g., rendering gender-neutral English nouns to gendered nouns in a given target language). The authors of the original TACRED dataset (Zhang et al., 2017) have not stated measures that prevent collecting sensitive text. Therefore, we do not rule out the possible risk of sensitive content in the data. Furthermore, we utilize various BERT-based PLMs in our experiments, which were pretrained on a wide variety of source data. Our models may have inherited biases from these pretraining corpora.
Training jobs were run on a machine with a single NVIDIA RTX6000 GPU with 24 GB RAM. Running time per training/evaluation is approximately 1.5 hours for the monolingual and cross-lingual models, and up to 2 hours for the mixed/multilingual models that are trained on English and target language data.
A Translation Details
We use the following parameter settings for DeepL API calls: split_sentences:1, tag_handling:xml, outline_detection:0. For Google, we use format_:html, model:nmt. Table 6 shows the number of syntactically valid and invalid translations for each language and split, as well as for the back-translation of the test split.
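A hypothetical request against the DeepL endpoint quoted in footnote 5, using the parameters above, might look like the sketch below; the helper name is ours, and authentication details, batching and error handling may differ per API version.

```python
import requests

DEEPL_URL = "https://api.deepl.com/v2/translate"   # endpoint given in footnote 5

def translate_deepl(text_with_markup: str, target_lang: str, auth_key: str) -> str:
    """Translate one XML-marked sentence with the parameter settings listed above."""
    response = requests.post(DEEPL_URL, data={
        "auth_key": auth_key,
        "text": text_with_markup,
        "source_lang": "EN",
        "target_lang": target_lang,      # e.g. "DE", "PL", "TR", "ZH"
        "split_sentences": "1",
        "tag_handling": "xml",
        "outline_detection": "0",
    })
    response.raise_for_status()
    return response.json()["translations"][0]["text"]

# translate_deepl("<H> He </H> also presided over the country 's <T> Constitutional Council </T> .", "DE", "MY_KEY")
```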
For tokenization, we use Spacy 3.2 14 with standard (non-neural) models for de, es, fr, fi, ja, pl, ru, zh, and TranKIT 1.1.0 15 for ar, hi, hu, tr.
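For the spaCy-based languages, the tokenization step can be sketched as below; spacy.blank only loads the language's rule-based tokenizer, so the exact pipelines used for the dataset may differ, and the TranKIT-based languages are not shown.

```python
import spacy

# One lightweight spaCy pipeline per language; German shown here as an example.
nlp = spacy.blank("de")

def tokenize(text: str):
    return [tok.text for tok in nlp(text)]

print(tokenize("Der Vorschlag Obamas ist bemerkenswert, weil ..."))
```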
The translation costs per language amount to approximately 460 Euro, for a total character count of 22.9 million characters to be translated (source sentences including entity markup tags), at a price of 20 Euro per 1 million characters at the time of writing. Compared to an estimated annotation cost of approximately 10K USD, translation costs amount to less than 5% of the cost of fully annotating a similar-sized dataset in a new language. 16
B Human Translation Analysis
For the manual analysis of translated TACRED instances, we recruited a single native speaker for each language among the members of our lab and associated partners. Annotators were not paid for the task, but performed it as part of their work at the lab. All annotators are either Master's degree or PhD students, with a background in Linguistics, Computer Science, or a related field. The full instructions given to annotators, after a personal introduction to the task, are shown in Figure 2.
C Additional Training Details
All pre-trained models evaluated in this study are used as they are available from HuggingFace's model hub, without any modifications. Our implementation uses HF's BertForSequenceClassification implementation with default settings for dropout, positional embeddings, etc. Licenses for the pretrained BERT models are listed in Table 7, if specified in the repository. The Transformers library is available under the Apache 2.0 license, Hydra under the MIT license, and PyTorch uses a modified BSD license.
For Hungarian, we use bert-base-multilingual-cased, since there is no pretrained Hungarian BERT model available on the hub. For Hindi, we tried several models by l3cube-pune, neuralspace-reverie, google and ai4bharat, but all of these produced far worse results than the ones reported here.
D Translation Error Examples
Figure 1: Example translations from English to German, Polish, Turkish and Chinese with XML markup for the head and tail entities to project relation argument annotations.
14 https://spacy.io
15 https://github.com/nlp-uoregon/trankit
16 Stoica et al. (2021) pay 0.15 USD per HIT of 5 sentences in TACRED. With an average of 3 crowd workers per HIT and a total of 106,264 examples in TACRED, this amounts to approximately 9,564 USD. Angeli et al. (2014) report a cost of 3,156 USD for annotating 23,725 examples, which would correspond to a cost of 14,135 USD for the whole TACRED dataset.
Figure 2: Task description given to human judges for translation quality analysis.
Table 2: Micro-F1 scores on the TACRED dataset for the monolingual setting. The table shows the median micro-F1 score across 5 runs, on the test split of the target language (test_L), and on the intersection of test instances available in all languages (test_∩).
Table 3: Micro-Precision, Recall and F1 scores on the TACREV dataset for the cross-lingual setting.
Table 4: Micro-F1 scores on the TACREV dataset for the mixed/multilingual setting. The table shows the median micro-F1 score across 5 runs, on the translated test split of the target language, when training mBERT on the full English train split and various portions, from 5% to 100%, of the translated target language train split. The last column shows the mean improvement across languages, compared to the cross-lingual baseline. Micro-F1 scores improve when adding in-language training data for languages not well represented in mBERT, while other languages mainly benefit when using all of the English and in-language data, i.e. essentially doubling the amount of training data (last row).
Language  ar    de    es    fi    fr    hi    hu    ja    pl    ru    tr    zh
F1        69.6  76.1  75.8  73.6  75.9  73.3  70.0  72.2  74.7  74.0  72.1  74.8

Table 5: Median micro-F1 scores across 5 runs of the English BERT model evaluated on the back-translated test splits of all languages. Compared to the micro-F1 score of 77.1 on the untranslated English test set, back-translation results are somewhat lower, due to MT system quality and the linguistic variance introduced by the back-translation.
References

Yang Chen, Chao Jiang, Alan Ritter, and Wei Xu. 2022b. Frustratingly easy label projection for cross-lingual transfer. CoRR, abs/2211.15613.

Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, and Omri Abend. 2021. On the relation between syntactic divergence and zero-shot performance. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4803-4817, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilingual extractive reading comprehension by runtime machine translation. ArXiv, abs/1809.03275.

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the Blanks: Distributional Similarity for Relation Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905, Florence, Italy. Association for Computational Linguistics.

Abhyuday Bhartiya, Kartikeya Badola, and Mausam. 2022. DiS-ReX: A multilingual dataset for distantly supervised relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 849-863, Dublin, Ireland. Association for Computational Linguistics.

Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, and Huajun Chen. 2022a. KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In Proceedings of the ACM Web Conference 2022.

Yuxuan Chen, David Harbecke, and Leonhard Hennig. 2022c. Multilingual relation classification via efficient and effective prompting. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Online and Abu Dhabi, the United Arab Emirates. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2019. Cross-lingual machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1586-1595, Hong Kong, China. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).

Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2016. Overview of linguistic resources for the TAC KBP 2016 evaluations: Methodologies and results. In Proceedings of TAC 2016.

Manaal Faruqui and Shankar Kumar. 2015. Multilingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1351-1356, Denver, Colorado. Association for Computational Linguistics.

Yvette Graham, Barry Haddow, and Philipp Koehn. 2020. Statistical power and translationese in machine translation evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 72-81, Online. Association for Computational Linguistics.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.

Hadjer Khaldi, Farah Benamara, Camille Pradel, Grégoire Sigel, and Nathalie Aussenac-Gilles. 2022. How's business going worldwide? A multilingual annotated corpus for business relation extraction. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3696-3705, Marseille, France. European Language Resources Association.

Talaat Khalil, Kornel Kiełczewski, Georgios Christos Chouliaras, Amina Keldibek, and Maarten Versteegh. 2019. Cross-lingual intent classification in a low resource industrial setting. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6419-6424, Hong Kong, China. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.

Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, and Yanghua Xiao. 2021. Revisiting the negative data of distantly supervised relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3572-3581, Online. Association for Computational Linguistics.

Omry Yadan. 2019. Hydra - a framework for elegantly configuring complex applications. Github.

Mahsa Yarmohammadi, Shijie Wu, Marc Marone, Haoran Xu, Seth Ebner, Guanghui Qin, Yunmo Chen, Jialiang Guo, Craig Harman, Kenton Murray, Aaron Steven White, Mark Dredze, and Benjamin Van Durme. 2021. Everything is all it takes: A multi-pronged strategy for zero-shot cross-lingual information extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1950-1967, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45, Copenhagen, Denmark. Association for Computational Linguistics.

Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 161-168, Online only. Association for Computational Linguistics.

Bowei Zou, Zengzhuang Xu, Yu Hong, and Guodong Zhou. 2018. Adversarial feature adaptation for cross-lingual relation classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 437-448, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Table 6: MultiTACRED instances per language and split, and for the back-translation (BT) of the test split. The 'en' row shows the statistics of the original TACRED. (G) and (D) refer to Google and DeepL, respectively. The error columns list the number of instances discarded after translation due to missing / erroneous entity tag markup. On average, 2.3% of the instances were discarded due to invalid entity markup after translation. The last row shows the intersection of valid instances available in all languages.
Table 8 lists common error types we identified in the translations of TACRED instances.

Language/Scenario     HuggingFace Model name                                   LR    License
ar                    aubmindlab/bert-base-arabertv02                          1e-5  N/A
de                    bert-base-german-cased                                   3e-5  MIT
en                    bert-base-uncased                                        3e-5  Apache 2.0
es                    dccuchile/bert-base-spanish-wwm-cased                    1e-5  (CC BY 4.0)
fi                    TurkuNLP/bert-base-finnish-cased-v1                      7e-6  N/A
fr                    flaubert/flaubert_base_cased                             1e-5  MIT
hi                    l3cube-pune/hindi-bert-scratch                           7e-6  CC BY 4.0
hu                    bert-base-multilingual-cased                             1e-5  Apache 2.0
ja                    cl-tohoku/bert-base-japanese-whole-word-masking          3e-5  CC BY 4.0
pl                    dkleczek/bert-base-polish-cased-v1                       7e-6  N/A
ru                    sberbank-ai/ruBert-base                                  3e-5  Apache 2.0
tr                    dbmdz/bert-base-turkish-cased                            1e-5  MIT
zh                    bert-base-chinese                                        1e-5  N/A
Cross-lingual mBERT   bert-base-multilingual-cased                             1e-5  Apache 2.0
Multilingual mBERT    bert-base-multilingual-cased                             1e-5  Apache 2.0

Table 7: Best learning rate and model identifiers per language for the monolingual settings, and for the cross- and multilingual scenarios. The table also lists the model license, if it was available.
Table 8: Common error types of translated TACRED examples. The first half of the table shows alignment errors that can be automatically detected, such as missing or additional aligned spans in the translation. The second half shows error types identified by human judges. [Per-language example rows not reproduced here; the categories covered include missing, split-span, split-compound, compound, definite-article, coordination, extended, partial and inflection-related alignment errors, non-English source text, sentence splitting, incomplete translations, and atypical input.]
1 MultiTACRED includes the following language families / languages: German (Germanic); Finnish, Hungarian (Uralic); Spanish, French (Romance); Arabic (Semitic); Hindi (Indo-Iranic); Japanese (Japonic); Polish, Russian (Slavic); Turkish (Turkic); Chinese (Sino-Tibetan).
2 https://catalog.ldc.upenn.edu/LDC2018T24, under an LDC license
3 https://tac.nist.gov/2017/KBP/index.html
4 The first row of Table 6 in Appendix A summarizes key statistics of the dataset.
5 https://api.deepl.com/v2/translate
6 https://translation.googleapis.com/language/translate/v3
7 See Appendix A for details.
8 If necessary, human judges are first introduced to the task of relation extraction. They are also given the list of relations and their official definitions for reference.
9 We make our code publicly available at https://github.com/DFKI-NLP/MultiTACRED for better reproducibility.
10 Since both Alt et al. (2020) and Stoica et al. (2021) provide fixes as patch files to the original dataset, it is trivial to repeat our experiments using the original or the Re-TACRED version of the data.
Acknowledgements

We would like to thank David Harbecke, Aleksandra Gabryszak, Nils Feldhus and the anonymous reviewers for their valuable comments and feedback on the paper. We are also very grateful to all the helpful annotators who evaluated the translations: Ammer Ayach, Yuxuan Chen, Nicolas Delinte, Aleksandra Gabryszak, Maria Gonzalez Garcia, Elif Kara, Tomohiro Nishiyama, Akseli Reunamo, Kinga Schumacher, Akash Sinha, and Tatjana Zeen. Finally, we'd like to thank Gabriel Kressin and Phuc Tran Truong for their help with the code base and running the translations and experiments. This work has been supported by the German Federal Ministry for Economic Affairs and Climate Action as part of the project PLASS (01MD19003E), and by the German Federal Ministry of Education and Research as part of the projects CORA4NLP (01IW20010) and Text2Tech (01IS22017B).
Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. TACRED revisited: A thorough evaluation of the TACRED relation extraction task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1558-1569, Online. Association for Computational Linguistics.

Maryam Aminian, Mohammad Sadegh Rasooli, and Mona Diab. 2017. Transferring semantic roles using translation and syntactic information. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 13-19, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Gabor Angeli, Julie Tibshirani, Jean Wu, and Christopher D. Manning. 2014. Combining distant and partial supervision for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1556-1567, Doha, Qatar. Association for Computational Linguistics.

Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2014. Cross-lingual annotation projection for weakly-supervised relation extraction. ACM Transactions on Asian Language Information Processing, 13(1).

Abdullatif Köksal and Arzucan Özgür. 2020. The RELX dataset and matching the multilingual blanks for cross-lingual relation classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 340-350, Online. Association for Computational Linguistics.

Keshav Kolluru, Muqeeth Mohammed, Shubham Mittal, Soumen Chakrabarti, and Mausam. 2022. Alignment-augmented consistent translation for multilingual open information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2502-2517, Dublin, Ireland. Association for Computational Linguistics.

Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 34-43, Vancouver, Canada. Association for Computational Linguistics.

Arijit Nag, Bidisha Samanta, Animesh Mukherjee, Niloy Ganguly, and Soumen Chakrabarti. 2021. A data bootstrapping recipe for low-resource multilingual relation classification. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 575-587, Online. Association for Computational Linguistics.

Jian Ni and Radu Florian. 2019. Neural cross-lingual relation extraction based on bilingual word embedding mapping. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 399-409, Hong Kong, China. Association for Computational Linguistics.

Hiroaki Ozaki, Gaku Morio, Terufumi Morishita, and Toshinori Miyoshi. 2021. Project-then-transfer: Effective two-stage cross-lingual transfer for semantic dependency parsing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2586-2594, Online. Association for Computational Linguistics.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
Modeling Relations and Their Mentions without Labeled Text. Sebastian Riedel, Limin Yao, Andrew Mccallum, Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10). the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10)Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling Relations and Their Mentions with- out Labeled Text. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10).
Translationese as a language in "multilingual" NMT. Parker Riley, Isaac Caswell, Markus Freitag, David Grangier, 10.18653/v1/2020.acl-main.691Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsParker Riley, Isaac Caswell, Markus Freitag, and David Grangier. 2020. Translationese as a language in "mul- tilingual" NMT. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7737-7746, Online. Association for Computational Linguistics.
A survey of cross-lingual word embedding models. Sebastian Ruder, Ivan Vulić, Anders Søgaard, 10.1613/jair.1.11640J. Artif. Int. Res. 651Sebastian Ruder, Ivan Vulić, and Anders Søgaard. 2019. A survey of cross-lingual word embedding models. J. Artif. Int. Res., 65(1):569-630.
Multilingual entity and relation extraction dataset and model. Alessandro Seganti, Klaudia Firląg, Helena Skowronska, Michał Satława, Piotr Andruszkiewicz, 10.18653/v1/2021.eacl-main.166Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeOnline. Association for Computational LinguisticsAlessandro Seganti, Klaudia Firląg, Helena Skowron- ska, Michał Satława, and Piotr Andruszkiewicz. 2021. Multilingual entity and relation extraction dataset and model. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 1946-1955, Online. Association for Computational Linguistics.
Re-tacred: Addressing shortcomings of the TACRED dataset. George Stoica, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9. AAAI PressEmmanouil Antonios Platanios, and Barnabás Póczos. 2021George Stoica, Emmanouil Antonios Platanios, and Barnabás Póczos. 2021. Re-tacred: Addressing short- comings of the TACRED dataset. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Ap- plications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Ar- tificial Intelligence, EAAI 2021, Virtual Event, Febru- ary 2-9, 2021, pages 13843-13850. AAAI Press.
Multilingual relation extraction using compositional universal schema. Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew Mccallum, 10.18653/v1/N16-1103Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsPatrick Verga, David Belanger, Emma Strubell, Ben- jamin Roth, and Andrew McCallum. 2016. Multilin- gual relation extraction using compositional univer- sal schema. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 886-896, San Diego, California. Association for Computational Linguistics.
Ace 2005 multilingual training corpus. Christopher Walker, Stephanie Strassel, Julie Medero, Kazuaki Maeda, Linguistic Data ConsortiumTechnical reportChristopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Technical report, Linguistic Data Consortium.
Interestingly, using bert-base-multilingual-cased instead of l3cube-pune/hindi-bert-scratch as the base PLM produced far better results for Hindi in the monolingual setting, at 71.1 micro-F1. We experimented with learning rates in [3e-6, 7e-6, 1e-5, 3e-5, 5e-5]. We used micro-F1 on the dev set as the criterion for hyperparameter selection. Table 7 lists the best learning rates per language and scenario. We use a fixed set of random seeds {1337, 2674, 4011, 5348, 6685} for training across the 5 runs.
1,983,271 | IDENTIFYING RELEVANT PRIOR EXPLANATIONS | When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Producing a system that displays such behavior involves finding an efficient way to identify which previous explanations (if any) are relevant to the current explanation task. Thus, we are implementing a system that uses a case-based reasoning approach to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in human-human tutorial dialogues. | [
1502408
] | IDENTIFYING RELEVANT PRIOR EXPLANATIONS
James A Rosenblum
Department of Computer Science
University of Pittsburgh Pittsburgh
15260PAUSA
IDENTIFYING RELEVANT PRIOR EXPLANATIONS
Internet: [email protected]
When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Producing a system that displays such behavior involves finding an efficient way to identify which previous explanations (if any) are relevant to the current explanation task. Thus, we are implementing a system that uses a case-based reasoning approach to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in human-human tutorial dialogues.
Introduction and Motivation
We are building an explanation component for an existing intelligent training system, SHERLOCK (Lesgold et al., 1992), which trains avionics technicians to troubleshoot electronic equipment. Using SHERLOCK, trainees solve problems with minimal tutor interaction and then review their troubleshooting in a post-problem reflective follow-up (RFU) session where the tutor replays each student action and assesses it as "good" (<+>) or as "could be improved" (<->). After a step is replayed, the student can ask the tutor to justify its assessment.
As an example of the way in which human tutors exploit previous discourse, consider the dialogue in Figure 1, taken from our data. Even though the student has made the same mistake twice, the second explanation looks quite different from the first. Yet the two explanations are related to one another in an important way. In the second explanation the tutor simply reminds the student that she has not determined the status of the main control data signals and that she should do so before testing the secondary control data signals. The tutor expects the student to be able to make use of the previous explanation once he has indicated that it is relevant to the current situation ("for the same reasons given ..." serves this purpose). Accordingly, the tutor does not repeat the detailed explanation of why the main control data signals should be tested first. By generating the second explanation in such a way that it 'meshes' with the first, not only has the tutor corrected the testing mistake of the student, but has forced the student to consider how the two situations are similar. In pointing out this similarity, he has given the student a better understanding of the domain. We call an explanation that is later referred to (explicitly or implicitly) or is integrated into a subsequent explanation the anchor.
Clearly it is desirable for a system to produce text that is sensitive to what has been said previously. In order to do this, however, a system must first be able to decide what previous explanation (or part thereof) to use as an anchor. This involves deciding, in an efficient way, whether there exist suitable candidates to act as anchor, and if so, which amongst them would be best to use. This paper concentrates on this task.
The Text Planner
For this work, we are extending the text planner built by Moore and Paris (1989). Briefly, it works in the following way. A communicative goal (e.g., "achieve the state where the hearer believes that an action could be improved") is formed based upon the student's question. Using its library of plan operators that encode knowledge about tutorial explanations, the system employs a linear planning mechanism to synthesize a response to achieve this goal. The result is a text plan for the explanation.
The system then presents the explanation to the user, retaining the plan that produced it in a dialogue history. The dialogue history is a record of the conversation that has occurred thus far and includes the user's utterances as well as the text plans that led to the system's responses. In this system, a text plan represents the effect that each part of the text is intended to have on the hearer's mental state, the linguistic strategies that were used to achieve these effects, and how the complete text achieves the overall communicative goal.
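As a rough illustration of the data structures this description implies, the following sketch shows one way a plan operator and the dialogue history might be represented. It is not the Moore and Paris planner itself; the field names and the example effect string are invented for illustration.

```python
# Minimal sketch (not the actual Moore & Paris planner): a plan operator
# pairs a communicative effect with a decomposition strategy, and the
# dialogue history retains the text plan that produced each system turn.
from dataclasses import dataclass

@dataclass
class PlanOperator:
    effect: str          # e.g. "hearer believes the action could be improved"
    constraints: list    # conditions that must hold in the knowledge base
    nucleus: str         # main sub-goal or primitive speech act
    satellites: list     # optional supporting sub-goals

@dataclass
class DialogueTurn:
    speaker: str         # "system" or "student"
    utterance: str
    text_plan: object = None   # root of the plan tree, for system turns only

dialogue_history = []

def record_turn(speaker, utterance, text_plan=None):
    # Every turn is appended, so later planning can consult what was said and why.
    dialogue_history.append(DialogueTurn(speaker, utterance, text_plan))
```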
TUTOR: <-> You tested pin 28 of the A1A3A15. [1]
STUDENT: Why is testing pin 28 not a good step? [2]
TUTOR: As explained before, the main data inputs to relay B24 are highly suspect at this time. You have tested the low input to B24 but not the high one. Since it is more likely that there may be a problem with the main data signals to relay B24 than with the secondary data signals to relays B25-B30, the main data signal should be tested first. Testing pin 28 is not a bad step, it is just more likely (on the average) that the main data signals are causing the problem. [3]
[One exchange and one action later.]
TUTOR: <-> You tested pin 36 of the A1A3A15. [4]
STUDENT: Don't I need to test pin 36? [5]
TUTOR: You might have to, but for the same reasons given when you tested pin 28, it is generally more efficient to test the main control data signals first, and then test the secondary control data signals if necessary. [6]
Figure 1: Human-Human Advisory Interaction Displaying Contextual Effects
Knowledge Sources for Finding Relevant Prior Explanations
The most straightforward way to find relevant prior explanations is to exhaustively search the system's dialogue history looking for explanations that have certain features. For example, when explaining why a step was assessed as "could be improved," the system could look for previous explanations that justified this type of assessment, and in which the two actions being assessed were similar (i.e., had the same features).
However, this approach is problematic. Explanation plans are large complex structures, and they will accumulate rapidly as the dialogue progresses. Exhaustively searching the discourse history for relevant prior explanations is computationally prohibitive. Thus, we require an indexing strategy that allows the system to find possibly relevant prior explanations in an efficient manner.
To satisfy this requirement, we use case-based reasoning (CBR) to provide a framework in which previous student actions can be efficiently examined to determine which, if any, are relevant when producing an explanation. This approach has the additional advantage of allowing the system to consider what was said as well as what was not said when planning an explanation. For example, the student may have previously performed an action that displayed some characteristic that the tutor decided not to mention at the time and which would now be appropriate to discuss.
A Case-Based Algorithm
The following aspect of SHERLOCK's reasoning is extremely important to our work. SHERLOCK evaluates each student action by determining which facets apply to that action. The facets represent factors that expert avionics tutors use in assessing students' troubleshooting actions (Pokorny and Gott, 1990). To evaluate an action, SHERLOCK finds each facet that applies to it and determines whether that facet should be considered good (g), bad (b), or neutral (n) given the current problem-solving context. For example, the facet "Making a measurement that is off the active circuit path" is considered a b-facet. The representation of a student action includes the list of facets characterizing the action and an assessment (g, b, or n) for each of those facets.
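One possible representation of such facet-annotated actions is sketched below. The class and field names are ours, not SHERLOCK's; the facet descriptions simply echo the example used later in the paper.

```python
# Illustrative sketch only: a SHERLOCK-style evaluated action is a set of
# facets, each carrying its contextual assessment (g, b or n).
from dataclasses import dataclass

@dataclass(frozen=True)
class Facet:
    fid: str           # e.g. "F100"
    description: str
    polarity: str      # "g", "b" or "n" in the current problem-solving context

@dataclass
class StudentAction:
    aid: int
    description: str
    facets: frozenset  # Facet instances that apply to this action

    def facets_with(self, polarity):
        return frozenset(f for f in self.facets if f.polarity == polarity)

f100 = Facet("F100", "Allowed main data signal relay to remain partially tested", "b")
f101 = Facet("F101", "Tested secondary data signal before main data signal", "b")
action12 = StudentAction(12, "VDC test, pin 36 to ground on A1A3A15",
                         frozenset({f100, f101}))
```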
Case-based reasoning generalizes from cases to support indexing and relevance assessment, and can be used to evaluate a case by comparing it to past cases (Ashley, 1992). This seems to describe our task when we treat each student action as a "case". Influenced by the work of Aleven and Ashley (1992), we noted certain similarities between their domain and ours that led us to believe that we could use CBR techniques to identify similar actions as described below.
Our algorithm builds a data structure called a similarity DAG (Directed Acyclic Graph) which indicates the previous student actions that are similar to a given action. By similar, we mean similar with respect to a certain class of facets (some combination of g, b, or n). For example, when answering a question about why the current action was assessed as "could be improved," the similarity DAG is built so that it indicates which previous actions were similar to the current action with respect to the b-facets. The root of the DAG represents the current action and the facets of interest (b-facets in our example) that apply to it. Each node in the DAG, including the root, represents a set of student actions that share the same set of interesting facets. The more facets that a node has in common with the current action (in the root), the closer it will be to the root node. Proximity in the DAG corresponds to similarity in facet sets. Basically, the similarity DAG is a partial ordering of the student's actions based on their facet lists.
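The grouping step behind such a DAG can be sketched as follows. This is only a flat approximation: it groups previous actions by the facets they share with the root and orders the groups by that overlap, omitting the subset edges that make the structure a DAG proper. The facet and action identifiers are taken from the paper's worked example.

```python
# Sketch: group previous actions by the subset of the current action's
# facets of interest that they share; larger shared subsets sit closer
# to the root of the similarity DAG.
def similarity_groups(current_facets, previous_actions):
    """current_facets: set of facet ids of the chosen polarity for the current action;
    previous_actions: dict mapping action id -> set of facet ids of that polarity."""
    groups = {}
    for aid, facets in previous_actions.items():
        shared = frozenset(facets & current_facets)
        if shared:
            groups.setdefault(shared, []).append(aid)
    # Nodes sharing more facets with the root come first, i.e. closer to the root.
    return sorted(groups.items(), key=lambda kv: len(kv[0]), reverse=True)

# Mirrors the worked example: action 9 shares both b-facets, action 8 only one.
print(similarity_groups({"F100", "F101"},
                        {9: {"F100", "F101"}, 8: {"F100"}, 5: {"F205"}}))
```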
[Figure 2 diagram: a similarity DAG linked to text plans in the discourse history. Root facets: F100, Allowed main data signal relay to remain partially tested (b); F101, Tested secondary data signal before main data signal (b). Current action: Action 12, VDC test, pin 36 to ground on A1A3A15 (b). Previous action: Action 9, VDC test, pin 28 to ground on A1A3A15 (b).]
Figure 2 shows the similarity DAG that is constructed when the system considers how to answer the question, "Don't I need to test pin 36?" (turn 5 of Figure 1). The facets relevant to the action in question are F100 and F101. The structure indicates that two previous actions, 9 and to a lesser degree 8, are similar to the current situation. Pointers index the dialogue history's record of what was said at those times. At this point, the system has identified candidate situations that are relevant for planning the current explanation. It can now consider these retrieved situations more closely to determine any other facets that they may possess, and can examine the related explanations in the dialogue history to determine what was said about each of the two previous situations. The fact that there are no other nodes in the DAG indicates that there are no other suitable prior situations.
Initial results using this algorithm seem promising. In an analysis of 8 student-tutor protocols involving 154 actions and 22 opportunities for integrating a previous explanation into an answer, the algorithm correctly identified the same previous situations that were used by the human tutor in the actual interactions. In all but 3 cases, when the human tutor did not make a reference to a previous explanation, our algorithm reported no similar prior situation. In the 3 situations where our algorithm identified a similarity not exploited by the tutor, our expert agreed that they would have been useful to incorporate into his explanations.
Lastly, this technique will be useful in answering students' direct questions about the similarities of situations, e.g., "Why is testing 30 good? Isn't it like 36 and 28?" By constructing and consulting a similarity DAG, the system is able to plan responses such as: "Yes, but now you know the main control data signals on pins 33 and 22 are good so you need to test the secondary data signals."
It is important to note that this approach is successful, in part, because the facets are based on a tutor's evaluation of a student's actions, and we are currently addressing only questions that justify these evaluations. We focused on this type of question because 48% of students' queries during RFU are of this type. To answer additional questions in a context-sensitive fashion, we will need to extend our indexing scheme to take the intentions behind an explanation into account as well as the domain content discussed.
Conclusions and Future Work
We have indicated that in order to produce text that is sensitive to the previous discourse, a system must first be able to identify relevant previous explanations and situations. To achieve this first step, a CBR algorithm was introduced that indexes the dialogue history and supplies the explanations with a context in which to be considered. We are devising techniques that use this information to plan subsequent explanations.
Figure 2: Data structures when considering how to answer turn 5 of Figure 1.
Aleven, V. and Ashley, K. 1992. Automated generation of examples for a tutorial in case-based argumentation. In Proc. of the 2nd Int'l Conference on Intelligent Tutoring Systems, Montreal, Canada.
Ashley, K. 1992. Case-based reasoning and its implications for legal expert systems. Artificial Intelligence and Law 2(1).
Lesgold, A., Lajoie, S., Bunzo, M., and Eggan, G. 1992. Sherlock: A coached practice environment for an electronics troubleshooting job. In Computer Assisted Instruction and Intelligent Tutoring Systems: Shared Goals and Complementary Approaches. Lawrence Erlbaum Associates, NJ.
Moore, J. D. and Paris, C. L. 1989. Planning text for advisory dialogues. In Proc. of the 27th Annual Meeting of the ACL, pages 203-211, Vancouver, B.C., Canada.
Pokorny, R. and Gott, S. 1990. The evaluation of a real-world instructional system: Using technical experts as raters. Technical report, Armstrong Laboratories, Brooks AFB.
251,465,437 | Querying Interaction Structure: Approaches to Overlap in Spoken Language Corpora | In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized, but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker's transcription tier to be the basic tokenization layer whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods of how speaker overlaps can be captured in the TEI-based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016) and how they can be queried by MTAS an open source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions and discuss their benefits and drawbacks. | [
7989948
] | Querying Interaction Structure: Approaches to Overlap in Spoken Language Corpora
June 2022
Elena Frick
Leibniz-Institute for the German Language
R5, 6-13, D-68161 Mannheim, Germany
Henrike Helmer [email protected]
Leibniz-Institute for the German Language
R5, 6-13, D-68161 Mannheim, Germany
Thomas Schmidt [email protected]
Leibniz-Institute for the German Language
R5, 6-13, D-68161 Mannheim, Germany
RISE University of Basel
Spalenberg 65, CH-4051 Basel, Switzerland
Querying Interaction Structure: Approaches to Overlap in Spoken Language Corpora
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Keywords: spoken language corpora, multi-turn conversations, corpus search engine, query language
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized, but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker's transcription tier to be the basic tokenization layer whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods of how speaker overlaps can be captured in the TEI-based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016) and how they can be queried by MTAS an open source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions and discuss their benefits and drawbacks.
Introduction
Interaction corpora are collections of audio and/or video recordings of spontaneous and authentic conversations. They differ from corpora of written language and also from some oral corpora (such as phonetic corpora) because they contain verbal interactions between two or more interlocutors and therefore have multiple primary data streams. Methodological and technical challenges for working with this special type of corpus are described in Schmidt (2018). In the present paper, we focus on two specific problems arising when indexing and searching interaction corpora. In particular, we look first into how token distance and token precedence can be measured in spoken language transcripts with overlapping speaker contributions containing tokens that are not synchronized with the audio sound. Secondly, we address two distinct methods of how speaker overlaps can be algorithmically computed and stored in the TEI-based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016). The paper is organized as follows: Section 2 briefly explains the background and motivation of our study. Section 3 presents our methods in indexing and searching interaction corpora and proposes some possible solutions in dealing with speaker overlaps. The paper continues with related work in Section 4 and provides the conclusion of our research in Section 5.
Background and Motivation
The background of the present study is the project ZuMult (Zugänge zu multimodalen Korpora gesprochener Sprache, Access to Multimodal Spoken Language Corpora).1 It is a DFG-funded three-year cooperation project between the Archive of Spoken German (AGD)2 in Mannheim, the Hamburg Centre for Language Corpora (HZSK)3 and the Herder-Institute4 at the University of Leipzig. One of the aims of ZuMult is to develop a backend software architecture for unified access to spoken language resources located in different repositories (cf. Batinić et al., 2019; Fandrych et al., 2022). This access should also include search functionality allowing users to query corpora stored in the TEI-based ISO Standard for Spoken Language Transcripts. For this purpose, we explored how MTAS (Brouwer, Brugman, and Kemps-Snijders, 2016), an open source Lucene-based search engine framework developed for querying texts with multilevel annotations, can be reused for searching spoken language corpora. The corpora we are dealing with are for the most part interaction corpora (cf. e.g. FOLK5, GeWiss6, HaMaTaC7). We were interested in whether this special type of corpus can be indexed with MTAS and searched using its query language, a modified version of the CQP Query Language originally developed for the IMS Open Corpus Workbench (CWB)8. We introduced the first results of this research in Frick and Schmidt (2020), where we outlined the capacity, but also the limitations, of MTAS in terms of its compatibility with typical characteristics of spoken language. The present paper continues this work and addresses two challenging issues concerning speaker overlaps in corpora without complete token-based time-alignment.
Methods
In this section, we illustrate the problems every corpus research tool developer has faced sooner or later when implementing search software for interaction corpora. The first problem concerns the token distance and token precedence within speaker overlaps. The other one relates to the possibilities for indexing and searching speaker overlaps. We propose some solutions implemented with MTAS and discuss their benefits and disadvantages.
Token Distance and Precedence
3.1.1 Problem
Compared to written corpora, indexing and querying the token distance in spoken language transcriptions is not a straightforward task, because it is not clearly determined what elements of a transcript (word tokens, transcribed pauses, non-verbal sounds, time anchors etc.) should be considered as an equivalent to a text token (see Frick and Schmidt, 2020). But even if this question has been clarified, multiple speaker layers with overlapping contributions are a particular problem for dealing with token distance and token precedence in interaction corpora. In spoken language transcripts, we generally have to deal with two token orders: temporal token order and document/sequential order of tokens, which don't coincide in the case of speaker overlaps. As an example, compare the representations of a transcript excerpt from the FOLK corpus, shown in the EXMARaLDA9 editor (Figure 1)10 and in the XML document corresponding to the ISO-TEI Standard for Spoken Language Transcriptions (Figure 2).11 As you can see, the speakers US and LM are speaking simultaneously for a second; both start their contributions with the word token ja (Eng. yes). In the temporal token order, illustrated by the representation in EXMARaLDA, both ja are preceded by a pause of 0.31 seconds. In the XML document, the parallel speaker contributions are presented in the sequential order, with the longest contribution (represented in the ISO/TEI standard by the <annotationBlock>-element) occurring before the shorter one. Therefore, only the ja realized by speaker US is preceded by a pause. The ja of speaker LM occurs after the whole contribution of speaker US and follows the token dann (Eng. then). Furthermore, although the word tokens ja of both speakers overlap, the token distance between these words according to the transcript would be 12, because 11 tokens occur between them in the XML file (see <w>-elements with xml:id w3007 and w3019 in Figure 2, marked by boxes). Because of efficiency in transcribing, the audio alignment is usually made in units above the word level (e.g. utterance units or longer contributions), and many individual tokens in the transcripts are therefore not synchronized with the audio sound. In theory, a word (or even phoneme) level alignment could be added with forced aligners such as MAUS.12 In practice, however, such an alignment would be highly unreliable, especially in the overlapping passages, because forced aligners have no way of dealing with simultaneous speech (making multi-channel recordings is usually not a viable option for this type of field recording). So, in this case, the temporal token order cannot be determined anymore and only the document order can be used to measure the distance and precedence of tokens. This sometimes leads to incorrect, incomplete or misleading results when searching for token sequences. For example, the following CQP query looks for all interjections and response particles (POS-tag: NGIRR) realized after a pause, but a search engine working on the document token order will match only the ja of speaker US and not the other occurrence of ja realized by speaker LM in the transcription excerpt discussed above.
[pos="NGIRR"] precededby <pause/> looks for tokens that are annotated with 'NGIRR' at the POS layer and follow a pause
Solution
Trying to solve the problems described in the section before, we tested the so-called speaker-based search mode. In this approach we created speaker-based versions of each transcript, which means that every speaker of the transcript got a separate document containing only the transcriptions of this speaker. All annotations and transcriptions of other speakers and speakerless elements (such as pauses between contributions) were mapped to this new tokenization layer. After indexing these speaker-based documents with the MTAS-based search engine, we could search in our corpora by individual speakers. That means that the query string from Section 3.1.1 can now match both occurrences of ja in the example presented in Figures 1 and 2. Furthermore, in the speaker-based transcripts, we could automatically add new time-based span annotations marking all time intervals when the speaker is silent, but other speakers are speaking. For example, the following <spanGrp>-element was added to the speaker-based ISO-TEI transcript corresponding to speaker NH from the example in Figures 1 and 2.
<spanGrp type="another-speaker" subtype="time-based">
<span from="TLI_992" to="TLI_994">US</span>
<span from="TLI_992" to="TLI_993">LM</span>
<span from="TLI_995" to="TLI_996">US</span>
</spanGrp>
Using these annotations, users can now perform complex searches taking into account phenomena like speaker change and turn-taking, as demonstrated in the queries below.
([norm="oder"] !within <speaker-overlap/>) followedby <para/>{0,5}<another-speaker/> looks for any transcribed form of 'oder' occurring outside of an overlap at the last position before speaker change; 'para' stands for <pause>-, <vocal>-and <incident>elements which can occur in the transcription between two speaker contributions. (<annotationBlock/> containing ([word=".*" & !pos="(NGIRR|NGHES|XY)"] !within <speakeroverlap/>)) precededby (<another-speaker/><para/>{0,5}) looks for turn-taking by one of the non-speakers whose contribution contains at least one word token that occurs outside of the speaker overlap and is not a non-word, hesitation, interjection or responsive particle.
Discussion
The speaker-based search mode does not make the common transcript-based search superfluous, but it complements its search options in a very useful way as shown by the search examples above. However, this additional search approach comes at a price: a lot of storage space for additional search indices is required, and the computational time needed for corpus indexing increases strongly depending on the number of speakers (consider classroom interactions with dozens of students).
Speaker Overlaps
3.2.1 Problem
The search functionality developed in the ZuMult project was designed with special user groups in mind. In particular, these are conversation analysis researchers interested in a new corpus search environment that makes it possible, among other things, to search for features of interaction structure, such as speaker overlaps. Although the MTAS framework used in the ZuMult search engine supports the search for overlapping structures and annotations, the MTAS Query Language is too limited at the level of syntax to allow flexible searches for speaker overlaps. For example, it is possible to use the MTAS Query Language operator "intersecting" to search for contributions of speaker A overlapping with contributions of speaker B:
<annotationBlock.speaker="A"/> intersecting <annotationBlock.speaker="B"/>
But it is not possible to write a query looking for all speaker overlaps in general. A query expression like
<annotationBlock.speaker/> intersecting <annotationBlock.speaker/>
would match every speaker's contribution because it would overlap with itself. To get the desired search result, the query would have to be formulated like this:
<annotationBlock.speaker=$X/> intersecting <annotationBlock.speaker=$Y/> where $X!=$Y
However, this form of using variables is not supported in the current version of the MTAS Query Language.
3.2.2 Solution
Extending the MTAS Query Language syntax to support variables is not a practicable option for us, because we use MTAS as an embedded framework that is being developed outside of our project. We did not want to change the framework itself in order to remain flexible and to be able to switch easily to the newest version of MTAS at any time later. The solution we chose to allow users to search for speaker overlaps was adding the appropriate annotations to the transcript documents and storing them in the MTAS search index. The ISO-TEI structure and the content of our transcripts allow for different methods to automatically identify speaker overlaps, and the annotations of speaker overlaps can be added in various forms to the transcript document. Consequently, we decided to test two different methods by adding two different kinds of annotations and to compare them to validate their effectiveness. The first method is segment-based. It goes through the time segments in the tokenization layer and checks for each pair of time anchors whether there are equivalents in the contributions of other speakers. If time anchors with the same value in the synch-attribute can be found in the contribution of another speaker, they are marked as the start and the end of a speaker overlap (see e.g. the type-attribute of the <anchor>-elements containing the synch-attribute with values TLI_992 and TLI_993 in Figure 2, marked by grey highlighting), and all word tokens between them get an annotation tag "ol-in" ("within overlap") in the type-attribute (see e.g. <w>-elements with xml:id w3007-w3013 in Figure 2). The type-attribute was indexed using MTAS in the same way as other token-based annotations like transcribed and normalized forms, POS-tags and lemmas. This allowed the following types of search queries to be submitted over the ZuMult Search-API:
[word.type=".*ol-in.*"] looks for word tokens within overlaps; the search pattern containing regular expression characters '.*' from both sides of 'ol-in' is important to match also type-attributes containing multi-word values (see e.g. the type-attribute of w3023 in Figure 2) The second method is contribution-based. It compares the start and end times of each <annotationBlock>-element with the start and end times of all other <annotationBlock>elements containing contributions of other speakers. If overlaps are identified, the <spanGrp>-element with the start and end times of the overlapping token sequence is added to the <annotationBlock> (see <spanGrp>-elements in Figure 2). The following query expressions demonstrate how the added span annotations can be requested when searching for speaker overlaps:
<speaker-overlap/>
looks for all spans annotated as speaker overlap
<speaker-overlap/> containing [lemma="(Herr|Frau)"]
looks for all spans annotated as speaker overlap and containing any forms of 'Herr' or 'Frau'
<speaker-overlap>[norm="also"]
looks for any transcribed form of 'also' at the beginning of speaker overlaps
<speaker-overlap="SZ"/>
looks for all token sequences overlapping with the contributions of the speaker 'SZ'
Figure 4: The same excerpt of the FOLK corpus as in Figure 3 presented in the ISO-TEI standard.
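The two detection strategies can be summarized in the following sketch. It works on a deliberately simplified transcript model (speaker contributions with start/end times and a list of anchor ids) and is an approximation of the ISO-TEI processing described above, not the actual ZuMult implementation.

```python
# Sketch of the segment-based and contribution-based overlap detection.
def segment_based_overlaps(contributions):
    """Return anchor ids that occur in more than one speaker's contribution."""
    anchor_owners = {}
    for c in contributions:
        for a in c["anchors"]:
            anchor_owners.setdefault(a, set()).add(c["speaker"])
    return {a for a, owners in anchor_owners.items() if len(owners) > 1}

def contribution_based_overlaps(contributions):
    """Compare start/end times of whole contributions pairwise."""
    overlaps = []
    for i, a in enumerate(contributions):
        for b in contributions[i + 1:]:
            if a["speaker"] != b["speaker"]:
                start, end = max(a["start"], b["start"]), min(a["end"], b["end"])
                if start < end:
                    overlaps.append((a["speaker"], b["speaker"], start, end))
    return overlaps

contribs = [
    {"speaker": "US", "start": 10.2, "end": 11.4, "anchors": ["TLI_992", "TLI_993", "TLI_994"]},
    {"speaker": "LM", "start": 10.2, "end": 10.7, "anchors": ["TLI_992", "TLI_993"]},
]
print(segment_based_overlaps(contribs))       # anchors shared by two speakers
print(contribution_based_overlaps(contribs))  # [('US', 'LM', 10.2, 10.7)]
```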
Discussion
Our experimental work showed that none of these methods can be used to index and search ALL speaker overlaps occurring in our corpora. The reason for this is trivial. Some time anchors that would be required for calculating and for indexing overlaps are missing. Please have a look at the example given in the EXMARaLDA editor in Figure 3. In this excerpt from the FOLK corpus, speakers US and NH are speaking simultaneously. If we look at the ISO-TEI representation of the same excerpt in Figure 4, we discover that the time anchor TLI_252 occurring in the speech of US is missing in the contribution of speaker NH. This is because the simultaneity of three speaker contributions makes it impossible in this case for the transcriber to precisely determine where each overlap starts or ends in relation to each of the other contributions. Therefore, the segment-based method could not recognize the word tokens with xml:id w858-w863 as being within the speaker overlap. The contribution-based method is in this case more accurate because it detects the speaker overlaps by comparing the end and start times of the <annotationBlock>-element (see <annotationBlock> with xml:id c150 and the first span annotation entry of its <spanGrp>-element). However, the contribution-based approach also has its disadvantages. Although it produces the correct time annotations, these annotations could not always be mapped to the tokenization layer during the indexing process, because relevant time anchors are again missing within <annotationBlock>-elements. This is illustrated by the FOLK excerpt in Figure 5, where speaker AM starts talking while speaker US is laughing. For a while they are speaking simultaneously. Using the contribution-based method, the interval with the appropriate speaker overlap can be determined and annotated in the transcript (see Figure 6). Unfortunately, the MTAS indexing algorithm fails when mapping the span annotation to the transcription layer because the time anchor with the synch-attribute value T_321 cannot be found in the <annotationBlock>-element of speaker US. The span annotation is simply left out of the search index. That means, that the following query will not match the tokens 'also' (w1076) and 'trinken' (w1077) in the current example.
<word/> within <speaker-overlap/>
looks for word tokens annotated as speaker overlap
Nevertheless, both of these tokens can be found by searching for 'ol-in' as the value of the type-attribute, as shown in the first query example from Section 3.2.2.
Since both methods discussed here have their drawbacks, we propose to use them in a complementary way to get an optimal set of results. Here is an example of a query expression combining both techniques for searching words within speaker overlaps:
(<word/> within <speaker-overlap/> | [word.type=".*ol-in.*"])
looks for word tokens occurring within speaker overlaps
There remains an open question, however, how successful the combination of both methods is. To be able to answer this question, we need manual annotations of speaker overlaps against which the search query below could be evaluated. We are aware that adding annotations to the transcript documents has disadvantages compared to adapting the MTAS Query Language, mainly because additional storage capacity is required. But our work allows us to conclude that just adding variables to the MTAS Query Language syntax and combining them with the "intersecting" operator (as previously suspected) will not return ALL speaker overlaps occurring in the corpus. A combination of different algorithms for calculating speaker overlaps behind the "intersecting"-operator would be required.
Related Work
At the beginning of the ZuMult project, we gained an overview of freely available web applications providing online access to spoken language corpora (cf. Batinić, Frick and Schmidt, 2021). Many of these search platforms support search functionality allowing the token distance between the items of the desired word-token sequence to be specified (cf. e.g. CQPWeb13/BNC2014, Kontext14, TalkBankDB15, GLOSSA16). But they only take into account the sequential word token order in the document, without considering problems caused by speaker overlaps. Support for querying tokens in relation to overlaps is provided by CLAPI.17 Moreover, this corpus search platform works, among others, with a TEI-based transcript format like in our approach. Nevertheless, the CLAPI search possibilities are restricted: it allows, for example, to search for word tokens followed or preceded by overlaps, but not for tokens located within or outside overlaps. In contrast, the Database for Spoken German (Datenbank für Gesprochenes Deutsch, DGD)18 has a "position filter" which can, for corpora with the respective information encoded, restrict searches to positions within and outside overlaps, but DGD again does not support querying and displaying speaker overlaps containing specified word tokens or word token sequences. Both CLAPI and DGD use a query builder with a complex filter to specify the distance between individual tokens. Compared to CLAPI, DGD additionally provides a speaker-based search mode comparable to the one described here, but DGD's data model for transcripts is not TEI-based and supports only a fixed set of tokens, no free span annotations. The MTAS-based search engine developed in the ZuMult project, as well as our first prototypical user interface application ZuRecht,19 combine both approaches and complement them by using a query language with CQP-based syntax for querying various aspects of speaker overlaps in the ISO-TEI transcript format.
Conclusion
The aim of this paper was to draw attention to the difficulties encountered in the development of query systems for interaction corpora. Using two specific phenomena (token distance and speaker overlaps), we have shown how complex such corpora are, especially if they lack word-token-based time-alignment. From our point of view, the proposed MTAS-based solutions are helpful to satisfy most of the needs of end users searching in this specific type of corpora.20 But the optimal answer to the described problems is and remains time-alignment at the token level. It would allow more precise searches with respect to token distance and speaker overlaps. As long as it is not possible to build on token-based time-alignment, alternative solutions are welcome and important to share with the research community. With the present paper we intend to encourage more transparency and exchange in the development of corpus search software for spoken language corpora. As an outlook, we think that the present paper can also provide some discussion material for modelling Use Cases in the "CQLF Ontology for Multi-Stream Architectures", Part 3 of Corpus Query Lingua Franca (CQLF, ISO 24623-1:2018; for more information about CQLF see Bański, Frick and Witt (2016) and Evert et al. (2020)).
Figure 6: The same excerpt of the FOLK corpus as in Figure 5 presented in the ISO-TEI standard.
Figure 1: An excerpt of the FOLK corpus transcriptions (FOLK_E_00055_SE_01_T_03) opened in EXMARaLDA.
Figure 2: The same excerpt of the FOLK corpus as in Figure 1 presented in the ISO-TEI standard.
Figure 3: A transcript excerpt (FOLK_E_00055_SE_01_T_05) demonstrating the problem for the segment-based approach proposed in Section 3.2.2.
Figure 5: A transcript excerpt (FOLK_E_00055_SE_01_T_05) demonstrating the problem for the contribution-based approach proposed in Section 3.2.2.
Footnotes: 1 https://zumult.org/ 2 http://agd.ids-mannheim.de 3 https://corpora.uni-hamburg.de/hzsk/ 4 https://www.philol.uni-leipzig.de/herder-institut/ 5 http://agd.ids-mannheim.de/folk.shtml 6 https://gewiss.uni-leipzig.de 7 https://corpora.unihamburg.de/hzsk/de/islandora/object/spoken-corpus:hamatac 8 http://cwb.sourceforge.net/
9 https://exmaralda.org/de/ 10 In order to save space, we are not providing an English translation for the German material. We trust that this will not keep the reader from following our arguments, which are about the structural properties, not the meaning of tokens. 11 Please note that, for the sake of readability, we have simplified the XML to only display the information needed to understand the argumentation in this paper. The full ISO compliant version contains additional attributes on most elements, most importantly normalisation (@norm), lemma (@lemma) and pos (@pos) annotation for each token. 12 https://www.bas.uni-muenchen.de/Bas/BasMAUS.html
13 https://cqpweb.lancs.ac.uk/bnc2014spoken/ 14 https://www.korpus.cz/ 15 https://talkbank.org/ 16 https://tekstlab.uio.no/glossa2 17 http://clapi.ish-lyon.cnrs.fr 18 https://dgd.ids-mannheim.de 19 http://zumult.ids-mannheim.de/ProtoZumult/jsp/zuRecht.jsp 20 In April 2022, almost 15000 (inter-)national users are registered for the DGD and thus are potential users of ZuRecht. The fact that our corpora are actively used is proved by numerous publications based on FOLK, the main corpus provided by DGD and ZuRecht. A website collecting these publications is available at www.ids-mannheim.de/prag/muendlichekorpora/bibliographiefolk
Bibliographical References
Bański, P., Frick, E., and Witt, A. (2016). Corpus Query Lingua Franca (CQLF). In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2804-2809, Portorož, Slovenia, May 23-28. European Language Resources Association (ELRA).
Batinić, J., Frick, E., Gasch, J., and Schmidt, T. (2019). Eine Basis-Architektur für den Zugriff auf multimodale Korpora gesprochener Sprache. In Sahle, P. (Ed.), Digital Humanities: multimedial & multimodal. Konferenzabstracts zur 6. Tagung des Verbandes Digital Humanities im deutschsprachigen Raum e.V. (DHd 2019). Frankfurt/Main; Mainz: Verband Digital Humanities im deutschsprachigen Raum e.V., pp. 280-281.
Batinić, J., Frick, E., and Schmidt, T. (2021). Accessing spoken language corpora: an overview of current approaches. Corpora, 16(3):417-445. Edinburgh: Edinburgh University Press.
Brouwer, M., Brugman, H., and Kemps-Snijders, M. (2016). MTAS: A Solr/Lucene based Multi-Tier Annotation Search solution. In Selected papers from the CLARIN Annual Conference 2016, Aix-en-Provence, pp. 19-37.
Evert, S., Harlamov, O., Heinrich, P., and Bański, P. (2020). Corpus Query Lingua Franca part II: Ontology. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), pages 3346-3352, Paris: European Language Resources Association (ELRA).
Fandrych, C., Frick, E., Kaiser, J., Meißner, C., Portmann, A., Schmidt, T., Schwendemann, M., Wallner, F., and Wörner, K. (in print). ZuMult: Neue Zugangswege zu Korpora gesprochener Sprache. In Kämper, H. et al. (Eds.), Sprache in Politik und Gesellschaft: Perspektiven und Zugänge. Jahrbuch des Instituts für Deutsche Sprache 2021. Berlin etc.: de Gruyter.
Frick, E. and Schmidt, T. (2020). Using Full Text Indices for Querying Spoken Language Data. In Proceedings of the LREC 2020 Workshop, Language Resources and Evaluation Conference, 11-16 May 2020, 8th Workshop on Challenges in the Management of Large Corpora (CMLC-8), pages 40-46, Paris: European Language Resources Association (ELRA).
ISO 24624:2016. Language resource management - Transcription of spoken language.
ISO 24623-1:2018. Language resource management - Corpus query lingua franca (CQLF) - Part 1: Metamodel.
Schmidt, T. (2018). Gesprächskorpora. In M. Kupietz & T. Schmidt (Eds.), Korpuslinguistik (=Germanistische Sprachwissenschaft um 2020, Bd. 5). Berlin/Boston: de Gruyter, pp. 209-230.
12,705,175 | Passing a USA National Bar Exam: a First Corpus for Experimentation | Bar exams provide a key watershed by which legal professionals demonstrate their knowledge of the law and its application. Passing the bar entitles one to practice the law in a given jurisdiction. The bar provides an excellent benchmark for the performance of legal information systems since passing the bar would arguably signal that the system has acquired key aspects of legal reasoning on a par with a human lawyer. The paper provides a corpus and experimental results with material derived from a real bar exam, treating the problem as a form of textual entailment from the question to an answer. The providers of the bar exam material set the Gold Standard, which is the answer key. The experiments were carried out using the Excitement Open Platform for textual entailment 'out of the box'. The results and evaluation show that the tool can identify wrong answers (non-entailment) with a high F1 score, but it performs poorly in identifying the correct answer (entailment). The results provide a baseline performance measure against which to evaluate future improvements. The reasons for the poor performance are examined, and proposals are made to augment the tool in the future. The corpus facilitates experimentation by other researchers. | [
279533,
480462,
8225810,
1499545,
14644892
] | Passing a USA National Bar Exam: a First Corpus for Experimentation
Biralatei Fawei [email protected]
Department of Computing Science
University of Aberdeen
AberdeenUnited Kingdom
Adam Wyner [email protected]
Department of Computing Science
University of Aberdeen
AberdeenUnited Kingdom
Jeff Pan [email protected]
Department of Computing Science
University of Aberdeen
AberdeenUnited Kingdom
Passing a USA National Bar Exam: a First Corpus for Experimentation
textual entailment; bar exam; legal reasoning; natural language processing
Bar exams provide a key watershed by which legal professionals demonstrate their knowledge of the law and its application. Passing the bar entitles one to practice the law in a given jurisdiction. The bar provides an excellent benchmark for the performance of legal information systems since passing the bar would arguably signal that the system has acquired key aspects of legal reasoning on a par with a human lawyer. The paper provides a corpus and experimental results with material derived from a real bar exam, treating the problem as a form of textual entailment from the question to an answer. The providers of the bar exam material set the Gold Standard, which is the answer key. The experiments were carried out using the Excitement Open Platform for textual entailment 'out of the box'. The results and evaluation show that the tool can identify wrong answers (non-entailment) with a high F1 score, but it performs poorly in identifying the correct answer (entailment). The results provide a baseline performance measure against which to evaluate future improvements. The reasons for the poor performance are examined, and proposals are made to augment the tool in the future. The corpus facilitates experimentation by other researchers.
Introduction
Bar exams, which are extensive in-depth examinations about legal information and reasoning, provide a key watershed by which legal professionals demonstrate their knowledge of the law and its application. Passing the bar entitles one to practice the law in a given jurisdiction and topic, and it validates the examinee's knowledge of the law. We can say, then, that the bar exam encapsulates a range of legal knowledge. Thus, the bar provides an excellent benchmark for the performance of legal information systems which attempt to represent and reason with the law, since passing the bar would arguably signal that the system has acquired key aspects of legal reasoning on a par with a human lawyer. This paper presents a dataset and a first experiment with bar exam material derived from the United States Multistate Bar Examination (MBE), provided by the National Conference of Bar Examiners (NCBE, http://www.ncbex.org/). The Gold Standard (the correct answers) is provided by the NCBE. The paper reports a textual entailment study on this US bar exam material, running the Excitement Open Platform (EOP, http://excitement-project.eu/) for textual entailment 'out of the box' (Dagan et al., 2009). In the experiment, we treat the relationship between the question and the multiple-choice answers as a form of textual entailment. The results and evaluation show that the tool can identify wrong answers (non-entailment) with a high F1 score, though it performs poorly in identifying the correct answer (entailment). The results provide a baseline performance measure against which to evaluate future improvements. The reasons for the poor performance are examined, and proposals are made to augment the tool in the future. The study is cast in the more general context of question answering for the law. Question answering is an automatic way of determining the right answer as a response to a question presented in natural language form (Harabagiu and Moldovan, 2003). Among the many varieties of questions, we treat bar exam questions as a form of 'Yes/No' question: given background information and a statement about that information, is the statement true or false with respect to the background information? Question answering is useful in the legal domain, which faces the challenge of finding and determining the right statement given some background information; this is particularly daunting given the volume and complexity of legal information. The problems only increase with legal reasoning from resources found on the Internet. Broadly, it is pressing to find approaches to extract and process information so as to identify the correct answer to a given question. The research adopts the textual entailment technique to determine if a given text t, known as the theory, entails another text h, known as the hypothesis (Dagan et al., 2009). The concept of entailment used in this technique is broader than the concept of logical entailment: given t, would one accept (or reject) h? Gold Standard corpora are devised to provide data for experiments. For our purposes, the bar exam question constitutes the theory and the answer to be judged constitutes the hypothesis; the answers provided are the Gold Standard. The entailment classification is based on semantic relatedness and mutual consistency.
The essence is to find out the semantic relatedness and mutual consistency between a legal text in natural language form as the question and some answer in natural language form, where semantic relatedness and mutual consistency bear on the terminology of the texts. In this research, we attempt to ascertain the semantic relatedness and mutual consistency between pairs of question and answer texts from a large legal corpus. The findings are, in brief, that the textual entailment tool that is used is largely successful in identifying answers that are not entailed by the question, but largely unsuccessful in identifying answers that are. The findings provide a baseline for future work, which would augment the 'out of the box' system with specifically legal information. In previous work (Fawei et al., 2015), we have presented a precursory corpus along with preliminary results of the application of EOP to that corpus. The novel contributions of this paper are the description and presentation of a new, larger, and 'cleaner' corpus for legal text NLP, along with a more refined result from EOP. The significance of the work is that, by laying a baseline, it provides a means to measure future incremental improvements towards a successful legal question answering system. Such a system would, in our view, have broad and deep implications for the access to and practice of the law. The rest of the paper is organized as follows. Section 2. describes the legal corpus and its features. Section 3. explains the textual entailment tool, the Excitement Open Platform (EOP), as well as some selected associated algorithms. Section 4. presents the experiment and the results. Section 5. discusses some related works in the domain, while Section 6. wraps up the research discussion with observations and future work.
Corpus Description
The National Conference of Bar Examiners in the United States prepares and administers the Multistate Bar Exam (MBE) every year to thousands of aspiring lawyers throughout the country. The MBE is an obligatory, 6-hour, 200-question multiple-choice test given in every US state but Louisiana. It accounts for 40-50% of an aspiring lawyer's bar exam score (other exams taking up the other 50-60%). In 2014, 73,088 examinees took the test; the mean scaled score (out of 200) was 140.4; approximately 29% of examinees did not pass the exam (data accessed September 05, 2015 from http://www.kaptest.com/bar-exam/courses/mbe/multistate-bar-exam-mbe-change). The exam questions (in the most recent exam) cover the legal spectrum: Constitutional Law, Contracts, Criminal Law and Procedure, Evidence, Real Property, Torts, and Civil Procedure. Thus, the MBE is a broad and deep exploration of the examinee's knowledge of the law as it applies across the US. A legal corpus was gathered from NCBE materials and prepared for a textual entailment exercise on the Excitement Open Platform. The original dataset contains one hundred questions, each with four possible answers out of which the candidate must pick the correct one; the NCBE provided an answer key to the materials. Given some modifications discussed below, the original dataset was developed into pairs of theories and hypotheses, where each question was paired with one possible answer, yielding four hundred theory-hypothesis pairs (the analysed dataset is available upon request). The correct (Gold Standard) answer is indicated as entailment and the wrong answer as non-entailment. Analysed this way, there is a bias of non-entailment to entailment in the ratio 3:1. The Gold Standard corpus contains 66,306 words in 3,071 sentences, of which 2,671 sentences belong to theories (each theory containing between 4 and 13 sentences) and 400 sentences are hypotheses. An example original question with answers a.-d. is:
After being fired from his job, Mel drank almost a quart of vodka and decided to ride the bus home.
While on the bus, he saw a briefcase he mistakenly thought was his own, and began struggling with the passenger carrying the briefcase. Mel knocked the passenger to the floor, took the briefcase, and fled. Mel was arrested and charged with robbery.
Mel should be a. acquitted, because he used no threats and was intoxicated.
b. acquitted, because his mistake negated the required specific intent.
c. convicted, because his intoxication was voluntary.
d. convicted, because mistake is no defense to robbery.
For the textual entailment task, the material must be presented in a particular XML format, where the theory appears as a whole, then the hypothesis in a single sentence. Constructing the material for EOP processing required significant manual preprocessing. As illustration, we mention issues with this example. First, in the original format, we have one question followed by four possible answers, whereas in the XML format, each question can be followed by only one answer (Question division). Second, the original background portion includes part, e.g. "Mel should be", of what conceptually ought to be part of the hypothesis (what is entailed), along with the main verb (full proposition). Third, the possible answers portion includes part, e.g. a justification "because he used no threats and was intoxicated", of what conceptually ought to be part of the theory (justification). Fourth, the justification must, when moved to the theory, maintain a reasonable narrative "flow" (narrative). Fifth, the original is not in XML (XML). Finally, due consideration must be given to the derived format so that it is meaning preserving (meaning preserving); in our examples, meaning preservation requires that we identify the core inference and put all "background" information into the theory. Given these considerations, we have samples of derived questions:
<pair id="7A" entailment="NONENTAILMENT" task="QA"> <t>After being fired from his job, Mel drank almost a quart of vodka and decided to ride the bus home. While on the bus, he saw a briefcase he mistakenly thought was his own, and began struggling with the passenger carrying the briefcase. Mel knocked the passenger to the floor, took the briefcase, and fled. Mel used no threats and was intoxicated. Mel was arrested and charged with robbery.</t> <h>Mel should be acquitted.</h> < /pair> <pair id="7B" entailment="ENTAILMENT" task="QA"> <t>After being fired from his job, Mel drank almost a quart of vodka and decided to ride the bus home. While on the bus, he saw a briefcase he mistakenly thought was his own, and began struggling with the passenger carrying the briefcase. Mel knocked the passenger to the floor, took the briefcase, and fled. Mel was arrested and charged with robbery. Mel's mistake negated the required specific intent.</t> <h>Mel should be acquitted.</h>
</pair>
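The derived pairs follow the EOP input schema shown above (a pair element carrying an entailment attribute, a t element for the theory and an h element for the hypothesis). As a minimal sketch of how such a file could be loaded for inspection or preprocessing, the snippet below uses Python's standard ElementTree; it assumes the pair elements are wrapped in a single root element, and the file name is hypothetical.

```python
# Minimal sketch: reading theory-hypothesis pairs in the XML format shown above.
# Assumes the <pair> elements are wrapped in a single root element; the file name
# "mbe_pairs.xml" is hypothetical.
import xml.etree.ElementTree as ET

def load_pairs(path):
    pairs = []
    root = ET.parse(path).getroot()
    for pair in root.iter("pair"):
        pairs.append({
            "id": pair.get("id"),
            "gold": pair.get("entailment"),          # ENTAILMENT / NONENTAILMENT
            "theory": (pair.findtext("t") or "").strip(),
            "hypothesis": (pair.findtext("h") or "").strip(),
        })
    return pairs

if __name__ == "__main__":
    for p in load_pairs("mbe_pairs.xml")[:3]:
        print(p["id"], p["gold"], "-", p["hypothesis"])
```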
A range of other structural issues of the text were identified and controlled for in order to produce a corpus that conceptually matches the original:
• Meta comments about the exam question, e.g. "Assume that...." and "Which of the following is correct?"
• References to other cases, e.g. "As applied in Long's case....".
• Pronominal anaphora, e.g. "It is a generally applicable statute....", where the modification might disrupt the anaphoric chain.
• Changes in verbal form, e.g. "...as applied..." becomes a main verb "is applied".
• Scope of particles, e.g. "if any" must be attached to relevant elements.
• "Yes" and "No" in original questions to refer to positive and negative forms of the hypothesis.
• Subordinate clauses in h are made into main clauses in t, e.g. "In a suit for conversion by Homeowner against Neighbor...." to "Homeowner makes a suit against Neighbor for conversion."
Excitement Open Platform (EOP) Description
In this section, we briefly outline the Excitement Open Platform (EOP). The EOP is an open source platform made available to both the scientific and technological communities for textual inference. The essence of the platform is to deliver an automatic means of identifying textual entailment between a pair of texts. The EOP platform was developed and implemented to provide a common framework for users and developers to experiment with textual analysis using multilingual resources and a variety of algorithms (Magnini et al., 2014). The EOP currently contains five different entailment decision algorithms: BIUTEE, Edit Distance, Textual Inference Engine, P1EDA and AdArte. We experimented with Edit Distance and the Textual Inference Engine.
Edit Distance
The ED algorithm uses a series of mapping operations to map the entire semantic content of the theory to the hypothesis in order to determine entailment. The mapping operations are edit operations such as delete, insert and substitution. Each of these operations is associated with a cost value, so that the probability of entailment between text pairs can be derived as inversely proportional to the edit distance between the text pairs (Padó et al., 2014; Magnini et al., 2014). The cost of each operation is given as 0 for match, 0 for delete, 1 for insert, and 1 for substitution. The algorithm measures semantic similarities between pairs of texts by measuring token edit distance and tree edit distance. It applies similarity measures such as word overlap, cosine similarity, and longest common sequence to measure similarity between theory and hypothesis. An entailment decision is taken based on the number of operations needed to make the theory and hypothesis identical, focusing on the minimal number of operations that lead to the goal state. The edit distance algorithm uses a threshold of approximately 0.5741 and an accuracy measure of 0.6575 to determine entailment between text pairs.
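To make the inverse relation between edit distance and the entailment score concrete, the following is an illustrative sketch, not the EOP implementation: it computes a token-level edit distance with the costs stated above (match 0, delete 0, insert 1, substitute 1) and converts it into a score in [0, 1]. EOP's actual normalisation, tree edit distance and similarity features are not reproduced here, and the 0.5741 threshold is quoted from the text only for illustration.

```python
# Illustrative sketch (not the EOP implementation): token-level edit distance with
# costs match=0, delete=0, insert=1, substitute=1, turned into a score that is
# inversely proportional to the distance.
def token_edit_distance(theory_tokens, hyp_tokens):
    m, n = len(theory_tokens), len(hyp_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]   # d[i][0] stays 0: deleting theory tokens is free
    for j in range(1, n + 1):
        d[0][j] = j                              # inserting hypothesis tokens costs 1 each
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if theory_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 0,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + sub)  # match / substitute
    return d[m][n]

def entailment_score(theory, hypothesis):
    t, h = theory.lower().split(), hypothesis.lower().split()
    return 1.0 / (1.0 + token_edit_distance(t, h))   # higher = more likely entailed

THRESHOLD = 0.5741   # value quoted in the text above; used here purely for illustration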
Textual Inference Engine (TIE)
The TIE algorithm is similar to the edit distance algorithm, but in addition checks entailment based on relatedness/similarity and mutual consistency, determining whether there is an inherent directionality between the given theory and hypothesis. A confidence value is assigned. It uses analysis on bag-of-words along with syntactic and semantic dependency information. Relatedness is a measure of the similarity/difference of concepts, sentences and words (McInnes and Pedersen, 2013). For our purposes, relatedness measures the extent to which a pair of sentences are related to each other. The similarity measure quantifies similarities between two concepts based on the information they contain (Pedersen et al., 2004), which is obtainable with the help of the WordNet lexical database. The bag-of-words component takes the theory and hypothesis pair of the corpus as a set of words and returns a score based on similarity and relatedness from the pair. This measurement technique relies on VerbOcean for extraction of related verbs, as well as WordNet for expansion of related words and Google Normalized Distance (GND) computation of distance with respect to terms (Padó et al., 2014). The bag-of-words feature returns two scores between 0 and 1 for the theory and hypothesis. The syntactic information compares the theory and hypothesis pair based on dependency parse trees. Bags of dependency triples are extracted from the text pairs and computed as normalised values for the theory and hypothesis. The normalised values lie between 0 and 1 and are used for the identification of relatedness between the text pairs: if the normalised value is 0 there is no relatedness, and if it is 1 the pairs are identical. This feature models word dependency in a sentence. The knowledge resources used in this component are VerbOcean, WordNet and GND, operating on the MSTParser.
Experiments and Results
We applied the EOP to our corpus of MBE questions. A number of trials were carried out to ascertain the degree, measured by standard Accuracy (A), Precision (P), Recall (R) and F1 measures, to which the various algorithms (Edit Distance and Textual Inference Engine) could be used to reliably distinguish entailed (E) from non-entailed (NE) hypotheses. Using the TIE algorithm on the corpus of 400 pairs, out of the 100 Gold Standard entailment examples, the system was able to confirm 23 actual entailments while failing to accurately identify the remaining 77 (see Tables 1-2); it may be that the ratio of entailment to non-entailment sentences biased the algorithm. Out of the 300 Gold Standard non-entailment examples, the system incorrectly identified 69 as entailments while confirming 231 as non-entailments. The TIE algorithm had the highest entailment result with a 0.903026 confidence value and the highest non-entailment result with a 0.501849 confidence value. The Edit Distance (ED) algorithm performed worse on the corpus of 400 pairs, correctly identifying only 11 out of the 100 entailment examples, while misclassifying 22 of the 300 non-entailment examples as entailments (see Tables 3-4). The ED algorithm had the highest entailment result with a 0.574176 confidence value and the highest non-entailment result with a 0.002747 confidence value.
In order to avoid bias, the dataset was redistributed with each correct pair along with one wrong pair, constituting three different datasets each with two hundred pairs. The algorithms were reapplied on the redistributed dataset. The results were the same or slightly worse in comparison with the initial dataset of 400 pairs; to conserve space, we have suppressed these results. To summarise, the algorithms used in the EOP have not succeeded in coming close to getting enough correct answers to pass the USA national bar exam. However, they can reliably identify the wrong answers.
Related Work
The most closely related work is (Kim et al., 2013; Tran et al., 2013). 5 In (Tran et al., 2013), an analysis is applied to 51 legal questions on the Japanese National Pension Law; the focus is on retrieval of relevant texts rather than textual entailment per se. The approach seems to be that closely related legal information ought to have closely related references to other texts, which are retrieved and used to augment the content of the texts being examined. Textual similarity and logical structure analysis are used to determine the relationship between question and answer. They report an improved performance over approaches without retrieval, with an accuracy of about 60%. The sample of data is relatively small (51 questions); the role of the augmented texts and logical structure analysis is difficult to gauge. Finally, the underlying analysis is done on Japanese and not on Bar Exam questions, so the comparison to US Bar Exam questions is indirect. More directly relevant is (Kim et al., 2013), which works with a corpus of Japanese/Korean Bar exam questions, which include legal articles and questions. Questions are analysed in terms of negations and complexity. A rule-based system for Japanese legal reasoning is applied with results of about 60% accuracy for all questions. The structure of the material (language, question and articles) is different from the US Bar Exam; the tool is highly specific; moreover, the relationship between the source natural language text and the rule-based analysis is unclear. Question-answering and textual inference have long been studied, though not with application to legal corpora. For question-answering, inference has been used (Lin and Pantel, 2001; Segura-Olivares et al., 2013), though noisy situations reduce performance. An answer validation technique that utilizes the subsequence kernel method has been implemented for machine learning for question answering (Wang and Neumann, 2008). A linear-chain Conditional Random Field (CRF) has been integrated into Tree Edit Distance for extracting answers (Yao et al., 2013). A lexical and syntactic feature similarities technique for determining textual entailment between a pair of texts has been applied (Pakray et al., 2011). A tree kernel approach is used to drive a greedy search routine to decide textual entailment between a pair of texts (Heilman and Smith, 2010). A similarity metrics approach was adopted (Rios and Gelbukh, 2012) for recognizing textual entailment; that research adopted string based metrics, chunking and named entity recognition as well as shallow semantic metrics for recognizing textual entailment. In (Bobrow et al., 2007), a rule-based approach is described to determine entailment and contradiction between a pair of texts. A semantic inference mechanism alongside cost-based approximation for deciding entailment between a pair of texts is presented in (Bar-Haim et al., 2007). The framework operates on parse trees to generate new trees based on entailment rules to decide if the hypothesis is entailed in the text. While these approaches require further improvement, it would be worth exploring in the EOP context whether they would augment the results when applied to legal texts.
Discussion
The paper reports the development of a corpus and the application of the EOP to determine textual entailment relations between questions and answers on US legal bar exams. The results show some success in identifying entailment and non-entailment pairs of sentences. From the experiments, it is clear that while recognition of non-entailment is rather high, the recognition of entailment is poor. One of the key observations to emerge from this study is the importance of logical reasoning in making entailment determinations. Using bags of words based on enrichment of lexical information or syntactic dependencies is not sufficient. Consider the following two examples (with simplified questions; reproduced in the Question 1 and Question 2 blocks below, after the tables). In these examples, the algorithms determine that all four of the possible answers are entailed by the question. The reason is that all the possible answers are closely semantically related to the text. The algorithms only use explicit textual information or augmentations provided by the resources.
Several other examples in our data set fall under this sort of problem.
Another issue identified in the course of the experiment is that the materials used to augment the textual information, e.g. VerbOcean and WordNet, lack the sorts of legal information and reasoning that are required. For example, the following possible answers (the Article III fragments listed below, after the example questions) not only refer to a relevant legal document, but also to some reasoning extracted from it.
To decide entailment in this case requires constitutional knowledge. With the current application, non-entailment is fairly reliably identified, since this relies on the textual difference between theory and hypothesis, whereas for entailment, textual similarity is not reliable, as the theory and hypothesis can be rather distinct, yet semantically related. In future work, we will develop legal resources that can serve to augment textual entailment tools so as to improve their results. The work reported here is novel in that it is a first, open, well-developed corpus of legal text on US Bar Exams which is specifically designed to address matters of inference. The results lay a baseline for future developments.
Table 1: Contingency table for the TIE algorithm (rows: Gold Standard, columns: system output)
        E      NE
E       23     77
NE      69     231

Table 2: Results from the TIE algorithm
        A       P       R       F1
NE      63.5    75.0    77.0    75.987
E       63.5    25.0    23.0    23.958

Table 3: Contingency table for the ED algorithm (rows: Gold Standard, columns: system output)
        E      NE
E       11     89
NE      22     278

Table 4: Results from the ED algorithm
        A       P       R       F1
NE      72.25   75.749  92.667  83.358
E       72.25   33.333  11.0    16.541
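The per-class figures in Tables 2 and 4 can be reproduced directly from the contingency counts. The short check below does so for TIE (Table 1), assuming the rows are the Gold Standard labels and the columns the system output, which matches the published precision and recall values.

```python
# Recomputing the per-class metrics in Tables 2 and 4 from the contingency counts
# (rows = Gold Standard, columns = system output), shown here for TIE (Table 1).
def metrics(tp, fn, fp, tn):
    accuracy  = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Entailment class for TIE: 23 true E, 77 missed, 69 false E, 231 true NE
print(metrics(tp=23, fn=77, fp=69, tn=231))    # ~ (0.635, 0.25, 0.23, 0.2396)
# Non-entailment class for TIE (swap the roles of E and NE)
print(metrics(tp=231, fn=69, fp=77, tn=23))    # ~ (0.635, 0.75, 0.77, 0.7599)
```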
Question 1: ....Tina decided that the house needed improvement, and she paid cash to have installed standard-sized combination screen/storm windows, a freestanding refrigerator to fit a kitchen alcove built for that purpose, a built-in electric stove and oven to fit a kitchen counter opening left for that purpose, and carpeting to cover the plywood living room floor....
A. The court should decide that Tina may remove none of the items.
B. The court should decide that Tina may remove only the refrigerator.
C. The court should decide that Tina may remove all items except the carpet.
D. The court should decide that Tina may remove all of the items.

Question 2: ....Proposal A would eliminate the insanity defense altogether. Proposal B would retain the defense but place on the defendant the burden of proving insanity by a preponderance of the evidence. Opponents of the reforms argue that the proposals would be unconstitutional under the ....
A. Both proposals would be unconstitutional.
B. Neither proposal would be unconstitutional.
C. Proposal A only would be unconstitutional.
D. Proposal B only would be unconstitutional.
....the original jurisdiction of the Supreme Court as defined by Article III....
....appellate jurisdiction of the Supreme Court, because Article III states....
....in support of the EPA's request is that Article III precludes....
....in support of the EPA's request is that Article III provides that....
http://webdocs.cs.ualberta.ca/~miyoung2/jurisin_task/index.html
Acknowledgments
The authors appreciate the permission granted by the Multistate Bar Examination organisation to work with their bar exam materials. The first author gratefully acknowledges support by Niger Delta University through the Tertiary Education Trust Fund (TETFund).
Bar-Haim, R., Dagan, I., Greental, I., Szpektor, I., and Friedman, M. (2007). Semantic inference at the lexical-syntactic level for textual entailment recognition. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 131-136. ACL.
Bobrow, D. G., Condoravdi, C., Crouch, R., de Paiva, V., Karttunen, L., King, T. H., Nairn, R., Price, L., and Zaenen, A. (2007). Precision-focused textual inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 16-21. ACL.
Dagan, I., Dolan, B., Magnini, B., and Roth, D. (2009). Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 15:i-xvii.
Fawei, B., Wyner, A., and Pan, J. (2015). Passing a USA national bar exam - a first experiment. In Legal Knowledge and Information Systems - JURIX 2015: The Twenty-Eighth Annual Conference, Braga, Portugal, December 10-11, 2015, pages 179-180.
Harabagiu, S. and Moldovan, D. (2003). Question answering. In Ruslan Mitkov, editor, The Oxford Handbook of Computational Linguistics, pages 560-582. Oxford University Press.
Heilman, M. and Smith, N. (2010). Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In HLT-NAACL 2010, pages 1011-1019. ACL.
Kim, M.-Y., Xu, Y., Goebel, R., and Satoh, K. (2013). Answering yes/no questions in legal bar exams. In New Frontiers in Artificial Intelligence - JSAI-isAI 2013 Workshops, Japan, October 2013, pages 199-213.
Lin, D. and Pantel, P. (2001). Discovery of inference rules for question-answering. Natural Language Engineering, 7(4):343-360.
Magnini, B., Zanoli, R., Dagan, I., Eichler, K., Neumann, G., Noh, T.-G., Pado, S., Stern, A., and Levy, O. (2014). The Excitement Open Platform for textual inferences. In Proceedings of the 52nd Annual Meeting of the ACL, pages 43-48. ACL.
McInnes, B. and Pedersen, T. (2013). Evaluating measures of semantic similarity and relatedness to disambiguate terms in biomedical text. Journal of Biomedical Informatics, 46(6):1116-1124.
Padó, S., Noh, T.-G., Stern, A., Wang, R., and Zanoli, R. (2014). Design and realization of a modular architecture for textual entailment. Journal of Natural Language Engineering, 21:167-200.
Pakray, P., Bandyopadhyay, S., and Gelbukh, A. (2011). Textual entailment using lexical and syntactic similarity. International Journal of Artificial Intelligence and Applications, 2(1):43-58.
Pedersen, T., Patwardhan, S., and Michelizzi, J. (2004). WordNet::Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38-41. ACL.
Rios, M. and Gelbukh, A. (2012). Recognizing textual entailment with similarity metrics. Research in Computing Science, 58:337-347.
Segura-Olivares, A., García, A., and Calvo, H. (2013). Feature analysis for paraphrase recognition and textual entailment. Research in Computing Science, 70:119-144.
Tran, O. T., Ngo, B. X., Nguyen, M., and Shimazu, A. (2013). Answering legal questions by mining reference information. In New Frontiers in Artificial Intelligence - JSAI-isAI 2013 Workshops, Japan, October 2013, pages 214-229.
Wang, R. and Neumann, G. (2008). Using recognizing textual entailment as a core engine for answer validation. In Advances in Multilingual and Multimodal Information Retrieval, pages 387-390. Springer.
Yao, X., Van Durme, B., Callison-Burch, C., and Clark, P. (2013). Answer extraction as sequence tagging with tree edit distance. In HLT-NAACL, pages 858-867. |
759,831 | A Generative Word Embedding Model and its Low Rank Positive Semidefinite Solution | Most existing word embedding methods can be categorized into Neural Embedding Models and Matrix Factorization (MF)based methods. However some models are opaque to probabilistic interpretation, and MF-based methods, typically solved using Singular Value Decomposition (SVD), may incur loss of corpus information. In addition, it is desirable to incorporate global latent factors, such as topics, sentiments or writing styles, into the word embedding model. Since generative models provide a principled way to incorporate latent factors, we propose a generative word embedding model, which is easy to interpret, and can serve as a basis of more sophisticated latent factor models. The model inference reduces to a low rank weighted positive semidefinite approximation problem. Its optimization is approached by eigendecomposition on a submatrix, followed by online blockwise regression, which is scalable and avoids the information loss in SVD. In experiments on 7 common benchmark datasets, our vectors are competitive to word2vec, and better than other MF-based methods. | [
9397697,
8712237,
9140751,
7478738,
5944731,
1957433,
5959482,
12730203
] | A Generative Word Embedding Model and its Low Rank Positive Semidefinite Solution
Association for Computational Linguistics, September 2015
Shaohua Li
Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
Nanyang Technological University
Singapore
Jun Zhu
Tsinghua University
P.R. China
Chunyan Miao [email protected]
Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)
Nanyang Technological University
Singapore
A Generative Word Embedding Model and its Low Rank Positive Semidefinite Solution
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal. Association for Computational Linguistics, September 2015.
Most existing word embedding methods can be categorized into Neural Embedding Models and Matrix Factorization (MF)based methods. However some models are opaque to probabilistic interpretation, and MF-based methods, typically solved using Singular Value Decomposition (SVD), may incur loss of corpus information. In addition, it is desirable to incorporate global latent factors, such as topics, sentiments or writing styles, into the word embedding model. Since generative models provide a principled way to incorporate latent factors, we propose a generative word embedding model, which is easy to interpret, and can serve as a basis of more sophisticated latent factor models. The model inference reduces to a low rank weighted positive semidefinite approximation problem. Its optimization is approached by eigendecomposition on a submatrix, followed by online blockwise regression, which is scalable and avoids the information loss in SVD. In experiments on 7 common benchmark datasets, our vectors are competitive to word2vec, and better than other MF-based methods.
Introduction
The task of word embedding is to model the distribution of a word and its context words using their corresponding vectors in a Euclidean space. Then by doing regression on the relevant statistics derived from a corpus, a set of vectors are recovered which best fit these statistics. These vectors, commonly referred to as the embeddings, capture semantic/syntactic regularities between the words.
The core of a word embedding method is the link function that connects the input -the embeddings, with the output -certain corpus statistics.
Based on the link function, the objective function is developed. The reasonableness of the link function impacts the quality of the obtained embeddings, and different link functions are amenable to different optimization algorithms, with different scalability. Based on the forms of the link function and the optimization techniques, most methods can be divided into two classes: the traditional neural embedding models, and more recent low rank matrix factorization methods.
The neural embedding models use the softmax link function to model the conditional distribution of a word given its context (or vice versa) as a function of the embeddings. The normalizer in the softmax function brings intricacy to the optimization, which is usually tackled by gradient-based methods. The pioneering work was (Bengio et al., 2003). Later Mnih and Hinton (2007) propose three different link functions. However there are interaction matrices between the embeddings in all these models, which complicate and slow down the training, hindering them from being trained on huge corpora. Mikolov et al. (2013a) and Mikolov et al. (2013b) greatly simplify the conditional distribution, where the two embeddings interact directly. They implemented the well-known "word2vec", which can be trained efficiently on huge corpora. The obtained embeddings show excellent performance on various tasks.
Low-Rank Matrix Factorization (MF in short) methods include various link functions and optimization methods. The link functions are usually not softmax functions. MF methods aim to reconstruct a certain corpus statistics matrix by the product of two low rank factor matrices. The objective is usually to minimize the reconstruction error, optionally with other constraints. In this line of research, Levy and Goldberg (2014b) find that "word2vec" is essentially doing stochastic weighted factorization of the word-context pointwise mutual information (PMI) matrix. They then factorize this matrix directly as a new method. Pennington et al. (2014) propose a bilinear regression function of the conditional distribution, from which a weighted MF problem on the bigram log-frequency matrix is formulated. Gradient Descent is used to find the embeddings. Recently, based on the intuition that words can be organized in semantic hierarchies, some authors add hierarchical sparse regularizers to the matrix reconstruction error. With similar techniques, others reconstruct a set of pretrained embeddings using sparse vectors of greater dimensionality. Dhillon et al. (2015) apply Canonical Correlation Analysis (CCA) to the word matrix and the context matrix, and use the canonical correlation vectors between the two matrices as word embeddings. Stratos et al. (2014) and Stratos et al. (2015) assume a Brown language model, and prove that doing CCA on the bigram occurrences is equivalent to finding a transformed solution of the language model. Arora et al. (2015) assume there is a hidden discourse vector on a random walk, which determines the distribution of the current word. The slowly evolving discourse vector puts a constraint on the embeddings in a small text window. The maximum likelihood estimate of the embeddings within this text window approximately reduces to a squared norm objective.
There are two limitations in current word embedding methods. The first limitation is, all MF-based methods map words and their context words to two different sets of embeddings, and then employ Singular Value Decomposition (SVD) to obtain a low rank approximation of the word-context matrix $M$. As SVD factorizes $MM^{\top}$, some information in $M$ is lost, and the learned embeddings may not capture the most significant regularities in $M$. Appendix A gives a toy example on which SVD does not work properly.
The second limitation is, a generative model for documents parametered by embeddings is absent in recent development. Although (Stratos et al., 2014;Stratos et al., 2015;Arora et al., 2015) are based on generative processes, the generative processes are only for deriving the local relationship between embeddings within a small text window, leaving the likelihood of a document undefined. In addition, the learning objectives of some models, e.g. (Mikolov et al., 2013b, Eq.1), even have no clear probabilistic interpretation. A generative word embedding model for documents is not only easier to interpret and analyze, but more importantly, provides a basis upon which documentlevel global latent factors, such as document topics (Wallach, 2006), sentiments (Lin and He, 2009), writing styles (Zhao et al., 2011b), can be incorporated in a principled manner, to better model the text distribution and extract relevant information.
Based on the above considerations, we propose to unify the embeddings of words and context words. Our link function factorizes into three parts: the interaction of two embeddings capturing linear correlations of two words, a residual capturing nonlinear or noisy correlations, and the unigram priors. To reduce overfitting, we put Gaussian priors on embeddings and residuals, and apply Jelinek-Mercer Smoothing to bigrams. Furthermore, to model the probability of a sequence of words, we assume that the contributions of more than one context word approximately add up. Thereby a generative model of documents is constructed, parameterized by embeddings and residuals. The learning objective is to maximize the corpus likelihood, which reduces to a weighted low-rank positive semidefinite (PSD) approximation problem of the PMI matrix. A Block Coordinate Descent algorithm is adopted to find an approximate solution. This algorithm is based on Eigendecomposition, which avoids information loss in SVD, but brings challenges to scalability. We then exploit the sparsity of the weight matrix and implement an efficient online blockwise regression algorithm. On seven benchmark datasets covering similarity and analogy tasks, our method achieves competitive and stable performance.
The source code of this method is provided at https://github.com/askerlee/topicvec.
Notations and Definitions
Throughout the paper, we always use an uppercase bold letter such as $\boldsymbol{S}, \boldsymbol{V}$ to denote a matrix or set, a lowercase bold letter such as $\boldsymbol{v}_{w_i}$ to denote a vector, a normal uppercase letter such as $N, W$ to denote a scalar constant, and a normal lowercase letter such as $s_i, w_i$ to denote a scalar variable. Suppose a vocabulary $S = \{s_1, \cdots, s_W\}$ consists of all the words, where $W$ is the vocabulary size. We further suppose $s_1, \cdots, s_W$ are sorted in descending order of frequency, i.e. $s_1$ is most frequent, and $s_W$ is least frequent. A document $d_i$ is a sequence of words $d_i = (w_{i1}, \cdots, w_{iL_i})$, $w_{ij} \in S$. A corpus is a collection of $M$ documents.
The Pointwise Mutual Information between two words s i , s j is defined as PMI(s i , s j ) = log P (s i , s j ) P (s i )P (s j ) .
Link Function of Text
In this section, we formulate the probability of a sequence of words as a function of their embeddings. We start from the link function of bigrams, which are the building blocks of a longer sequence. Then this link function is extended to a text window with $c$ context words, as a first-order approximation of the actual probability.
Link Function of Bigrams
We generalize the link function of "word2vec" and "GloVe" to the following:
$$P(s_i, s_j) = \exp\{\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i} + a_{s_i s_j}\}\, P(s_i)P(s_j) \quad (1)$$
The rationale for (1) originates from the idea of the Product of Experts in (Hinton, 2002). Suppose different types of semantic/syntactic regularities between $s_i$ and $s_j$ are encoded in different dimensions of $\boldsymbol{v}_{s_i}, \boldsymbol{v}_{s_j}$. As $\exp\{\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}\} = \prod_l \exp\{v_{s_i,l} \cdot v_{s_j,l}\}$, this means the effects of different regularities on the probability are combined by multiplying together. If $s_i$ and $s_j$ are independent, their joint probability should be $P(s_i)P(s_j)$.

In the presence of correlations, the actual joint probability $P(s_i, s_j)$ would be a scaling of it. The scale factor reflects how much $s_i$ and $s_j$ are positively or negatively correlated. Within the scale factor, $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}$ captures linear interactions between $s_i$ and $s_j$, and the residual $a_{s_i s_j}$ captures nonlinear or noisy interactions. In applications, only $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}$ is of interest. Hence the bigger the magnitude of $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}$ is relative to $a_{s_i s_j}$, the better.

Note that we do not assume $a_{s_i s_j} = a_{s_j s_i}$. This provides the flexibility that $P(s_i, s_j) \neq P(s_j, s_i)$, agreeing with the asymmetry of bigrams in natural languages. At the same time, $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}$ imposes a symmetric part between $P(s_i, s_j)$ and $P(s_j, s_i)$.
(1) is equivalent to
$$P(s_j|s_i) = \exp\{\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i} + a_{s_i s_j} + \log P(s_j)\}, \quad (2)$$
$$\log \frac{P(s_j|s_i)}{P(s_j)} = \boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i} + a_{s_i s_j}. \quad (3)$$
(3) of all bigrams is represented in matrix form:
$$V^{\top}V + A = G, \quad (4)$$
where G is the PMI matrix.
Gaussian Priors on Embeddings
When (1) is employed on the regression of empirical bigram probabilities, a practical issue arises: more and more bigrams have zero frequency as the constituting words become less frequent. A zero-frequency bigram does not necessarily imply negative correlation between the two constituting words; it could simply result from missing data. But in this case, even after smoothing, (1) will force $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i} + a_{s_i s_j}$ to be a big negative number, making $\boldsymbol{v}_{s_i}$ overly long. The increased magnitude of embeddings is a sign of overfitting.

To reduce overfitting of embeddings of infrequent words, we assign a Spherical Gaussian prior $\mathcal{N}(0, \frac{1}{2\mu_i} I)$ to $\boldsymbol{v}_{s_i}$: $P(\boldsymbol{v}_{s_i}) \propto \exp\{-\mu_i \|\boldsymbol{v}_{s_i}\|^2\}$, where the hyperparameter $\mu_i$ increases as the frequency of $s_i$ decreases.
Gaussian Priors on Residuals
We wish $\boldsymbol{v}_{s_j}^{\top}\boldsymbol{v}_{s_i}$ in (1) to capture as much correlation between $s_i$ and $s_j$ as possible. Thus the smaller $a_{s_i s_j}$ is, the better. In addition, the more frequent $s_i, s_j$ is in the corpus, the less noise there is in their empirical distribution, and thus the residual $a_{s_i s_j}$ should be more heavily penalized.

To this end, we penalize the residual $a_{s_i s_j}$ by $f(\tilde{P}(s_i, s_j))\, a_{s_i s_j}^2$, where $f(\cdot)$ is a nonnegative monotonic transformation, referred to as the weighting function. Let $h_{ij}$ denote $\tilde{P}(s_i, s_j)$; then the total penalty of all residuals is the square of the weighted Frobenius norm of $A$:

$$\sum_{s_i, s_j \in S} f(h_{ij})\, a_{s_i s_j}^2 = \|A\|^2_{f(H)}. \quad (5)$$
By referring to "GloVe", we use the following weighting function, and find it performs well:
$$f(h_{ij}) = \begin{cases} \sqrt{h_{ij}/C_{\rm cut}} & h_{ij} < C_{\rm cut},\ i \neq j \\ 1 & h_{ij} \geq C_{\rm cut},\ i \neq j \\ 0 & i = j \end{cases}$$
where $C_{\rm cut}$ is chosen to cut the most frequent 0.02% of the bigrams off at 1. When $s_i = s_j$, two identical words usually have a much smaller probability to collocate. Hence $\tilde{P}(s_i, s_i)$ does not reflect the true correlation of a word to itself, and should not put constraints on the embeddings. We eliminate their effects by setting $f(h_{ii})$ to 0.

If the domain of $A$ is the whole space $\mathbb{R}^{W \times W}$, then this penalty is equivalent to a Gaussian prior $\mathcal{N}\big(0, \frac{1}{2f(h_{ij})}\big)$ on each $a_{s_i s_j}$. The variances of the Gaussians are determined by the bigram empirical probability matrix $H$.
Jelinek-Mercer Smoothing of Bigrams
As another measure to reduce the impact of missing data, we apply the commonly used Jelinek-Mercer Smoothing (Zhai and Lafferty, 2004) to smooth the empirical conditional probability $\tilde{P}(s_j|s_i)$ by the unigram probability $\tilde{P}(s_j)$ as:

$$\tilde{P}_{\rm smoothed}(s_j|s_i) = (1-\kappa)\tilde{P}(s_j|s_i) + \kappa \tilde{P}(s_j). \quad (6)$$

Accordingly, the smoothed bigram empirical joint probability is defined as

$$\tilde{P}_{\rm smoothed}(s_i, s_j) = (1-\kappa)\tilde{P}(s_i, s_j) + \kappa \tilde{P}(s_i)\tilde{P}(s_j). \quad (7)$$
In practice, we find κ = 0.02 yields good results. When κ ≥ 0.04, the obtained embeddings begin to degrade with κ, indicating that smoothing distorts the true bigram distributions.
Link Function of a Text Window
In the previous subsection, a regression link function of bigram probabilities was established. In this section, we adopt a first-order approximation based on Information Theory, and extend the link function to a longer sequence $w_0, \cdots, w_{c-1}, w_c$.

Decomposing a distribution conditioned on $n$ random variables into the conditional distributions on its subsets is deeply rooted in Information Theory. This is an intricate problem because there could be both (pointwise) redundant information and (pointwise) synergistic information among the conditioning variables (Williams and Beer, 2010). They are both functions of the PMI. Based on an analysis of the complementing roles of these two types of pointwise information, we assume they are approximately equal and cancel each other when computing the pointwise interaction information. See Appendix B for a detailed discussion.
Following the above assumption, we have $\mathrm{PMI}(w_2; w_0, w_1) \approx \mathrm{PMI}(w_2; w_0) + \mathrm{PMI}(w_2; w_1)$:

$$\log \frac{P(w_0, w_1 | w_2)}{P(w_0, w_1)} \approx \log \frac{P(w_0|w_2)}{P(w_0)} + \log \frac{P(w_1|w_2)}{P(w_1)}.$$

Plugging (1) and (3) into the above, we obtain

$$P(w_0, w_1, w_2) \approx \exp\Big\{ \sum_{\substack{i,j=0 \\ i \neq j}}^{2} \big(\boldsymbol{v}_{w_i}^{\top}\boldsymbol{v}_{w_j} + a_{w_i w_j}\big) + \sum_{i=0}^{2} \log P(w_i) \Big\}.$$

We extend the above assumption to a longer text window, i.e., the pointwise interaction information is still assumed to be close to 0 within it. Accordingly the above equation extends to a context of size $c > 2$:

$$P(w_0, \cdots, w_c) \approx \exp\Big\{ \sum_{\substack{i,j=0 \\ i \neq j}}^{c} \big(\boldsymbol{v}_{w_i}^{\top}\boldsymbol{v}_{w_j} + a_{w_i w_j}\big) + \sum_{i=0}^{c} \log P(w_i) \Big\}.$$

From it derives the conditional distribution of $w_c$, given its context $w_0, \cdots, w_{c-1}$:

$$P(w_c \mid w_0{:}w_{c-1}) = \frac{P(w_0, \cdots, w_c)}{P(w_0, \cdots, w_{c-1})} \approx P(w_c) \exp\Big\{ \boldsymbol{v}_{w_c}^{\top} \sum_{i=0}^{c-1} \boldsymbol{v}_{w_i} + \sum_{i=0}^{c-1} a_{w_i w_c} \Big\}. \quad (8)$$
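As a small illustration of Eq. (8), the sketch below evaluates the (approximate, unnormalised) conditional probability of a focus word given its context from precomputed embeddings, residuals and unigram probabilities; the container names V, A and P are hypothetical and not part of the paper.

```python
# Minimal sketch of Eq. (8), assuming we already have a dict of embedding vectors V[w],
# a dict of residuals A[(w_i, w_c)] and unigram probabilities P[w]. Names are hypothetical.
import numpy as np

def cond_prob(w_c, context, V, A, P):
    # the contributions of the context words add up in the exponent
    score = V[w_c].dot(sum(V[w] for w in context))
    score += sum(A.get((w, w_c), 0.0) for w in context)
    return P[w_c] * np.exp(score)   # approximation from Eq. (8); not renormalised here
```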
Generative Process and Likelihood
We proceed to assume the text is generated from a Markov chain of order $c$, i.e., a word only depends on words within its context of size $c$. Given the hyperparameter $\mu = (\mu_1, \cdots, \mu_W)$, the generative process of the whole corpus is:

1. For each word $s_i$, draw the embedding $\boldsymbol{v}_{s_i}$ from $\mathcal{N}(0, \frac{1}{2\mu_i} I)$;
2. For each bigram $s_i, s_j$, draw the residual $a_{s_i s_j}$ from $\mathcal{N}\big(0, \frac{1}{2f(h_{ij})}\big)$;
3. For each document $d_i$, for the $j$-th word, draw word $w_{ij}$ from $S$ with probability $P(w_{ij} \mid w_{i,j-c} : w_{i,j-1})$ defined by (8).

Figure 1: The Graphical Model of PSDVec
The above generative process for a document $d$ is presented as a graphical model in Figure 1. Based on this generative process, the probability of a document $d_i$ can be derived as follows, given the embeddings and residuals $V, A$:

$$P(d_i | V, A) = \prod_{j=1}^{L_i} P(w_{ij}) \exp\Big\{ \boldsymbol{v}_{w_{ij}}^{\top} \sum_{k=j-c}^{j-1} \boldsymbol{v}_{w_{ik}} + \sum_{k=j-c}^{j-1} a_{w_{ik} w_{ij}} \Big\}.$$
The complete-data likelihood of the corpus is:
$$p(D, V, A) = \prod_{i=1}^{W} \mathcal{N}\Big(0, \frac{1}{2\mu_i} I\Big) \prod_{i,j=1}^{W,W} \mathcal{N}\Big(0, \frac{1}{2f(h_{ij})}\Big) \prod_{i=1}^{M} p(d_i | V, A)$$
$$= \frac{1}{Z(H, \mu)} \exp\Big\{ -\sum_{i,j=1}^{W,W} f(h_{ij})\, a_{s_i s_j}^2 - \sum_{i=1}^{W} \mu_i \|\boldsymbol{v}_{s_i}\|^2 \Big\} \cdot \prod_{i,j=1}^{M, L_i} P(w_{ij}) \exp\Big\{ \boldsymbol{v}_{w_{ij}}^{\top} \sum_{k=j-c}^{j-1} \boldsymbol{v}_{w_{ik}} + \sum_{k=j-c}^{j-1} a_{w_{ik} w_{ij}} \Big\},$$
where Z(H, µ) is the normalizing constant.
Taking the logarithm of both sides of p(D, A, V ) yields
$$\log p(D, V, A) = C_0 - \log Z(H, \mu) - \|A\|^2_{f(H)} - \sum_{i=1}^{W} \mu_i \|\boldsymbol{v}_{s_i}\|^2 + \sum_{i,j=1}^{M, L_i} \Big( \boldsymbol{v}_{w_{ij}}^{\top} \sum_{k=j-c}^{j-1} \boldsymbol{v}_{w_{ik}} + \sum_{k=j-c}^{j-1} a_{w_{ik} w_{ij}} \Big), \quad (9)$$

where $C_0 = \sum_{i,j=1}^{M, L_i} \log P(w_{ij})$ is constant.
Learning Algorithm
Learning Objective
The learning objective is to find the embeddings $V$ that maximize the corpus log-likelihood (9). Let $x_{ij}$ denote the (smoothed) frequency of bigram $s_i, s_j$ in the corpus. Then (9) is sorted as:

$$\log p(D, V, A) = C_0 - \log Z(H, \mu) - \|A\|^2_{f(H)} - \sum_{i=1}^{W} \mu_i \|\boldsymbol{v}_{s_i}\|^2 + \sum_{i,j=1}^{W,W} x_{ij} \big(\boldsymbol{v}_{s_i}^{\top}\boldsymbol{v}_{s_j} + a_{s_i s_j}\big). \quad (10)$$
As the corpus size increases, $\sum_{i,j=1}^{W,W} x_{ij} (\boldsymbol{v}_{s_i}^{\top}\boldsymbol{v}_{s_j} + a_{s_i s_j})$ will dominate the parameter prior terms. Then we can ignore the prior terms when maximizing (10). Since $x_{ij} \propto \tilde{P}_{\rm smoothed}(s_i, s_j)$ and the unigram terms $\log P(s_i)P(s_j)$ are constant,

$$\max \sum_{i,j} x_{ij}\big(\boldsymbol{v}_{s_i}^{\top}\boldsymbol{v}_{s_j} + a_{s_i s_j}\big) = \text{const} + \Big(\sum_{i,j} x_{ij}\Big) \cdot \max \sum_{i,j} \tilde{P}_{\rm smoothed}(s_i, s_j) \log P(s_i, s_j).$$
As both $\{\tilde{P}_{\rm smoothed}(s_i, s_j)\}$ and $\{P(s_i, s_j)\}$ sum to 1, the above sum is maximized when $P(s_i, s_j) = \tilde{P}_{\rm smoothed}(s_i, s_j)$. The maximum likelihood estimator is then:

$$P(s_j|s_i) = \tilde{P}_{\rm smoothed}(s_j|s_i), \qquad \boldsymbol{v}_{s_i}^{\top}\boldsymbol{v}_{s_j} + a_{s_i s_j} = \log \frac{\tilde{P}_{\rm smoothed}(s_j|s_i)}{P(s_j)}. \quad (11)$$

Writing (11) in matrix form:

$$B^* = \big[\tilde{P}_{\rm smoothed}(s_j|s_i)\big]_{s_i, s_j \in S}, \qquad G^* = \log B^* - \log \boldsymbol{u} \otimes (1 \cdots 1), \quad (12)$$

where "$\otimes$" is the outer product. Now we fix the values of $\boldsymbol{v}_{s_i}^{\top}\boldsymbol{v}_{s_j} + a_{s_i s_j}$ at the above optimum. The corpus likelihood becomes
$$\log p(D, V, A) = C_1 - \|A\|^2_{f(H)} - \sum_{i=1}^{W} \mu_i \|\boldsymbol{v}_{s_i}\|^2, \quad \text{subject to } V^{\top}V + A = G^*, \quad (13)$$

where $C_1 = C_0 + \sum_{i,j} x_{ij} \log \tilde{P}_{\rm smoothed}(s_i, s_j) - \log Z(H, \mu)$ is constant.
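Putting Eqs. (6), (7) and (12) together, the matrix $G^*$ and the weight matrix $f(H)$ can be computed directly from a bigram count matrix. The sketch below (NumPy, with hypothetical variable names and a rough choice of $C_{\rm cut}$) is one way this could be done; it is not the released implementation.

```python
# Sketch: building the matrix G* of Eq. (12) and the weights f(H) from a W x W bigram
# count matrix `counts` (counts[i, j] = #occurrences of bigram (s_i, s_j)).
# kappa follows the value discussed above; everything else is a hypothetical choice.
import numpy as np

def build_gstar_and_weights(counts, kappa=0.02, c_cut=None):
    total = counts.sum()
    u = counts.sum(axis=1) / total                        # unigram probabilities
    joint = counts / total                                # empirical P(s_i, s_j)
    joint = (1 - kappa) * joint + kappa * np.outer(u, u)  # Eq. (7), Jelinek-Mercer smoothing
    cond = joint / joint.sum(axis=1, keepdims=True)       # smoothed P(s_j | s_i), rows sum to 1
    gstar = np.log(cond) - np.log(u)[None, :]             # Eq. (12): subtract log P(s_j) per column
    h = joint
    if c_cut is None:
        c_cut = np.quantile(h, 1 - 0.0002)                # rough cut of the most frequent ~0.02%
    w = np.minimum(np.sqrt(h / c_cut), 1.0)               # weighting function f(h_ij)
    np.fill_diagonal(w, 0.0)                              # f(h_ii) = 0
    return gstar, w
```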
Learning V as Low Rank PSD Approximation
Once $G^*$ has been estimated from the corpus using (12), we seek $V$ that maximizes (13). This is to find the maximum a posteriori (MAP) estimates of $V, A$ that satisfy $V^{\top}V + A = G^*$. Applying this constraint to (13), we obtain

$$\arg\max_V \log p(D, V, A) = \arg\min_V \|G^* - V^{\top}V\|^2_{f(H)} + \sum_{i=1}^{W} \mu_i \|\boldsymbol{v}_{s_i}\|^2. \quad (14)$$

Algorithm 1: BCD algorithm for finding an unregularized rank-$N$ weighted PSD approximant.
Input: matrix $G^*$, weight matrix $W = f(H)$, iteration number $T$, rank $N$
Randomly initialize $X^{(0)}$
for $t = 1, \cdots, T$ do
    $G_t = W \circ G^* + (1 - W) \circ X^{(t-1)}$
    $X^{(t)} = \text{PSD\_Approximate}(G_t, N)$
end for
$\lambda, Q = \text{Eigen\_Decomposition}(X^{(T)})$
$V^* = \mathrm{diag}(\lambda^{1/2}_{[1:N]}) \cdot Q_{[1:N]}$
Output: $V^*$
Let X = V V . Then X is positive semidefinite of rank N . Finding V that minimizes (14) is equivalent to finding a rank-N weighted positive semidefinite approximant X of G * , subject to Tikhonov regularization. This problem does not admit an analytic solution, and can only be solved using local optimization methods.
First we consider a simpler case where all the words in the vocabulary are enough frequent, and thus Tikhonov regularization is unnecessary. In this case, we set ∀µ i = 0, and (14) becomes an unregularized optimization problem. We adopt the Block Coordinate Descent (BCD) algorithm 1 in (Srebro et al., 2003) to approach this problem. The original algorithm is to find a generic rank-N matrix for a weighted approximation problem, and we tailor it by constraining the matrix within the positive semidefinite manifold.
We summarize our learning algorithm in Algorithm 1. Here "•" is the entry-wise product. We suppose the eigenvalues \lambda returned by Eigen_Decomposition(X) are in descending order; Q_{[1:N]} extracts the first N rows of Q.
One key issue is how to initialize X. Srebro et al. (2003) suggest setting X^{(0)} = G^{*}, and point out that X^{(0)} = 0 is far from a local optimum and thus requires more iterations. However, we find that G^{*} is also far from a local optimum, and this setting converges slowly too. Setting X^{(0)} = G^{*}/2 usually yields a satisfactory solution in a few iterations.
The subroutine PSD Approximate() computes the unweighted nearest rank-N PSD approximation, measured in F-norm (Higham, 1988).
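The following is a hedged NumPy sketch of Algorithm 1 (ours, not the authors' code). PSD_Approximate is realized as the nearest rank-N PSD matrix in F-norm (symmetrize, keep the N largest non-negative eigenvalues, in the spirit of Higham, 1988); the initialization X^(0) = G*/2 follows the suggestion above, and the toy matrix and uniform weights are assumptions for the example only.

import numpy as np

def psd_approximate(G, N):
    # nearest rank-N PSD approximation in F-norm: keep the N largest non-negative eigenvalues
    sym = (G + G.T) / 2
    lam, Q = np.linalg.eigh(sym)
    lam, Q = lam[::-1], Q[:, ::-1]                 # descending order
    lam = np.clip(lam[:N], 0.0, None)
    return (Q[:, :N] * lam) @ Q[:, :N].T

def bcd_embeddings(G_star, weights, N, T=20):
    # Algorithm 1 (unregularized case): alternate between filling in the weighted
    # observed entries and projecting onto the rank-N PSD manifold
    X = G_star / 2.0
    for _ in range(T):
        Gt = weights * G_star + (1.0 - weights) * X
        X = psd_approximate(Gt, N)
    lam, Q = np.linalg.eigh(X)
    lam, Q = lam[::-1], Q[:, ::-1]
    return np.sqrt(np.clip(lam[:N], 0.0, None))[:, None] * Q[:, :N].T   # V is N x W

# toy run: W = 6 words, rank-3 embeddings, uniform weights
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
G_star = (M + M.T) / 2
V = bcd_embeddings(G_star, weights=np.ones((6, 6)), N=3)
print(np.round(G_star - V.T @ V, 2))               # the residual matrix A = G* - V^T V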
Online Blockwise Regression of V
In Algorithm 1, the essential subroutine PSD_Approximate() performs an eigendecomposition of G_t, which is dense due to the logarithm transformation. Eigendecomposition of a W × W dense matrix requires O(W^2) space and O(W^3) time, which is difficult to scale to a large vocabulary. In addition, the majority of words in the vocabulary are infrequent, and Tikhonov regularization is necessary for them.
It is observed that, as words become less frequent, fewer and fewer words appear around them to form bigrams. Recall that the vocabulary S = {s_1, \cdots, s_W} is sorted in descending order of frequency; hence the lower-right blocks of H and f(H) are very sparse, and these blocks contribute much less penalty in (14) relative to other regions. They can therefore be ignored during regression without sacrificing much accuracy. This intuition leads to the following online blockwise regression.
The basic idea is to select a small set (e.g. 30,000) of the most frequent words as the core words, and partition the remaining noncore words into sets of moderate sizes. Bigrams consisting of two core words are referred to as core bigrams, which correspond to the top-left blocks of G and f(H). The embeddings of core words are learned approximately using Algorithm 1, on the top-left blocks of G and f(H). We then fix the embeddings of the core words, and find the embeddings of each set of noncore words in turn. After ignoring the lower-right regions of G and f(H), which correspond to bigrams of two noncore words, the quadratic terms of noncore embeddings are ignored. Consequently, finding these embeddings becomes a weighted ridge regression problem, which can be solved efficiently in closed form. Finally we combine all embeddings to get the embeddings of the whole vocabulary. The details are as follows:
1. Partition S into K consecutive groups S_1, \cdots, S_K. Take K = 3 as an example. The first group is the core words;
2. Accordingly partition G into K × K blocks, in this example as
G = \begin{pmatrix} G_{11} & G_{12} & G_{13} \\ G_{21} & G_{22} & G_{23} \\ G_{31} & G_{32} & G_{33} \end{pmatrix}.
Partition f(H), A in the same way. G_{11}, f(H)_{11}, A_{11} correspond to core bigrams. Partition V column-wise into (V_1, V_2, V_3), where V_k contains the embeddings of the words in S_k;
3. Solve V_1^{\top} V_1 + A_{11} = G_{11} using Algorithm 1, and obtain the core embeddings V^{*}_1;
4. Set V_1 = V^{*}_1, and find V^{*}_2 that minimizes the total penalty of the 12-th and 21-th blocks of residuals (the 22-th block is ignored due to its high sparsity):
\arg\min_{V_2} \|G_{12} - V_1^{\top} V_2\|^{2}_{f(H)_{12}} + \|G_{21} - V_2^{\top} V_1\|^{2}_{f(H)_{21}} + \sum_{s_i \in S_2} \mu_i \|v_{s_i}\|^{2}
= \arg\min_{V_2} \|\bar{G}_{12} - V_1^{\top} V_2\|^{2}_{\bar{f}(H)_{12}} + \sum_{s_i \in S_2} \mu_i \|v_{s_i}\|^{2},
where \bar{f}(H)_{12} = f(H)_{12} + f(H)_{21}^{\top}, and
\bar{G}_{12} = \big( G_{12} \bullet f(H)_{12} + G_{21}^{\top} \bullet f(H)_{21}^{\top} \big) \,/\, \big( f(H)_{12} + f(H)_{21}^{\top} \big)
is the weighted average of G_{12} and G_{21}; "•" and "/" are the elementwise product and division, respectively. The columns of V_2 are independent, so for each v_{s_i} this is a separate weighted ridge regression problem, whose solution is (Holland, 1973):
v^{*}_{s_i} = \big( V_1 \mathrm{diag}(\bar{f}_i) V_1^{\top} + \mu_i I \big)^{-1} V_1 \mathrm{diag}(\bar{f}_i)\, \bar{g}_i,
where \bar{f}_i and \bar{g}_i are the columns corresponding to s_i in \bar{f}(H)_{12} and \bar{G}_{12}, respectively (see the sketch after this list);
5. For any other set of noncore words S_k, find V^{*}_k that minimizes the total penalty of the 1k-th and k1-th blocks, ignoring all other kj-th and jk-th blocks;
6. Combine all subsets of embeddings to form V^{*}. Here V^{*} = (V^{*}_1, V^{*}_2, V^{*}_3).
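The sketch below illustrates the closed-form weighted ridge regression used in step 4 for one block of noncore words; V1, G12_bar and F12_bar stand for V_1, Ḡ_12 and f̄(H)_12, and all toy values are our own.

import numpy as np

def noncore_embeddings(V1, G12_bar, F12_bar, mu):
    # V1: N x |S1| core embeddings; G12_bar, F12_bar: |S1| x |S2| targets and weights;
    # mu: per-word Tikhonov penalties for the |S2| noncore words
    N, n2 = V1.shape[0], G12_bar.shape[1]
    V2 = np.zeros((N, n2))
    for i in range(n2):                           # each column is an independent regression
        f_i, g_i = F12_bar[:, i], G12_bar[:, i]
        A = V1 @ (f_i[:, None] * V1.T) + mu[i] * np.eye(N)
        b = V1 @ (f_i * g_i)
        V2[:, i] = np.linalg.solve(A, b)
    return V2

# toy usage with random core embeddings and targets
rng = np.random.default_rng(2)
V1 = rng.standard_normal((4, 10))
V2 = noncore_embeddings(V1, rng.standard_normal((10, 3)), rng.random((10, 3)), np.full(3, 2.0))
print(V2.shape)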
Experimental Results
We trained our model along with a few state-of-the-art competitors on Wikipedia, and evaluated the embeddings on 7 common benchmark sets.
Experimental Setup
Our own method is referred to as PSD. The competitors include:
• (Mikolov et al., 2013b): word2vec 2, or SGNS in some literature;
• (Levy and Goldberg, 2014b): the PPMI matrix without dimension reduction, and the SVD of the PPMI matrix, both yielded by hyperwords;
• (Pennington et al., 2014): GloVe 3 ;
• (Stratos et al., 2015): Singular 4 , which does SVD-based CCA on the weighted bigram frequency matrix;
• (Faruqui et al., 2015): Sparse 5, which learns new sparse embeddings in a higher dimensional space from pretrained embeddings.
All models were trained on the English Wikipedia snapshot from March 2015. After removing non-textual elements and non-English words, 2.04 billion words were left. We used the default hyperparameters in Hyperwords when training PPMI and SVD. Word2vec, GloVe and Singular were trained with their own default hyperparameters. The embedding sets PSD-Reg-180K and PSD-Unreg-180K were trained using our online blockwise regression. Both sets contain the embeddings of the most frequent 180,000 words, based on 25,000 core words. PSD-Unreg-180K was trained with all μ_i = 0, i.e. disabling Tikhonov regularization. PSD-Reg-180K was trained with μ_i = 2 for i ∈ [25001, 80000], μ_i = 4 for i ∈ [80001, 130000], and μ_i = 8 for i ∈ [130001, 180000], i.e. increasing regularization as sparsity increases. To contrast with the batch learning performance, we also list the performance of PSD-25K, which contains the core embeddings only. PSD-25K has the advantage that it contains far fewer false candidate words, and some test tuples (generally the harder ones) were not evaluated due to missing words; thus its scores are not directly comparable to the others.
Sparse was trained with PSD-Reg-180K as the input embeddings, with default hyperparameters.
The benchmark sets are almost identical to those in (Levy et al., 2015), except that (Luong et al., 2013)'s Rare Words is not included, as many rare words are cut off at the frequency 100, making more than 1/3 of test pairs invalid.
Word Similarity. There are 5 datasets: WordSim Similarity (WS Sim) and WordSim Relatedness (WS Rel) (Zesch et al., 2008; Agirre et al., 2009), partitioned from WordSim353 (Finkelstein et al., 2002); Bruni et al. (2012)'s MEN dataset; Radinsky et al. (2011)'s Mechanical Turk dataset; and (Hill et al., 2014)'s SimLex-999 dataset. The embeddings were evaluated by the Spearman's rank correlation with the human ratings.

Word Analogy. The two datasets are MSR's analogy dataset (Mikolov et al., 2013c), with 8000 questions, and Google's analogy dataset (Mikolov et al., 2013a), with 19544 questions. After filtering questions involving out-of-vocabulary words, i.e. words that appear less than 100 times in the corpus, 7054 instances in MSR and 19364 instances in Google were left. The analogy questions were answered using 3CosAdd as well as 3CosMul, proposed by Levy and Goldberg (2014a).

Results

Table 2 shows the results on all tasks. Word2vec significantly outperformed the other methods on the analogy tasks. PPMI and SVD performed much worse on the analogy tasks than reported in (Levy et al., 2015), probably due to sub-optimal hyperparameters; this suggests that their performance is unstable. The new embeddings yielded by Sparse systematically degraded compared to the input embeddings, contradicting the claim in (Faruqui et al., 2015).

Our method PSD-Reg-180K performed consistently well, and is best in 4 of the similarity tasks. It performed worse than word2vec on the analogy tasks, but still better than the other MF-based methods. Comparing with PSD-Unreg-180K, we see that Tikhonov regularization brings a 1-4% performance boost across tasks. In addition, on the similarity tasks, online blockwise regression degrades only slightly compared to batch factorization. The performance gaps on the analogy tasks were wider, but this might be explained by the fact that some hard cases were not counted in PSD-25K's evaluation, due to its limited vocabulary.
Conclusions and Future Work
In this paper, inspired by the link functions in previous works and supported by information-theoretic arguments, we propose a new link function for a text window, parameterized by the embeddings of words and the residuals of bigrams. Based on the link function, we establish a generative model of documents. The learning objective is to find a set of embeddings maximizing their posterior likelihood given the corpus. This objective reduces to weighted low-rank positive-semidefinite approximation, subject to Tikhonov regularization. We then adopt a Block Coordinate Descent algorithm, together with an online blockwise regression algorithm, to find an approximate solution. On seven benchmark sets, the learned embeddings show competitive and stable performance.
In future work, we will incorporate global latent factors, such as topics, sentiments, or writing styles, into this generative model, and develop more elaborate models of documents. Learning such latent factors would capture important summary information about documents, which is useful in various applications.
Appendix A Possible Trap in SVD
Suppose M is the bigram matrix of interest. SVD embeddings are derived from the low rank approximation of M, obtained by keeping the largest singular values/vectors. When some of these singular values correspond to negative eigenvalues, undesirable correlations might be captured. The following is an example of approximating a PMI matrix.
A vocabulary consists of 3 words s 1 , s 2 , s 3 . Two corpora derive two PMI matrices:
M^{(1)} = \begin{pmatrix} 1.4 & 0.8 & 0 \\ 0.8 & 2.6 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \qquad M^{(2)} = \begin{pmatrix} 0.2 & -1.6 & 0 \\ -1.6 & -2.2 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
They have identical left singular matrices and singular values (3, 2, 1), but their eigenvalues are (3, 2, 1) and (−3, 2, 1), respectively.
In a rank-2 approximation, the largest two singular values/vectors are kept, and M^{(1)} and M^{(2)} yield identical SVD embeddings
V = \begin{pmatrix} 0.45 & 0.89 & 0 \\ 0 & 0 & 1 \end{pmatrix}
(the rows may be scaled depending on the algorithm, without affecting the validity of the following conclusion). The embeddings of s_1 and s_2 (columns 1 and 2 of V) point in the same direction, suggesting they are positively correlated. However, as M^{(2)}_{1,2} = M^{(2)}_{2,1} = -1.6 < 0, they are actually negatively correlated in the second corpus. This inconsistency arises because the principal eigenvalue of M^{(2)} is negative, and yet the corresponding singular value/vector is kept.
When using eigendecomposition, the largest two positive eigenvalues/eigenvectors are kept. M^{(1)} yields the same embeddings V. M^{(2)} yields
V^{(2)} = \begin{pmatrix} -0.89 & 0.45 & 0 \\ 0 & 0 & 1.41 \end{pmatrix},
which correctly preserves the negative correlation between s_1 and s_2.
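The inconsistency can be checked numerically; the sketch below (our own code) reproduces the example: the SVD-based rank-2 embeddings give s_1 and s_2 a positive inner product, while the eigendecomposition-based embeddings give the correct negative one.

import numpy as np

M2 = np.array([[ 0.2, -1.6, 0.0],
               [-1.6, -2.2, 0.0],
               [ 0.0,  0.0, 2.0]])

# SVD keeps the two largest singular values (3 and 2), even though the
# singular value 3 corresponds to the eigenvalue -3
U, s, Vt = np.linalg.svd(M2)
V_svd = np.sqrt(s[:2])[:, None] * Vt[:2]
print("SVD:", V_svd[:, 0] @ V_svd[:, 1])          # positive: spurious correlation

# eigendecomposition keeps the two largest *positive* eigenvalues (2 and 1)
lam, Q = np.linalg.eigh(M2)
order = np.argsort(lam)[::-1]
lam, Q = lam[order], Q[:, order]
V_eig = np.sqrt(lam[:2])[:, None] * Q[:, :2].T
print("Eig:", V_eig[:, 0] @ V_eig[:, 1])          # negative, matching M2[0, 1] = -1.6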
Appendix B Information Theory
Redundant information refers to the reduced uncertainty by knowing the value of any one of the conditioning variables (hence redundant). Synergistic information is the reduced uncertainty ascribed to knowing all the values of conditioning variables, that cannot be reduced by knowing the value of any variable alone (hence synergistic).
The mutual information I(y; x i ) and the redundant information Rdn(y; x 1 , x 2 ) are defined as:
I(y; x_i) = \mathbb{E}_{P(x_i, y)}\Big[ \log \frac{P(y \mid x_i)}{P(y)} \Big]
\mathrm{Rdn}(y; x_1, x_2) = \mathbb{E}_{P(y)}\Big[ \min_{x_i \in \{x_1, x_2\}} \mathbb{E}_{P(x_i \mid y)}\Big[ \log \frac{P(y \mid x_i)}{P(y)} \Big] \Big]
The synergistic information Syn(y; x_1, x_2) is defined as the PI-function in (Williams and Beer, 2010), and is skipped here.

Figure 2: Different types of information among 3 random variables y, x_1, x_2. I(y; x_1, x_2) is the mutual information between y and (x_1, x_2). Rdn(y; x_1, x_2) and Syn(y; x_1, x_2) are the redundant information and synergistic information between x_1, x_2, conditioning on y, respectively.
The interaction information Int(x 1 , x 2 , y) measures the relative strength of Rdn(y; x 1 , x 2 ) and Syn(y; x 1 , x 2 ) (Timme et al., 2014):
\mathrm{Int}(x_1, x_2, y) = \mathrm{Syn}(y; x_1, x_2) - \mathrm{Rdn}(y; x_1, x_2)
= I(y; x_1, x_2) - I(y; x_1) - I(y; x_2)
= \mathbb{E}_{P(x_1, x_2, y)}\Big[ \log \frac{P(x_1) P(x_2) P(y) P(x_1, x_2, y)}{P(x_1, x_2) P(x_1, y) P(x_2, y)} \Big]
Figure 2 shows the relationship of the different types of information among the 3 random variables y, x_1, x_2 (based on Fig. 1 in (Williams and Beer, 2010)).
PMI is the pointwise counterpart of the mutual information I. Similarly, all the above concepts have their pointwise counterparts, obtained by dropping the expectation operator. Specifically, the pointwise interaction information is defined as
\mathrm{PInt}(x_1, x_2, y) = \mathrm{PMI}(y; x_1, x_2) - \mathrm{PMI}(y; x_1) - \mathrm{PMI}(y; x_2) = \log \frac{P(x_1) P(x_2) P(y) P(x_1, x_2, y)}{P(x_1, x_2) P(x_1, y) P(x_2, y)}.
If we know PInt(x_1, x_2, y), we can recover PMI(y; x_1, x_2) from the mutual information over the variable subsets, and then recover the joint distribution P(x_1, x_2, y).
As the pointwise redundant information PRdn(y; x 1 , x 2 ) and the pointwise synergistic information PSyn(y; x 1 , x 2 ) are both higherorder interaction terms, their magnitudes are usually much smaller than the PMI terms. We assume they are approximately equal, and thus cancel each other when computing PInt. Given this, PInt is always 0. In the case of three words w 0 , w 1 , w 2 , PInt(w 0 , w 1 , w 2 ) = 0 leads to PMI(w 2 ; w 0 , w 1 ) = PMI(w 2 ; w 0 )+PMI(w 2 ; w 1 ).
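The closed form for PInt above can be verified directly on any joint distribution; the toy numbers below are ours and serve only to check the algebraic identity PInt = PMI(y; x_1, x_2) − PMI(y; x_1) − PMI(y; x_2).

import numpy as np

rng = np.random.default_rng(3)
P = rng.random((2, 2, 2))
P /= P.sum()                                      # toy joint distribution P(x1, x2, y)

Px1, Px2, Py = P.sum(axis=(1, 2)), P.sum(axis=(0, 2)), P.sum(axis=(0, 1))
Px1x2, Px1y, Px2y = P.sum(axis=2), P.sum(axis=1), P.sum(axis=0)

x1, x2, y = 1, 0, 1                               # one particular outcome
pmi_joint = np.log(P[x1, x2, y] / (Px1x2[x1, x2] * Py[y]))
pmi_1 = np.log(Px1y[x1, y] / (Px1[x1] * Py[y]))
pmi_2 = np.log(Px2y[x2, y] / (Px2[x2] * Py[y]))
pint = np.log(Px1[x1] * Px2[x2] * Py[y] * P[x1, x2, y]
              / (Px1x2[x1, x2] * Px1y[x1, y] * Px2y[x2, y]))
print(np.isclose(pint, pmi_joint - pmi_1 - pmi_2))   # True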
Table 1: Notation table. The corpus is a collection of M documents D = {d_1, \cdots, d_M}. In the vocabulary, each word s_i is mapped to a vector v_{s_i} in N-dimensional Euclidean space.
Table 2: Performance of each method across different tasks. The first five columns are the similarity tasks; Google and MSR are the analogy tasks (reported as 3CosAdd / 3CosMul).

Method           WS Sim  WS Rel  MEN    Turk   SimLex  Google         MSR
word2vec         0.742   0.543   0.731  0.663  0.395   0.734 / 0.742  0.650 / 0.674
PPMI             0.735   0.678   0.717  0.659  0.308   0.476 / 0.524  0.183 / 0.217
SVD              0.687   0.608   0.711  0.524  0.270   0.230 / 0.240  0.123 / 0.113
GloVe            0.759   0.630   0.756  0.641  0.362   0.535 / 0.544  0.408 / 0.435
Singular         0.763   0.684   0.747  0.581  0.345   0.440 / 0.508  0.364 / 0.399
Sparse           0.739   0.585   0.725  0.625  0.355   0.240 / 0.282  0.253 / 0.274
PSD-Reg-180K     0.792   0.679   0.764  0.676  0.398   0.602 / 0.623  0.465 / 0.507
PSD-Unreg-180K   0.786   0.663   0.753  0.675  0.372   0.566 / 0.598  0.424 / 0.468
PSD-25K          0.801   0.676   0.765  0.678  0.393   0.671 / 0.695  0.533 / 0.586
1 It is referred to as an Expectation-Maximization algorithm by the original authors, but we think this is a misnomer.
2 https://code.google.com/p/word2vec/
3 http://nlp.stanford.edu/projects/glove/
4 https://github.com/karlstratos/singular
5 https://github.com/mfaruqui/sparse-coding
Acknowledgments

We thank Omer Levy, Thomas Mach, Peilin Zhao, Mingkui Tan, Zhiqiang Xu and Chunlin Wu for their helpful discussions and insights. This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its IDM Futures Funding Initiative and administered by the Interactive and Digital Media Programme Office.
Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19-27. Association for Computational Linguistics.
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski, arXiv:1502.03520Random walks on discourse spaces: a new generative language model with applications to semantic word embeddings. ArXiv e-prints. cs.LGSanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2015. Random walks on discourse spaces: a new generative language model with applications to semantic word embed- dings. ArXiv e-prints, arXiv:1502.03520 [cs.LG].
A neural probabilistic language model. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, Journal of Machine Learning Research. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, pages 1137-1155.
Latent dirichlet allocation. M David, Blei, Y Andrew, Michael I Jordan Ng, The Journal of Machine Learning Research. 3David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022.
Distributional semantics in technicolor. Elia Bruni, Gemma Boleda, Marco Baroni, Nam-Khanh Tran, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers. the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersAssociation for Computational Linguistics1Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 136-145. Asso- ciation for Computational Linguistics.
Indexing by latent semantic analysis. C Scott, Susan T Deerwester, Richard A Dumais, Harshman, J. Am. Soc. Inf. Sci. Scott C. Deerwester, Susan T Dumais, and Richard A. Harshman. 1990. Indexing by latent semantic anal- ysis. J. Am. Soc. Inf. Sci.
Multi-view learning of word embeddings via cca. Paramveer Dhillon, P Dean, Lyle H Foster, Ungar, Proceedings of Advances in Neural Information Processing Systems. Advances in Neural Information Processing SystemsParamveer Dhillon, Dean P Foster, and Lyle H Ungar. 2011. Multi-view learning of word embeddings via cca. In Proceedings of Advances in Neural Informa- tion Processing Systems, pages 199-207.
Eigenwords: Spectral word embeddings. S Paramveer, Dhillon, P Dean, Lyle H Foster, Ungar, The Journal of Machine Learning Research. Paramveer S Dhillon, Dean P Foster, and Lyle H Ungar. 2015. Eigenwords: Spectral word embeddings. The Journal of Machine Learning Research.
Sparse overcomplete word vector representations. Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, Noah A Smith, Proceedings of ACL. ACLManaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcom- plete word vector representations. In Proceedings of ACL 2015.
Placing search in context: The concept revisited. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, Eytan Ruppin, ACM Trans. Inf. Syst. 201Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2002. Placing search in context: The concept revisited. ACM Trans. Inf. Syst., 20(1):116- 131, January.
Euclidean embedding of cooccurrence data. Gal Amir Globerson, Fernando Chechik, Naftali Pereira, Tishby, Journal of Machine Learning Research. 8Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. 2007. Euclidean embedding of co- occurrence data. Journal of Machine Learning Re- search, vol. 8 (2007):2265-2295, Oct.
Computing a nearest symmetric positive semidefinite matrix. Nicholas J Higham, Linear Algebra and its Applications. 1030Nicholas J. Higham. 1988. Computing a nearest sym- metric positive semidefinite matrix. Linear Algebra and its Applications, 103(0):103 -118.
Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Felix Hill, Roi Reichart, Anna Korhonen, CoRR, abs/1408.3456Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. CoRR, abs/1408.3456.
Training products of experts by minimizing contrastive divergence. Geoffrey Hinton, Neural Computation. 148Geoffrey Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Com- putation, 14(8):1771-1800.
Weighted Ridge Regression: Combining Ridge and Robust Regression Methods. NBER Working Papers 0011. Paul W Holland, National Bureau of Economic Research. Paul W. Holland. 1973. Weighted Ridge Regression: Combining Ridge and Robust Regression Methods. NBER Working Papers 0011, National Bureau of Economic Research, Inc, September.
A spectral algorithm for learning hidden markov models. Daniel Hsu, M Sham, Tong Kakade, Zhang, Journal of Computer and System Sciences. 785Daniel Hsu, Sham M Kakade, and Tong Zhang. 2012. A spectral algorithm for learning hidden markov models. Journal of Computer and System Sciences, 78(5):1460-1480.
Linguistic regularities in sparse and explicit word representations. Omer Levy, Yoav Goldberg, Proceedings of CoNLL-2014. CoNLL-2014171Omer Levy and Yoav Goldberg. 2014a. Linguistic reg- ularities in sparse and explicit word representations. In Proceedings of CoNLL-2014, page 171.
Neural word embeddings as implicit matrix factorization. Omer Levy, Yoav Goldberg, Proceedings of NIPS. NIPSOmer Levy and Yoav Goldberg. 2014b. Neural word embeddings as implicit matrix factorization. In Pro- ceedings of NIPS 2014.
Improving distributional similarity with lessons learned from word embeddings. Omer Levy, Yoav Goldberg, Ido Dagan, Transactions of the Association for Computational Linguistics. 3Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Im- proving distributional similarity with lessons learned from word embeddings. Transactions of the Associ- ation for Computational Linguistics, 3:211-225.
Joint sentiment/topic model for sentiment analysis. Chenghua Lin, Yulan He, Proceedings of the 18th ACM conference on Information and Knowledge Management. the 18th ACM conference on Information and Knowledge ManagementACMChenghua Lin and Yulan He. 2009. Joint senti- ment/topic model for sentiment analysis. In Pro- ceedings of the 18th ACM conference on Informa- tion and Knowledge Management, pages 375-384. ACM.
Minh-Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. CoNLL-2013, 104.
Eigenvalue Algorithms for Symmetric Hierarchical Matrices. Dissertation. Thomas Mach, Chemnitz University of TechnologyThomas Mach. 2012. Eigenvalue Algorithms for Sym- metric Hierarchical Matrices. Dissertation, Chem- nitz University of Technology.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, Proceedings of Workshop at ICLR. Workshop at ICLRTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In Proceedings of Workshop at ICLR 2013.
Distributed representations of words and phrases and their compositionality. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, Jeff Dean, Proceedings of NIPS 2013. NIPS 2013Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Proceedings of NIPS 2013, pages 3111-3119.
Linguistic regularities in continuous space word representations. Tomas Mikolov, Yih Wen-Tau, Geoffrey Zweig, Proceedings of HLT-NAACL 2013. HLT-NAACL 2013Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of HLT- NAACL 2013, pages 746-751.
Three new graphical models for statistical language modelling. Andriy Mnih, Geoffrey Hinton, Proceedings of the 24th International Conference on Machine learning. the 24th International Conference on Machine learningACMAndriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine learning, pages 641-648. ACM.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, Proceedings of the Empiricial Methods in Natural Language Processing. the Empiricial Methods in Natural Language Processing12Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), 12.
A word at a time: Computing word relatedness using temporal semantic analysis. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, Shaul Markovitch, Proceedings of the 20th International Conference on World Wide Web, WWW '11. the 20th International Conference on World Wide Web, WWW '11New York, NY, USAACMKira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 337-346, New York, NY, USA. ACM.
Weighted low-rank approximations. Nathan Srebro, Tommi Jaakkola, Proceedings of ICML 2003. ICML 20033Nathan Srebro, Tommi Jaakkola, et al. 2003. Weighted low-rank approximations. In Proceedings of ICML 2003, volume 3, pages 720-727.
A spectral algorithm for learning class-based n-gram models of natural language. Karl Stratos, Michael Kim, Daniel Collins, Hsu, Proceedings of the Association for Uncertainty in Artificial Intelligence. the Association for Uncertainty in Artificial IntelligenceKarl Stratos, Do-kyum Kim, Michael Collins, and Daniel Hsu. 2014. A spectral algorithm for learn- ing class-based n-gram models of natural language. In Proceedings of the Association for Uncertainty in Artificial Intelligence.
Model-based word embeddings from decompositions of count matrices. Karl Stratos, Michael Collins, Daniel Hsu, Proceedings of ACL 2015. ACL 2015Karl Stratos, Michael Collins, and Daniel Hsu. 2015. Model-based word embeddings from decomposi- tions of count matrices. In Proceedings of ACL 2015.
Riemannian pursuit for big matrix recovery. Mingkui Tan, Ivor W Tsang, Li Wang, Proceedings of ICML 2014. ICML 2014Bart Vandereycken, and Sinno Jialin PanMingkui Tan, Ivor W. Tsang, Li Wang, Bart Vander- eycken, and Sinno Jialin Pan. 2014. Riemannian pursuit for big matrix recovery. In Proceedings of ICML 2014, pages 1539-1547.
Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. Nicholas Timme, Wesley Alford, Benjamin Flecker, John M Beggs, Journal of Computational Neuroscience. 362Nicholas Timme, Wesley Alford, Benjamin Flecker, and John M Beggs. 2014. Synergy, redundancy, and multivariate information measures: an experi- mentalist's perspective. Journal of Computational Neuroscience, 36(2):119-140.
Topic modeling: beyond bag-of-words. M Hanna, Wallach, Proceedings of the 23rd international conference on Machine learning. the 23rd international conference on Machine learningACMHanna M Wallach. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd interna- tional conference on Machine learning, pages 977- 984. ACM.
Nonnegative decomposition of multivariate information. L Paul, Randall D Williams, Beer, arXiv:1004.2515arXiv preprintPaul L Williams and Randall D Beer. 2010. Non- negative decomposition of multivariate information. arXiv preprint arXiv:1004.2515.
Scalable maximum margin matrix factorization by active riemannian subspace search. Yan Yan, Mingkui Tan, Ivor Tsang, Yi Yang, Chengqi Zhang, Qinfeng Shi, Proceedings of IJCAI. IJCAIYan Yan, Mingkui Tan, Ivor Tsang, Yi Yang, Chengqi Zhang, and Qinfeng Shi. 2015. Scalable maximum margin matrix factorization by active riemannian subspace search. In Proceedings of IJCAI 2015.
Learning word representations with hierarchical sparse coding. Dani Yogatama, Manaal Faruqui, Chris Dyer, Noah A Smith, Proceedings of ICML. ICMLDani Yogatama, Manaal Faruqui, Chris Dyer, and Noah A Smith. 2015. Learning word representa- tions with hierarchical sparse coding. In Proceed- ings of ICML 2015.
Using wiktionary for computing semantic relatedness. Torsten Zesch, Christof Müller, Iryna Gurevych, Proceedings of AAAI 2008. AAAI 20088Torsten Zesch, Christof Müller, and Iryna Gurevych. 2008. Using wiktionary for computing semantic re- latedness. In Proceedings of AAAI 2008, volume 8, pages 861-866.
A study of smoothing methods for language models applied to information retrieval. Chengxiang Zhai, John Lafferty, ACM Transactions on Information Systems (TOIS). 222Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Infor- mation Systems (TOIS), 22(2):179-214.
Double updating online learning. Peilin Zhao, C H Steven, Rong Hoi, Jin, The Journal of Machine Learning Research. 12Peilin Zhao, Steven CH Hoi, and Rong Jin. 2011a. Double updating online learning. The Journal of Machine Learning Research, 12:1587-1615.
Wayne Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Ee-Peng Lim, Hongfei Yan, and Xiaoming Li. 2011b. Comparing twitter and traditional media using topic models. In Advances in Information Retrieval (Proceedings of the 33rd Annual European Conference on Information Retrieval Research), pages 338-349. Springer.
16,531,053 | Discourse Constraints for Document Compression | Sentence compression holds promise for many applications ranging from summarization to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this article we present a discourse-informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of integer linear programming. Experimental results show significant improvements over a state-of-the-art discourse agnostic approach. | [
5545857,
1729543,
18981641,
16139657,
6237722,
12429339,
8778268,
991005,
2969247,
18699296,
7418660,
9482302,
16113728,
1592006,
189898,
15532760,
15519576,
10048734,
2601442,
2245732,
1762277,
16686441,
6412912,
38974582,
12585594,
7367421,
5823614,
10970495
] | Discourse Constraints for Document Compression
James Clarke
University of Illinois at Urbana-Champaign
University of Edinburgh
Mirella Lapata
University of Illinois at Urbana-Champaign
University of Edinburgh
Discourse Constraints for Document Compression
Sentence compression holds promise for many applications ranging from summarization to subtitle generation. The task is typically performed on isolated sentences without taking the surrounding context into account, even though most applications would operate over entire documents. In this article we present a discourse-informed model which is capable of producing document compressions that are coherent and informative. Our model is inspired by theories of local coherence and formulated within the framework of integer linear programming. Experimental results show significant improvements over a state-of-the-art discourse agnostic approach.
Introduction
Recent years have witnessed increasing interest in sentence compression. The task encompasses automatic methods for shortening sentences with minimal information loss while preserving their grammaticality. The popularity of sentence compression is largely due to its relevance for applications. Summarization is a case in point here. Most summarizers to date aim to produce informative summaries at a given compression rate. If we can have a compression component that reduces sentences to a minimal length and still retains the most important content, then we should be able to pack more information content into a fixed size summary. In other words, sentence compression would allow summarizers to increase the overall amount of information extracted without increasing the summary length (Lin 2003;Zajic et al. 2007). It could also be used as a post-processing step in order to render summaries more coherent and less repetitive (Mani, Gates, and Bloedorn 1999).
Beyond summarization, a sentence compression module could be used to display text on small screen devices such as PDAs (Corston-Oliver 2001) or as a reading aid for the blind (Grefenstette 1998). Sentence compression could also benefit information retrieval by eliminating extraneous information from the documents indexed by the retrieval engine. This way it would be possible to store less information in the index without dramatically affecting retrieval performance (Corston-Oliver and Dolan 1999).
In theory, sentence compression may involve several rewrite operations such as deletion, substitution, insertion, and word reordering. In practice, however, the task is commonly defined as a word deletion problem: Given an input sentence of words x = x 1 , x 2 , . . . , x n , the aim is to produce a compression by removing any subset of these words (Knight and Marcu 2002). Many sentence compression models aim to learn deletion rules from a parsed parallel corpus of source sentences and their target compressions (Knight and Marcu 2002;Turner and Charniak 2005;Galley and McKeown 2007;Cohn and Lapata 2009). For example, Knight and Marcu (2002) learn a synchronous context-free grammar (Aho and Ullman 1969) from such a corpus. The grammar rules have weights (essentially probabilities estimated using maximum likelihood) and are used to find the best compression from the set of all possible compressions for a given sentence. Other approaches exploit syntactic information without making explicit use of a parallel grammar-for example, by learning which words or constituents to delete from a parse tree (Riezler et al. 2003;Nguyen et al. 2004;McDonald 2006;Clarke and Lapata 2008).
Despite differences in formulation and training requirements (some approaches require a parallel corpus, whereas others do not), existing models are similar in that they compress sentences in isolation without taking their surrounding context into account. This is in marked contrast with common practice in summarization. Professional abstractors often rely on contextual cues while creating summaries (Endres-Niggemeyer 1998). This is true of automatic summarization systems too, which consider the position of a sentence in a document and how it relates to its surrounding sentences (Kupiec, Pedersen, and Chen 1995;Barzilay and Elhadad 1997;Marcu 2000;Teufel and Moens 2002). Determining which information is important in a sentence is not merely a function of its syntactic position (e.g., deleting the verb or the subject of a sentence is less likely). A variety of contextual factors can play a role, such as the discourse topic, whether the sentence introduces new entities or events that have not been mentioned before, or the reader's background knowledge.
A sentence-centric view of compression is also at odds with most relevant applications which aim to create a shorter document rather than a single sentence. The resulting document must not only be grammatical but also coherent if it is to function as a replacement for the original. However, this cannot be guaranteed without knowledge of how the discourse progresses from sentence to sentence. To give a simple example, a contextually aware compression system could drop a word or phrase from the current sentence, simply because it is not mentioned anywhere else in the document and is therefore deemed unimportant. Or it could decide to retain it for the sake of topic continuity.
In this article we are interested in creating a compression model that is appropriate for both documents and sentences. Luckily, a variety of discourse theories have been developed over the years (e.g., Mann and Thompson, 1988;Grosz, Weinstein, and Joshi 1995;Halliday and Hasan 1976) and have found application in summarization (Barzilay and Elhadad 1997;Marcu 2000;Teufel and Moens 2002) and other text generation applications (Scott and de Souza 1990;Kibble and Power 2004). In creating a contextsensitive compression model we are faced with three important questions: (1) Which type of discourse information is useful for compression? (2) Is it amenable to automatic processing (there is little hope for interfacing our compression model with applications if discourse-level cues cannot be identified robustly)? and (3) How are sentence-and document-based information best integrated in a unified modeling framework?
In building our compression model we borrow insights from two popular models of discourse, Centering Theory (Grosz, Weinstein, and Joshi 1995) and lexical chains (Morris and Hirst 1991). Both approaches capture local coherence-the way adjacent sentences bind together to form a larger discourse. They also both share the view that discourse coherence revolves around discourse entities and the way they are introduced and discussed. We first automatically augment our documents with annotations pertaining to centering and lexical chains, which we subsequently use to inform our compression model. The latter is an extension of the integer linear programming formulation proposed by Clarke and Lapata (2008). In a nutshell, sentence compression is modeled as an optimization problem. Given a long sentence, a compression is formed by retaining the words that maximize a scoring function coupled with a small number of constraints ensuring that the resulting output is grammatical. The constraints are encoded as linear inequalities whose solution is found using integer linear programming (ILP; Winston and Venkataramanan 2003;Vanderbei 2001). Discourse-level information can be straightforwardly incorporated by slightly changing the compression objectivewe now wish to compress entire documents rather than isolated sentences-and augmenting the constraint set with discourse-specific constraints. We use our model to compress whole documents (rather than sentences sequentially) and evaluate whether the resulting text is understandable and informative using a question-answering task. We show that our method yields significant improvements over discourse agnostic state-of-the-art compression models (McDonald 2006;Clarke and Lapata 2008).
The remainder of this article is organized as follows. Section 2 provides an overview of related work. In Section 3 we present the ILP framework and compression model we employ in our experiments. We introduce our discourse-related extensions in Sections 4 and 5. Section 6 discusses our experimental set-up and evaluation methodology. Our results are presented in Section 7. Discussion of future work concludes the paper.
Related Work
Sentence compression has been extensively studied across different modeling paradigms and has received both generative and discriminative formulations. Most generative approaches (Knight and Marcu 2002;Turner and Charniak 2005;Galley and McKeown 2007) are instantiations of the noisy-channel model, whereas discriminative formulations include decision-tree learning (Knight and Marcu 2002), maximum entropy (Riezler et al. 2003), support vector machines (Nguyen et al. 2004), and largemargin learning (McDonald 2006;Cohn and Lapata 2009). These models are trained on a parallel corpus and learn either which constituents to delete or which words to place adjacently in the compression output. Relatively few approaches dispense with the parallel corpus and generate compressions in an unsupervised manner using either a scoring function (Hori and Furui 2004;Clarke and Lapata 2008) or compression rules that are approximated from a non-parallel corpus such as the Penn Treebank (Turner and Charniak 2005).
The majority of sentence compression approaches only look at sentences in isolation without taking into account any discourse information. However, there are two notable exceptions. Jing (2000) uses information from the local context as evidence for and against the removal of phrases during sentence compression. The idea here is that words or phrases which have more links to the surrounding context are more indicative of its topic, and thus should not be dropped. The topic is not explicitly identified; instead the importance of each phrase is determined by the number of lexical links within the local context. A link is created between two words if they are repetitions, morphologically related, or associated in WordNet (Fellbaum 1998) through a lexical relation (e.g., hyponymy, synonymy). Links have weights-for example, repetition is considered more important than hypernymy. Each word is assigned a context weight based on the number of links to the local context and the importance of each relation type. Phrases are scored by the sum of their children's context scores. The decision to drop a phrase is influenced by several factors, besides the local context, such as the phrase's grammatical role and previous evidence from a parallel corpus. Daumé III and Marcu (2002) generalize sentence compression to document compression. Given a document D = w 1 , w 2 , . . . , w n the goal is to produce a summary, S, by dropping any subset of words from D. Their system uses the discourse structure of a document and the syntactic structure of each of its sentences in order to decide which words to drop. Specifically, they extend Knight and Marcu's (2002) noisy-channel model so that it can be applied to entire documents. In its simpler sentence compression instantiation, the noisy-channel model has two components, a language model and a channel model, both of which act on probabilistic context-free grammar (PCFG) representations. Daumé III and Marcu define a noisy-channel model over syntax and discourse trees. Following Rhetorical Structure Theory (RST; Mann and Thompson 1988), they represent documents by trees whose leaves correspond to elementary discourse units (edus) and whose nodes specify how these and larger units (e.g., multi-sentence segments) are linked to each other by rhetorical relations (e.g., Contrast, Elaboration). Discourse units are further characterized in terms of their text importance: nuclei denote central segments, whereas satellites denote peripheral ones. Their model therefore learns not only which syntactic constituents to drop but also which discourse units are unimportant.
While Daumé III and Marcu (2002) present a hybrid summarizer that can simultaneously delete words and sentences from a document, the majority of summarization systems to date simply select and present to the user the most important sentences in a text (see Mani [2001] for a comprehensive overview of the methods used to achieve this). Discourse-level information plays a prominent role here as the overall document organization can indicate whether a sentence should be included in the summary. A variety of approaches have focused on cohesion (Halliday and Hasan 1976) and the way it is expressed in discourse. The term broadly describes a variety of linguistic devices responsible for making the elements of a text appear unified or connected. Examples include word repetition, anaphora, ellipsis, and the use of synonyms or superordinates. The underlying assumption is that sentences connected to many other sentences are likely to carry salient information and should therefore be included in the summary (Sjorochod'ko 1972). In exploiting cohesion for summarization, it is necessary to somehow represent cohesive ties. For instance, Boguraev and Kennedy (1997) represent cohesion in terms of anaphoric relations, whereas Barzilay and Elhadad (1997) operationalize cohesion via lexical chains-sequences of related words spanning a topical unit (Morris and Hirst 1991). Besides repetition, they also examine semantic relations based on synonymy, antonymy, hypernymy, and holonymy (we discuss their approach in more detail in Section 4.1).
Other approaches characterize the document in terms of discourse structure and rhetorical relations. Documents are commonly represented as trees (Mann and Thompson 1988;Corston-Oliver 1998;Ono, Sumita, and Miike 1994;Carlson et al. 2001) and the position of a sentence in a tree is indicative of its importance. To give an example, Marcu (2000) proposes a summarization algorithm based on RST. Assuming that nuclei are more salient than satellites, the importance of sentential or clausal units can be determined based on tree depth. Alternatively, discourse structure can be represented as a graph (Wolf and Gibson 2004) and sentence importance is determined in graph-theoretic terms, by using graph connectivity measures such as in-degree or PageRank (Brin and Page 1998). Although a great deal of research in summarization has focused on global properties of discourse structure, there is evidence that local coherence may also be useful without the added complexity of computing discourse representations. (Unfortunately, discourse parsers have yet to achieve levels of performance comparable to syntactic parsers.) Teufel and Moens (2002) identify discourse relations on a sentence-by-sentence basis without presupposing an explicit discourse structure. Inspired by Centering Theory (Grosz, Weinstein, and Joshi 1995)-a theory of local discourse structure that models the interaction of referential continuity and salience of discourse entities- Orȃsan (2003) proposes a summarization algorithm that extracts sentences with at least one entity in common. The idea here is that summaries containing sentences referring to the same entity will be more coherent. Other work has relied on centering not so much to create summaries but to assess whether they are readable (Barzilay and Lapata 2008).
Our approach differs from previous sentence compression approaches in three key respects. First, we present a compression model that is contextually aware; decisions on whether to remove or retain a word (or phrase) are informed by its discourse properties (e.g., whether it introduces a new topic, or whether it is semantically related to the previous sentence). Unlike Jing (2000) we explicitly identify topically important words and assume specific representations of discourse structure. Secondly, in contrast to Daumé III and Marcu (2002) and other summarization work, we adopt a less global and more shallow representation of discourse based on Centering Theory and lexical chains. One of our aims is to exploit discourse features that can be computed efficiently and relatively cheaply. Thirdly, our compression model can be applied to isolated sentences as well as to entire documents. We claim the latter is more in the spirit of realworld applications where the goal is to generate a condensed and coherent text. Unlike Daumé III and Marcu (2002) our model can delete words but not sentences, although it could be used to compress documents of any type, even summaries.
The Compression Model
Our model is an extension of the approach put forward in Clarke and Lapata (2008) where they formulate sentence compression as an optimization problem. Given a long sentence, a compression is created by retaining the words that maximize a scoring function. The latter is essentially a language model coupled with a few constraints ensuring that the resulting output is grammatical. The language model and the constraints are encoded as linear inequalities whose solution is found using ILP. 1 Their model is a good point of departure for studying document-based compression. As it does not require a parallel corpus, it can be ported across domains and text genres, while delivering state-of-the-art results (see Clarke and Lapata [2008] for details). Importantly, discourse-level information can be easily incorporated in two ways: Firstly, by applying the compression objective to entire documents rather than individual sentences; and secondly, by augmenting the constraint set with discourserelated information. This is not the case for other approaches (e.g., those based on the noisy channel model) where compression is modeled by grammar rules indicating which constituents to delete in a syntactic context. Moreover, ILP delivers a globally optimal solution by searching over the entire compression space 2 without employing heuristics or approximations during decoding (see Turner and Charniak [2005] and McDonald [2006] for examples).
Besides sentence compression, the ILP modeling framework has been applied to a wide range of natural language processing tasks demonstrating improvements over more traditional methods. Examples include reluctant paraphrasing (Dras 1997), relation extraction , semantic role labeling (Punyakanok et al. 2004), concept-to-text generation (Marciniak and Strube 2005;Barzilay and Lapata 2006), dependency parsing (Riedel and Clarke 2006;Martins, Smith, and Xing 2009), and coreference resolution (Denis and Baldridge 2007).
In the following we describe Clarke and Lapata's (2008) model in more detail. Sections 4-5 present our extensions and modifications.
Language Model
Let x = x 0 , x 1 , x 2 , . . . , x n denote a source sentence for which we wish to generate a target compression. We use x 0 to denote the "start" token. We introduce a decision variable for each word in the source and constrain it to be binary; a value of 0 represents a word being dropped, whereas a value of 1 includes the word in the target compression. Let:
\delta_i = \begin{cases} 1 & \text{if } x_i \text{ is in the compression} \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in [1 \ldots n]
A trigram language model forms the backbone of the compression model. The language model is formulated as an integer linear program with the introduction of extra decision variables indicating which word sequences should be retained or dropped from the compression. Let:
\alpha_i = \begin{cases} 1 & \text{if } x_i \text{ starts the compression} \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in [1 \ldots n]

\beta_{ij} = \begin{cases} 1 & \text{if the sequence } x_i, x_j \text{ ends the compression} \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in [0 \ldots n-1],\; \forall j \in [i+1 \ldots n]

\gamma_{ijk} = \begin{cases} 1 & \text{if the sequence } x_i, x_j, x_k \text{ is in the compression} \\ 0 & \text{otherwise} \end{cases} \qquad \forall i \in [0 \ldots n-2],\; \forall j \in [i+1 \ldots n-1],\; \forall k \in [j+1 \ldots n]
The objective function is expressed in Equation (1). It is the sum of all possible trigrams multiplied by the appropriate decision variable where n is the length of the sentence (note all probabilities throughout this paper are log-transformed). The objective function also includes a significance score I(x i ) for each word x i multiplied by the decision variable for that word (see the first summation term in Equation (1)). This score highlights important content words in a sentence and is defined in Section 3.2.
\max z = \sum_{i=1}^{n} \delta_i \cdot \lambda I(x_i) + \sum_{i=1}^{n} \alpha_i \cdot P(x_i \mid \text{start}) + \sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n} \gamma_{ijk} \cdot P(x_k \mid x_i, x_j) + \sum_{i=0}^{n-1} \sum_{j=i+1}^{n} \beta_{ij} \cdot P(\text{end} \mid x_i, x_j) - \zeta_{\min} \cdot \mu - \zeta_{\max} \cdot \mu   (1)
Note that we add a weighting factor, λ, to the objective, in order to counterbalance the importance of the language model and the significance score. The final component of our objective function, ζ · μ, relates to the compression rate. As we explain shortly (Equations (7) and (8)) the compressions our model generates are subject to a prespecified compression rate. For instance we may wish to create compressions at a minimum rate of 40% and maximum rate of 70%. The compression rate constraint can be violated with a penalty, μ, which applies to each word. ζ min counts the number of words under the compression rate and ζ max the number of words over the compression rate. Thus, the more the output violates the compression rate, the larger the penalty will be. In other words, the term ζ min · μ − ζ max · μ acts as a soft constraint providing a means to guide the compression towards the desired rate. The violation penalty μ is tuned experimentally and may vary depending on the desired compression rate or application.
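To make the formulation concrete, the following is a minimal sketch (ours, not the authors' implementation) of the decision variables and the objective in Equation (1), written with the PuLP modelling library. The toy sentence, lm_logprob() and significance() are placeholders standing in for a trained trigram language model and the significance score of Section 3.2; for simplicity the trigram sum also covers trigrams beginning with the start token.

import pulp

words = "mr field has said he will resign".split()   # toy input sentence
n = len(words)
lam, mu = 1.0, 0.5                                   # weighting factor and rate-violation penalty

def lm_logprob(*ngram):                              # placeholder trigram log-probability
    return -1.0

def significance(word):                              # placeholder significance score I(x_i)
    return 1.0 if word in {"field", "resign"} else 0.0

def tok(i):                                          # word at position i; position 0 is the start token
    return "<s>" if i == 0 else words[i - 1]

prob = pulp.LpProblem("sentence_compression", pulp.LpMaximize)

delta = {i: pulp.LpVariable(f"delta_{i}", cat="Binary") for i in range(1, n + 1)}
alpha = {i: pulp.LpVariable(f"alpha_{i}", cat="Binary") for i in range(1, n + 1)}
beta = {(i, j): pulp.LpVariable(f"beta_{i}_{j}", cat="Binary")
        for i in range(0, n) for j in range(i + 1, n + 1)}
gamma = {(i, j, k): pulp.LpVariable(f"gamma_{i}_{j}_{k}", cat="Binary")
         for i in range(0, n - 1) for j in range(i + 1, n) for k in range(j + 1, n + 1)}
zeta_min = pulp.LpVariable("zeta_min", lowBound=0)
zeta_max = pulp.LpVariable("zeta_max", lowBound=0)

# Equation (1): significance + language model terms, minus the rate-violation penalties
prob += (pulp.lpSum(lam * significance(tok(i)) * delta[i] for i in range(1, n + 1))
         + pulp.lpSum(lm_logprob("<s>", tok(i)) * alpha[i] for i in range(1, n + 1))
         + pulp.lpSum(lm_logprob(tok(i), tok(j), tok(k)) * gamma[i, j, k] for (i, j, k) in gamma)
         + pulp.lpSum(lm_logprob(tok(i), tok(j), "</s>") * beta[i, j] for (i, j) in beta)
         - mu * zeta_min - mu * zeta_max)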
The objective function in Equation (1) allows any combination of trigrams to be selected. As a result, invalid trigram sequences (e.g., two or more trigrams containing the "end" token) could appear in the target compression. We avoid this situation by introducing sequential constraints (on the decision variables δ i , γ ijk , α i , and β ij ) that restrict the set of allowable trigram combinations.
Constraint 1. Exactly one word can begin a sentence.
\sum_{i=1}^{n} \alpha_i = 1   (2)

Constraint 2.
If a word is included in the sentence it must either start the sentence or be preceded by two other words or one other word and the "start" token x 0 .
\delta_k - \alpha_k - \sum_{i=0}^{k-2} \sum_{j=i+1}^{k-1} \gamma_{ijk} = 0 \qquad \forall k : k \in [1 \ldots n]   (3)
Constraint 3. If a word is included in the sentence it must either be preceded by one word and followed by another or it must be preceded by one word and end the sentence.
\delta_j - \sum_{i=0}^{j-1} \sum_{k=j+1}^{n} \gamma_{ijk} - \sum_{i=0}^{j-1} \beta_{ij} = 0 \qquad \forall j : j \in [1 \ldots n]   (4)
Constraint 4. If a word is in the sentence it must be followed by two words or followed by one word and then the end of the sentence or it must be preceded by one word and end the sentence.
\delta_i - \sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n} \gamma_{ijk} - \sum_{j=i+1}^{n} \beta_{ij} - \sum_{h=0}^{i-1} \beta_{hi} = 0 \qquad \forall i : i \in [1 \ldots n]   (5)
Constraint 5. Exactly one word pair can end the sentence.
\sum_{i=0}^{n-1} \sum_{j=i+1}^{n} \beta_{ij} = 1   (6)
Note that Equations (2)-(6) are merely well-formedness constraints and differ from the compression-specific constraints which we discuss subsequently. Any language model formulated as an ILP would require similar constraints.
Compression rate constraints. Depending on the application or the task at hand, we may require that the compressions fall within a specific compression rate. We assume here that our model is given a compression rate range, c min % − c max %, and create two constraints that penalize compressions which do not fall within this range:
\sum_{i=1}^{n} \delta_i + \zeta_{\min} \geq c_{\min} \cdot n   (7)

\sum_{i=1}^{n} \delta_i - \zeta_{\max} \leq c_{\max} \cdot n   (8)
Here, δ i is still a decision variable for each word, n is the number of words in the sentence, ζ is the number of words over or under the compression rate, and c min and c max are the limits of the range.
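Continuing the sketch above, the sequential constraints (2)-(6) and the soft compression-rate constraints (7)-(8) can be added as follows; the function operates on the variables defined earlier, the index ranges mirror the equations directly, and the c_min and c_max values are our own choices.

import pulp

def add_sequential_and_rate_constraints(prob, delta, alpha, beta, gamma,
                                        zeta_min, zeta_max, n, c_min=0.4, c_max=0.7):
    # (2) exactly one word begins the compression
    prob += pulp.lpSum(alpha[i] for i in range(1, n + 1)) == 1
    for k in range(1, n + 1):
        # (3) an included word starts the compression or is preceded by two tokens
        prob += (delta[k] - alpha[k]
                 - pulp.lpSum(gamma[i, j, k] for i in range(0, k - 1)
                              for j in range(i + 1, k)) == 0)
    for j in range(1, n + 1):
        # (4) an included word is preceded by one word and either followed by another or ends the compression
        prob += (delta[j]
                 - pulp.lpSum(gamma[i, j, k] for i in range(0, j) for k in range(j + 1, n + 1))
                 - pulp.lpSum(beta[i, j] for i in range(0, j)) == 0)
    for i in range(1, n + 1):
        # (5) an included word is followed by two words, by one word and the end, or it ends the compression
        prob += (delta[i]
                 - pulp.lpSum(gamma[i, j, k] for j in range(i + 1, n) for k in range(j + 1, n + 1))
                 - pulp.lpSum(beta[i, j] for j in range(i + 1, n + 1))
                 - pulp.lpSum(beta[h, i] for h in range(0, i)) == 0)
    # (6) exactly one word pair ends the compression
    prob += pulp.lpSum(beta[i, j] for (i, j) in beta) == 1
    # (7)-(8) soft compression-rate window; violations are absorbed by zeta_min / zeta_max
    prob += pulp.lpSum(delta[i] for i in range(1, n + 1)) + zeta_min >= c_min * n
    prob += pulp.lpSum(delta[i] for i in range(1, n + 1)) - zeta_max <= c_max * n

add_sequential_and_rate_constraints(prob, delta, alpha, beta, gamma, zeta_min, zeta_max, n)
# prob.solve() would then search the full compression space for the optimum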
Significance Score
The significance score is an attempt at capturing the gist of a sentence. The score has two components which correspond to document and sentence importance, respectively. Given a sentence and its syntactic parse, we define I(x i ) as:
I(x_i) = f_i \log \frac{F_a}{F_i} \cdot \frac{l}{N}   (9)
where x i is a topic word, f i is x i 's document frequency, F i its corpus frequency, and F a the sum of all topic words in the corpus; l is the number of clause constituents above x i , and N is the deepest level of clause embedding in the parse. The first term in Equation (9) is similar to tf * idf ; it highlights words that are important in the document and should therefore not be dropped. The score is not applied indiscriminately to all words in a sentence but solely to topic-related words, which are approximated by nouns and verbs. This is offset by the importance of these words in the specific sentence being compressed. Intuitively, in a sentence with multiply nested clauses, more deeply embedded clauses tend to carry more semantic content. This is illustrated in Figure 1, which depicts the clause embedding for the sentence Mr Field has said he will resign if he is not reselected, a move which could divide the party nationally.
Here, the most important information is conveyed by clauses S_3 (he will resign) and S_4 (if he is not reselected), which are embedded. Accordingly, we should give more weight to words found in these clauses than in the main clause (S_1 in Figure 1). A simple way to enforce this is to give clauses weight proportional to the level of embedding (see the second term in Equation (9)). Therefore in Figure 1, the term l/N is 1.0 (4/4) for clause S_4, 0.75 (3/4) for clause S_3, and so on. Individual words inherit their weight from their clauses. We obtain syntactic information in our experiments from RASP (Briscoe and Carroll 2002), a domain-independent, robust parsing system for English. However, any other parser with broadly similar output (e.g., Lin 2001) could also serve our purposes.
Note that the significance score in Equation (9) does not weight differentially the contribution of tf * idf versus level of embedding. Although we found in our experiments that the latter term was as important as tf * idf in producing meaningful compressions, there may be applications or data sets where the contribution of the two terms varies. This could be easily remedied by introducing a weighting factor.
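A small sketch of Equation (9) follows; the document/corpus frequencies, topic-word set, and clause-depth figures are illustrative inputs of our own (in practice, f_i and l would come from the document and the RASP parse, respectively).

import math

def significance(word, doc_freq, corpus_freq, total_topic_freq, depth, max_depth, topic_words):
    # I(x_i) = f_i * log(F_a / F_i) * (l / N); only topic words (nouns and verbs) are scored
    if word not in topic_words:
        return 0.0
    return doc_freq[word] * math.log(total_topic_freq / corpus_freq[word]) * (depth / max_depth)

# toy figures: "resign" occurs twice in the document and 50 times in the corpus,
# and sits in a clause at depth 3 of 4
print(significance("resign", {"resign": 2}, {"resign": 50}, total_topic_freq=1e6,
                   depth=3, max_depth=4, topic_words={"resign"}))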
Sentential Constraints
In its original formulation, the model also contains a small number of sentence-level constraints. Their aim is to preserve the meaning and structure of the original sentence as much as possible. The majority of constraints revolve around modification and argument structure and are defined over parse trees or grammatical relations which, as mentioned earlier, we extract from RASP.

Figure 1: The clause embedding of the sentence Mr Field has said he will resign if he is not reselected, a move which could divide the party nationally; nested boxes correspond to nested clauses.
Modifier Constraints. Modifier constraints ensure that relationships between head words and their modifiers remain grammatical in the compression:
\[
\delta_i - \delta_j \geq 0 \quad \forall i,j : x_j \in x_i\text{'s ncmods} \qquad (10)
\]
\[
\delta_i - \delta_j \geq 0 \quad \forall i,j : x_j \in x_i\text{'s detmods} \qquad (11)
\]
Equation (10) guarantees that if we include a non-clausal modifier 3 (ncmod) in the compression (such as an adjective or a noun) then the head of the modifier must also be included; this is repeated for determiners (detmod) in Equation (11).
Other modifier constraints ensure the meaning of the source sentence is preserved in the compression. For example, Equation (12) enforces not in the compression when the head is included. A similar constraint is added for possessive modifiers (e.g., his, our), including genitives (e.g., John's gift), as shown in Equation (13).
\[
\delta_i - \delta_j = 0 \quad \forall i,j : x_j \in x_i\text{'s ncmods} \wedge x_j = \textit{not} \qquad (12)
\]
\[
\delta_i - \delta_j = 0 \quad \forall i,j : x_j \in x_i\text{'s possessive mods} \qquad (13)
\]
Argument Structure Constraints. Argument structure constraints make sure that the resulting compression has a canonical argument structure. The first constraint (Equation (14)) ensures that if a verb is present in the compression then so are its arguments, and if any of the arguments are included in the compression then the verb must also be included.
\[
\delta_i - \delta_j = 0 \quad \forall i,j : x_j \in \text{subject/object of verb } x_i \qquad (14)
\]
Another constraint forces the compression to contain at least one verb provided the source sentence contains one as well:
\[
\sum_{i : x_i \in \text{verbs}} \delta_i \geq 1 \qquad (15)
\]
Other constraints apply to prepositional phrases and subordinate clauses and force the introducing term (i.e., the preposition, or subordinator) to be included in the compression if any word from within the syntactic constituent is also included:
\[
\delta_i - \delta_j \geq 0 \quad \forall i,j : x_j \in \text{PP/SUB} \wedge x_i \text{ starts PP/SUB} \qquad (16)
\]
By subordinator (SUB) we mean wh-words (e.g., who, which, how, where), the word that, and subordinating conjunctions (e.g., after, although, because). The reverse is also true; that is, if the introducing term is included, at least one other word from the syntactic constituent should also be included.
\[
\sum_{i : x_i \in \text{PP/SUB}} \delta_i - \delta_j \geq 0 \quad \forall j : x_j \text{ starts PP/SUB} \qquad (17)
\]
All the constraints described thus far are mostly syntactic. They operate over parse trees or dependency graphs. In the following sections we present our discourse-specific constraints. But first we discuss how we represent and automatically detect discourse-related information.
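Before turning to discourse, the following sketch illustrates how a few of the sentential constraints (Equations (10), (14), and (15)) could be attached to an ILP over binary word variables. It assumes the relevant head-modifier and verb-argument pairs have already been extracted from the parser's output; the function and argument names are ours, not the paper's code.

```python
from pulp import lpSum

def add_sentential_constraints(prob, delta, ncmods, verb_args, verb_idxs):
    """Sketch of a few sentential constraints.

    prob      : a PuLP LpProblem
    delta     : dict mapping word index -> binary LpVariable
    ncmods    : list of (head_idx, modifier_idx) pairs from the parser
    verb_args : list of (verb_idx, argument_idx) subject/object pairs
    verb_idxs : indices of all verbs in the sentence
    """
    # a retained non-clausal modifier requires its head (Equation (10))
    for head, mod in ncmods:
        prob += delta[head] - delta[mod] >= 0
    # verbs and their arguments are kept or dropped together (Equation (14))
    for verb, arg in verb_args:
        prob += delta[verb] - delta[arg] == 0
    # keep at least one verb if the source sentence has one (Equation (15))
    if verb_idxs:
        prob += lpSum(delta[i] for i in verb_idxs) >= 1
```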
Discourse Representation
Obtaining an appropriate representation of discourse is the first step toward creating a compression model that exploits document-level information. Our goal is to annotate documents automatically with discourse-level information which will subsequently be used to inform our compression procedure. As mentioned in Section 2, previous summarization work has mainly focused on cohesion (Skorochod'ko 1972; Barzilay and Elhadad 1997) or global discourse structure (Marcu 2000; Daumé III and Marcu 2002). We also opt for a cohesion-based representation of discourse operationalized by lexical chains (Morris and Hirst 1991). Computing global discourse structure robustly and accurately is far from trivial. For example, Daumé III and Marcu (2002) employ an RST parser 4 but find that it produces noisy output for documents containing longer sentences. We therefore focus on the less ambitious task of characterizing local coherence, the way adjacent sentences bind together to form a larger discourse. Although it does not explicitly capture long distance relationships between sentences, local coherence is still an important prerequisite for maintaining global coherence. Specifically, we turn to Centering Theory (Grosz, Weinstein, and Joshi 1995) and adopt an entity-based representation of discourse.
In the following sections we briefly introduce lexical chains and centering and describe our algorithms for obtaining discourse annotations.
Lexical Chains
Lexical cohesion refers to the degree of semantic relatedness observed among lexical items in a document. The term was coined by Halliday and Hasan (1976), who observed that coherent documents tend to have more related terms or phrases than incoherent ones. A number of linguistic devices can be used to signal cohesion; these range from repetition, to synonymy, hyponymy, and meronymy. Lexical chains are a representation of lexical cohesion as sequences of semantically related words (Morris and Hirst 1991). There is a close relationship between discourse structure and cohesion. Related words tend to co-occur within the same discourse. Thus, cohesion is a surface indicator of discourse structure and can be identified through lexical chains.
Lexical chains provide a useful means for describing the topic flow in discourse. For example, a document containing the chain {house, home, loft, house} will probably describe a situation involving a house. Documents often have multiple topics (or themes) and consequently will contain many different lexical chains. Some of these topics will be peripheral and thus represented by short chains whereas main topics will correspond to dense longer chains. Words participating in the latter chains are important for our compression task-they reveal what the document is about-and in all likelihood should not be deleted. Barzilay and Elhadad (1997) describe a technique for building lexical chains for extractive text summarization. In their approach chains of semantically related expressions are used to select sentences for inclusion in a summary. Their algorithm uses WordNet (Fellbaum 1998) to build chains of nouns (and noun compounds). Nouns are considered related if they are repetitions or linked in WordNet via synonymy, antonymy, hypernymy, and holonymy. Computing lexical chains would be relatively straightforward if each word was always represented by a single sense. However, due to the high level of polysemy inherent in WordNet, algorithms developed for computing lexical chains must adopt some strategy for disambiguating word senses. For example, Hirst and St-Onge (1998) greedily disambiguate a word as soon as it is encountered by selecting the sense most strongly related to existing chain members, whereas Barzilay and Elhadad (1997) consider all possible alternatives of word senses and then choose the best one among them.
Once created, lexical chains can serve to highlight which document sentences are more topical, and should therefore be included in a summary. Barzilay and Elhadad (1997) rank their chains heuristically by a score based on their length and homogeneity. They generate summaries by extracting sentences corresponding to strong chains, that is, chains whose score is two standard deviations above the average score. Analogously, we also wish to determine which lexical chains indicate the most prevalent discourse topics. Our assumption is that terms belonging to these chains are indicative of the document's main focus and should therefore be retained in the compressed output. Barzilay and Elhadad's (1997) scoring function aims to identify sentences (for inclusion in a summary) that have a high concentration of chain members. In contrast, we are interested in chains that span several sentences. We thus score chains according to the number of sentences their terms occur in. For example, the hypothetical chain {house_3, home_3, loft_3, house_5} (where word_i denotes a word occurring in sentence i) would be given a score of two as the terms occur only in two sentences. We assume that a chain signals a prevalent discourse topic if it occurs throughout more sentences than the average chain. The scoring algorithm is outlined more formally as:
1. Compute the lexical chains for the document.
2. Score(Chain) = Sentences(Chain).
3. Discard chains for which Score(Chain) < Average(Score).
4. Mark terms from the remaining chains as being the focus of the document.
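The scoring steps are simple enough to state in a few lines. The sketch below assumes chains are represented as lists of (word, sentence index) pairs, a representation of our own choosing rather than the paper's.

```python
def score_chains(chains):
    """Score chains by the number of sentences they span and keep those at
    or above the average score (steps 2-3 of the algorithm above).

    chains : list of chains; each chain is a list of (word, sentence_id) pairs
    """
    scored = [(chain, len({sent for _, sent in chain})) for chain in chains]
    if not scored:
        return []
    average = sum(score for _, score in scored) / len(scored)
    return [chain for chain, score in scored if score >= average]
```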
We use the method of Galley and McKeown (2003) to compute lexical chains for each document. 5 It improves on Barzilay and Elhadad's (1997) original algorithm by providing better word sense disambiguation and linear runtime. The algorithm proceeds in three steps. Initially, a graph is built representing all possible interpretations of the document under consideration. The text is processed sequentially, comparing each word against all words previously read. If a relation exists between the senses of the current word and any possible sense of a previous word, a connection is formed between the appropriate words and senses. The strength of the connection is a function of the type of relationship and of the distance between the words in the text (in terms of words, sentences, and paragraphs). Words are represented as nodes in the graph and semantic relations as weighted edges. The relations considered by Galley and McKeown (2003) are all first-order WordNet relations, with the addition of siblings-two words are considered siblings if they are both hyponyms of the same hypernym. Next, all occurrences of a given word are collected together. For each sense of a target word, the strength of all connections involving that sense are summed, giving that sense a unified score. The sense with the highest unified score is chosen as the correct sense for the target word. Lastly, the lexical chains are constructed by collecting same sense words into the same chain. Figure 2 illustrates the lexical chains created by our algorithm for three documents (taken from our test set). Chains are shown in oval boxes; members of the same chain have the same index. The algorithm identifies three chains in the first document: { flow, rate}, {today, day, yesterday}, and {miles, ft}. In the second document the chains are {body} and {month, night}, and in the third {policeman, police}, {woman, woman, boyfriend, man}. As can be seen, members of a chain represent a shared concept (e.g., "time", "linear unit", or "person"). In some cases important topics are missed. For instance, in the first document no chains were created with the words lava or debris. The second document is about Mrs Allan and contains many references to her. However, because Mrs Allan is not listed in WordNet it is not possible to create any chains for this word or any of its coreferents (e.g., she, her). A similar problem is observed in the third document where Anderson is not included in any chain even though he is one of the main protagonists throughout the text. We next turn to Centering Theory as a means of identifying which entities are prominent in a document.
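For readers who want a feel for chain construction itself, here is a deliberately simplified version that links nouns only by repetition or by sharing a WordNet synset (via NLTK). The actual graph-based method of Galley and McKeown (2003) also weighs relation types and textual distance and disambiguates senses globally, none of which this toy version does.

```python
from nltk.corpus import wordnet as wn  # requires the WordNet data to be installed

def build_chains(nouns):
    """Very rough lexical-chain construction over (noun, sentence_id) pairs.

    A noun joins an existing chain if it repeats a chain member or shares a
    WordNet noun synset with one; otherwise it starts a new chain.
    """
    chains = []
    for word, sent in nouns:
        senses = set(wn.synsets(word, pos=wn.NOUN))
        placed = False
        for chain in chains:
            for other, _ in chain:
                if other == word or senses & set(wn.synsets(other, pos=wn.NOUN)):
                    chain.append((word, sent))
                    placed = True
                    break
            if placed:
                break
        if not placed:
            chains.append([(word, sent)])
    return chains
```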
Centering Theory
Centering Theory (Grosz, Weinstein, and Joshi 1995) is an entity-orientated theory of local coherence and salience. One of the main ideas underlying centering is that certain entities mentioned in an utterance are more central than others. This in turn imposes constraints on the use of referring expressions and in particular on the use of pronouns.
The theory begins by assuming that a discourse is broken into "utterances." These can be phrases, clauses, sentences, or even paragraphs. At any point in discourse, some entities are considered more salient than others, and are expected to exhibit different properties. Specifically, although each utterance may contain several entities, it is assumed that a single entity is "centered," thereby representing the current discourse focus. One of the main claims underlying centering is that discourse segments in which successive utterances contain common centers are more coherent than segments where the center repeatedly changes.

Figure 2
Excerpts of documents from our test set with discourse annotations. Centers are in double boxes; terms occurring in lexical chains are in oval boxes. Words with the same subscript are members of the same chain (e.g., police, policeman).
Each utterance U_j in a discourse has a list of forward-looking centers, C_f(U_j), and a unique backward-looking center, C_b(U_j). C_f(U_j) represents a ranking of the entities invoked by U_j according to their salience. Thus, some entities in the discourse are deemed more important than others. The C_b of the current utterance U_j is the highest-ranked element in C_f(U_{j-1}) that is also in U_j. (Centering hypothesizes that the C_b is likely to be realized as a pronoun.) Entities are commonly ranked in terms of their grammatical function, namely, subjects are ranked more highly than objects, which are more highly ranked than the rest (Grosz, Weinstein, and Joshi 1995). The C_b links U_j to the previous discourse, but it does so locally since C_b(U_j) is chosen from U_{j-1}.

Centering formalizes fluctuations in topic continuity in terms of transitions between adjacent utterances. Grosz, Weinstein, and Joshi (1995) distinguish between three types of transitions. In CONTINUE transitions, C_b(U_j) = C_b(U_{j-1}) and C_b(U_j) is the most highly ranked entity in U_j. In RETAIN transitions C_b(U_j) = C_b(U_{j-1}) but C_b(U_j) is not the most highly ranked entity in U_j. And in SHIFT transitions C_b(U_j) ≠ C_b(U_{j-1}). These transitions are ordered: CONTINUEs are preferred over RETAINs, which are preferred over SHIFTs. And discourses with many CONTINUE transitions are considered more coherent than those which repeatedly SHIFT from one center to the other.
We demonstrate these concepts in passages (1a)-(1c) taken from Walker, Joshi, and Prince (1998).
(1) a. Jeff helped Dick wash the car.
       CF(Jeff, Dick, car)
    b. He washed the windows as Dick waxed the car.
       CF(Jeff, Dick, car) CB=Jeff
    c. He soaped a pane.
       CF(Jeff, pane) CB=Jeff
Here, the first utterance does not have a backward-looking center but has three forward-looking centers Jeff, Dick, and car. To determine the backward-looking center of (1b) we find the highest ranked entity among the forward-looking centers in (1a) which also occurs in (1b). This is Jeff as it is the subject (and thus most salient entity) in (1a) and present (as a pronoun) in (1b). The same procedure is applied for utterance (1c). Also note that (1a) and (1b) are linked via a CONTINUE transition. The same is true for (1b) and (1c). For the purposes of our document compression application, we are not so much interested in characterizing our texts in terms of entity transitions. Because they are all written by humans, we can assume they are more or less coherent. Nonetheless, identifying the centers in discourse seems important. These will indicate what the document is about, who the main protagonists are, and how the discourse focus progresses. We would probably not want to delete entities functioning as backward-looking centers.
As Centering is primarily a linguistic theory rather than a computational one, it is not explicitly stated how the concepts of "utterance," "entities," and "ranking" are instantiated. A great deal of research has been devoted to fleshing these out and many different instantiations have been developed in the literature (see Poesio et al. [2004] for details). In our case, the instantiation will have a bearing on the reliability of the algorithm to detect centers. If the parameters are too specific then it may not be possible to accurately determine the center for a given utterance. Because our aim is to identify centers in discourse automatically, our parameter choice is driven by two considerations: robustness and ease of computation.
We therefore follow previous work (e.g., Miltsakaki and Kukich 2000) in assuming that the unit of an utterance is the sentence (i.e., a main clause with accompanying subordinate and adjunct clauses). This is a simplistic view of an utterance; however it is in line with our compression task, which also operates over sentences. We determine which entities are invoked by a sentence using two methods. First, we perform named entity identification and coreference resolution on each document using LingPipe, 6 a publicly available system. Named entities are not the only type of entity to occur in our data, thus to ensure a high entity recall we add named entities and all remaining nouns 7 to the C f list. Entity matching between sentences is required to determine the C b of a sentence. This is done using the named entity's unique identifier (as provided by LingPipe) or by the entity's surface form in the case of nouns not classified as named entities.
We follow Grosz, Weinstein, and Joshi (1995) in ranking entities according to their grammatical roles; subjects are ranked more highly than objects, which are in turn ranked higher than other grammatical roles; ties are broken using left-to-right ordering of the grammatical roles in the sentence (Tetreault 2001). We identify grammatical roles using RASP (Briscoe and Carroll 2002). Formally, our centering algorithm is as follows (where U j corresponds to sentence j):
1. Extract entities from U_j.
2. Create C_f(U_j) by ranking the entities in U_j according to their grammatical role (subjects > objects > others, ties broken using left-to-right word order of U_j).
3. Find the highest ranked entity in C_f(U_{j-1}) which occurs in C_f(U_j); set the entity to be C_b(U_j).
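A schematic implementation of steps 2 and 3 might look as follows; the tuple representation of entities and the role names are illustrative choices, with the actual roles coming from RASP's grammatical relations.

```python
GRAMMATICAL_RANK = {"subject": 0, "object": 1}  # lower value = more salient

def forward_centers(entities):
    """Rank the entities of an utterance by grammatical role, breaking ties
    by left-to-right order (step 2 of the algorithm above).

    entities : list of (entity_id, role, position) tuples for one sentence
    """
    return [e for e, _, _ in sorted(
        entities, key=lambda t: (GRAMMATICAL_RANK.get(t[1], 2), t[2]))]

def backward_center(cf_prev, cf_curr):
    """Highest-ranked entity of the previous utterance that recurs in the
    current one (step 3); returns None if the sentence has no center."""
    current = set(cf_curr)
    for entity in cf_prev:
        if entity in current:
            return entity
    return None
```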
This procedure involves several automatic steps (named entity recognition, coreference resolution, and identification of grammatical roles) and will unavoidably produce some noisy annotations. There is no guarantee, therefore, that the right C b will be identified or that all sentences will be marked with a C b . The latter situation also occurs in passages that contain abrupt changes in topic. In such cases, none of the entities realized in U j will occur in C f (U j−1 ). Hopefully, lexical chains will come to the rescue here as an alternative means of capturing local content within a document. Figure 2 shows the centers (in double boxes) identified by our algorithm. In the first document lava and debris are marked as centers, in the second document Mrs Allan (and its coreferents), and in the third one Peter Anderson and allotment. When comparing the annotations produced by centering and the lexical chains, we observe that they tend to be complementary. Proper nouns that lexical chains miss out on are often identified by centering. When the latter fails, due to errors in coreference resolution or the identification of grammatical relations, lexical chains can be more robust because only WordNet is required for their computation. As an example consider the third document in Figure 2. Here, lexical chains provide a better insight into the text. Were we to rely solely on centering, we would obtain annotations only for two entities, namely, Peter Anderson and allotment.
The Discourse-Inspired Compression Model
We now turn our attention to incorporating discourse information into our compression model. Before compression takes place, all documents are processed using the centering and lexical chain algorithms described earlier. In each sentence we annotate the center C b (U j ) if one exists. Words (or phrases) that are present in the current sentence and function as the center in the next sentence C b (U j+1 ) are also flagged. Finally, words are marked if they are part of a prevalent (high scoring) chain. Provided with this additional knowledge our model takes a (sentence-separated) source document as input and generates a compressed version by applying sentence-level and discourse-level constraints to the entire document rather than to each sentence sequentially. In our earlier formulation of the compression task (Clarke and Lapata 2008), we create and solve an ILP for every sentence, whereas now an ILP is solved for each document. This makes sense from a discourse perspective as compression decisions are not made independently of each other. Also note that this latter formulation brings compression closer to summarization as we can manipulate the document compression rate directly, for example, by adding a constraint that forces the target document to be less than b tokens. This allows the model to choose how much to compress each individual sentence without requiring that they all have the same compression rate. Accordingly, we modify our objective function by introducing a sum over all sentences (assuming l sentences are present in the document) and adding an additional index g to each decision variable to track the sentence it came from:
\[
\max z = \sum_{g=1}^{l} \Bigg[ \sum_{i=1}^{n_g} \delta_{g,i} \cdot \lambda I(x_{g,i})
 + \sum_{i=1}^{n_g} \alpha_{g,i} \cdot P(x_{g,i}\,|\,\textit{start})
 + \sum_{i=1}^{n_g-2} \sum_{j=i+1}^{n_g-1} \sum_{k=j+1}^{n_g} \gamma_{g,ijk} \cdot P(x_{g,k}\,|\,x_{g,i}, x_{g,j})
 + \sum_{i=0}^{n_g-1} \sum_{j=i+1}^{n_g} \beta_{g,ij} \cdot P(\textit{end}\,|\,x_{g,i}, x_{g,j}) \Bigg]
 - \zeta_{\min} \cdot \mu - \zeta_{\max} \cdot \mu \qquad (18)
\]
We also modify the compression rate soft constraint to act over the whole document rather than sentences. This allows some sentences to violate the compression rate without incurring a penalty, provided the compression rate of the document falls within the specified range.
Document Compression Rate Constraints. We wish to penalize compressions which do not fall within a desired compression rate range (c min % − c max %).
\[
\sum_{g=1}^{l} \sum_{i=0}^{n_g} \delta_{g,i} + \zeta_{\min} \geq c_{\min} \cdot \sum_{g=1}^{l} n_g \qquad (19)
\]
\[
\sum_{g=1}^{l} \sum_{i=0}^{n_g} \delta_{g,i} - \zeta_{\max} \leq c_{\max} \cdot \sum_{g=1}^{l} n_g \qquad (20)
\]
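A bare-bones version of the document-level formulation, again sketched with PuLP and with the objective of Equation (18) elided, could be set up as follows; all names are illustrative and not taken from the paper's code.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

def document_rate_constraints(doc_sentences, c_min=0.57, c_max=0.62):
    """Skeleton of the document-level ILP: one binary variable per word in
    every sentence, with the rate range enforced over the whole document
    (cf. Equations (19)-(20))."""
    prob = LpProblem("document_compression", LpMaximize)
    delta = {(g, i): LpVariable(f"d_{g}_{i}", cat="Binary")
             for g, sent in enumerate(doc_sentences)
             for i, _ in enumerate(sent)}
    zeta_min = LpVariable("zeta_min", lowBound=0)
    zeta_max = LpVariable("zeta_max", lowBound=0)
    n_total = sum(len(sent) for sent in doc_sentences)
    prob += lpSum(delta.values()) + zeta_min >= c_min * n_total  # Eq. (19)
    prob += lpSum(delta.values()) - zeta_max <= c_max * n_total  # Eq. (20)
    # the objective of Equation (18), including the -mu*(zeta_min + zeta_max)
    # penalty, would be added here before calling prob.solve()
    return prob, delta, zeta_min, zeta_max
```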
Besides the new objective function and compression rate constraints, the model makes use of all the sentence-level constraints introduced in Section 3.3, but is crucially enhanced with three discourse constraints explained in the following.
Discourse Constraints
Our first goal is to preserve the focus of each sentence. If the center, C_b, is identified in the source sentence it must be retained in the target compression. If present, the entity realized as the C_b in the following sentence should also be retained to ensure the focus is preserved from one sentence to the next. Such a condition is easily captured with the following ILP constraint:
\[
\delta_i = 1 \quad \forall i : x_i \in \{C_b(U_j), C_b(U_{j+1})\} \qquad (21)
\]
As an example, consider the first discourse in Figure 2. The constraints generated from Equation (21) will require the compression to retain lava in the first two sentences and debris in the second and third sentences. As mentioned in the previous section, the centering algorithm relies on NLP technology that is not 100% accurate (named entity detection, parsing, and coreference resolution). Therefore, the algorithm can only approximate the center for each sentence and in some cases fails to identify any centers at all. Lexical chains provide a complementary annotation of the topic or theme of the document using information which is not restricted to adjacent sentences. Recall that once chains are created, they are scored, and chains with scores less than the average are discarded. We consider all remaining lexical chains as topical and require that words in these be retained in the compression.
\[
\delta_i = 1 \quad \forall i : x_i \in \text{document topical lexical chain} \qquad (22)
\]
Consider again the first text in Figure 2. Here, flow and rate are members of the same chain (marked with subscript 1). According to constraint (22) both words must be included in the compressed document. In the third document the words relating to "police" (police, policeman) and "people" (woman, boyfriend, man) also would be retained in the compression. Our final discourse constraint concerns pronouns. Specifically, we force personal pronouns (whose antecedent may not always be identified) to be included in the compression.
\[
\delta_i = 1 \quad \forall i : x_i \in \text{personal pronouns} \qquad (23)
\]
The constraints just described ensure that the compressed document will retain the discourse flow of the source document and will preserve terms indicative of important topics. Document compression aside, the discourse constraints will also benefit sentence-level compression. They provide our model, which so far relied on syntactic evidence and surface level document characteristics (i.e., word frequencies), additional evidence for retaining (discourse) relevant words.
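In ILP terms the three discourse constraints simply pin the relevant binary variables to 1, as in the sketch below; the index sets are assumed to come from the centering, lexical chain, and part-of-speech annotations, and the function name is our own.

```python
def add_discourse_constraints(prob, delta, centers, chain_words, pronouns):
    """Force discourse-relevant words to be retained (Equations (21)-(23)).

    prob        : a PuLP LpProblem
    delta       : dict mapping (sentence, word) index -> binary LpVariable
    centers     : indices of words realising C_b(U_j) or C_b(U_j+1)
    chain_words : indices of words in high-scoring lexical chains
    pronouns    : indices of personal pronouns
    """
    for idx in set(centers) | set(chain_words) | set(pronouns):
        prob += delta[idx] == 1
```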
Applying the Constraints
As explained earlier we apply the model and the constraints to each document. In our earlier sentence-based formulation, a significance score (see Section 3.2) was used to highlight which nouns and verbs should be included in the compression. As far as nouns are concerned, our discourse constraints perform a similar task. Thus, when a sentence contains discourse annotations, we are inclined to trust them more and only calculate the significance score for verbs.
During development it was observed that applying all discourse constraints simultaneously (see Equations (21)-(23)) results in relatively long compressions. To counteract this, we employ these constraints using a back-off strategy that relies on progressively less reliable information. Our back-off model works as follows: If centering information is present, we apply the appropriate constraints (Equation (21)). If no centers are present, we back off to the lexical chain information using Equation (22), and in the absence of the latter we back off to the pronoun constraint (Equation (23)). Finally, if discourse information is entirely absent from the sentence, we default to the significance score. Sentential constraints are applied throughout irrespective of discourse constraints. We determined this ordering (i.e., centering first, then lexical chains, and then pronouns) on the development set. Centering tends to be more precise, whereas lexical chains have high recall but lower precision in terms of identifying which entities are in focus and should therefore not be dropped. In our test data (see Section 6 for details), the centering constraint was used in 68.6% of the sentences. The model backed off to lexical chains for 13.7% of the test sentences, whereas the pronoun constraint was applied in 8.5%. Finally, the noun and verb significance score was used on the remaining 9.2%. Examples of our system's output for the texts in Figure 2 are given in Figure 3.
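The back-off policy itself can be stated compactly; the attribute names on the sentence object below are illustrative placeholders for the annotations just described, not an interface defined in the paper.

```python
def discourse_backoff(sentence):
    """Choose which constraint set to apply to a sentence, in order of
    decreasing reliability; fall back to the significance score otherwise."""
    if sentence.centers:           # centering annotation present
        return "centering"         # apply Equation (21)
    if sentence.chain_words:       # words from a prevalent lexical chain
        return "lexical_chains"    # apply Equation (22)
    if sentence.pronouns:          # personal pronouns present
        return "pronouns"          # apply Equation (23)
    return "significance_score"    # default to the noun/verb score
```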
Experimental Set-up
In this section we present our experimental set-up for assessing the performance of the compression model. We describe the compression corpus used in our study, briefly introduce the model used for comparison with our approach, and explain how system output was evaluated.
Compression Corpus
Previous work on sentence compression has used almost exclusively the Ziff-Davis corpus, a compression corpus derived automatically from document-abstract pairs (Knight and Marcu 2002). Unfortunately, this corpus is not suitable for our purposes because it consists of isolated sentences taken from several different documents. We thus created a document-based compression corpus manually. Specifically, annotators were presented with one document at a time and asked to compress sentences sequentially by removing tokens. They were free to remove any words they deemed superfluous, provided their deletions (a) preserved the most important information in the source sentence, and (b) ensured the compressed sentence remained grammatical. If they wished, they could leave a sentence uncompressed. They were not allowed to delete whole sentences even if they believed they contained no information content with respect to the story, as this would blur the task with summarization. Following these guidelines, the annotators created compressions for 82 stories (1,629 sentences) from the BNC and the LA Times and Washington Post. 8 Forty-eight (48) documents (962 sentences) were used for training, 3 for development (63 sentences), and 31 for testing (604 sentences).

Figure 3
Compression output on excerpts from Figure 2 using the discourse model. Words that are dropped are stricken out.
Comparison with State-of-the-Art
The discourse-based compression model was evaluated against our earlier sentencebased ILP model (without the discourse constraints). In addition, we compared our approach against a state-of-the-art model which does not take discourse-level information into account, does not use ILP, and is sentence-based. We give a brief description in the following, and refer the interested reader to McDonald (2006) for details. McDonald (2006) formalizes sentence compression as a classification task in a discriminative large-margin learning framework: Pairs of words from the source sentence are classified as being adjacent or not in the target compression. Let x = x 1 , . . . , x n denote a source sentence with a target compression y = y 1 , . . . , y m where each y j occurs in x. The function L(y i ) ∈ {1 . . . n} maps word y i in the target to the index of the word in the source, x (subject to the constraint that L(y i ) < L(y i+1 )). McDonald defines the score of a compression y for a sentence x as the dot product between a high-dimensional feature representation over bigrams and a corresponding weight vector:
\[
s(x, y) = \sum_{j=2}^{|y|} \mathbf{w} \cdot \mathbf{f}(x, L(y_{j-1}), L(y_j)) \qquad (24)
\]
Decoding in this framework amounts to finding the combination of bigrams that maximize the scoring function in Equation (24). The maximization is solved using dynamic programming (see McDonald [2006] for details). The model parameters are estimated using the Margin Infused Relaxed Algorithm (MIRA; Crammer and Singer 2003), a discriminative large-margin online learning technique. This algorithm learns by compressing each sentence and comparing the result with the gold standard. The weights are updated so that the score of the correct compression (the gold standard) is greater than the score of all other compressions by a margin proportional to their loss. The loss function is the number of words falsely retained or dropped in the incorrect compression relative to the gold standard. McDonald employs a rich feature set defined over words, parts of speech, phrase structure trees, and dependencies. These are gathered over adjacent words in the compression and the words in between which were dropped.
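To give a flavour of the decoding step, here is a compact dynamic-programming sketch over bigrams of retained words; the feature-based score w · f is abstracted into a score function, and the dummy start and end indices are our own convention rather than McDonald's code.

```python
def decode(score, n):
    """Find the highest-scoring compression under a bigram factorisation.

    score(i, j) : score of keeping word j immediately after word i
                  (index 0 is a dummy start token, n + 1 a dummy end token)
    n           : number of words in the source sentence
    Returns the list of retained word indices (1-based).
    """
    best = {0: 0.0}   # best score of a partial compression ending at word i
    back = {0: None}  # back-pointers for recovering the compression
    for j in range(1, n + 2):  # n + 1 is the end token
        candidates = [(best[i] + score(i, j), i) for i in best if i < j]
        best[j], back[j] = max(candidates)
    # follow back-pointers from the end token to the start token
    kept, i = [], back[n + 1]
    while i is not None and i != 0:
        kept.append(i)
        i = back[i]
    return list(reversed(kept))
```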
It is important to note that McDonald (2006) is not a straw-man system. It achieves highly competitive performance compared with Knight and Marcu's (2002) noisy-channel and decision-tree models. Due to its discriminative nature, the model is able to use a large feature set and to optimize compression accuracy directly. In other words, McDonald's model has a head start against our own model which does not utilize a large parallel corpus and has only a few constraints. The comparison of the two systems allows us to establish that we have a competitive state-of-the-art system, even without discourse constraints.
We trained McDonald's (2006) model on the full training set (48 documents, 962 sentences). Our implementation used an identical feature set, the only difference being that our phrase structure and dependency features were extracted from the output of Roark's (2001) parser. McDonald uses Charniak's (2000) parser, which performs comparably. We also employed a slightly modified loss function to encourage compression on our data set. McDonald's results were reported on the Ziff-Davis corpus. The language model required for the ILP system was trained on 80 million tokens from the English GigaWord corpus (LDC2007T07) using the SRI Language Modeling Toolkit with Kneser-Ney discounting. The significance score was calculated on 80 million tokens from the same corpus. The ILP model presented in Equation (1) implements a weighted combination of the significance score with a language model. The weight was tuned on the development set which consisted of three source documents and their target compressions. Our optimization procedure used Powell's method (Press et al. 1992) and a loss function based on the grammatical relations F1 between the gold standard and system output. The optimal weight was approximately 9.0. Note that the development set was the only source of parallel data our model had access to.
In order to compare all three models (sentence-based ILP, discourse-based ILP, and McDonald [2006]) on an equal footing, we ensured that their compression rates were similar. To do this, we first run McDonald's model on our data and then set the compression rate for our ILP models so that it is comparable to his output. This can be done relatively straightforwardly by adjusting the compression rate range soft constraint. In our experiments we set the minimum compression rate to 57%, the upper rate to 62%, and the violation penalty (μ) to −99. In practice, the soft constraint controlling the compression rate can be removed or specifically tuned to suit the application.
Evaluation
Previous studies evaluate the well-formedness of automatically generated compressions out of context. The target sentences are typically rated by naive subjects on two dimensions, grammaticality and importance (Knight and Marcu 2002). Automatic evaluation measures have also been proposed. Riezler et al. (2003) compare the grammatical relations found in the system output against those found in a gold standard using F1. Although F1 conflates grammaticality and importance into a single score, it nevertheless has been shown to correlate reliably with human judgments (Clarke and Lapata 2006).
The aims of our evaluation study were twofold. Firstly, we wanted to examine whether our discourse constraints improve the compressions for individual sentences. There is no hope for generating shorter documents if the compressed sentences are either too wordy or too ungrammatical. Secondly and more importantly, our goal was to evaluate the compressed documents as a whole by examining whether they are readable and the degree to which they retain key information when compared to the originals. We evaluated sentence-based compressions automatically using F1 and the grammatical relations annotations provided by RASP (Briscoe and Carroll 2002). This parser is suited to the compression task as it provides parses for both full sentences and sentence fragments and is generally robust enough to analyze semi-grammatical sentences. We computed F1 over all the relations provided by RASP (e.g., subject, direct/indirect object, modifier; 17 in total). We compared the output of our discourse system on the test set (31 documents, 604 sentences) against the sentence-based ILP model and McDonald (2006).
Our document-level evaluation was motivated by two questions: (1) Are the compressed documents readable? and (2) How much key information is preserved between the source document and its target compression? The readability of a document is fairly straightforward to measure by asking participants to provide a rating (e.g., on a seven-point scale). Measuring how much information is preserved in the compressed document is more involved. Under the assumption that the target document is to function as a replacement for the source, we can measure the extent to which the compressed version can be used to find answers for questions which have been derived from the source and are representative of its core content. We thus created questions from the source and then determined whether it was possible to find their answers by reading the compressed target. The more questions a hypothetical compression system can answer, the better it is at compressing the document as a whole.
A question-answering (Q&A) paradigm has been used previously to evaluate summaries and text compression. Morris, Kasper, and Adams (1992) performed one of the first Q&A evaluations to investigate the degree to which documents could be summarized before reading comprehension diminished. Their corpus consisted of four passages randomly selected from a set of sample Graduate Management Aptitude Test (GMAT) reading comprehension tests. The texts covered a range of topics including medieval literature, 18th-century Japan, minority-operated businesses, and Florentine art. Accompanying each text were eight multiple-choice questions, each containing five possible answers. The questions were provided by the Educational Testing Service and were designed to measure the subjects' reading comprehension. Subjects were given various textual treatments: the full text, a human-authored abstract, three systemgenerated extracts, and a final treatment where merely the questions were presented without any text. The questions-only treatment was used as a control to investigate if subjects could answer questions without any source material. Subjects were instructed to read the passage (if provided) and answer the multiple choice questions.
The advantage of using standardized tests, such as the GMAT reading comprehension test, is that Q&A pairs are provided along with a method for scoring answers (the correct answer is one among five possible choices). However, our corpora do not contain ready prepared Q&A pairs; thus we require a methodology for constructing questions and their answers and scoring documents against the answers. One such methodology is presented in the TIPSTER Text Summarization Evaluation (SUMMAC; Mani et al. 2002). SUMMAC was concerned with producing summaries tailored to specific topics. The Q&A task involved an evaluation where a topic-related summary for a document was evaluated in terms of its "informativeness," namely, the degree to which it contained answers found in the source document to a set of topic-related questions. For each topic (three in total), 30 relevant documents were chosen to generate a single summary. One annotator per topic came up with no more than five questions relating to the obligatory aspects of the topic. An obligatory aspect of a topic was defined as information that must be present in the document for the document to be relevant to the topic. The annotators then created an answer key for their topic by annotating the passages and phrases from the documents which provided the answers to the questions. In the SUMMAC evaluation, the annotator for each topic was tasked with scoring the system summaries. Scoring involved comparing the summaries against the answer key (annotated passages from the source documents) while judging whether the summary provided a Correct, Partially Correct, or Missing answer. If a summary contained an answer key and sufficient context the summary was deemed correct; however, summaries would be considered partially correct if the answer key was present but with insufficient context. If context was completely missing, misleading, or the answer key was absent then the summary was judged missing.
Our methodology for constructing Q&A pairs and for scoring documents is inspired by the SUMMAC evaluation exercise (Mani et al. 2002). Rather than creating questions for document sets (or topics) our questions were derived from individual documents. Two annotators were independently instructed to read the documents from our (test) corpus and create Q&A pairs. Each annotator drafted no more than ten questions and answers per document, related to its content. Annotators were asked to create fact-based questions which required an unambiguous answer; these were typically who, what, where, when, and how-style questions. The purpose of using two annotators per document was to allow annotators to compare and revise their Q&A pairs; this process was repeated until a common agreed-upon set of questions was reached. Revisions typically involved merging and simplifying questions to make them clearer, and in some cases splitting a question into multiple questions. Documents for which too few questions were agreed upon and for which the questions and answers were too ambiguous were removed. This left an evaluation set of six documents with between five to eight concise questions per document. Figure 4 shows a document from our test set and the questions and answers our annotators created for it.
For scoring our documents we adopt a more objective method than SUMMAC. Instead of asking the annotator who constructed the questions to check the document compressions for the answers, we ask naive participants to read the compressed documents and answer the questions as best as they can. During evaluation, the source document is not shown to our subjects; thus, if the compression is difficult to read, the participants have no point of reference to help them understand the compression. This is a departure from previous evaluations within text generation tasks, where the source text is available at judgment time; in our case only the system output is available.
The document-based evaluation was conducted remotely over the Internet using a custom-built Web interface. Upon loading the Web interface, participants were presented with a set of instructions that explained the Q&A task and provided examples. Subjects were first asked to read the compressed document and then rate its readability on a seven-point scale where 7 = excellent, and 1 = terrible. Next, questions were presented one at a time (in the order defined by the annotators) and participants were encouraged to consult the document for the answer. Answers were written directly into a text field on the Web interface which allowed free-form text to be submitted. Once a participant provided and confirmed an answer, the interface locked the answer to ensure it was not modified later. This was necessary because later questions could reveal information which would help answer previous questions. We elicited answers for six documents in four compression conditions: gold standard, using the ILP sentence-based model, the ILP discourse model, and McDonald's (2006) model. A Latin square design was used to prevent participants from seeing multiple treatments (compressions) of the same document, thus removing any learning effect. A total of 116 unpaid volunteers completed the experiment. They were recruited through student mailing lists and the Language Experiments Web site. 9 The answers provided by our subjects were scored against an answer key. A correct answer was marked with a score of one, and zero otherwise. In cases where two answers were required, a score of 0.5 was awarded to each correct answer. The score for a compressed document is the average of its question scores. All subsequent tests and comparisons are performed on the document score.
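The scoring rule reduces to a few lines; in this sketch answers are matched by simple string containment, which is only a stand-in for the manual judgement actually applied to the free-form responses.

```python
def document_qa_score(answers, key):
    """Score a participant's answers: full credit for a correct answer,
    0.5 per part when two answers are required, 0 otherwise; the document
    score is the mean over its questions."""
    scores = []
    for given, expected in zip(answers, key):
        parts = expected if isinstance(expected, list) else [expected]
        credit = sum(1.0 for part in parts if part in given) / len(parts)
        scores.append(credit)
    return sum(scores) / len(scores)
```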
Results
We first assessed the compressions produced by the two ILP models (Discourse and Sentence) and McDonald (2006) on a sentence-by-sentence basis. Table 1 shows the compression rates (CompR) for the three systems and evaluates the quality of their output using grammatical relations F1. As can be seen, all three systems produce comparable compression rates. The Discourse ILP compressions are slightly longer than McDonald's (2006) (61.0% vs. 60.1%) and slightly shorter than the Sentence ILP model (61.0% vs. 62.1%). The Discourse ILP model is significantly better than McDonald (2006) and Sentence ILP in terms of F1, indicating that discourse-level information is generally helpful. All three systems could use further improvement, as inter-annotator agreement on this data yields an F1 of 65.8% (Clarke 2008).
Let us now consider the results of our document-based evaluation. Table 2 shows the mean readability ratings obtained for each system and the percentage of questions answered correctly. We used an analysis of variance (ANOVA) to examine the effect of compression type (McDonald, Sentence ILP, Discourse ILP, Gold Standard). The ANOVA revealed a reliable effect on both readability and Q&A. Post hoc Tukey tests showed that McDonald and the two ILP models do not differ significantly in terms of readability. However, they are all significantly less readable than the gold standard (α < 0.01). For the Q&A task, we observe that our system is significantly better than McDonald (α < 0.01) and Sentence ILP (α < 0.01), but significantly worse than the gold standard (α < 0.05). McDonald and Sentence ILP yield comparable performance (their difference is not statistically significant).
These results indicate that the automatic systems lag behind the human gold standard in terms of readability. When reading entire documents, subjects are less tolerant of ungrammatical constructions. We also find that, despite relatively low readability, the documents are overall understandable. The discourse-based model generates more informative documents: the number of questions answered correctly increases by 19% in comparison to McDonald and Sentence ILP. This is an encouraging result suggesting that there are advantages in developing compression models that exploit discourse-level information. Figure 5 shows the output of the ILP systems (Discourse and Sentence) on two test documents. Words that are dropped have been stricken out. As can be seen, the two systems produce different compressions, and the discourse-based output is more coherent. This is corroborated by the readability results where the discourse ILP model received the highest rating. Also note that some of the compressions produced by the sentence-based model distort the meaning of the original text, presumably leading the reader to make wrong inferences. For example, in the second document (Sentence ILP version) one infers that the victim was urged to report the incident. Moreover, important information is often omitted, for example, that the victim was indeed raped or that the strike would be damaging not only to the company but also to its staff (see the Sentence ILP version in the first document).
Conclusions and Future Work
In this article we proposed a novel method for automatic sentence compression. Central in our approach is the use of discourse-level information, which we argue is an important prerequisite for document (as opposed to sentence) compression. Our model uses integer linear programming for inferring globally optimal compressions in the presence of linguistically motivated constraints. Our discourse constraints aim to capture local coherence and are inspired by Centering Theory and lexical chains. We showed that our model can be successfully employed to produce compressed documents that preserve most of the original core content.

Figure 5
Output of Discourse and Sentence ILP systems on two test documents. Words that are stricken out have been dropped.
Our results confirm the conventional wisdom that discourse-level information is helpful in summarization. We also show that this type of information can be identified robustly in free text. Our experiments focused primarily on local discourse structure using two complementary representations. Centering tends to produce more annotations since it tries to identify a center in every sentence. Lexical chains tend to provide more general information, such as the major topics in a document. Due to their approximate nature, there is no one representation that is uniquely suited to the compression task. Rather, it is the synergy between lexical chains and centering that brings improvements. The discourse annotations proposed here are not specific to our model. They could be easily translated into features and incorporated into discriminative modeling paradigms (e.g., Nguyen et al. 2004; McDonald 2006; Cohn and Lapata 2009). The same is true for the Q&A evaluation paradigm employed in our experiments. It could be straightforwardly adapted to assess the information content of shorter summaries and potentially used to perform large-scale comparisons within and across systems.
Our approach differs from most summarization work in that our summaries are fairly long. However, we believe this is the first step to understanding how compression can help summarization. An obvious extension would be to interface our compression model with sentence extraction (see for an ILP formulation of a model that jointly performs sentence extraction and compression, without, however, taking discourse level information into account). The discourse annotations can help guide the extraction method into selecting topically related sentences which can consequently be compressed together. More generally, formulating the summarization process in the ILP framework outlined here would allow the integration of varied and sometimes conflicting constraints during summary generation. Examples include the summary length, and whether it is coherent, grammatical, or repetitive. Additional flexibility can be introduced by changing some of the constraints from hard to soft (as we did with the compression rate constraints), although determining the penalty for constraint violation manually using prior knowledge is a non-trivial task (Chang, Ratinov, and Roth 2007) and automatically learning the constraint penalty results in a harder learning problem. Importantly, under the ILP formulation such constraints can be explicitly encoded and applied during inference while finding a globally optimal solution.
Figure 4
Example document from our test set and questions with answer key created for this document.
Table 1
Compression results: compression rate and relation-based F1.

Model           CompR    Precision    Recall     F1
McDonald        60.1%    43.9%        36.5%*     37.9%*
Sentence ILP    62.1%    40.7%*       39.4%*     39.0%*
Discourse ILP   61.0%    46.2%        44.2%      42.2%
Gold Standard   70.3%    --           --         --

* Significantly different from Discourse ILP (p < 0.01 using the Wilcoxon test).
Table 2
Human evaluation results: average readability ratings and average percentage of questions answered correctly.

Model           Readability    Q&A (%)
McDonald        2.52*          51.42*†
Sentence ILP    2.76*          52.35*†
Discourse ILP   3.10*          71.38*
Gold Standard   5.41†          85.48†

* Significantly different from Gold Standard. † Significantly different from Discourse ILP.
It is outside the scope of this article to provide an introduction to ILP. We refer the interested reader to Winston and Venkataramanan (2003) and Vanderbei (2001) for comprehensive overviews.
For a sentence of length n, there are 2 n compressions.
Clausal modifiers (cmod) are adjuncts modifying entire clauses. In the example he ate the cake because he was hungry, the because-clause is a modifier of the sentence he ate the cake.
This is the decision-based parser described in Marcu (2000); it achieves an F1 of 38.2 for the identification of elementary discourse units, 50.0 for hierarchical spans, 39.9 for nuclearity, and 23.4 for relation assignment.
The software is available from http://www1.cs.columbia.edu/nlp/tools.cgi.
LingPipe can be downloaded from http://alias-i.com/lingpipe/.
As determined by the word's part-of-speech tag.
The corpus is available from http://homepages.inf.ed.ac.uk/s0460084/data/.
Available at http://www.language-experiments.org.
Acknowledgments
We are grateful to Ryan McDonald for his help with the re-implementation of his system, and our annotators Vasilis Karaiskos and Sarah Luger. Thanks to Alex Lascarides, Sebastian Riedel, and Bonnie Webber for insightful comments and suggestions, and to the anonymous referees whose feedback helped to substantially improve the present article. Lapata acknowledges the support of EPSRC (grant GR/T04540/01).
Aho, A. V. and J. D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37-56.
Barzilay, R. and M. Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the ACL-97 Intelligent Scalable Text Summarization Workshop, pages 10-17, Madrid.
Barzilay, Regina and Mirella Lapata. 2006. Aggregation via set partitioning for natural language generation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 359-366, New York, NY.
Barzilay, Regina and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.
Boguraev, Branimir and Chris Kennedy. 1997. Salience-based content characterization of text documents. In Proceedings of the ACL'97/EACL'97 Workshop on Intelligent Scalable Text Summarization, pages 2-9, Madrid.
Brin, Sergey and Lawrence Page. 1998. Anatomy of a large-scale hypertextual Web search engine. In Proceedings of the 7th Conference on World Wide Web, pages 107-117, Brisbane.
Briscoe, E. J. and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC-2002), pages 1499-1504, Las Palmas.
Carlson, Lynn, John M. Conroy, Daniel Marcu, Dianne P. O'Leary, Mary E. Okurowski, and Anthony Taylor. 2001. An empirical study on the relation between abstracts, extracts, and the discourse structure of texts. In Proceedings of the DUC-2001 Workshop on Text Summarization, New Orleans, LA.
Chang, Ming-Wei, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 280-287, Prague.
Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American Annual Meeting of the Association for Computational Linguistics, pages 132-139, Seattle, WA.
Clarke, James. 2008. Global Inference for Sentence Compression: An Integer Linear Programming Approach. Ph.D. thesis, University of Edinburgh.
Clarke, James and Mirella Lapata. 2006. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 377-384, Sydney.
Clarke, James and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:399-429.
Cohn, Trevor and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34:637-674.
Corston-Oliver, Simon. 2001. Text compaction for display on very small screens. In Proceedings of the NAACL Workshop on Automatic Summarization, pages 89-98, Pittsburgh, PA.
Corston-Oliver, Simon H. 1998. Computing representations of the structure of written discourse. Technical Report MSR-TR-98-15, Microsoft Research, Redmond, WA.
Crammer, Koby and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991.
Daumé III, Hal and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 449-456, Philadelphia, PA.
Denis, Pascal and Jason Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 236-243, Rochester, NY.
Reluctant paraphrase: Textual restructuring under an optimisation model. Mark Dras, Proceedings of the Fifth Biannual Meeting of the Pacific Association for Computational Linguistics. the Fifth Biannual Meeting of the Pacific Association for Computational LinguisticsBrigitte; BerlinSpringerSummarising InformationDras, Mark. 1997. Reluctant paraphrase: Textual restructuring under an optimisation model. In Proceedings of the Fifth Biannual Meeting of the Pacific Association for Computational Linguistics, pages 98-104, Ohme. Endres-Niggemeyer, Brigitte. 1998. Summarising Information. Springer, Berlin.
WordNet: An Electronic Database. Fellbaum, ChristianeMIT PressCambridge, MAFellbaum, Christiane, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA.
Improving word sense disambiguation in lexical chaining. Michel Galley, Kathleen Mckeown, Proceedings of 18th International Joint Conference on Artificial Intelligence (IJCAI-03). 18th International Joint Conference on Artificial Intelligence (IJCAI-03)Acapulco, MexicoGalley, Michel and Kathleen McKeown. 2003. Improving word sense disambiguation in lexical chaining. In Proceedings of 18th International Joint Conference on Artificial Intelligence (IJCAI-03), pages 1486-1488, Acapulco, Mexico.
Lexicalized Markov grammars for sentence compression. Michel Galley, Kathleen Mckeown, Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics. Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational LinguisticsRochester, NYGalley, Michel and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, pages 180-187, Rochester, NY.
Producing Intelligent Telegraphic Text Reduction to Provide an Audio Scanning Service for the Blind. Gregory Grefenstette, Proceedings of the AAAI Symposium on Intelligent Text Summarization. the AAAI Symposium on Intelligent Text SummarizationStanford, CAGrefenstette, Gregory. 1998. Producing Intelligent Telegraphic Text Reduction to Provide an Audio Scanning Service for the Blind. In Proceedings of the AAAI Symposium on Intelligent Text Summarization, pages 111-117, Stanford, CA.
Centering: a framework for modeling the local coherence of discourse. Barbara J Grosz, Aravind K Scott Weinstein, Joshi, Computational Linguistics. 212Grosz, Barbara J., Scott Weinstein, and Aravind K. Joshi. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
. M A K Halliday, Ruqaiya Hasan, LondonCohesion in English. LongmanHalliday, M. A. K. and Ruqaiya Hasan. 1976. Cohesion in English. Longman, London.
Lexical chains as representations of context for the detection and correction of malapropisms. Graeme Hirst, David St-Onge, WordNet: An Electronic Database. Christiane FellbaumCambridge, MAMIT PressHirst, Graeme and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. In Christiane Fellbaum, editor, WordNet: An Electronic Database. MIT Press, Cambridge, MA, pages 305-332.
Speech summarization: An approach through word extraction and a method for evaluation. Chiori Hori, Sadaoki Furui, Proceedings of the 6th conference on Applied Natural Language Processing. the 6th conference on Applied Natural Language ProcessingSeattle, WA1Jing, Hongyan. 2000. Sentence reduction for automatic text summarizationHori, Chiori and Sadaoki Furui. 2004. Speech summarization: An approach through word extraction and a method for evaluation. IEICE Transactions on Information and Systems, E87-D(1):15-25, 1. Jing, Hongyan. 2000. Sentence reduction for automatic text summarization. In Proceedings of the 6th conference on Applied Natural Language Processing, pages 310-315, Seattle, WA.
Optimising referential coherence in text generation. Rodger Kibble, Richard Power, Computational Linguistics. 304Kibble, Rodger and Richard Power. 2004. Optimising referential coherence in text generation. Computational Linguistics, 30(4):401-416.
Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Kevin Knight, Daniel Marcu, Artificial Intelligence. 1391Knight, Kevin and Daniel Marcu. 2002. Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91-107.
A trainable document summarizer. Julian Kupiec, Jan O Pedersen, Francine Chen, Proceedings of SIGIR-95. SIGIR-95Seattle, WAKupiec, Julian, Jan O. Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of SIGIR-95, pages 68-73, Seattle, WA.
Improving summarization performance by sentence compression-A pilot study. Chin-Yew Lin, Proceedings of the 6th International Workshop on Information Retrieval with Asian Languages. the 6th International Workshop on Information Retrieval with Asian LanguagesSapporoLin, Chin-Yew. 2003. Improving summarization performance by sentence compression-A pilot study. In Proceedings of the 6th International Workshop on Information Retrieval with Asian Languages, pages 1-8, Sapporo.
LaTaT: Language and text analysis tools. Dekang Lin, Proceedings of the first Human Language Technology Conference. the first Human Language Technology ConferenceSan Francisco, CALin, Dekang. 2001. LaTaT: Language and text analysis tools. In Proceedings of the first Human Language Technology Conference, pages 222-227, San Francisco, CA.
Inderjeet Mani, Automatic Summarization. John Benjamins. AmsterdamMani, Inderjeet. 2001. Automatic Summarization. John Benjamins, Amsterdam.
The TIPSTER SUMMAC Text Summarization Evaluation. Inderjeet Mani, Thérèse Firmin, David House, Gary Klein, Beth Sundheim, Lynette Hirschman, Natural Language Engineering. 8Mani, Inderjeet, Thérèse Firmin, David House, Gary Klein, Beth Sundheim, and Lynette Hirschman. 2002. The TIPSTER SUMMAC Text Summarization Evaluation. Natural Language Engineering, 8:43-68.
Improving summaries by revising them. Inderjeet Mani, Barbara Gates, Eric Bloedorn, Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. the 37th Annual Meeting of the Association for Computational LinguisticsCollege Park, MDMani, Inderjeet, Barbara Gates, and Eric Bloedorn. 1999. Improving summaries by revising them. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 558-565, College Park, MD.
Rhetorical structure theory: Toward a functional theory of text organization. William C Mann, Sandra A Thompson, Text. 83Mann, William C. and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Beyond the pipeline: Discrete optimization in NLP. Tomasz Marciniak, Michael Strube, Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005). the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)Ann Arbor, MIMarciniak, Tomasz and Michael Strube. 2005. Beyond the pipeline: Discrete optimization in NLP. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 136-143, Ann Arbor, MI.
The Theory and Practice of Discourse Parsing and Summarization. Daniel Marcu, The MIT PressCambridge, MAMarcu, Daniel. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press, Cambridge, MA.
Summarization with a joint model for sentence extraction and compression. André Martins, Noah A Smith, Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. the Workshop on Integer Linear Programming for Natural Language ProcessingBoulder, COMartins, André and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 1-9, Boulder, CO.
Concise integer linear programming formulations for dependency parsing. André Martins, Noah Smith, Eric Xing, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLPSuntecMartins, André, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 342-350, Suntec.
Discriminative sentence compression with soft syntactic constraints. Ryan Mcdonald, Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. the 11th Conference of the European Chapter of the Association for Computational LinguisticsTrentoMcDonald, Ryan. 2006. Discriminative sentence compression with soft syntactic constraints. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 297-304, Trento.
The role of centering theory's rough-shift in the teaching and evaluation of writing skills. Eleni Miltsakaki, Karen Kukich, Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. the 38th Annual Meeting of the Association for Computational LinguisticsHong KongMiltsakaki, Eleni and Karen Kukich. 2000. The role of centering theory's rough-shift in the teaching and evaluation of writing skills. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 408-415, Hong Kong.
The effects and limitations of automated text condensing on reading comprehension performance. A Morris, G Kasper, D Adams, Information Systems Research. 31Morris, A., G. Kasper, and D. Adams. 1992. The effects and limitations of automated text condensing on reading comprehension performance. Information Systems Research, 3(1):17-35.
Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Jane Morris, Graeme Hirst, Computational Linguistics. 171Morris, Jane and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-48.
Probabilistic sentence reduction using support vector machines. Minh Nguyen, Akira Le, Susumu Shimazu, Tu Bao Horiguchi, Masaru Ho, Fukushi, Proceedings of the 20th International Conference on Computational Linguistics. the 20th International Conference on Computational LinguisticsGenevaNguyen, Minh Le, Akira Shimazu, Susumu Horiguchi, Tu Bao Ho, and Masaru Fukushi. 2004. Probabilistic sentence reduction using support vector machines. In Proceedings of the 20th International Conference on Computational Linguistics, pages 743-749, Geneva.
Less is more; eliminating index terms from subordinate clauses. S H Olivers, W B Dolan, Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. the 37th Annual Meeting of the Association for Computational LinguisticsCollege Park, MDOlivers, S. H. and W. B. Dolan. 1999. Less is more; eliminating index terms from subordinate clauses. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 349-356, College Park, MD.
Abstract generation based on rhetorical structure extraction. Kenji Ono, Kazuo Sumita, Seiji Miike, Proceedings of the 15th International Conference on Computational Linguistics. the 15th International Conference on Computational LinguisticsKyotoOno, Kenji, Kazuo Sumita, and Seiji Miike. 1994. Abstract generation based on rhetorical structure extraction. In Proceedings of the 15th International Conference on Computational Linguistics, pages 344-348, Kyoto.
An evolutionary approach for improving the quality of automatic summaries. Constantin Orȃsan, ACL Workshop on Multilingual Summarization and Question Answering. Barbara Di Eugenio, and Janet HitzemanSapporo, Japan. Poesio, Massimo, Rosemary Stevenson30Centering: a parametric theory and its instantiationsOrȃsan, Constantin. 2003. An evolutionary approach for improving the quality of automatic summaries. In ACL Workshop on Multilingual Summarization and Question Answering, pages 37-45, Sapporo, Japan. Poesio, Massimo, Rosemary Stevenson, Barbara Di Eugenio, and Janet Hitzeman. 2004. Centering: a parametric theory and its instantiations. Computational Linguistics, 30(3):309-363.
. William H Press, A Saul, William T Teukolsky, Brian P Vetterling, Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P.
Numerical Recipes in C: The Art of Scientific Computing. Flannery, Cambridge University PressCambridge, UKFlannery. 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, UK.
Semantic role labeling via integer linear programming inference. Punyakanok, Dan Vasin, Wen-Tau Roth, Dav Yih, Zimak, Proceedings of the 20th International Conference on Computational Linguistics. the 20th International Conference on Computational LinguisticsGenevaPunyakanok, Vasin, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th International Conference on Computational Linguistics, pages 1346-1352, Geneva.
Incremental integer linear programming for non-projective dependency parsing. Sebastian Riedel, James Clarke, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. the 2006 Conference on Empirical Methods in Natural Language ProcessingSydneyRiedel, Sebastian and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 129-137, Sydney.
Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar. Riezler, Tracy H Stefan, Richard King, Annie Crouch, Zaenen, Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational LinguisticsEdmontonRiezler, Stefan, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 118-125, Edmonton.
Probabilistic top-down parsing and language modeling. Brian Roark, Computational Linguistics. 272Roark, Brian. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.
A linear programming formulation for global inference in natural language tasks. Dan Roth, Wen-Tau Yih, Proceedings of the 8th Conference on Computational Natural Language Learning. the 8th Conference on Computational Natural Language LearningBoston, MARoth, Dan and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the 8th Conference on Computational Natural Language Learning, pages 1-8, Boston, MA.
Getting the message across in RST-based text generation. Donia Scott, Clarisse Sieckenius De Souza, Current Research in Natural Language Generation. Robert Dale, Chris Mellish, and Michael ZockNew YorkAcademic PressScott, Donia and Clarisse Sieckenius de Souza. 1990. Getting the message across in RST-based text generation. In Robert Dale, Chris Mellish, and Michael Zock, editors, Current Research in Natural Language Generation. Academic Press, New York, pages 47-73.
Adaptive method for automatic abstracting and indexing. E F Sjorochod'ko, Information Processing 71: Proceedings of the IFIP Congress. Amsterdam71Sjorochod'ko, E. F. 1972. Adaptive method for automatic abstracting and indexing. In Information Processing 71: Proceedings of the IFIP Congress 71, pages 1179-1182, Amsterdam.
A corpus-based evaluation of centering and pronoun resolution. Joel R Tetreault, Computational Linguistics. 274Tetreault, Joel R. 2001. A corpus-based evaluation of centering and pronoun resolution. Computational Linguistics, 27(4):507-520.
Summarizing scientific articles-Experiments with relevance and rhetorical status. Simone Teufel, Marc Moens, Computational Linguistics. 284Teufel, Simone and Marc Moens. 2002. Summarizing scientific articles- Experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-446.
Supervised and unsupervised learning for sentence compression. Jenine Turner, Eugene Charniak, Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. the 43rd Annual Meeting of the Association for Computational LinguisticsAnn Arbor, MITurner, Jenine and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 290-297, Ann Arbor, MI.
Linear Programming: Foundations and Extensions. Robert J Vanderbei, Kluwer Academic PublishersBoston2nd editionVanderbei, Robert J. 2001. Linear Programming: Foundations and Extensions. Kluwer Academic Publishers, Boston, 2nd edition.
Centering in naturally occurring discourse: An overview. Marilyn Walker, Aravind Joshi, Ellen Prince, Centering Theory in Discourse. OxfordOxford University PressWalker, Marilyn, Aravind Joshi, and Ellen Prince. 1998. Centering in naturally occurring discourse: An overview. In Centering Theory in Discourse. Oxford University Press, Oxford, pages 1-28.
Introduction to Mathematical Programming. Wayne L Winston, Munirpallam Venkataramanan, Brooks/Cole, Independence, KYWinston, Wayne L. and Munirpallam Venkataramanan. 2003. Introduction to Mathematical Programming. Brooks/Cole, Independence, KY.
Paragraph-, word-, and coherence-based approaches to sentence ranking: A comparison of algorithm and human performance. Florian Wolf, Edward Gibson, Proceedings of the 42nd Meeting of the Association for Computational Linguistics. the 42nd Meeting of the Association for Computational LinguisticsBarcelonaWolf, Florian and Edward Gibson. 2004. Paragraph-, word-, and coherence-based approaches to sentence ranking: A comparison of algorithm and human performance. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 383-390, Barcelona.
Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. Information Processing and Management. Zajic, Bonnie J David, Jimmy J Dorr, Richard M Lin, Schwartz, 43Zajic, David, Bonnie J. Dorr, Jimmy J. Lin, and Richard M. Schwartz. 2007. Multi-candidate reduction: Sentence compression as a tool for document summarization tasks. Information Processing and Management, 43(6):1549-1570. |
16,536,332 | SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models | In this paper we describe the SpRL-CWW entry into SemEval 2015: Task 8 SpaceEval. It detects spatial and motion relations as defined by the ISO-Space specifications in two phases: (1) it detects spatial elements and spatial/motion signals with a Conditional Random Field model that uses a combination of distributed word representations and lexicosyntactic features; (2) given relation candidate tuples, it simultaneously detects relation types and labels the spatial roles of participating elements by using a combination of syntactic and semantic features in independent multi-class classification models for each relation type. In evaluation on the shared task data, our system performed particularly well on detection of elements and relations in unannotated data. | [
1957433,
14068874,
11798378,
2905151
] | SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models
SemEval 2015. June 4-5, 2015
Eric Nichols [email protected]
Honda Research Institute Japan Co., Ltd. University of Calgary
Fadi Botros [email protected]
Honda Research Institute Japan Co., Ltd. University of Calgary
SpRL-CWW: Spatial Relation Classification with Independent Multi-class Models
Proceedings of the 9th International Workshop on Semantic Evaluation
the 9th International Workshop on Semantic Evaluation, Denver, Colorado. SemEval 2015. June 4-5, 2015
In this paper we describe the SpRL-CWW entry into SemEval 2015: Task 8 SpaceEval. It detects spatial and motion relations as defined by the ISO-Space specifications in two phases: (1) it detects spatial elements and spatial/motion signals with a Conditional Random Field model that uses a combination of distributed word representations and lexicosyntactic features; (2) given relation candidate tuples, it simultaneously detects relation types and labels the spatial roles of participating elements by using a combination of syntactic and semantic features in independent multi-class classification models for each relation type. In evaluation on the shared task data, our system performed particularly well on detection of elements and relations in unannotated data.
Introduction
Understanding human language about location and motion is important for many applications including robotics, navigation systems, and wearable computing. Shared tasks dedicated to the problem of representing and detecting spatial and motion relations have been organized for SemEval 2012 (Kordjamshidi et al., 2012), 2013 (Kolomiyets et al., 2013), and 2015. In this paper we present SpRL-CWW, our entry to SemEval 2015 Task 8: SpaceEval, and present extended evaluation of our system to investigate the impact of the task annotations and system configurations on task performance.

Kordjamshidi et al. (2011) proposed the task of Spatial Role Labeling (SpRL) to detect spatial and motion relations in text. SpRL was modeled after semantic role labeling (see (Fillmore et al., 2003; Màrquez et al., 2008)), with spatial indicators instead of predicates signaling the presence of relations, and spatial roles instead of semantic roles.
SpaceEval Task Definition
In a canonical example of a spatial relation from (Kordjamshidi et al., 2011), the spatial indicator (SP) on indicates that there is a spatial relation between the trajector (TR; the primary object of spatial focus) and the landmark (LM; the secondary object of spatial focus). SpRL was formalized as a task of classifying tuples <w_SP, w_TR, w_LM> as spatial relations or not. The SpRL task was reformulated and reintroduced in SpaceEval (http://alt.qcri.org/semeval2015/task8/) using the ISO-Space annotation specifications (Pustejovsky et al., 2012). The biggest change was the decoupling of the semantic type and role of spatial relation arguments. A taxonomy of Spatial Element (SE) types was introduced to describe the meaning of arguments independent of their participation in relations, and spatial roles were treated as instance-specific annotations on spatial and motion relations.
The SE types introduced are: SPATIAL_ENTITY, PATH, PLACE, MOTION, NON_MOTION_EVENT, and MEASURE. Two types were also introduced to represent expressions that indicated the presence of relations: SPATIAL_SIGNAL and MOTION.
Spatial and motion relations were redefined as:
• MOVELINK: motion relation
• QSLINK: qualitative spatial relation
• OLINK: spatial orientation relation

Examples of SpaceEval annotations are given in Figure 1. The training data for SpaceEval consists of portions of the corpora from past SemEval SpRL tasks as well as a new dataset consisting of passages from guidebooks. Following the schema described in this section, a total of 6,782 spatial elements and signals comprising 2,186 relations were annotated. We participated in the task configurations given in Figure 2, as defined by the official SpaceEval task description.
Related Research
KUL-SKIP-CHAIN-CRF (Kordjamshidi et al., 2011) was a skip-chain CRF-based sequential labeling model. It used a combination of lexico-syntactic information and semantic role information and used preposition templates to represent long distance dependencies. It was used as a baseline system in the SemEval 2012 and 2013 SpRL tasks.
UTD-SpRL (Roberts and Harabagiu, 2012) was an entry into the SemEval 2012 SpRL task that adopted a joint relation detection and role labeling approach with the motivation that roles in spatial relations were dependent on each other. The approach used heuristics to gather spatial relation candidate tuples. A hand-crafted dictionary was used to detect SPATIAL_INDICATOR candidates, and noun phrase heads were treated as TRAJECTOR and LANDMARK candidates. A model for relation classification and role labeling was then trained with LIBLINEAR using POS, lemma, and dependency-path-based features, with feature selection used to prune away ineffective features.
UNITOR-HMM-TK (Bastianelli et al., 2013) was an entry into the SemEval 2013 SpRL task. It used a pipeline approach with three sub-tasks: (1) spatial indicator detection, (2) spatial role classification (spatial roles are referred to as spatial annotations in that paper), and (3) spatial relation identification.
Spatial indicators and roles were detected with sequential labeling using SVM-hmm, with detected indicators used as features for spatial role labeling. In addition, shallow grammatical features in the form of POS n-grams were used in place of richer syntactic information in order to avoid overfitting. The model also used PMI-score based word space representations as described in (Sahlgren, 2006).
UNITOR-HMM-TK's approach to spatial relation identification avoided feature engineering by employing an SVM model with a smoothed partial tree kernel over modified dependency trees to capture syntactic information. More recent work on spatial relation identification includes (Kordjamshidi and Moens, 2014).

Figure 3: The SpRL-CWW system architecture. Spatial elements and signals are detected, from which relation candidate tuples are generated, and then relations with their arguments labeled are identified by a separate classifier for each relation type. The red arrow indicates special trigger dictionary processing that is only carried out for SpaceEval tasks 1d and 1e, and for Setting F of the relation classification task extended evaluation in Table 3.
Spatial Element and Signal Detection
Approach
SpRL-CWW uses a feature-rich CRF model to jointly label spatial elements and spatial/motion signals. Previous approaches (Kordjamshidi et al., 2011; Bastianelli et al., 2013) proposed a two-step sequential labeling method for this task. In the first step, they label spatial signals (also known as spatial indicators), since these indicate the presence of a relation, which spatial roles depend on. In the second step, they label all the other spatial roles in the sentence using the extracted signals as features. However, any errors made in the first step will deteriorate the performance of the second. Furthermore, for SpaceEval 2015 the spatial element annotations are less likely to depend on the presence of a relation and can be detected independently. Thus, our system avoids the performance degradation associated with pipeline approaches by combining the two steps.
SpRL-CWW's CRF model labels each word in a sentence with one of the labels described in Section 2, or with NONE. In line with UNITOR-HMM-TK (Bastianelli et al., 2013), shallow lexico-syntactic features are applied instead of the full syntax of the sentence to avoid over-fitting the training data. We use word vectors trained on Web-scale corpora for a fine-grained lexical representation.
An example of our feature representation for the sentence "Saitama is northwest of Tokyo." is given in Figure 4.
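To make the feature representation concrete, the following is a minimal Python sketch of per-token feature extraction in the spirit of Figure 4. It is our own illustration rather than the released system: it assumes pre-tokenized sentences with POS tags and a glove dictionary mapping lowercased words to vectors, and it covers only a subset of the EF features.

def token_features(tokens, pos_tags, glove, i, window=2):
    # EF.1 / EF.3: raw string and POS in a 5-word window around position i
    feats = {"bias": 1.0}
    for offset in range(-window, window + 1):
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"w[{offset}]={tokens[j]}"] = 1.0
            feats[f"pos[{offset}]={pos_tags[j]}"] = 1.0
    # EF.10: POS bigram at the current position
    if i + 1 < len(tokens):
        feats[f"posbi={pos_tags[i]}_{pos_tags[i + 1]}"] = 1.0
    # EF.9: dimensions of the word vector as real-valued features
    vec = glove.get(tokens[i].lower())
    if vec is not None:
        for d, v in enumerate(vec):
            feats[f"glove[{d}]"] = float(v)
    return feats

def sentence_features(tokens, pos_tags, glove):
    return [token_features(tokens, pos_tags, glove, i) for i in range(len(tokens))]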
Evaluation
Setup
Sentences were processed with Stanford CoreNLP for POS tagging, lemmatization, NER, and dependency parsing. The word representations are publicly-available 300-dimension GloVe word vectors (http://www-nlp.stanford.edu/projects/glove/) trained on 42 billion tokens of Web data (Pennington et al., 2014). The model was trained using CRFsuite (Okazaki, 2007) with L-BFGS, using L2 regularization with λ2 = 1 × 10^-5.
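A hedged sketch of this training configuration with the python-crfsuite bindings follows; the toy sentence, labels, and file name are invented for illustration, and in practice the feature dictionaries would come from a function like sentence_features above.

import pycrfsuite

# toy training sequence: feature dicts per token and gold SE/signal labels
xseq = [{"w=Saitama": 1.0}, {"w=is": 1.0}, {"w=northwest": 1.0}, {"w=of": 1.0}, {"w=Tokyo": 1.0}]
yseq = ["PLACE", "NONE", "SPATIAL_SIGNAL", "NONE", "PLACE"]

trainer = pycrfsuite.Trainer(verbose=False)   # default training algorithm is L-BFGS
trainer.append(xseq, yseq)
trainer.set_params({"c2": 1e-5})              # L2 regularization strength, matching lambda_2 = 1e-5
trainer.train("element_signal.crfsuite")

tagger = pycrfsuite.Tagger()
tagger.open("element_signal.crfsuite")
print(tagger.tag(xseq))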
Datasets
We evaluated our system on the SpaceEval training data as described in Section 2, and additionally on the SpaceEval Task 3 test data, which was distributed with gold labeled Spatial Elements, Indicators, and Motions. The test data consisted of 16 files with 317 sentences and 1,609 spatial roles.
Results
Official task results for spatial element/signal identification (Task 1a) and classification (Task 1b) are shown in Table 1.
We performed more detailed evaluation using 5-fold cross validation on the training data and on the released gold test data. Our results are presented in Table 2. These results have an f1-score that is slightly lower than the officially reported result; as the official evaluation data and scripts have not been fully released at the time of writing, it is not possible to determine the cause of the discrepancy, and comparison between strict and "relaxed" matching as used in prior SemEval SpRL tasks did not account for the difference. Evaluation over the test data produced a slightly higher f1-score than on the training data. We theorize that this is due to cross-fold validation using a smaller dataset for its model.
Spatial Relation Classification and Argument Labeling
Approach
To identify spatial relations, the SpRL-CWW system determines which spatial elements and signals can be combined to form valid spatial relations. Since the type of a relation (MOVELINK, QSLINK, or OLINK) is dependent upon its arguments, our method, inspired by UTD-SpRL (Roberts and Harabagiu, 2012), jointly classifies spatial relations and labels participating arguments in one classification step. We aim to simplify our model and improve learning by only considering relations that contain a trigger and by labeling only the following attributes, which correspond to primary spatial and motion roles:

• MOVELINK: trigger, mover, goal
• QSLINK and OLINK: trigger, trajector, landmark

Table 3: Settings for extended relation detection evaluation over the SpaceEval 2015 training data. All evaluation is conducted with 5-fold cross validation, the full RE feature set from Figure 5, gold standard SEs, and gold standard triggers. The overall precision, recall, and f1-scores are reported for each setting, with the highest performing in bold. Setting A was used for our official submission. Where indicated, L2 regularization was performed with λ2 = 1 × 10^-14.
Candidate Trigger Extraction
First, candidate triggers are extracted from each sentence. The model we presented for detecting signals in Section 4.1 has a high f1-score but low precision. Because we want to prioritize recall for generating candidate tuples, when classifying relations on unannotated text, dictionaries of triggers automatically compiled from the training data are used to extract potential triggers from sentences. These dictionaries are used in Tasks 1d and 1e in Figure 2. In Task 3, where gold spatial roles are provided, MOTIONs are used as potential MOVELINK triggers and SPATIAL_SIGNALs are used as potential QSLINK and OLINK triggers. Evaluation of the trigger dictionaries shows that they have much higher recall than the CRF models (in particular, recall for SPATIAL_SIGNALs increases from 0.603 to 0.936 and MOTION recall increases from 0.700 to 0.812 on the SpaceEval test data). Additional relation classification evaluation in Table 3 shows that the dictionaries (Setting F) achieve an f1-score improvement of 0.055 over the CRF models (Setting G).
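A minimal sketch of compiling and applying such trigger dictionaries is given below; the input format (lists of (lemma, SE type) pairs per sentence) is a simplification we assume only for illustration.

from collections import defaultdict

def build_trigger_dicts(annotated_sentences):
    triggers = defaultdict(set)
    for sentence in annotated_sentences:
        for lemma, se_type in sentence:
            if se_type == "MOTION":
                triggers["MOVELINK"].add(lemma)     # motions trigger MOVELINKs
            elif se_type == "SPATIAL_SIGNAL":
                triggers["QSLINK"].add(lemma)       # signals trigger QSLINKs and OLINKs
                triggers["OLINK"].add(lemma)
    return triggers

def candidate_trigger_positions(lemmas, triggers, link_type):
    return [i for i, lemma in enumerate(lemmas) if lemma in triggers[link_type]]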
Candidate Tuple Generation
All possible candidate relations in a sentence are then generated using the extracted triggers and the spatial elements in the sentence. A candidate tuple consists of an extracted trigger and two other spatial elements: arg1 and arg2. Since some relations, such as the one represented in Figure 1 Example 6, can have undefined arguments, tuples with undefined arguments are also generated. For Example 4 in Figure 1, the following candidate tuples will be generated for MOVELINK classification:
• < trigger:biked, arg1:I, arg2:store >
• < trigger:biked, arg1:I, arg2:home >
• < trigger:biked, arg1:I, arg2:∅ >
• < trigger:biked, arg1:home, arg2:store >
• < trigger:biked, arg1:home, arg2:∅ >
• < trigger:biked, arg1:store, arg2:∅ >

Each tuple is represented by three main groups of features outlined in Figure 5. A one-against-all multi-class classifier is then applied to classify each candidate relation tuple into one of three possible classes. Three independent classifiers are trained, one for each spatial relation type, using Vowpal Wabbit (Agarwal et al., 2011). The classes used by the MOVELINK classifier are:

Class 1 - REL(arg1=mover, arg2=goal)
Class 2 - REL(arg1=goal, arg2=mover)
Class 3 - NONE

The classes used by the QSLINK and OLINK classifiers are:

Class 1 - REL(arg1=trajector, arg2=landmark)
Class 2 - REL(arg1=landmark, arg2=trajector)
Class 3 - NONE
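The tuple-generation step just described can be sketched as follows. This is our own illustration, reusing the trigger and arguments from the example above; since the classifier afterwards decides which argument fills which role, unordered argument pairs plus tuples with an undefined second argument reproduce the six example tuples.

from itertools import combinations

def generate_candidates(trigger, elements):
    others = [e for e in elements if e != trigger]
    candidates = [(trigger, a1, a2) for a1, a2 in combinations(others, 2)]
    candidates += [(trigger, a1, None) for a1 in others]   # undefined second argument
    return candidates

# trigger "biked" with spatial elements "I", "home", "store" yields the six tuples above
print(generate_candidates("biked", ["I", "biked", "home", "store"]))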
Evaluation
Setup
Once again, Stanford CoreNLP was used for POS tagging, lemmatization and dependency parsing. The classification models were trained with Vowpal Wabbit's one-against-all multi-class classifier using its online stochastic gradient descent implementation with all the default settings.

Table 4: SpRL-CWW's relation classification results for the highest-performing Setting D.
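As a hedged illustration of this setup, a candidate tuple can be serialized into Vowpal Wabbit's one-against-all input format as below. The namespace letters (t for trigger, a for arguments) and the feature names are our own choices rather than the system's actual representation, and the file and model names in the example commands are placeholders.

def to_vw_line(label, trigger_feats, arg_feats):
    # label in {1, 2, 3}, following the class scheme above (3 = NONE)
    return "%d |t %s |a %s" % (label, " ".join(trigger_feats), " ".join(arg_feats))

print(to_vw_line(1, ["lemma=bike", "pos=VBD"],
                 ["arg1_type=SPATIAL_ENTITY", "arg2_type=PLACE", "arg1_dir=left"]))

# Training and prediction would then use the vw command line, for example:
#   vw --oaa 3 movelink.train -f movelink.model
#   vw -t -i movelink.model movelink.test -p movelink.preds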
Datasets
We evaluated our system on the trial and training data that was released for SpaceEval, with the exception of 9 files that didn't have spatial relations annotated. Since our system focuses on relations with a trigger, we filtered out the relations that contained no trigger. The resulting dataset of 1,801 relations was used for training and evaluation.
Results
Official task results for relation classification are shown in Table 1. Task 1d results use the SEs that were detected in the previous step (Task 1b). Task 3a results are for relation classification using gold spatial elements and signals.
Discussion
Participation in SpaceEval raised several questions, which we attempt to answer by conducting extended evaluation of our system on the SpaceEval training data using 5-fold cross validation (partitions were made by taking a stratified split of the document set when ordered by decreasing size). The settings and results are summarized in Table 3.
Which features were effective?
The feature ablation results in Table 5 show the three features with the largest contribution to SE and SI classification. They verify the contribution of word vectors trained on Web-scale data and support Bastianelli et al. (2013)'s claim that shallow grammatical information is essential.
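A minimal sketch of the ablation procedure as we read it (retrain with one feature group held out at a time and record the f1 delta) is given below; train_and_eval and the feature-group names are placeholders, not part of the released system.

def feature_ablation(feature_groups, train_and_eval):
    full_f1 = train_and_eval(feature_groups)
    deltas = {}
    for group in feature_groups:
        reduced = [g for g in feature_groups if g != group]
        deltas[group] = full_f1 - train_and_eval(reduced)   # drop in f1 when the group is removed
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)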
Does the fine-grained SpaceEval annotation scheme help or hinder?
In order to explore this, we compare the top performing setting with SE type-related features (Setting B) to a setting with them removed (Setting C). Absence of these features decreases the f1-score by 0.044, providing evidence that fine-grained SE types help relation classification, though the relation and spatial role taxonomy requires consideration. Furthermore, each gold Spatial Signal that was provided for Task 3 had one of three possible semantic types: DIRECTIONAL, TOPOLOGICAL, or DIR_TOP (both). Instead of using all Spatial Signals as candidate triggers for QSLINKs and OLINKs, we only considered TOPOLOGICAL Spatial Signals as candidate triggers for QSLINK and DIRECTIONAL Spatial Signals as candidate triggers for OLINK. This setting (Setting D) achieved the highest f1-score and recall, demonstrating the importance of Spatial Signal semantic types in relation classification. Full relation classification results for Setting D are summarized in Table 4.
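Setting D's restriction can be sketched as a simple filter over the gold signals; how DIR_TOP signals are treated (here: allowed to trigger both link types) is our assumption for illustration, not a claim about the exact configuration.

def signal_triggers(signals, link_type):
    wanted = {"QSLINK": {"TOPOLOGICAL", "DIR_TOP"},
              "OLINK": {"DIRECTIONAL", "DIR_TOP"}}[link_type]
    return [token for token, sem_type in signals if sem_type in wanted]

signals = [("on", "TOPOLOGICAL"), ("northwest of", "DIRECTIONAL")]
print(signal_triggers(signals, "QSLINK"))   # ['on']
print(signal_triggers(signals, "OLINK"))    # ['northwest of']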
Is less (or no) feature engineering feasible?
We attempt this by automatically generating features using Vowpal Wabbit's quadratic feature generation. We disable all features underlined in Figure 5 and instruct VW to automatically construct features by generating all possible feature combinations. Settings E and F compare the base feature set before and after quadratic features are added. While quadratic features achieve a lower f1-score, they have the highest precision of all settings, suggesting feature generation may be useful for increasing the precision of relation classification, but the low f1-score of Setting F indicates care is needed in selecting the base feature set. We are exploring feature engineering reduction further with a phrase vector-based model inspired by (Hermann et al., 2014).
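Concretely, the hand-built conjunction features can be dropped and VW asked to cross two namespaces itself. The namespace letters follow the earlier sketch and the file names are placeholders.

import subprocess

# "-q ta" asks VW to generate all quadratic combinations of features in the
# trigger (t) and argument (a) namespaces, in place of hand-crafted conjunctions
subprocess.run(["vw", "--oaa", "3", "-q", "ta",
                "movelink.train", "-f", "movelink_quadratic.model"], check=True)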
Conclusion
In this paper we presented the SpRL-CWW entry to SpaceEval 2015: Task 8. Official evaluation showed that it performed especially well on unannotated data. Extended evaluation verified the contribution of Web-scale word vectors, trigger dictionaries, and SE type information; and automatic feature generation showed promise. For future work, we plan to explore phrase vector-based approaches to SpRL.
Figure 1: Example relations from the SpaceEval shared task. Only annotations that are targets are shown.

Figure 2: SpaceEval task configurations participated in by SpRL-CWW.
1. Only Unannotated Text is Provided
   a. SE: precision, recall, and F1
   b. SE: precision, recall, and F1 for each type, and an overall precision, recall, and F1
   d. MOVELINK, QSLINK, OLINK: precision, recall, and F1
   e. MOVELINK, QSLINK, OLINK: precision, recall, and F1 for each attribute, and an overall precision, recall, and F1
3. Spatial Elements, their Types, and their Attributes are Provided
   a. MOVELINK, QSLINK, OLINK: precision, recall, and F1
   b. MOVELINK, QSLINK, OLINK: precision, recall, and F1 for each attribute, and an overall precision, recall, and F1

Figure 4: Features for spatial element/signal detection for the sentence "Saitama is northwest of Tokyo."
EF.1 Raw string in a 5-word window (i.e. Saitama is northwest of Tokyo)
EF.2 Lemma in a 5-word window (i.e. Saitama be northwest of Tokyo)
EF.3 POS in a 5-word window (i.e. NNP VBZ RB IN NNP)
EF.4 Named Entity in a 5-word window (i.e. LOC NONE NONE NONE LOC)
EF.5 Lemma concatenated with the POS in a 3-word window (i.e. be::VBZ northwest::RB of::IN)
EF.6 Named Entity concatenated with the POS in a 3-word window (i.e. NONE::VBZ NONE::RB NONE::IN)
EF.7 Direct dependency on the head of the sentence if present (i.e. advmod)
EF.8 Direct dependency on the head of the sentence concatenated with the lemma of the head (i.e. advmod::be)
EF.9 300-dimension GloVe word vector
EF.10 POS bigrams for a 5-word window (i.e. NNP_VBZ VBZ_RB RB_IN IN_NNP)
EF.11 Raw string n-grams for a 3-word window (i.e. is_northwest northwest_of)

Figure 5: Features for joint spatial relation classification and role labeling. Underlined features are withheld from quadratic feature Settings D and E of Table 3.
Features representing the extracted trigger:
RF.1 Raw string
RF.2 Lemma
RF.3 POS
RF.4 RF.2 concatenated with RF.3
Features representing each of the two arguments:
RF.5 Raw string
RF.6 Lemma
RF.7 POS
RF.8 RF.6 concatenated with RF.7
RF.9 Spatial element type (i.e. Place, Path, etc.)
RF.10 RF.9 of each argument concatenated together
RF.11 RF.10 concatenated with RF.2
RF.12 Direction of the argument with respect to the extracted trigger (i.e. left/right)
RF.13 RF.12 of each argument concatenated together
RF.14 RF.13 concatenated with RF.2
RF.15 Boolean value representing whether there are other spatial elements in between the argument and the extracted trigger
RF.16 RF.15 of each argument concatenated together
RF.17 Dependency path between the argument and the extracted trigger (i.e. ↑conj ↓dep ↓nsubj)
RF.18 RF.17 of each argument concatenated together
RF.19 Dependency path between the two arguments
RF.20 Length of the dependency path between the argument and the extracted trigger
RF.21 Bag-of-words of tokens in between the argument and the extracted trigger
RF.22 Number of tokens in between the argument and the extracted trigger
RF.23 RF.22 of each argument added together
RF.24 Boolean value representing whether either of the arguments are null values
Features representing the spatial elements that are directly to the left and to the right of the trigger:
RF.25 Raw string
RF.26 Lemma
RF.27 POS
RF.28 RF.26 concatenated with RF.27
RF.29 Number of tokens in between the spatial element and the extracted trigger

Table 2: Spatial Element/Signal detection results on training data and test data. Results are reproduced independently of official evaluation.

Table 5: The three spatial element classification features with the largest delta in feature ablation.
Acknowledgments

This research was supported by Honda Research Institute Japan, Co., Ltd. We thank an anonymous reviewer for the suggestion to use Spatial Signal semantic types, and we also thank the anonymous reviewers for their many fruitful suggestions.
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. 2011. A reliable effective terascale linear learning system. CoRR, abs/1110.4198.
Emanuele Bastianelli, Danilo Croce, Roberto Basili, and Daniele Nardi. 2013. UNITOR-HMM-TK: Structured kernel-based learning for spatial role labeling. In Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013).
Charles J. Fillmore, Christopher R. Johnson, and Miriam R. L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16.3:235-250.
Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of ACL.
Oleksandr Kolomiyets, Parisa Kordjamshidi, Steven Bethard, and Marie-Francine Moens. 2013. SemEval-2013 task 3: Spatial role labeling. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 255-266, Atlanta, USA.
Parisa Kordjamshidi and Marie-Francine Moens. 2014. Global machine learning for spatial ontology population. Web Semantics: Science, Services and Agents on the World Wide Web.
Parisa Kordjamshidi, Martijn van Otterlo, and Marie-Francine Moens. 2011. Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Transactions on Speech and Language Processing (TSLP), 8(3):4.
Parisa Kordjamshidi, Steven Bethard, and Marie-Francine Moens. 2012. SemEval-2012 task 3: Spatial role labeling. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 365-373.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Lluís Màrquez, Xavier Carreras, Kenneth C. Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: an introduction to the special issue. Computational Linguistics, 34(2):145-159.
Naoaki Okazaki. 2007. CRFsuite: a fast implementation of conditional random fields (CRFs). http://www.chokkan.org/software/crfsuite/.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.
James Pustejovsky, Jessica Moszkowicz, and Marc Verhagen. 2012. A linguistically grounded annotation language for spatial information. TAL, 53(2).
Kirk Roberts and Sanda Harabagiu. 2012. UTD-SpRL: A joint approach to spatial role labeling. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 419-424.
Magnus Sahlgren. 2006. The Word-Space Model. Ph.D. thesis, University of Stockholm (Sweden).
9,389,310 | Toward hierarchical models for statistical machine translation of inflected languages | In statistical machine translation, correspondences between the words in the source and the target language are learned from bilingual corpora on the basis of so called alignment models. Existing statistical systems for MT often treat different derivatives of the same lemma as if they were independent of each other. In this paper we argue that a better exploitation of the bilingual training data can be achieved by explicitly taking into account the interdependencies of the different derivatives. We do this along two directions: Usage of hierarchical lexicon models and the introduction of equivalence classes in order to ignore information not relevant for the translation task. The improvement of the translation results is demonstrated on a German-English corpus. | [
2667234,
13442531,
2650085,
8122565,
9717543,
651085,
5284722,
5849588
] | Toward hierarchical models for statistical machine translation of inflected languages
Sonja Nießen
Lehrstuhl für Informatik VI
Computer Science Department
RWTH Aachen -University of Technology
D-52056 Aachen, Germany
Hermann Ney
Lehrstuhl für Informatik VI
Computer Science Department
RWTH Aachen -University of Technology
D-52056 Aachen, Germany
Toward hierarchical models for statistical machine translation of inflected languages
In statistical machine translation, correspondences between the words in the source and the target language are learned from bilingual corpora on the basis of so called alignment models. Existing statistical systems for MT often treat different derivatives of the same lemma as if they were independent of each other. In this paper we argue that a better exploitation of the bilingual training data can be achieved by explicitly taking into account the interdependencies of the different derivatives. We do this along two directions: Usage of hierarchical lexicon models and the introduction of equivalence classes in order to ignore information not relevant for the translation task. The improvement of the translation results is demonstrated on a German-English corpus.
Introduction
The statistical approach to machine translation has become widely accepted in the last few years. It has been successfully applied to realistic tasks in various national and international research programs. However, in many applications only small amounts of bilingual training data are available for the desired domain and language pair, and it is highly desirable to avoid at least parts of the costly data collection process. Some recent publications have dealt with the problem of translation with scarce resources. (Brown et al., 1994) describe the use of dictionaries. (Al-Onaizan et al., 2000) report on an experiment of Tetun-to-English translation by different groups, including one using statistical machine translation. They assume the absence of linguistic knowledge sources such as morphological analyzers and dictionaries. Nevertheless, they found that the human mind is very well capable of deriving dependencies such as morphology, cognates, proper names, spelling variations etc., and that this capability was finally at the basis of the better results produced by humans compared to corpus-based machine translation. The additional information results from complex reasoning and it is not directly accessible from the full word form representation of the data.
In this paper, we take a different point of view: Even if full bilingual training data is scarce, monolingual knowledge sources like morphological analyzers and data for training the target language model as well as conventional dictionaries (one word and its translation per entry) may be available and of substantial usefulness for improving the performance of statistical translation systems. This is especially the case for highly inflected languages like German.
We address the question of how to achieve a better exploitation of the resources for training the parameters for statistical machine translation by taking into account explicit knowledge about the languages under consideration. In our approach we introduce equivalence classes in order to ignore information not relevant to the translation process. We furthermore suggest the use of hierarchical lexicon models.
The paper is organized as follows. After reviewing the statistical approach to machine translation, we first explain our motivation for examining the morphological characteristics of an inflected language like German. We then describe the chosen output representation after the analysis and present our approach for exploiting the information from morpho-syntactic analysis. Experimental results on the German-English Verbmobil task are reported.
In the experiments reported in this paper, the source language is German and the target language is English. Every English string is considered as a possible translation for the input. Several statistical translation systems (Waibel, 1997; Nießen et al., 1998; Och and Weber, 1998) make use of a special way of structuring the string translation model as proposed by (Brown et al., 1993): the correspondence between the words in the source and the target string is described by alignments which assign one target word position to each source word position. The lexicon probability p(e|f) of a certain English word e is assumed to depend basically only on the source word f aligned to it.
The overall architecture of the statistical translation approach is depicted in Figure 1. In this figure we already anticipate the fact that we can transform the source strings in a certain manner.
Basic Considerations
The parameters of the statistical knowledge sources mentioned above are trained on bilingual corpora. In general, the resulting probabilistic lexica contain all word forms occurring in the training corpus as separate entries, without taking into account whether or not they are derivatives of the same lemma. Bearing in mind that 40% of the word forms have been seen only once in training (see Table 2), it is obvious that learning the correct translations is difficult for many words. Besides, new input sentences are expected to contain unknown word forms, for which no translation can be retrieved from the lexica. As Table 2 shows, this problem is especially relevant for highly inflected languages like German: texts in German contain many more different word forms than their English translations. The table also reveals that these words are often derived from a much smaller set of base forms ("lemmata"); when we look at the number of different lemmata and the respective number of lemmata for which there is only one occurrence in the training data, the German and English texts resemble each other much more closely. Another aspect is the fact that conventional dictionaries are often available in electronic form for the language pair under consideration. Their usability for statistical machine translation is restricted because they are substantially different from full bilingual parallel corpora, inasmuch as the entries are often pairs of base forms that are translations of each other, whereas the corpora contain full sentences with inflected forms. Making the information taken from external dictionaries more useful for the translation of inflected languages is therefore an interesting objective.
As a consequence of these considerations, we aim at taking into account the interdependencies between the different derivatives of the same base form.
Output Representation after Morpho-syntactic Analysis
We use GERCG, a constraint grammar parser for German for lexical analysis and morphological and syntactic disambiguation. For a description of the Constraint Grammar approach we refer the reader to (Karlsson, 1990). Figure 2 gives an example of the information provided by this tool.
Input: Wir wollen nach dem Essen nach Essen aufbrechen

"<*wir>"        "wir"        *PRON PERS PL1 NOM
"<wollen>"      "wollen"     V IND PRÄS PL1
"<nach>"        "nach"       pre PRÄP Dat
"<dem>"         "das"        ART DEF SG DAT NEUTR
"<*essen>"      "*essen"     S NEUTR SG DAT
"<nach>"        "nach"       pre PRÄP Dat
"<*essen>"      "*essen"     S EIGEN NEUTR SG DAT
                "*esse"      S FEM PL DAT
                "*essen"     S NEUTR PL DAT
                "*essen"     S NEUTR SG DAT
"<aufbrechen>"  "aufbrechen" V INF

Figure 2: Sample analysis of a German sentence.

A full word form is represented by the information provided by the morpho-syntactic analysis: from the interpretation "gehen-V-IND-PRÄS-SG1", i.e. the lemma plus part of speech plus the other tags, the word form "gehe" can be restored. From Figure 2 we see that the tool can quite reliably disambiguate between different readings: it infers, for instance, that the word "wollen" is a verb in the indicative present first person plural form. Without any context taken into account, "wollen" has other readings; it can even be interpreted as derived not from a verb, but from an adjective with the meaning "made of wool". In this sense, the information inherent to the original word forms is augmented by the disambiguating analyzer. This can be useful for deriving the correct translation of ambiguous words.
In the rare cases where the tools returned more than one reading, it is often possible to apply simple heuristics based on domain specific preference rules or to use a more general, non-ambiguous analysis.
The new representation of the corpus where full word forms are replaced by lemma plus morphological and syntactic tags makes it possible to gradually reduce the information: For example we can consider certain instances of words as equivalent. We have used this fact to better exploit the bilingual training data along two directions: Omitting unimportant information and using hierarchical translation models.
Equivalence classes of words with similar Translations
Inflected forms of words in the input language contain information that is not relevant for translation. This is especially true for the task of translating from a highly inflected language like German into English: in bilingual German-English corpora, the German part contains many more different word forms than the English part (see Table 2). It is useful for the process of statistical machine translation to define equivalence classes of word forms which tend to be translated by the same target language word, because then the resulting statistical translation lexica become smoother and the coverage is significantly improved. We construct these equivalence classes by omitting that information from the morpho-syntactic analysis which is not relevant for the translation task. The representation of the corpus as provided by the analyzing tools helps to identify - and access - the unimportant information. The definition of relevant and unimportant information, respectively, depends on many factors such as the languages involved, the translation direction and the choice of the models.
Linguistic knowledge can provide information about which characteristics of an input sentence are crucial to the translation task and which can be ignored, but it is desirable to find a method for automating this decision process. We found that the impact on the end result due to different choices of features to be ignored was not large enough to serve as a reliable criterion. Instead, one could define a likelihood criterion on a held-out corpus for this purpose. Another possibility is to assess the impact on the alignment quality after training, which can be evaluated automatically (Langlais et al., 1998), but as we found that the alignment quality on the Verbmobil data is consistently very high, and extremely robust against manipulation of the training data, we abandoned this approach.
We resorted to detecting candidates from the probabilistic lexica trained for translation from German to English. For this, we focussed on those derivatives of the same base form, which resulted in the same translation. For each set of tags, we counted how often an additional tag could be replaced by a certain other tag without effect on the translation. Table 1 gives some of the most frequently identified candidates to be ignored while translating: The gender of nouns is irrelevant for their translation (which is straightforward, because the gender is unambiguous for a certain noun) and the case, i.e. nominative, dative, accusative. For the genitive forms, the translation in English differs. For verbs we found the candidates number and person. That is, the translation of the first person singular form of a verb is often the same as the translation of the third person plural form, for example. As a consequence, we dropped those tags, which were most often identified as irrelevant for translation from German to English.
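The candidate detection described above can be illustrated with a small sketch. The following Python snippet is our own hypothetical illustration (not the authors' implementation): given a probabilistic lexicon that maps analysed German forms to their most likely English translation, it counts how often swapping one tag value for another leaves the translation unchanged, and returns tag pairs that never changed the translation.

```python
# Hypothetical sketch of detecting tags irrelevant for translation.
# lexicon: dict mapping an analysed form such as 'ankommen-V-IND-PRÄS-SG-1'
# to its best English translation; the format is an assumption for illustration.
from collections import Counter
from itertools import combinations

def candidate_tags_to_drop(lexicon, min_pairs=5):
    votes = Counter()    # (tag_a, tag_b) -> swaps that kept the translation
    trials = Counter()   # (tag_a, tag_b) -> swaps observed in total
    by_rest = {}
    for form, translation in lexicon.items():
        lemma, *tags = form.split("-")
        for i, tag in enumerate(tags):
            # analysis with tag i removed: entries sharing this "rest" differ
            # only in the value of one tag
            rest = (lemma, tuple(tags[:i] + tags[i + 1:]))
            by_rest.setdefault(rest, []).append((tag, translation))
    for variants in by_rest.values():
        for (tag_a, tr_a), (tag_b, tr_b) in combinations(variants, 2):
            key = tuple(sorted((tag_a, tag_b)))
            trials[key] += 1
            if tr_a == tr_b:
                votes[key] += 1
    # tag pairs observed often enough and always interchangeable w.r.t. translation
    return [key for key, n in trials.items() if n >= min_pairs and votes[key] == n]
```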
Hierarchical Models
One way of taking into account the interdependencies of different derivatives of the same base form is to introduce equivalence classes $C_l(f)$ at various levels of abstraction $l$, starting with the inflected form and ending with the lemma.

Consider, for example, the German verb form $f$ = "ankomme", which is derived from the lemma "ankommen" and which can be translated into English by $e$ = "arrive". The hierarchy of equivalence classes is as follows, where $L$ is the maximal number of morpho-syntactic tags: $C_0(f)$ contains only the full form "ankomme"; $C_1(f)$ contains the forms "ankomme", "ankommst" and "ankommt"; in $C_2(f)$ the number (SG or PL) is ignored as well, and so on. The largest equivalence class $C_L(f)$ contains all derivatives of the infinitive "ankommen".

We can now define the lexicon probability of a word $f$ to be translated by $e$ with respect to the level $l$:

$$p_l(e \mid f) = \sum_{\hat{f}_l} p(\hat{f}_l \mid f)\, p(e \mid \hat{f}_l) \qquad (1)$$

where $\hat{f}_l$ ranges over the possible level-$l$ interpretations of $f$. The hierarchical lexicon is then obtained by linear interpolation over the levels:

$$p(e \mid f) = \lambda_0\, p_0(e \mid f) + \cdots + \lambda_L\, p_L(e \mid f) \qquad (2)$$
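To make the interpolation in Equation (2) concrete, here is a minimal Python sketch. The data structures, class mappings and weights are illustrative assumptions, not the original implementation; the point is only how per-level lexica are combined.

```python
# Minimal sketch of a hierarchical (interpolated) lexicon in the spirit of
# Equations (1) and (2). level_maps[l] maps a full form f to its level-l
# equivalence class; level_lexica[l][(e, class_l)] holds p_l(e | class_l).
def hierarchical_lexicon_prob(e, f, level_maps, level_lexica, lambdas):
    assert abs(sum(lambdas) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    total = 0.0
    for lam, to_class, lex in zip(lambdas, level_maps, level_lexica):
        cls = to_class(f)            # e.g. strip person/number tags, or map to the lemma
        total += lam * lex.get((e, cls), 0.0)
    return total

# Example with two levels: full forms and lemmata (lambda_0 = lambda_1 = 1/2).
# The entries below are made-up numbers for illustration only.
level_maps = [lambda f: f, lambda f: f.split("-")[0]]
level_lexica = [
    {("arrive", "ankomme-V-IND-PRÄS-SG-1"): 0.7},   # full-form lexicon
    {("arrive", "ankommen"): 0.4},                    # lemma-level lexicon
]
print(hierarchical_lexicon_prob("arrive", "ankomme-V-IND-PRÄS-SG-1",
                                level_maps, level_lexica, [0.5, 0.5]))
```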
Translation Experiments
Experiments were carried out on Verbmobil data, which consists of spontaneously spoken dialogs in the appointment scheduling domain (Wahlster, 1993). German source sentences are translated into English.
Treatment of Ambiguity
Common bilingual corpora normally contain full sentences which provide enough context information for ruling out all but one reading for an inflected word form. To reduce the remaining uncertainty, we have implemented preference rules. For instance, we assume that the corpus is correctly true-case-converted beforehand and as a consequence, we drop non-noun interpretations of uppercase words. Besides, we prefer indicative verb readings instead of subjunctive or imperative. For the remaining ambiguities, we resort to the unambiguous parts of the readings, i.e. we drop all tags causing mixed interpretations. There are some special problems with the analysis of external lexica, which do not provide enough context to enable efficient disambiguation. We are currently implementing methods for handling this special situation.
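The preference heuristics described above can be sketched as a small reading-selection routine. The following is a hedged, hypothetical illustration (the tag names follow the GERCG output shown in Figure 2; the original implementation may differ):

```python
# Illustrative sketch of disambiguation heuristics for multiple analyser readings.
# readings: list of (lemma, [tags]) pairs returned by the analyser for one token.
def disambiguate(token, readings):
    cands = readings
    # Corpus assumed true-case-converted: for uppercase tokens keep noun readings
    # ('S' is the noun tag in the GERCG-style output above).
    if token[:1].isupper():
        nouns = [r for r in cands if "S" in r[1]]
        cands = nouns or cands
    # Prefer indicative verb readings over subjunctive or imperative ones.
    indicative = [r for r in cands if "V" not in r[1] or "IND" in r[1]]
    cands = indicative or cands
    if len(cands) == 1:
        return cands[0]
    # Remaining ambiguity: keep only the tags shared by all surviving readings
    # (the lemma of the first reading is kept arbitrarily in this sketch).
    lemma = cands[0][0]
    common = set(cands[0][1]).intersection(*(set(r[1]) for r in cands[1:]))
    return (lemma, [t for t in cands[0][1] if t in common])
```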
It can be argued that it would be more elegant to leave the decision between different readings, for instance, to the overall decision process in search. We plan this integration for the future.
Performance Measures
We use the following evaluation criteria:
- SSER (subjective sentence error rate): each translated sentence is judged by a human examiner according to an error scale from 0.0 (semantically and syntactically correct) to 1.0 (completely wrong).
- ISER (information item semantic error rate): the test sentences are segmented into information items; for each of them, the translation candidates are assigned either "ok" or an error class. If the intended information is conveyed, the error count is not increased, even if there are slight syntactical errors which do not seriously deteriorate the intelligibility.
(Footnote 1: The probability functions are defined to return zero for impossible interpretations of a word form.)
Translation Results
The training set consists of 58 322 sentence pairs. Table 2 summarizes the characteristics of the training corpus used for training the parameters of Model 4 proposed in (Brown et al., 1993). Testing was carried out on 200 sentences not contained in the training data; detailed statistics are given in Table 3. We used a translation system called the "single-word based approach", which is described and compared to other approaches elsewhere.
Lexicon Combination
So far we have performed experiments with hierarchical lexica in which two levels are combined, i.e. the number of levels in Equation (2) is set to 1. The interpolation weights λ_0 and λ_1 are set to 1/2, and p(e | f̂_1) is modeled as a uniform distribution over all derivatives of the lemma f̂_1 occurring in the training data, plus the base form itself in case it is not contained. The process of lemmatization is unique in the majority of cases, and as a consequence the sum in Equation (1) is not needed for a two-level lexicon combination of full word forms and lemmata.
As the results summarized in Table 4 show, the combined lexicon outperforms the conventional one-level lexicon. As expected, the quality gain achieved by smoothing the lexicon is larger if the training procedure can take advantage of an additional conventional dictionary to learn translation pairs, because these dictionaries typically only contain base forms of words, whereas translations of fully inflected forms are needed in the test situation.
Examples taken from the test set are given in Figure 3. Smoothing the lexicon entries over the derivatives of the same lemma enables the translation of "sind" by "would" instead of "are". The smoothed lexicon contains the translation "convenient" for any derivative of "bequem". The comparative "more convenient" would be the completely correct translation.
Equivalence classes
As already mentioned, we resorted to choosing one single reading for each word by applying some heuristics (see Section 7.1). For the normal training corpora, unlike additional external dictionaries, this is not critical, because they contain predominantly full sentences which provide enough context for efficient disambiguation. We are currently working on the problem of analyzing the entries in conventional dictionaries, but for the time being, the experiments on equivalence classes have been carried out using only bilingual corpora for estimating the model parameters. Table 5 shows the effect of the introduction of equivalence classes. The information from the morpho-syntactic analyzer (stems plus tags as described in Section 4) is reduced by dropping unimportant information as described in Section 5. Both error metrics decrease in comparison to using the original corpus with inflected word forms. A reduction of 3.3% in the information item semantic error rate shows that more of the intended meaning can be found in the produced translations. The first two examples in Figure 4 demonstrate the effect of the disambiguating analyzer, which identifies "Hotelzimmer" as singular on the basis of the context (the word itself can represent the plural form as well), and "das" as an article in contrast to a pronoun. The third example shows the advantage of grouping words into equivalence classes: the training data does not contain the word "billigeres", but when generalizing over the gender and case information, a correct translation can be produced.
Conclusion and Future Work
We have presented methods for a better exploitation of the bilingual training data for statistical machine translation by explicitly taking into account the interdependencies of the different derivatives of the same base form. We suggest the usage of hierarchical models as well as an alternative representation of the data in combination with the identification and omission of information not relevant for the translation task.
First experiments demonstrate their general applicability to realistic tasks such as spontaneously spoken dialogs. We expect the described methods to yield a larger improvement in translation quality in cases where much smaller amounts of training data are available.
As there is a large overlap between the events modeled in the combined probabilistic models, we assume that a log-linear combination would yield a larger improvement in translation quality than combination by linear interpolation does. We will investigate this in the future. We also plan to integrate the decision regarding the choice of readings into the search process.
Figure 1: Architecture of the translation approach based on Bayes' decision rule.
Figure 3: Examples for the effect of the combined lexica.
Table 1: Candidates for equivalence classes.
POS         candidates
noun        gender: MASK, FEM, NEUTR; case: NOM, DAT, AKK
verb        number: SG, PL; person: 1, 2, 3
adjective   gender, case and number
number      case
Table 2: Corpus statistics of the Verbmobil training data. Singletons are types occurring only once in training.
                            English    German
no. of running words        550 213    519 790
no. of word forms             4 670      7 940
no. of singletons             1 696      3 452
singletons [%]                   36         43
no. of lemmata                3 875      3 476
no. of singleton lemmata      1 322      1 457
Table 3: Statistics of the Verbmobil test corpus for German-to-English translation. Unknowns are word forms not contained in the training corpus.
no. of sentences                 200
no. of running words           2 055
no. of word forms                385
no. of unknown word forms         25
Table 5: Effect of the introduction of equivalence classes. For the baseline we used the original inflected word forms.
                        SSER [%]   ISER [%]
inflected words             37.4       26.8
equivalence classes         35.9       23.5
Table 4: Effect of the two-level lexicon combination. For the baseline we used the conventional one-level full-form lexicon.
            ext. dictionary   SSER [%]   ISER [%]
baseline    yes                   35.7       23.9
combined    yes                   33.8       22.3
baseline    no                    37.4       26.8
combined    no                    36.9       25.8

Examples for the effect of the combined lexica (cf. Figure 3):
input             sind Sie mit einem Doppelzimmer einverstanden?
baseline          are you agree with a double room?
combined lexica   would you agree with a double room?
input             mit dem Zug ist es bequemer
baseline          by train it is UNKNOWN-bequemer
combined lexica   by train it is convenient
Acknowledgement. This work was partly supported by the German Federal Ministry of Education, Science, Research and Technology under the Contract Number 01 IV 701 T4 (VERBMOBIL).
Figure 4: Examples for the effect of equivalence classes resulting from dropping morpho-syntactic tags not relevant for translation. First the translation using the original representation, then the new representation, its reduced form and the resulting translation.

References

Yaser Al-Onaizan, Ulrich Germann, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Daniel Marcu, and Kenji Yamada. 2000. Translating with scarce resources. In Proceedings of the Seventeenth National Conference on Artificial Intelligence (AAAI), pages 672-678, Austin, Texas, August.
P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. Mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and M. J. Goldsmith. 1994. But dictionaries are data too. In Proc. ARPA Human Language Technology Workshop '93, pages 202-205, Princeton, NJ, March. Distributed as Human Language Technology by San Mateo, CA: Morgan Kaufmann Publishers.
Fred Karlsson. 1990. Constraint grammar as a framework for parsing running text. In Proceedings of the 13th International Conference on Computational Linguistics, volume 3, pages 168-173, Helsinki, Finland.
Philippe Langlais, Michel Simard, and Jean Véronis. 1998. Methods and practical issues in evaluating alignment techniques. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, pages 711-717, Montréal, P.Q., Canada, August.
Hermann Ney, Sonja Nießen, Franz Josef Och, Hassan Sawaf, Christoph Tillmann, and Stephan Vogel. 2000. Algorithms for statistical translation of spoken language. IEEE Transactions on Speech and Audio Processing, 8(1):24-36, January.
Sonja Nießen, Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1998. A DP based search algorithm for statistical machine translation. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, pages 960-967, Montréal, P.Q., Canada, August.
Sonja Nießen, Franz Josef Och, Gregor Leusch, and Hermann Ney. 2000. An evaluation tool for machine translation: Fast evaluation for MT research. In Proceedings of the 2nd International Conference on Language Resources and Evaluation, pages 39-45, Athens, Greece, May.
Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440-447, Hongkong, China, October.
Franz Josef Och and Hans Weber. 1998. Improving statistical natural language translation with categories and rules. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, pages 985-989, Montréal, P.Q., Canada, August.
Christoph Tillmann and Hermann Ney. 2000. Word re-ordering and DP-based search in statistical machine translation. In Proceedings of COLING 2000: The 18th International Conference on Computational Linguistics, pages 850-856, Saarbrücken, Germany, August.
Wolfgang Wahlster. 1993. Verbmobil: Translation of face-to-face dialogs. In Proceedings of the MT Summit IV, pages 127-135, Kobe, Japan.
Ye-Yi Wang and Alex Waibel. 1997. Decoding algorithm in statistical translation. In Proceedings of the ACL/EACL '97, pages 366-372, Madrid, Spain, July. |
29,463,464 | This paper proposes a model that iteratively performs word alignment and subsentential alignment on a paragraph-aligned bilingual corpus, together with a practical implementation. Compared with word alignment algorithms that rely on sentence alignment, the proposed algorithm needs no prior sentence alignment and is therefore more flexible in real applications. Compared with dictionary-based sentence alignment algorithms, it needs no external dictionary and collects and exploits lexical correspondences purely from its own statistics. Experimental results show that the word alignment achieves better precision and recall than K-vec, and that about 77.74% of the subsentential alignments are fully or partially correct. | [
2474505,
10146127,
2144821,
1050709,
14531125
] | 黃子桓 and 高照明 — Department of Computer Science and Information Engineering and Department of Foreign Languages and Literatures, National Taiwan University
A Statistics- and Iteration-Based Chinese-English Word and Subsentential Alignment Algorithm (基於統計與迭代的中英雙語詞及小句對應演算法)
Let E = B^e_1 B^e_2 ... and C = B^c_1 B^c_2 ..., where B^e_i (B^c_i) denotes an English (Chinese) block; we call this segmentation state Ω. Let B^e_i = e_{i,1} e_{i,2} ... and B^c_j = c_{j,1} c_{j,2} ..., where e_{i,k} (c_{j,l}) denotes an English (Chinese) word. Let asso(e, c) denote the association weight between word e and word c (the association measure can be chosen as needed, e.g. MI, t-score or LLR; in our experiments we mainly use MI, with t-score as a filter against low-frequency correspondences). Then

ASSO(Ω) = Σ_i Σ_j asso(e_i, c_j)

is the overall association value under segmentation state Ω. Let new(Ω, i, start_e, end_e, start_c, end_c) denote a new segmentation state in which the i-th block of Ω is split: e_{i,start_e} ... e_{i,end_e} and c_{i,start_c} ... c_{i,end_c} form one aligned block pair, while e_{i,1} ... e_{i,start_e-1} e_{i,end_e+1} ... e_{i,|E_i|} and c_{i,1} ... c_{i,start_c-1} c_{i,end_c+1} ... c_{i,|C_i|} form the other. For state Ω we therefore compute

value = max over 1 ≤ start_e ≤ end_e ≤ |E_i| and 1 ≤ start_c ≤ end_c ≤ |C_i| of ASSO(new(Ω, i, start_e, end_e, start_c, end_c)), for every i = 1, 2, ..., |Ω|.

If value > ASSO(Ω), the split increases the overall association value, and splitting with the corresponding start_e, end_e, start_c, end_c yields a new segmentation state Ω'. If value ≤ ASSO(Ω), no split of the block can increase the overall association value. In the evaluation, a fully correct alignment such as "E_i E_{i+1} correctly corresponds to C_j C_{j+1} C_{j+2}" means that neither E_i E_{i+1} nor C_j C_{j+1} C_{j+2} can be split further into a smaller correct alignment. A partially correct alignment, taking the above correct alignment as an example, is any alignment of a subset of {E_i, E_{i+1}} with a subset of {C_j, C_{j+1}, C_{j+2}}. Anything that falls into neither of these two cases is called an incorrect alignment.
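The split search just defined can be sketched in a few lines. The following Python code is our own illustration of the definition (not the paper's implementation); it exploits the fact that splitting block i only changes block i's contribution to ASSO(Ω), so comparing local contributions is sufficient.

```python
# Illustrative sketch of one refinement step of the ASSO-based splitting model.
def asso_of_pair(e_words, c_words, asso):
    return sum(asso(e, c) for e in e_words for c in c_words)

def best_split(e_block, c_block, asso):
    """e_block, c_block: word lists of one aligned block pair.
    Returns (gain, split) with split = (start_e, end_e, start_c, end_c), 1-based inclusive,
    or (0.0, None) if no split raises the overall association value."""
    base = asso_of_pair(e_block, c_block, asso)
    best_gain, best = 0.0, None
    n, m = len(e_block), len(c_block)
    for se in range(n):
        for ee in range(se, n):
            for sc in range(m):
                for ec in range(sc, m):
                    in_e, in_c = e_block[se:ee + 1], c_block[sc:ec + 1]
                    out_e = e_block[:se] + e_block[ee + 1:]
                    out_c = c_block[:sc] + c_block[ec + 1:]
                    new = (asso_of_pair(in_e, in_c, asso) +
                           asso_of_pair(out_e, out_c, asso))
                    if new - base > best_gain:
                        best_gain, best = new - base, (se + 1, ee + 1, sc + 1, ec + 1)
    return best_gain, best
```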
Another approach that does not require a dictionary is that of Kay and Röscheisen (1993), who use word frequencies (discarding very low- and very high-frequency words) and the distribution of words across the text to build candidate word-alignment and sentence-alignment tables that are repeatedly refined until a relaxation process converges. Like the methods of Gale and Church (1991) and Brown et al. (1991), Kay and Röscheisen's approach is corpus-driven. For clause alignment, Kit et al. (2004) use a glossary of bilingual legal texts and a bilingual dictionary, together with appropriate punctuation conversion and number conversion (e.g. Arabic versus Roman numerals), and design a scoring function that combines all of this information into a similarity measure; with it they reach 94.6% clause alignment accuracy. The main reason Kit et al. (2004) obtain such high clause-alignment accuracy from lexical information is that the corpus consists of bilingual legal documents, a legal-terminology dictionary is used, and the numbers identifying legal provisions recur throughout such documents. Our earlier experiments (林與高, 2004) showed that for ordinary Chinese-English articles, using a bilingual dictionary plus number and punctuation cues yields less than 90% accuracy even for sentence-level alignment, so clause-level accuracy certainly cannot reach the level reported by Kit et al. (2004). Wu et al. (2004) propose subsentential alignment based on sentence length and punctuation, augmented with cognate information shared by the two languages (such as identical numerals); using the records of the Hong Kong Legislative Council as experimental data they reach 98% accuracy. In association-based bilingual word alignment, word frequency plays a key role: whether MI, t-score or LLR is used to assess the association between two words, the computation relies on frequency information.

The K-vec algorithm of Fung and Church (1994) divides each side of the bilingual corpus into K equal blocks; every word records whether it occurs in each of the K blocks, forming a K-dimensional vector (v_1, v_2, ..., v_K) with v_i ∈ {0, 1}. For every pair of words from the two languages, the individual frequencies and the frequency of co-occurrence in the same block are computed from the two vectors, and MI (mutual information) measures their association. Because MI yields extremely large values for very rare words, which severely harms reliability, the values are corrected with t-score: given a constant threshold, results whose t-score falls below the threshold are ignored, which greatly improves the reliability of MI. Since K-vec has to cut the bilingual corpus into K blocks, a wrong segmentation degrades the results, so Fung and McKeown (1994) propose DK-vec to address this problem. In DK-vec every word keeps two vectors: a position vector recording all positions at which the word occurs in the corpus, and a recency vector recording the distances between consecutive positions. Plotting the position values on the horizontal axis and the recency values on the vertical axis and connecting adjacent points yields a sampled function curve in a 2-D coordinate system; with Dynamic Time Warping, a pattern-matching technique, the similarity between the curves of two words can be computed, and this similarity gives their association value. Melamed's (1998) competitive linking algorithm is a word alignment algorithm that presupposes correct sentence alignment: for every pair of words in an aligned sentence pair, it measures their association with LLR (log-likelihood ratio). Research on bilingual sentence alignment began in the early 1990s. Gale and Church (1991) and Brown et al. (1991) observed that long sentences tend to have long translations and short sentences short ones; using the correlation of sentence lengths with dynamic programming or the EM algorithm, they obtained accuracies above 96%. The main difference between Gale and Church (1991) and Brown et al. (1991) is that the former obtain the prior probabilities manually while the latter estimate the parameters with the EM algorithm. Wu (1994) and Xu and Tan (1996) combine sentence length with a small lexicon containing dates, numbers, and so on. Brown et al. (1991) use the Canadian Hansard English-French parallel corpus, and Wu (1994) uses the Chinese-English parallel corpus of Hong Kong Legislative Council interpellations and replies; being spoken records, the sentences are short and many correspondences are one-to-one. Gale and Church (1991) report that more than 80% of the Hansard corpus consists of one-to-one correspondences, with few many-to-many correspondences, insertions or deletions, which is why length-based statistical methods work so well there. But McEnery and Oakes (1996), experimenting with such methods, found otherwise, as discussed in Section 2.2 below.

1 Introduction
Language translation plays a very important role in the transfer of information. In the past, translation was done almost entirely by hand. With the progress of computer science, computing power has increased enormously and many relevant theories and algorithms have been proposed, so how to use computers for automatic translation has become an important research topic. In much machine translation research, word alignment is an indispensable step, and its accuracy often has a decisive influence on the translation result. Traditionally, word correspondences were built manually; a bilingual dictionary is such a manually built word-alignment database. Manual construction, however, is slow and labor-intensive and cannot keep up with the growth of new words, and dictionaries have their limits: no dictionary, however complete, can cover all bilingual word correspondences. Moreover, a large amount of bilingual machine-readable data is now available on the Web, so with research data so abundant, building word alignments automatically by computer is likewise an important research direction.

Among existing studies on automatic word alignment, many are based on corpora with correct sentence alignment and have achieved good results. However, obtaining correct sentence alignment is not easy: manual annotation is slow and costly, and in realistic settings correct sentence alignment is not guaranteed. For automatic sentence alignment algorithms, accuracy varies greatly across languages and text types (McEnery and Oakes, 1996) and still falls short of what wide application requires. From the point of view of word alignment, correct word alignment helps sentence alignment; from the point of view of sentence alignment, correct sentence alignment likewise benefits word alignment. Sentence alignment and word alignment are thus a chicken-and-egg problem. This study investigates how to perform sentence alignment and word alignment simultaneously on the same corpus so that each improves the accuracy of the other. We choose a Chinese-English corpus with correct paragraph alignment, for the following reasons:

1. Owing to translation practice, insertions and deletions are common at the sentence level but rare at the paragraph level. A typical translated article is therefore either already correctly sentence-aligned or can be brought to correct paragraph alignment with very little work, which is very helpful for real applications.

2. Because of the structure of written language, paragraphs are usually separated by delimiters that are easy for machines to process, so automatic paragraph segmentation can be 100% correct. With a correctly paragraph-aligned corpus, segmentation therefore almost never fails.
2 Related Work

2.1 Word alignment algorithms

As described above, Melamed's (1998) competitive linking algorithm scores word pairs with LLR. Once the association weights of all word pairs in an aligned sentence pair have been computed, all bilingual word pairs are sorted by association weight in descending order; pairs are taken in turn, and if neither word has yet been linked to another word, the two words are linked, otherwise the pair is skipped and the next pair is processed, until all pairs have been handled.
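As a concrete illustration of the K-vec association computation discussed in the related-work survey above, here is a small Python sketch of our own (not the original implementation): each word gets a K-dimensional occurrence vector, word pairs are scored with mutual information, and a t-score threshold filters unreliable pairs, following the description in Fung and Church (1994).

```python
# Minimal sketch of K-vec-style MI + t-score scoring over K parallel blocks.
import math

def kvec_scores(eng_blocks, chi_blocks, t_min=1.65):
    """eng_blocks / chi_blocks: K parallel blocks, each a list of tokens."""
    K = len(eng_blocks)
    assert K == len(chi_blocks)

    def vectors(blocks):
        vecs = {}
        for i, block in enumerate(blocks):
            for w in set(block):
                vecs.setdefault(w, [0] * K)[i] = 1
        return vecs

    ev, cv = vectors(eng_blocks), vectors(chi_blocks)
    scores = {}
    for e, ve in ev.items():
        for c, vc in cv.items():
            a = sum(x & y for x, y in zip(ve, vc))      # co-occurrences in the same block
            if a == 0:
                continue
            pe, pc, pec = sum(ve) / K, sum(vc) / K, a / K
            t = (pec - pe * pc) / math.sqrt(pec / K)    # t-score filter
            if t >= t_min:
                scores[(e, c)] = math.log2(pec / (pe * pc))   # mutual information
    return scores
```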
2.2 Sentence alignment algorithms

As noted above, length-based methods reach accuracies above 96%. The advantage of length-based statistical methods is that they operate without linguistic knowledge or dictionaries. The disadvantage is that accuracy drops sharply when the corpus contains many many-to-many sentence correspondences or when insertions and deletions occur in the translation. Most of the studies mentioned above use parliamentary records, e.g. Gale and Church (1991) and Brown et al. (1991). However, experiments by McEnery and Oakes (1996) with the Gale and Church (1991) method show that the accuracy of such algorithms varies greatly with text type and language: for Polish-English parallel corpora the accuracy ranges from 100% down to 64.4% depending on the genre, and for the Chinese-English news parallel corpus they tested it is below 55%, demonstrating that sentence-length correlation alone clearly cannot achieve high accuracy.
For the splitting model defined earlier, if none of the candidate splits can increase the overall association value, the block need not be split further. Repeating this step, the number of blocks keeps growing until no block can be split any further.

The algorithm is as follows:
1. Use the paragraph alignment of the bilingual corpus as the initial segmentation state.
2. In the current segmentation state Ω, attempt a split of every block and record the new segmentation as Ω'.
3. If |Ω| = |Ω'|, stop; otherwise go back to step 2.
4. Use the current segmentation state to compute the association weights between words and output the result.

3.2 A word and subsentential alignment algorithm for paragraph-aligned parallel corpora

If special constraints are added to the above algorithm, the freedom of block splitting is reduced and specific kinds of blocks are obtained. For example, if we require that the word preceding start_{e|c} must be a sentence delimiter (such as a period, question mark or exclamation mark) and that the word following end_{e|c} must also be a sentence delimiter, the resulting block pairs take the form of sentence or multi-sentence alignments. In other words, the algorithm is simultaneously a word alignment and a sentence alignment algorithm.

3.3 Acceleration and implementation

In the above algorithm, the ASSO value has to be computed for every possible split, i.e. a K-vec-like computation has to be run for every candidate split, so the computational complexity is very high; implementation is not difficult, but the running time is very long. Adding the constraints on start_{e|c} and end_{e|c} effectively reduces the number of candidate splits, yet the running time remains considerable, which hinders wide application. We therefore propose an acceleration.

In the framework above, recomputing the ASSO value for every candidate split clearly costs too much. The reason for recomputing ASSO is that it is a sufficiently good, trustworthy measure for judging the quality of the current segmentation. The key to acceleration is therefore a new evaluation measure that satisfies the following conditions:
1. It is as trustworthy as ASSO.
2. Its computational complexity is low.

Our proposed new evaluation measure is described as follows:
Within a block, designated punctuation marks (e.g. periods and question marks) can be used to split the block further into smaller blocks, each possibly containing one or more sentences, called sub-blocks. Let B^e = S^e_1 S^e_2 ... S^e_m and B^c = S^c_1 S^c_2 ... S^c_n, where B^e and B^c are one of the aligned block pairs of the bilingual corpus and S^{e|c}_i denotes a sub-block. Let W^{e|c}_i = {w^{e|c}_{i,1}, w^{e|c}_{i,2}, ...} denote the set of distinct words occurring in sub-block S^{e|c}_i. Define

score(S^e_i, S^c_j) = Σ_{e ∈ W^e_i} Σ_{c ∈ W^c_j} asso(e, c)

where asso(e, c) is defined as before. Then by computing

(max_e, max_c) = argmax_{1 ≤ i ≤ |B^e|, 1 ≤ j ≤ |B^c|} score(S^e_i, S^c_j)

we know that, under the current conditions, aligning S^e_{max_e} with S^c_{max_c} is the most trustworthy choice; S^e_{max_e} and S^c_{max_c} are therefore extracted to form a new block pair. This step is repeated for every block until the total number of blocks no longer changes.

asso(e, c) is computed from the word alignment association weights obtained so far, and the word alignment weights are in turn re-estimated from the new segmentation state; alternating between the two converges, i.e. both eventually stop changing. Because splits are made in units of punctuation-delimited sub-blocks, a block containing only a single sub-block cannot be split further, so the algorithm is guaranteed to converge. The initial paragraph alignment provides the association weights for the first iteration and also guarantees that the initial block alignment is completely correct.
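To make the accelerated step concrete, here is a small Python sketch of our own (the data structures are illustrative assumptions): it scores every English/Chinese sub-block pair of one aligned block and pulls out the highest-scoring pair as a new block, leaving the remaining material as the other block.

```python
# Illustrative sketch of the greedy sub-block extraction of Section 3.3.
def extract_best_subblock_pair(e_subblocks, c_subblocks, asso):
    """e_subblocks / c_subblocks: lists of sub-blocks, each a list of words.
    asso: function giving the association weight of an (English, Chinese) word pair."""
    def score(se, sc):
        return sum(asso(e, c) for e in set(se) for c in set(sc))

    best = max(((score(se, sc), i, j)
                for i, se in enumerate(e_subblocks)
                for j, sc in enumerate(c_subblocks)), default=None)
    if best is None:
        return None
    _, i, j = best
    new_pair = (e_subblocks[i], c_subblocks[j])
    # everything outside the extracted sub-blocks stays together as the other pair
    rest_pair = ([w for k, sb in enumerate(e_subblocks) if k != i for w in sb],
                 [w for k, sb in enumerate(c_subblocks) if k != j for w in sb])
    return new_pair, rest_pair
```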
4 Experimental Materials

The Chinese-English translated articles used in this study are taken from Sinorama magazine (http://www.sinorama.com.tw/ch/). The corpus statistics are as follows:

           paragraphs   total words   distinct words
Chinese    59           3291          1192
English    59           3908          1082

English tokenization is based mainly on whitespace and punctuation, with a list of common abbreviations to reduce tokenization errors. When counting distinct English words, regular inflection rules (-s, -ing, -ed, etc.) plus a table of irregular verbs are used to restore base forms. Chinese word segmentation is performed with the Academia Sinica CKIP Chinese word segmentation system (http://ckipsvr.iis.sinica.edu.tw/), which is available online and is currently one of the most accurate Chinese segmenters.
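The English base-form restoration just described can be approximated with a few rules. The snippet below is a rough, hypothetical sketch under the stated assumptions (a real irregular-verb table would be much larger, and the suffix rules are deliberately simplistic):

```python
# Rough sketch of restoring English base forms with regular suffix rules plus
# a (here tiny, illustrative) irregular-verb table.
IRREGULAR = {"went": "go", "gone": "go", "made": "make", "taken": "take"}

def base_form(word):
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]
    for suffix, repl in (("ies", "y"), ("ing", ""), ("ed", ""), ("es", ""), ("s", "")):
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: len(w) - len(suffix)] + repl
    return w
```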
5 Experimental Results and Discussion

We implemented the proposed algorithm and, for comparison, the K-vec algorithm. For K-vec, the authors suggest that K = sqrt(total word count) blocks gives better results; in our experimental data the square of the number of paragraphs (59 × 59 = 3481) is roughly equal to the total number of words (3291 Chinese, 3908 English), so we use paragraphs as the K-vec blocks. Association weights are judged with MI and t-score, with the parameters set as the original authors recommend: t-score serves as a filter and only word pairs with t-score ≥ 1.65 are considered, while MI is the main association weight; the output is sorted by MI in descending order and results with MI < 1.0 are discarded. The output under these conditions is shown below:

Algorithm   t-score threshold   min MI   alignments   correct   precision
K-vec       1.65                1.0      28           12        0.42
ours        1.65                1.0      79           32        0.40

Although our algorithm has lower precision under the MI ≥ 1.0 output condition, Figures 1 and 2 show that for the same number of outputs (both sorted by MI weight) our algorithm performs better: for the first 10 word alignments the two differ little, but beyond the 10th output our algorithm is clearly more accurate, still keeping precision above 0.6 for the first 30 outputs, whereas K-vec's precision over the first 30 outputs is only about 0.45.

Figure 1: Number of outputs vs. number of correct alignments.
Figure 2: Number of outputs vs. precision.
As the word alignment results listed below show, partially correct word alignments account for a considerable share; adding an extraction procedure for such compound words could be expected to raise precision substantially. Moreover, statistics-based association weights depend heavily on word frequency, and both too-high and too-low frequencies reduce confidence. In this experiment, correct word alignments with very low frequency may fail to pass the t-score filter, while function-word alignments of only moderate frequency may still have MI values high enough to escape filtering. Very low frequency is a hard problem for every statistics-based word alignment method, because with so few occurrences one cannot tell chance from genuine correspondence; the function-word problem can be handled with a pre-built function-word list.

As for recall, computing it would require manually identifying all correct word alignments in the bilingual corpus. The data contain 1192 distinct Chinese words and 1082 distinct English words, and time and manpower did not allow us to annotate all correct word alignments in this material; nevertheless, with the same denominator, our algorithm clearly achieves higher recall.

Because of translation practice, bilingual word choice can be quite free; for example, the same verb may be rendered by different words in different places, so that none of its translation pairs is frequent and no correct alignment can be found. For this reason many words have no recoverable correct alignment; the easiest to find are proper names with a fixed translation and a suitable frequency.

Below we list the first 18 word alignments produced by our algorithm; whether each is correct is noted in the remarks, and for partially correct pairs the remark gives the correct alignment.
asso    English word   Chinese word   remark
6.363   table          桌             correct
5.948   Kuo            郭慧明          correct
5.626   hope           希望           correct
5.141   Jen-an         人安           correct
4.948   each           天             "each day" = 每天
4.877   volunteers     義工           correct
4.778   day            天             correct
4.778   goal           努力           incorrect
4.725   fund           經費           correct
4.626   Jen-an         基金會          "the Jen-an Foundation" = 人安基金會
4.626   each           每             correct
4.626   month          月             correct
4.533   even           甚至           correct
4.488   but            長期           incorrect
4.404   welfare        社福           "social welfare" = 社福
4.247   social         社福           "social welfare" = 社福
4.141   elderly        失             "elderly people" = 三失老人
4.041   day            每             "each day" = 每天
Regarding the clause segmentation algorithm, our original corpus contains 59 paragraphs. In the implementation we designate the symbols . , ; ! ? 。 , ; ! ? as the punctuation marks allowed to delimit blocks. After running our algorithm, 265 aligned blocks are output at convergence. The following table compares the segmentation produced by our algorithm with segmenting the original articles at . ; ! ? 。 ; ! ?; for the Chinese part we segment with two punctuation sets, one including the comma and one not:

                                            sentences   avg. words per sentence   std. dev.
original text,      Chinese (with comma)    396         8.300                     4.166
punctuation-based   Chinese                 104         31.615                    18.074
                    English                 165         23.666                    12.524
our algorithm       Chinese                 265         12.384                    9.683
                    English                 265         14.709                    9.172
After handling English abbreviation periods (e.g. Mr. or I.B.M.) with regular expressions or other means, English sentences can essentially be delimited by periods, question marks, exclamation marks and semicolons. Chinese sentences cannot be identified from punctuation in the same way, because the comma is used very loosely in Chinese: the choice between comma and period is a matter of the author's style rather than of grammar. Segmenting at periods, question marks, exclamation marks and semicolons yields many units larger than a sentence (discourse units), while adding the comma yields many units that are only phrases rather than sentences. This is why, when we split at 。 ! ? ;, there are far fewer Chinese sentences than English ones, and when the comma is added there are far more. From this discussion we can see that the units obtained by our segmentation algorithm are smaller than sentences, and the output can be regarded as a kind of subsentential alignment.

Because our algorithm does not guarantee monotone alignment, the output does not follow the order of the original text; moreover, by the nature of the algorithm, non-adjacent blocks may be merged into one. For these reasons it is very difficult to analyze exactly how correct the segmentation of the output block pairs is, so we adopt a simpler estimate: the 265 output pairs are manually labelled as fully correct, partially correct or completely wrong alignments, using the definitions given earlier.

Further examples from the output include: "whenever cswf has needed them" / 在創世有需要時; "the service hua-shan offer the elderly are of two variety" / 華山照顧老人的方式有兩種 (partially correct); "it is also renowned as a 'master fundraiser' and admire by other social welfare organization for operate at a surplus year after year" / 還被喻為「募款高手」; "not only is cswf well known for its service"; "how do they do it" / 他們是怎麼做到的; "we finally reach the pvs hospice" / 來到植物人安養中心:創世的發源地.

In the experimental results, compared with K-vec, which likewise needs no prior sentence alignment, our method achieves better precision and, under the same conditions, finds more correct word alignments, i.e. it also has better recall. In the sentence alignment, the results show that many outputs are alignments of phrases and clauses; in other words, our algorithm obtains subsentential alignments, which most current statistics-based algorithms cannot easily achieve. The statistics of the experimental results are as follows:
                    alignments   share of total
total alignments    265          -
fully correct       59           22.26%
partially correct   147          55.47%
completely wrong    59           22.26%

Because our subsentential alignment is a complete alignment, i.e. every small block must be aligned to some block and to that block only, any block that is only partially correct necessarily makes another block partially correct or completely wrong, so the large share of partially correct alignments is to be expected.

Some subsentential alignment results are listed below:
                   English block                                       Chinese block
fully correct      even into his old age                               甚至在遲暮之年
completely wrong   at the start                                        幫幾個家庭喘口氣而已
                   cswf has open branch hospice around the country     創世的目標是全省23個縣市都有植物人安養院
                   thus far they have complete 13                      籌備中的有4個
6 Conclusion and Future Work

Our study demonstrates an iterative word and subsentential alignment algorithm that depends neither on correct sentence alignment nor on a dictionary. Compared with word alignment algorithms that require sentence alignment, our algorithm needs no manual or automatic sentence alignment, which saves effort and avoids the errors introduced by wrong sentence alignments. Compared with dictionary-based sentence alignment algorithms, our algorithm in effect builds a small dictionary while it segments: besides needing no external database, it can still derive associations statistically for new words absent from any dictionary, and is therefore flexible enough to adapt to different kinds of texts.
The main directions for future research are:

1. The current model only marks one-to-one word alignments, whereas word alignments are very often many-to-many, especially for technical terms with fixed translations. If such compound words could be extracted during the iteration, the confidence of the word alignment would increase, which in turn would benefit the sentence alignment. How to exploit compound-word information in this model is therefore one direction for future work.

2. Owing to the nature of the algorithm, a split may merge originally non-adjacent blocks. For word alignment this merging does little harm, but for sentence alignment it is inappropriate. The problem can be solved with a modified splitting scheme: for example, when the most confident sub-block pair is found, the block can be split into three new aligned pairs instead of two, avoiding the merge.

3. This study rests on two assumptions: correct paragraph alignment and correct word segmentation. Given real corpora, correct paragraph alignment is a reasonable assumption, and tokenization is highly accurate for English, but Chinese word segmentation is much harder and much less accurate than English tokenization. Wrong segmentation changes word frequencies and strongly affects the word alignment results. Reducing the dependence on segmentation accuracy is a topic for future research.

4. The theoretical model is non-backtracking: if a wrong split is made during the process, the error is kept forever and may even spread. Although every step tries to choose the most confident split, errors cannot be ruled out; adding a mechanism for later repair to the current model would further improve its stability.

5. Besides statistically derived information, ordinary bilingual corpora usually offer other cues, such as numbers, untranslated person and place names, and technical terms. These cues are more reliable than statistically obtained word alignments, so combining them with our algorithm can be expected to yield better results.
Acknowledgements

This research was supported by National Science Council grants NSC93-2815-C-002-063H, "Automatic extraction of bilingual phrase knowledge from a Chinese-English parallel corpus", and NSC93-2411-H-002-013, "Automatic annotation of lexical semantic relations based on a Chinese-English parallel corpus (3/3)", which we gratefully acknowledge.
References
Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer. Aligning sentences in parallel corpora. In Meeting of the Association for Computational Linguistics, pages 169-176, 1991.
R. Catizone, G. Russell, and S. Warwick. Deriving translation data from bilingual texts. In Proceedings of the First Lexical Acquisition Workshop, 1989.
B. Chang, P. Danielsson, and W. Teubert. Extraction of translation unit from Chinese-English parallel corpora. In COLING-02: The First SIGHAN Workshop on Chinese Language Processing, 2002.
P. Fung and K. Church. K-vec: A new approach for aligning parallel texts. In COLING-94: 15th International Conference on Computational Linguistics, pages 1096-1102, August 1994.
P. Fung and K. McKeown. Aligning noisy parallel corpora across language groups: Word pair feature matching by dynamic time warping. In AMTA-94, Association for Machine Translation in the Americas, pages 81-88, 1994.
W. Gale and K. Church. A program for aligning sentences in bilingual corpora. In Proceedings of the Annual Conference of the Association for Computational Linguistics, pages 177-184, 1991.
M. Kay and M. Röscheisen. Text-translation alignment. Computational Linguistics, (1):121-142, 1993.
C. Kit, J. J. Webster, K-K. Sin, H. Pan, and H. Li. Clause alignment for Hong Kong legal texts: A lexical-based approach. International Journal of Corpus Linguistics, (1):29-51, 2004.
I. D. Melamed. Models of co-occurrence. IRCS Technical Report, 1998.
I. D. Melamed. Models of translational equivalence. Computational Linguistics, pages 221-249, 2000.
Robert C. Moore. Association-based bilingual word alignment. In Proceedings of the Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, pages 1-8, Ann Arbor, Michigan, 2005.
D. Wu. Aligning a parallel English-Chinese corpus statistically with lexical criteria. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 1994.
Jian-Cheng Wu, Thomas C. Chuang, Wen-Chi Shei, and Jason S. Chang. Subsentential translation memory for computer assisted writing and translation. In The Companion Volume to the Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 106-109, Barcelona, Spain, July 2004.
D. Xu and C. L. Tan. Automatic alignment of English-Chinese bilingual texts of CNS news. Computational Linguistics Archive, 1996.
林語君 and 高照明. A hybrid Chinese-English sentence alignment algorithm combining statistical and linguistic information (結合統計與語言訊息的混合式中英雙語句對應演算法). ROCLING, 2004. |