entry_type: stringclasses, 4 values
citation_key: stringlengths, 10 to 110
title: stringlengths, 6 to 276
editor: stringclasses, 723 values
month: stringclasses, 69 values
year: stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00
address: stringclasses, 202 values
publisher: stringclasses, 41 values
url: stringlengths, 34 to 62
author: stringlengths, 6 to 2.07k
booktitle: stringclasses, 861 values
pages: stringlengths, 1 to 12
abstract: stringlengths, 302 to 2.4k
journal: stringclasses, 5 values
volume: stringclasses, 24 values
doi: stringlengths, 20 to 39
n: stringclasses, 3 values
wer: stringclasses, 1 value
uas: null
language: stringclasses, 3 values
isbn: stringclasses, 34 values
recall: null
number: stringclasses, 8 values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 values
r: stringclasses, 2 values
mci: stringclasses, 1 value
p: stringclasses, 2 values
sd: stringclasses, 1 value
female: stringclasses, 0 values
m: stringclasses, 0 values
food: stringclasses, 1 value
f: stringclasses, 1 value
note: stringclasses, 20 values
__index_level_0__: int64, 22k to 106k
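A minimal sketch of how a table with the field list above could be loaded and inspected with the Hugging Face datasets library. The repository path "user/acl-bibtex-corpus" is a placeholder, not the actual dataset name; the column names are taken from the schema listed here.

from datasets import load_dataset

# Hypothetical repository path; substitute the real dataset identifier.
ds = load_dataset("user/acl-bibtex-corpus", split="train")

# Declared features (column names and dtypes), matching the field list above.
print(ds.features)

# Most metric-like columns (wer, uas, f1, ...) are null for bibliographic rows;
# keep only the core BibTeX-style fields for a quick look at the first record.
core = ["entry_type", "citation_key", "title", "author", "booktitle", "year", "pages", "doi", "url"]
print({k: ds[0][k] for k in core})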
inproceedings
su-etal-2017-sample
Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5518/
Su, Pei-Hao and Budzianowski, Pawe{\l} and Ultes, Stefan and Ga{\v{s}}i{\'c}, Milica and Young, Steve
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
147--157
Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sample-efficient neural network algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learn deep RL-based dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.
null
null
10.18653/v1/W17-5518
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,626
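An illustrative sketch of turning one flattened row, like the record above, back into a BibTeX entry string. The helper row_to_bibtex is hypothetical and not part of any existing tool; field names follow the schema at the top of this page, and the sample row below is abbreviated from the record shown.

def row_to_bibtex(row):
    # Only emit fields that are present (non-null) and belong to standard BibTeX.
    fields = ["title", "author", "editor", "booktitle", "month", "year",
              "address", "publisher", "pages", "url", "doi", "abstract"]
    body = ",\n".join(
        f"  {name} = {{{row[name]}}}" for name in fields if row.get(name)
    )
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Abbreviated example row taken from the first record above.
row = {
    "entry_type": "inproceedings",
    "citation_key": "su-etal-2017-sample",
    "title": "Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management",
    "year": "2017",
    "pages": "147--157",
}
print(row_to_bibtex(row))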
inproceedings
agarwal-dymetman-2017-surprisingly
A surprisingly effective out-of-the-box char2char model on the {E}2{E} {NLG} Challenge dataset
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5519/
Agarwal, Shubham and Dymetman, Marc
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
158--163
We train a char2char model on the E2E NLG Challenge data, by exploiting {\textquotedblleft}out-of-the-box{\textquotedblright} the recently released tfseq2seq framework, using some of the standard options offered by this tool. With minimal effort, and in particular without delexicalization, tokenization or lowercasing, the obtained raw predictions, according to a small scale human evaluation, are excellent on the linguistic side and quite reasonable on the adequacy side, the primary downside being the possible omissions of semantic material. However, in a significant number of cases (more than 70{\%}), a perfect solution can be found in the top-20 predictions, indicating promising directions for solving the remaining issues.
null
null
10.18653/v1/W17-5519
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,627
inproceedings
rach-etal-2017-interaction
Interaction Quality Estimation Using Long Short-Term Memories
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5520/
Rach, Niklas and Minker, Wolfgang and Ultes, Stefan
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
164--169
For estimating the Interaction Quality (IQ) in Spoken Dialogue Systems (SDS), the dialogue history is of significant importance. Previous works included this information manually in the form of precomputed temporal features into the classification process. Here, we employ a deep learning architecture based on Long Short-Term Memories (LSTM) to extract this information automatically from the data, thus estimating IQ solely by using current exchange features. We show that it is thereby possible to achieve competitive results as in a scenario where manually optimized temporal features have been included.
null
null
10.18653/v1/W17-5520
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,628
inproceedings
braun-etal-2017-evaluating
Evaluating Natural Language Understanding Services for Conversational Question Answering Systems
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5522/
Braun, Daniel and Hernandez Mendez, Adrian and Matthes, Florian and Langen, Manfred
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
174--185
Conversational interfaces recently gained a lot of attention. One of the reasons for the current hype is the fact that chatbots (one particularly popular form of conversational interfaces) nowadays can be created without any programming knowledge, thanks to different toolkits and so-called Natural Language Understanding (NLU) services. While these NLU services are already widely used in both, industry and science, so far, they have not been analysed systematically. In this paper, we present a method to evaluate the classification performance of NLU services. Moreover, we present two new corpora, one consisting of annotated questions and one consisting of annotated questions with the corresponding answers. Based on these corpora, we conduct an evaluation of some of the most popular NLU services. Thereby we want to enable both, researchers and companies to make more educated decisions about which service they should use.
null
null
10.18653/v1/W17-5522
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,630
inproceedings
ghosh-etal-2017-role
The Role of Conversation Context for Sarcasm Detection in Online Interactions
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5523/
Ghosh, Debanjan and Richard Fabbri, Alexander and Muresan, Smaranda
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
186--196
Computational models for sarcasm detection have often relied on the content of utterances in isolation. However, the speaker's sarcastic intent is not always obvious without additional context. Focusing on social media discussions, we investigate two issues: (1) does modeling of conversation context help in sarcasm detection and (2) can we understand what part of conversation context triggered the sarcastic reply. To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the sarcastic response. We show that the conditional LSTM network (Rockt{\"a}schel et al. 2015) and LSTM networks with sentence level attention on context and response outperform the LSTM model that reads only the response. To address the second issue, we present a qualitative analysis of attention weights produced by the LSTM models with attention and discuss the results compared with human performance on the task.
null
null
10.18653/v1/W17-5523
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,631
inproceedings
yu-etal-2017-voila
{VOILA}: An Optimised Dialogue System for Interactively Learning Visually-Grounded Word Meanings (Demonstration System)
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5524/
Yu, Yanchao and Eshghi, Arash and Lemon, Oliver
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
197--200
We present VOILA: an optimised, multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human user. VOILA is: (1) able to learn new visual categories interactively from users from scratch; (2) trained on real human-human dialogues in the same domain, and so is able to conduct natural spontaneous dialogue; (3) optimised to find the most effective trade-off between the accuracy of the visual categories it learns and the cost it incurs to users. VOILA is deployed on Furhat, a human-like, multi-modal robot head with back-projection of the face, and a graphical virtual character.
null
null
10.18653/v1/W17-5524
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,632
inproceedings
novikova-etal-2017-e2e
The {E}2{E} Dataset: New Challenges For End-to-End Generation
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5525/
Novikova, Jekaterina and Du{\v{s}}ek, Ond{\v{r}}ej and Rieser, Verena
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
201--206
This paper describes the E2E data, a new dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.
null
null
10.18653/v1/W17-5525
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,633
inproceedings
el-asri-etal-2017-frames
{F}rames: a corpus for adding memory to goal-oriented dialogue systems
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5526/
El Asri, Layla and Schulz, Hannes and Sharma, Shikhar and Zumer, Jeremie and Harris, Justin and Fine, Emery and Mehrotra, Rahul and Suleman, Kaheer
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
207--219
This paper proposes a new dataset, Frames, composed of 1369 human-human dialogues with an average of 15 turns per dialogue. This corpus contains goal-oriented dialogues between users who are given some constraints to book a trip and assistants who search a database to find appropriate trips. The users exhibit complex decision-making behaviour which involves comparing trips, exploring different options, and selecting among the trips that were discussed during the dialogue. To drive research on dialogue systems towards handling such behaviour, we have annotated and released the dataset and we propose in this paper a task called frame tracking. This task consists of keeping track of different semantic frames throughout each dialogue. We propose a rule-based baseline and analyse the frame tracking task through this baseline.
null
null
10.18653/v1/W17-5526
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,634
inproceedings
skantze-2017-towards
Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using {LSTM} Recurrent Neural Networks
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5527/
Skantze, Gabriel
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
220--230
Previous models of turn-taking have mostly been trained for specific turn-taking decisions, such as discriminating between turn shifts and turn retention in pauses. In this paper, we present a predictive, continuous model of turn-taking using Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN). The model is trained on human-human dialogue data to predict upcoming speech activity in a future time window. We show how this general model can be applied to two different tasks that it was not specifically trained for. First, to predict whether a turn-shift will occur or not in pauses, where the model achieves a better performance than human observers, and better than results achieved with more traditional models. Second, to make a prediction at speech onset whether the utterance will be a short backchannel or a longer utterance. Finally, we show how the hidden layer in the network can be used as a feature vector for turn-taking decisions in a human-robot interaction scenario.
null
null
10.18653/v1/W17-5527
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,635
inproceedings
tran-etal-2017-neural
Neural-based Natural Language Generation in Dialogue using {RNN} Encoder-Decoder with Semantic Aggregation
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5528/
Tran, Van-Khanh and Nguyen, Le-Minh and Tojo, Satoshi
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
231--240
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of a Recurrent Neural Network-based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained on both sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.
null
null
10.18653/v1/W17-5528
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,636
inproceedings
lopez-gambino-etal-2017-beyond
Beyond On-hold Messages: Conversational Time-buying in Task-oriented Dialogue
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5529/
L{\'o}pez Gambino, Soledad and Zarrie{\ss}, Sina and Schlangen, David
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
241--246
A common convention in graphical user interfaces is to indicate a {\textquotedblleft}wait state{\textquotedblright}, for example while a program is preparing a response, through a changed cursor state or a progress bar. What should the analogue be in a spoken conversational system? To address this question, we set up an experiment in which a human information provider (IP) was given their information only in a delayed and incremental manner, which systematically created situations where the IP had the turn but could not provide task-related information. Our data analysis shows that 1) IPs bridge the gap until they can provide information by re-purposing a whole variety of task- and grounding-related communicative actions (e.g. echoing the user's request, signaling understanding, asserting partially relevant information), rather than being silent or explicitly asking for time (e.g. {\textquotedblleft}please wait{\textquotedblright}), and that 2) IPs combined these actions productively to ensure an ongoing conversation. These results, we argue, indicate that natural conversational interfaces should also be able to manage their time flexibly using a variety of conversational resources.
null
null
10.18653/v1/W17-5529
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,637
inproceedings
ortega-vu-2017-neural
Neural-based Context Representation Learning for Dialog Act Classification
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5530/
Ortega, Daniel and Vu, Ngoc Thang
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
247--252
We explore context representation learning methods in neural-based models for dialog act classification. We propose and compare extensively different methods which combine recurrent neural network architectures and attention mechanisms (AMs) at different context levels. Our experimental results on two benchmark datasets show consistent improvements compared to the models without contextual information and reveal that the most suitable AM in the architecture depends on the nature of the dataset.
null
null
10.18653/v1/W17-5530
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,638
inproceedings
noseworthy-etal-2017-predicting
Predicting Success in Goal-Driven Human-Human Dialogues
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5531/
Noseworthy, Michael and Cheung, Jackie Chi Kit and Pineau, Joelle
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
253--262
In goal-driven dialogue systems, success is often defined based on a structured definition of the goal. This requires that the dialogue system be constrained to handle a specific class of goals and that there be a mechanism to measure success with respect to that goal. However, in many human-human dialogues the diversity of goals makes it infeasible to define success in such a way. To address this scenario, we consider the task of automatically predicting success in goal-driven human-human dialogues using only the information communicated between participants in the form of text. We build a dataset from stackoverflow.com which consists of exchanges between two users in the technical domain where ground-truth success labels are available. We then propose a turn-based hierarchical neural network model that can be used to predict success without requiring a structured goal definition. We show this model outperforms rule-based heuristics and other baselines as it is able to detect patterns over the course of a dialogue and capture notions such as gratitude.
null
null
10.18653/v1/W17-5531
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,639
inproceedings
johnson-etal-2017-generating
Generating and Evaluating Summaries for Partial Email Threads: Conversational {B}ayesian Surprise and Silver Standards
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5532/
Johnson, Jordon and Masrani, Vaden and Carenini, Giuseppe and Ng, Raymond
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
263--272
We define and motivate the problem of summarizing partial email threads. This problem introduces the challenge of generating reference summaries for partial threads when human annotation is only available for the threads as a whole, particularly when the human-selected sentences are not uniformly distributed within the threads. We propose an oracular algorithm for generating these reference summaries with arbitrary length, and we are making the resulting dataset publicly available. In addition, we apply a recent unsupervised method based on Bayesian Surprise that incorporates background knowledge into partial thread summarization, extend it with conversational features, and modify the mechanism by which it handles redundancy. Experiments with our method indicate improved performance over the baseline for shorter partial threads; and our results suggest that the potential benefits of background knowledge to partial thread summarization should be further investigated with larger datasets.
null
null
10.18653/v1/W17-5532
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,640
inproceedings
yaghoubzadeh-kopp-2017-enabling
Enabling robust and fluid spoken dialogue with cognitively impaired users
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5533/
Yaghoubzadeh, Ramin and Kopp, Stefan
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
273--283
We present the flexdiam dialogue management architecture, which was developed in a series of projects dedicated to tailoring spoken interaction to the needs of users with cognitive impairments in an everyday assistive domain, using a multimodal front-end. This hybrid DM architecture affords incremental processing of uncertain input, a flexible, mixed-initiative information grounding process that can be adapted to users' cognitive capacities and interactive idiosyncrasies, and generic mechanisms that foster transitions in the joint discourse state that are understandable and controllable by those users, in order to effect a robust interaction for users with varying capacities.
null
null
10.18653/v1/W17-5533
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,641
inproceedings
bruni-fernandez-2017-adversarial
Adversarial evaluation for open-domain dialogue generation
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5534/
Bruni, Elia and Fern{\'a}ndez, Raquel
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
284--288
We investigate the potential of adversarial evaluation methods for open-domain dialogue generation systems, comparing the performance of a discriminative agent to that of humans on the same task. Our results show that the task is hard, both for automated models and humans, but that a discriminative agent can learn patterns that lead to above-chance performance.
null
null
10.18653/v1/W17-5534
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,642
inproceedings
nejat-etal-2017-exploring
Exploring Joint Neural Model for Sentence Level Discourse Parsing and Sentiment Analysis
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5535/
Nejat, Bita and Carenini, Giuseppe and Ng, Raymond
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
289--298
Discourse Parsing and Sentiment Analysis are two fundamental tasks in Natural Language Processing that have been shown to be mutually beneficial. In this work, we design and compare two Neural Based models for jointly learning both tasks. In the proposed approach, we first create a vector representation for all the text segments in the input sentence. Next, we apply three different Recursive Neural Net models: one for discourse structure prediction, one for discourse relation prediction and one for sentiment analysis. Finally, we combine these Neural Nets in two different joint models: Multi-tasking and Pre-training. Our results on two standard corpora indicate that both methods result in improvements in each task but Multi-tasking has a bigger impact than Pre-training. Specifically for Discourse Parsing, we see improvements in the prediction of the set of contrastive relations.
null
null
10.18653/v1/W17-5535
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,643
inproceedings
sano-etal-2017-predicting
Predicting Causes of Reformulation in Intelligent Assistants
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5536/
Sano, Shumpei and Kaji, Nobuhiro and Sassano, Manabu
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
299--309
Intelligent assistants (IAs) such as Siri and Cortana conversationally interact with users and execute a wide range of actions (e.g., searching the Web, setting alarms, and chatting). IAs can support these actions through the combination of various components such as automatic speech recognition, natural language understanding, and language generation. However, the complexity of these components hinders developers from determining which component causes an error. To remove this hindrance, we focus on reformulation, which is a useful signal of user dissatisfaction, and propose a method to predict the reformulation causes. We evaluate the method using the user logs of a commercial IA. The experimental results have demonstrated that features designed to detect the error of a specific component improve the performance of reformulation cause detection.
null
null
10.18653/v1/W17-5536
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,644
inproceedings
oraby-etal-2017-serious
Are you serious?: Rhetorical Questions and Sarcasm in Social Media Dialog
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5537/
Oraby, Shereen and Harrison, Vrindavan and Misra, Amita and Riloff, Ellen and Walker, Marilyn
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
310--319
Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for {\textquotedblleft}sarcastic{\textquotedblright} and 0.77 F1 for {\textquotedblleft}other{\textquotedblright} in forums, and 0.83 F1 for both {\textquotedblleft}sarcastic{\textquotedblright} and {\textquotedblleft}other{\textquotedblright} in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs.
null
null
10.18653/v1/W17-5537
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,645
inproceedings
jang-etal-2017-finding
Finding Structure in Figurative Language: Metaphor Detection with Topic-based Frames
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5538/
Jang, Hyeju and Maki, Keith and Hovy, Eduard and Ros{\'e}, Carolyn
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
320--330
In this paper, we present a novel and highly effective method for induction and application of metaphor frame templates as a step toward detecting metaphor in extended discourse. We infer implicit facets of a given metaphor frame using a semi-supervised bootstrapping approach on an unlabeled corpus. Our model applies this frame facet information to metaphor detection, and achieves the state-of-the-art performance on a social media dataset when building upon other proven features in a nonlinear machine learning model. In addition, we illustrate the mechanism through which the frame and topic information enable the more accurate metaphor detection.
null
null
10.18653/v1/W17-5538
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,646
inproceedings
manuvinakurike-etal-2017-using
Using Reinforcement Learning to Model Incrementality in a Fast-Paced Dialogue Game
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5539/
Manuvinakurike, Ramesh and DeVault, David and Georgila, Kallirroi
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
331--341
We apply Reinforcement Learning (RL) to the problem of incremental dialogue policy learning in the context of a fast-paced dialogue game. We compare the policy learned by RL with a high-performance baseline policy which has been shown to perform very efficiently (nearly as well as humans) in this dialogue game. The RL policy outperforms the baseline policy in offline simulations (based on real user data). We provide a detailed comparison of the RL policy and the baseline policy, including information about how much effort and time it took to develop each one of them. We also highlight the cases where the RL policy performs better, and show that understanding the RL policy can provide valuable insights which can inform the creation of an even better rule-based policy.
null
null
10.18653/v1/W17-5539
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,647
inproceedings
hu-walker-2017-inferring
Inferring Narrative Causality between Event Pairs in Films
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5540/
Hu, Zhichao and Walker, Marilyn
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
342--351
To understand narrative, humans draw inferences about the underlying relations between narrative events. Cognitive theories of narrative understanding define these inferences as four different types of causality, that include pairs of events A, B where A physically causes B (X drop, X break), to pairs of events where A causes emotional state B (Y saw X, Y felt fear). Previous work on learning narrative relations from text has either focused on {\textquotedblleft}strict{\textquotedblright} physical causality, or has been vague about what relation is being learned. This paper learns pairs of causal events from a corpus of film scene descriptions which are action rich and tend to be told in chronological order. We show that event pairs induced using our methods are of high quality and are judged to have a stronger causal relation than event pairs from Rel-Grams.
null
null
10.18653/v1/W17-5540
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,648
inproceedings
leuski-artstein-2017-lessons
Lessons in Dialogue System Deployment
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5541/
Leuski, Anton and Artstein, Ron
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
352--355
We analyze deployment of an interactive dialogue system in an environment where deep technical expertise might not be readily available. The initial version was created using a collection of research tools. We summarize a number of challenges with its deployment at two museums and describe a new system that simplifies the installation and user interface; reduces reliance on 3rd-party software; and provides a robust data collection mechanism.
null
null
10.18653/v1/W17-5541
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,649
inproceedings
yoshino-etal-2017-information
Information Navigation System with Discovering User Interests
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5542/
Yoshino, Koichiro and Suzuki, Yu and Nakamura, Satoshi
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
356--359
We demonstrate an information navigation system for sightseeing domains that has a dialogue interface for discovering user interests for tourist activities. The system discovers interests of a user with focus detection on user utterances, and proactively presents related information to the discovered user interest. A partially observable Markov decision process (POMDP)-based dialogue manager, which is extended with user focus states, controls the behavior of the system to provide information with several dialogue acts for providing information. We transferred the belief-update function and the policy of the manager from another system trained on a different domain to show the generality of defined dialogue acts for our information navigation system.
null
null
10.18653/v1/W17-5542
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,650
inproceedings
rahimtoroghi-etal-2017-modelling
Modelling Protagonist Goals and Desires in First-Person Narrative
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5543/
Rahimtoroghi, Elahe and Wu, Jiaqi and Wang, Ruimin and Anand, Pranav and Walker, Marilyn
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
360--369
Many genres of natural language text are narratively structured, a testament to our predilection for organizing our experiences as narratives. There is broad consensus that understanding a narrative requires identifying and tracking the goals and desires of the characters and their narrative outcomes. However, to date, there has been limited work on computational models for this problem. We introduce a new dataset, DesireDB, which includes gold-standard labels for identifying statements of desire, textual evidence for desire fulfillment, and annotations for whether the stated desire is fulfilled given the evidence in the narrative context. We report experiments on tracking desire fulfillment using different methods, and show that LSTM Skip-Thought model achieves F-measure of 0.7 on our corpus.
null
null
10.18653/v1/W17-5543
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,651
inproceedings
brixey-etal-2017-shihbot
{SHIH}bot: A {F}acebook chatbot for Sexual Health Information on {HIV}/{AIDS}
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5544/
Brixey, Jacqueline and Hoegen, Rens and Lan, Wei and Rusow, Joshua and Singla, Karan and Yin, Xusen and Artstein, Ron and Leuski, Anton
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
370--373
We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot's response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system's backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.
null
null
10.18653/v1/W17-5544
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,652
inproceedings
ravichander-etal-2017-say
How Would You Say It? Eliciting Lexically Diverse Dialogue for Supervised Semantic Parsing
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5545/
Ravichander, Abhilasha and Manzini, Thomas and Grabmair, Matthias and Neubig, Graham and Francis, Jonathan and Nyberg, Eric
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
374--383
Building dialogue interfaces for real-world scenarios often entails training semantic parsers starting from zero examples. How can we build datasets that better capture the variety of ways users might phrase their queries, and what queries are actually realistic? Wang et al. (2015) proposed a method to build semantic parsing datasets by generating canonical utterances using a grammar and having crowdworkers paraphrase them into natural wording. A limitation of this approach is that it induces bias towards using similar language as the canonical utterances. In this work, we present a methodology that elicits meaningful and lexically diverse queries from users for semantic parsing tasks. Starting from a seed lexicon and a generative grammar, we pair logical forms with mixed text-image representations and ask crowdworkers to paraphrase and confirm the plausibility of the queries that they generated. We use this method to build a semantic parsing dataset from scratch for a dialog agent in a smart-home simulation. We find evidence that this dataset, which we have named SmartHome, is demonstrably more lexically diverse and difficult to parse than existing domain-specific semantic parsing datasets.
null
null
10.18653/v1/W17-5545
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,653
inproceedings
lison-bibauw-2017-dialogues
Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5546/
Lison, Pierre and Bibauw, Serge
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
384--394
Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.
null
null
10.18653/v1/W17-5546
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,654
inproceedings
hohn-2017-data
A data-driven model of explanations for a chatbot that helps to practice conversation in a foreign language
Jokinen, Kristiina and Stede, Manfred and DeVault, David and Louis, Annie
aug
2017
Saarbr{\"ucken, Germany
Association for Computational Linguistics
https://aclanthology.org/W17-5547/
H{\"ohn, Sviatlana
Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue
395--405
This article describes a model of other-initiated self-repair for a chatbot that helps to practice conversation in a foreign language. The model was developed using a corpus of instant messaging conversations between German native and non-native speakers. Conversation Analysis helped to create computational models from a small number of examples. The model has been validated in an AIML-based chatbot. Unlike typical retrieval-based dialogue systems, the explanations are generated at run-time from a linguistic database.
null
null
10.18653/v1/W17-5547
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,655
inproceedings
park-etal-2017-building
Building a Better Bitext for Structurally Different Languages through Self-training
Afli, Haithem and Liu, Chao-Hong
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5601/
Park, Jungyeul and Dugast, Lo{\"i}c and Hong, Jeen-Pyo and Shin, Chang-Uk and Cha, Jeong-Won
Proceedings of the First Workshop on Curation and Applications of Parallel and Comparable Corpora
1--10
We propose a novel method to bootstrap the construction of parallel corpora for new pairs of structurally different languages. We do so by combining the use of a pivot language and self-training. A pivot language enables the use of existing translation models to bootstrap the alignment, and a self-training procedure enables better alignment to be achieved, both at the document and sentence level. We also propose several evaluation methods for the resulting alignment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,657
inproceedings
afli-etal-2017-multinews
{M}ulti{N}ews: A Web collection of an Aligned Multimodal and Multilingual Corpus
Afli, Haithem and Liu, Chao-Hong
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5602/
Afli, Haithem and Lohar, Pintu and Way, Andy
Proceedings of the First Workshop on Curation and Applications of Parallel and Comparable Corpora
11--15
Integrating Natural Language Processing (NLP) and computer vision is a promising effort. However, the applicability of these methods directly depends on the availability of specific multimodal data that includes images and texts. In this paper, we present a collection of a multimodal corpus of comparable texts and their images in 9 languages from the web news articles of the Euronews website. This corpus has found widespread use in the NLP community in multilingual and multimodal tasks. Here, we focus on its acquisition of the images and text data and their multilingual alignment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,658
inproceedings
zhou-etal-2017-learning
Learning Phrase Embeddings from Paraphrases with {GRU}s
Afli, Haithem and Liu, Chao-Hong
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5603/
Zhou, Zhihao and Huang, Lifu and Ji, Heng
Proceedings of the First Workshop on Curation and Applications of Parallel and Comparable Corpora
16--23
Learning phrase representations has been widely explored in many Natural Language Processing tasks (e.g., Sentiment Analysis, Machine Translation) and has shown promising improvements. Previous studies either learn non-compositional phrase representations with general word embedding learning techniques or learn compositional phrase representations based on syntactic structures, which either require huge amounts of human annotations or cannot be easily generalized to all phrases. In this work, we propose to take advantage of large-scaled paraphrase database and present a pairwise-GRU framework to generate compositional phrase representations. Our framework can be re-used to generate representations for any phrases. Experimental results show that our framework achieves state-of-the-art results on several phrase similarity tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,659
inproceedings
nakazawa-etal-2017-overview
Overview of the 4th Workshop on {A}sian Translation
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5701/
Nakazawa, Toshiaki and Higashiyama, Shohei and Ding, Chenchen and Mino, Hideya and Goto, Isao and Kazawa, Hideto and Oda, Yusuke and Neubig, Graham and Kurohashi, Sadao
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
1--54
This paper presents the results of the shared tasks from the 4th workshop on Asian translation (WAT2017) including J{\ensuremath{\leftrightarrow}}E, J{\ensuremath{\leftrightarrow}}C scientific paper translation subtasks, C{\ensuremath{\leftrightarrow}}J, K{\ensuremath{\leftrightarrow}}J, E{\ensuremath{\leftrightarrow}}J patent translation subtasks, H{\ensuremath{\leftrightarrow}}E mixed domain subtasks, J{\ensuremath{\leftrightarrow}}E newswire subtasks and J{\ensuremath{\leftrightarrow}}E recipe subtasks. For the WAT2017, 12 institutions participated in the shared tasks. About 300 translation results have been submitted to the automatic evaluation server, and selected submissions were manually evaluated.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,661
inproceedings
takeno-etal-2017-controlling
Controlling Target Features in Neural Machine Translation via Prefix Constraints
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5702/
Takeno, Shunsuke and Nagata, Masaaki and Yamamoto, Kazuhide
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
55--63
We propose \textit{prefix constraints}, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of target sentence (target prefix), while side constraints places a special token at the end of source sentence (source suffix). Prefix constraints can be predicted from source sentence jointly with target sentence, while side constraints (Sennrich et al., 2016) must be provided by the user or predicted by some other methods. In both methods, special tokens are designed to encode arbitrary features on target-side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,662
inproceedings
sekizawa-etal-2017-improving
Improving {J}apanese-to-{E}nglish Neural Machine Translation by Paraphrasing the Target Language
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5703/
Sekizawa, Yuuki and Kajiwara, Tomoyuki and Komachi, Mamoru
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
64--69
Neural machine translation (NMT) produces sentences that are more fluent than those produced by statistical machine translation (SMT). However, NMT has a very high computational cost because of the high dimensionality of the output layer. Generally, NMT restricts the size of vocabulary, which results in infrequent words being treated as out-of-vocabulary (OOV) and degrades the performance of the translation. In evaluation, we achieved a statistically significant BLEU score improvement of 0.55-0.77 over the baselines including the state-of-the-art method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,663
inproceedings
imankulova-etal-2017-improving
Improving Low-Resource Neural Machine Translation with Filtered Pseudo-Parallel Corpus
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5704/
Imankulova, Aizhan and Sato, Takayuki and Komachi, Mamoru
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
70--78
Large-scale parallel corpora are indispensable to train highly accurate machine translators. However, manually constructed large-scale parallel corpora are not freely available in many language pairs. In previous studies, training data have been expanded using a pseudo-parallel corpus obtained using machine translation of the monolingual corpus in the target language. However, in low-resource language pairs in which only low-accuracy machine translation systems can be used, translation quality is reduced when a pseudo-parallel corpus is used naively. To improve machine translation performance with low-resource language pairs, we propose a method to expand the training data effectively via filtering the pseudo-parallel corpus using a quality estimation based on back-translation. As a result of experiments with three language pairs using small, medium, and large parallel corpora, language pairs with fewer training data filtered out more sentence pairs and improved BLEU scores more significantly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,664
inproceedings
fujita-sumita-2017-japanese
{J}apanese to {E}nglish/{C}hinese/{K}orean Datasets for Translation Quality Estimation and Automatic Post-Editing
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5705/
Fujita, Atsushi and Sumita, Eiichiro
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
79--88
Aiming at facilitating the research on quality estimation (QE) and automatic post-editing (APE) of machine translation (MT) outputs, especially for those among Asian languages, we have created new datasets for Japanese to English, Chinese, and Korean translations. As the source text, actual utterances in Japanese were extracted from the log data of our speech translation service. MT outputs were then given by phrase-based statistical MT systems. Finally, human evaluators were employed to grade the quality of MT outputs and to post-edit them. This paper describes the characteristics of the created datasets and reports on our benchmarking experiments on word-level QE, sentence-level QE, and APE conducted using the created datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,665
inproceedings
morishita-etal-2017-ntt
{NTT} Neural Machine Translation Systems at {WAT} 2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5706/
Morishita, Makoto and Suzuki, Jun and Nagata, Masaaki
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
89--94
In this year, we participated in four translation subtasks at WAT 2017. Our model structure is quite simple but we used it with well-tuned hyper-parameters, leading to a significant improvement compared to the previous state-of-the-art system. We also tried to make use of the unreliable part of the provided parallel corpus by back-translating and making a synthetic corpus. Our submitted system achieved the new state-of-the-art performance in terms of the BLEU score, as well as human evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,666
inproceedings
wang-etal-2017-xmu-neural
{XMU} Neural Machine Translation Systems for {WAT} 2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5707/
Wang, Boli and Tan, Zhixing and Hu, Jinming and Chen, Yidong and Shi, Xiaodong
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
95--98
This paper describes the Neural Machine Translation systems of Xiamen University for the shared translation tasks of WAT 2017. Our systems are based on the Encoder-Decoder framework with attention. We participated in three subtasks. We experimented with subword segmentation, synthetic training data and model ensembling. Experiments show that all these methods can give substantial improvements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,667
inproceedings
neishi-etal-2017-bag
A Bag of Useful Tricks for Practical Neural Machine Translation: Embedding Layer Initialization and Large Batch Size
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5708/
Neishi, Masato and Sakuma, Jin and Tohda, Satoshi and Ishiwatari, Shonosuke and Yoshinaga, Naoki and Toyoda, Masashi
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
99--109
In this paper, we describe the team UT-IIS's system and results for the WAT 2017 translation tasks. We further investigated several tricks including a novel technique for initializing embedding layers using only the parallel corpus, which increased the BLEU score by 1.28, found a practical large batch size of 256, and gained insights regarding hyperparameter settings. Ultimately, our system obtained a better result than the state-of-the-art system of WAT 2016. Our code is available on \url{https://github.com/nem6ishi/wat17}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,668
inproceedings
long-etal-2017-patent
Patent {NMT} integrated with Large Vocabulary Phrase Translation by {SMT} at {WAT} 2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5709/
Long, Zi and Kimura, Ryuichiro and Utsuro, Takehito and Mitsuhashi, Tomoharu and Yamamoto, Mikio
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
110--118
Neural machine translation (NMT) cannot handle a larger vocabulary because the training complexity and decoding complexity proportionally increase with the number of target words. This problem becomes even more serious when translating patent documents, which contain many technical terms that are observed infrequently. Long et al. (2017) proposed to select phrases that contain out-of-vocabulary words using the statistical approach of branching entropy. The selected phrases are then replaced with tokens during training and post-translated by the phrase translation table of SMT. In this paper, we apply the method proposed by Long et al. (2017) to the WAT 2017 Japanese-Chinese and Japanese-English patent datasets. Evaluation on Japanese-to-Chinese, Chinese-to-Japanese, Japanese-to-English and English-to-Japanese patent sentence translation proved the effectiveness of phrases selected with branching entropy, where the NMT model of Long et al. (2017) achieves a substantial improvement over a baseline NMT model without the technique proposed by Long et al. (2017).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,669
inproceedings
ehara-2017-smt
{SMT} reranked {NMT}
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5710/
Ehara, Terumasa
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
119--126
The system architecture, experimental settings and experimental results of the EHR team for the WAT2017 tasks are described. We participate in three tasks: JPCen-ja, JPCzh-ja and JPCko-ja. Although the basic architecture of our system is NMT, a reranking technique is applied using SMT results. One of the major drawbacks of NMT is under-translation and over-translation. On the other hand, SMT infrequently makes such translations, so reranking the n-best NMT outputs with the SMT output can be expected to discard such translations. We improve the BLEU score from 46.03 to 47.08 with this technique in the JPCzh-ja task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,670
inproceedings
imamura-sumita-2017-ensemble
Ensemble and Reranking: Using Multiple Models in the {NICT}-2 Neural Machine Translation System at {WAT}2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5711/
Imamura, Kenji and Sumita, Eiichiro
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
127--134
In this paper, we describe the NICT-2 neural machine translation system evaluated at WAT2017. This system uses multiple models as an ensemble and combines models with opposite decoding directions by reranking (called bi-directional reranking). In our experimental results on small data sets, the translation quality improved when the number of models was increased to 32 in total and did not saturate. In the experiments on large data sets, improvements of 1.59-3.32 BLEU points were achieved when six-model ensembles were combined by the bi-directional reranking.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,671
inproceedings
oda-etal-2017-simple
A Simple and Strong Baseline: {NAIST}-{NICT} Neural Machine Translation System for {WAT}2017 {E}nglish-{J}apanese Translation Task
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5712/
Oda, Yusuke and Sudoh, Katsuhito and Nakamura, Satoshi and Utiyama, Masao and Sumita, Eiichiro
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
135--139
This paper describes the details of the NAIST-NICT machine translation system for the WAT2017 English-Japanese Scientific Paper Translation Task. The system consists of a language-independent tokenizer and an attentional encoder-decoder style neural machine translation model. According to the official results, our system achieves higher translation accuracy than any system submitted in previous campaigns, despite its simple model architecture.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,672
inproceedings
kinoshita-etal-2017-comparison
Comparison of {SMT} and {NMT} trained with large Patent Corpora: {J}apio at {WAT}2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5713/
Kinoshita, Satoshi and Oshio, Tadaaki and Mitsuhashi, Tomoharu
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
140--145
Japio participates in the patent subtasks (JPC-EJ/JE/CJ/KJ) with phrase-based statistical machine translation (SMT) and neural machine translation (NMT) systems which are trained with its own patent corpora in addition to the subtask corpora provided by the organizers of WAT2017. In the EJ and CJ subtasks, SMT and NMT systems whose training corpora contain about 50 million and 10 million sentence pairs respectively achieved comparable scores in automatic evaluations, but NMT systems were superior to SMT systems in both official and in-house human evaluations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,673
inproceedings
cromieres-etal-2017-kyoto
{K}yoto {U}niversity Participation to {WAT} 2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5714/
Cromieres, Fabien and Dabre, Raj and Nakazawa, Toshiaki and Kurohashi, Sadao
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
146--153
We describe here our approaches and results on the WAT 2017 shared translation tasks. Following our good results with Neural Machine Translation in the previous shared task, we continue this approach this year, with incremental improvements in models and training methods. We focused on the ASPEC dataset and could improve the state-of-the-art results for Chinese-to-Japanese and Japanese-to-Chinese translations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,674
inproceedings
kocmi-etal-2017-cuni
{CUNI} {NMT} System for {WAT} 2017 Translation Tasks
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5715/
Kocmi, Tom and Vari{\v{s}}, Du{\v{s}}an and Bojar, Ond{\v{r}}ej
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
154--159
The paper presents this year's CUNI submissions to the WAT 2017 Translation Task focusing on the Japanese-English translation, namely Scientific papers subtask, Patents subtask and Newswire subtask. We compare two neural network architectures, the standard sequence-to-sequence with attention (Seq2Seq) and an architecture using convolutional sentence encoder (FBConv2Seq), both implemented in the NMT framework Neural Monkey that we currently participate in developing. We also compare various types of preprocessing of the source Japanese sentences and their impact on the overall results. Furthermore, we include the results of our experiments with out-of-domain data obtained by combining the corpora provided for each subtask.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,675
inproceedings
matsumura-komachi-2017-tokyo
{T}okyo Metropolitan University Neural Machine Translation System for {WAT} 2017
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5716/
Matsumura, Yukio and Komachi, Mamoru
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
160--166
In this paper, we describe our neural machine translation (NMT) system, which is based on the attention-based NMT and uses long short-term memory (LSTM) units as the RNN. We implemented beam search and ensemble decoding in the NMT system. The system was tested on the 4th Workshop on Asian Translation (WAT 2017) shared tasks. In our experiments, we participated in the scientific paper subtasks and attempted Japanese-English, English-Japanese, and Japanese-Chinese translation tasks. The experimental results showed that implementation of beam search and ensemble decoding can effectively improve the translation quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,676
inproceedings
singh-etal-2017-comparing
Comparing Recurrent and Convolutional Architectures for {E}nglish-{H}indi Neural Machine Translation
Nakazawa, Toshiaki and Goto, Isao
nov
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5717/
Singh, Sandhya and Panjwani, Ritesh and Kunchukuttan, Anoop and Bhattacharyya, Pushpak
Proceedings of the 4th Workshop on {A}sian Translation ({WAT}2017)
167--170
In this paper, we empirically compare two encoder-decoder neural machine translation architectures, the convolutional sequence to sequence model (ConvS2S) and the recurrent sequence to sequence model (RNNS2S), for the English-Hindi language pair as part of IIT Bombay's submission to the WAT2017 shared task. We report results for both the English-Hindi and Hindi-English directions of the language pair.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,677
inproceedings
skeppstedt-etal-2017-automatic
Automatic detection of stance towards vaccination in online discussion forums
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5801/
Skeppstedt, Maria and Kerren, Andreas and Stede, Manfred
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
1--8
A classifier for automatic detection of stance towards vaccination in online forums was trained and evaluated. Debate posts from six discussion threads on the British parental website Mumsnet were manually annotated for stance {\textquoteleft}against' or {\textquoteleft}for' vaccination, or as {\textquoteleft}undecided'. A support vector machine, trained to detect the three classes, achieved a macro F-score of 0.44, while a macro F-score of 0.62 was obtained by the same type of classifier on the binary classification task of distinguishing stance {\textquoteleft}against' vaccination from stance {\textquoteleft}for' vaccination. These results show that vaccine stance detection in online forums is a difficult task, at least for the type of model investigated and for the relatively small training corpus that was used. Future work will therefore include an expansion of the training data and an evaluation of other types of classifiers and features.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,679
inproceedings
abd-yusof-etal-2017-analysing
Analysing the Causes of Depressed Mood from Depression Vulnerable Individuals
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5802/
Abd Yusof, Noor Fazilla and Lin, Chenghua and Guerin, Frank
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
9--17
We develop a computational model to discover the potential causes of depression by analysing the topics in user-generated text. We show the most prominent causes, and how these causes evolve over time. Also, we highlight the differences in causes between students with low and high neuroticism. Our studies demonstrate that the topics reveal valuable clues about the causes contributing to depressed mood. Identifying causes can have a significant impact on improving the quality of depression care, thereby providing greater insights into a patient's state for pertinent treatment recommendations. Hence, this study significantly expands the ability to discover the potential factors that trigger depression, making it possible to increase the efficiency of depression treatment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,680
inproceedings
takeuchi-etal-2017-multivariate
Multivariate Linear Regression of Symptoms-related Tweets for Infectious Gastroenteritis Scale Estimation
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5803/
Takeuchi, Ryo and Iso, Hayate and Ito, Kaoru and Wakamiya, Shoko and Aramaki, Eiji
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
18--25
To date, various Twitter-based event detection systems have been proposed. Most of their targets, however, share common characteristics. They are seasonal or global events such as earthquakes and flu pandemics. In contrast, this study targets unseasonal and local disease events. Our system investigates the frequencies of disease-related words such as {\textquotedblleft}nausea{\textquotedblright}, {\textquotedblleft}chill{\textquotedblright}, and {\textquotedblleft}diarrhea{\textquotedblright} and estimates the number of patients using regression of these word frequencies. Experiments conducted using data from 47 areas of Japan from January 2017 to April 2017 revealed that the detection of small and unseasonal events is extremely difficult (overall performance: 0.13). However, we found that the event scale and the detection performance show high correlation in the specified cases (in the phases of patient increase or decrease). The results also suggest that when 150 or more patients appear in a high-population area, we can expect our social sensors to detect the outbreak. Based on these results, we can infer that social sensors can reliably detect unseasonal and local disease events under certain conditions, just as they can for seasonal or global events.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,681
inproceedings
huang-etal-2017-incorporating
Incorporating Dependency Trees Improve Identification of Pregnant Women on Social Media Platforms
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5804/
Huang, Yi-Jie and Su, Chu Hsien and Chang, Yi-Chun and Ting, Tseng-Hsin and Fu, Tzu-Yuan and Wang, Rou-Min and Dai, Hong-Jie and Chang, Yung-Chun and Jonnagaddala, Jitendra and Hsu, Wen-Lian
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
26--32
The increasing popularity of social media leads users to share enormous amounts of information on the internet. This information has various applications; for instance, it can be used to develop models to understand or predict user behavior on social media platforms. For example, a few online retailers have studied shopping patterns to predict a shopper's pregnancy stage. Another interesting application is to use social media platforms to analyze users' health-related information. A new corpus from the popular social media platform Twitter was developed for the purpose of this study. Using this corpus, we developed a tree kernel-based model to classify tweets conveying pregnancy-related information. The developed pregnancy classification model achieved an accuracy of 0.847 and an F-score of 0.565. In future work, we would like to improve this corpus by reducing noise such as retweets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,682
inproceedings
wang-etal-2017-using
Using a Recurrent Neural Network Model for Classification of Tweets Conveyed Influenza-related Information
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5805/
Wang, Chen-Kai and Singh, Onkar and Tang, Zhao-Li and Dai, Hong-Jie
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
33--38
Traditional disease surveillance systems depend on outpatient reporting and virological test results released by hospitals. These data have valid and accurate information about emerging outbreaks but are often not timely. In recent years the exponential growth of users getting connected to social media provides immense knowledge about epidemics by sharing related information. Social media can now flag more immediate concerns related to outbreaks in real time. In this paper we apply the long short-term memory recurrent neural network (RNN) architecture to classify tweets conveying influenza-related information and compare its performance with baseline algorithms including support vector machine (SVM), decision tree, naive Bayes, simple logistics, and naive Bayes multinomial. The developed RNN model achieved an F-score of 0.845 on the MedWeb task test set, which outperforms the F-score of SVM without applying the synthetic minority oversampling technique by 0.08. The F-score of the RNN model is within 1{\%} of the highest score achieved by SVM with the oversampling technique.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,683
inproceedings
adam-etal-2017-zikahack
{Z}ika{H}ack 2016: A digital disease detection competition
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5806/
Adam, Dillon C and Jonnagaddala, Jitendra and Han-Chen, Daniel and Batongbacal, Sean and Almeida, Luan and Zhu, Jing Z and Yang, Jenny J and Mundekkat, Jumail M and Badman, Steven and Chughtai, Abrar and MacIntyre, C Raina
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
39--46
Effective response to infectious disease outbreaks relies on the rapid and early detection of those outbreaks. Invalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. Public health surveillance authorities can exploit these early warnings to plan and co-ordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the ZikaHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,684
inproceedings
kim-etal-2017-method
A Method to Generate a Machine-Labeled Data for Biomedical Named Entity Recognition with Various Sub-Domains
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5807/
Kim, Juae and Kwon, Sunjae and Ko, Youngjoong and Seo, Jungyun
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
47--51
Biomedical Named Entity (NE) recognition is a core technique for various works in the biomedical domain. In previous studies, using a machine learning algorithm shows better performance than dictionary-based and rule-based approaches because there are too many terminological variations of biomedical NEs and new biomedical NEs are constantly generated. To achieve high performance with a machine-learning algorithm, good-quality corpora are required. However, it is difficult to obtain good-quality corpora because annotating a biomedical corpus for machine learning is extremely time-consuming and costly. In addition, most previous corpora are insufficient for high-level tasks because they cannot cover various domains. Therefore, we propose a method for generating a large amount of machine-labeled data that covers various domains. To generate a large amount of machine-labeled data, we first generate initial machine-labeled data by using a chunker and MetaMap. The chunker is developed to extract only biomedical NEs with manually annotated data. MetaMap is used to annotate the category of biomedical NE. Then we apply the self-training approach to bootstrap the performance of the initial machine-labeled data. In our experiments, the biomedical NE recognition system that is trained with our proposed machine-labeled data achieves much higher performance. As a result, our system outperforms a biomedical NE recognition system that uses MetaMap only, with a 26.03{\%}p improvement in F1-score.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,685
inproceedings
tu-etal-2017-enhancing
Enhancing Drug-Drug Interaction Classification with Corpus-level Feature and Classifier Ensemble
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5808/
Tu, Jing Cyun and Lai, Po-Ting and Tsai, Richard Tzong-Han
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
52--56
The study of drug-drug interaction (DDI) is important in drug discovery. Both PubMed and DrugBank are rich resources for retrieving DDI information, which is usually represented in plain text. Automatically extracting DDI pairs from text improves the quality of drug discovery. In this paper, we present a study that focuses on DDI classification. We normalized the drug names, and developed both sentence-level and corpus-level features for DDI classification. A classifier ensemble approach is used for the unbalanced DDI label problem. Our approach achieved an F-score of 65.4{\%} on the SemEval 2013 DDI test set. The experimental results also show the effects of the proposed corpus-level features in the DDI task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,686
inproceedings
warikoo-etal-2017-chemical
Chemical-Induced Disease Detection Using Invariance-based Pattern Learning Model
Jonnagaddala, Jitendra and Dai, Hong-Jie and Chang, Yung-Chun
nov
2017
Taipei, Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-5809/
Warikoo, Neha and Chang, Yung-Chun and Hsu, Wen-Lian
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 ({DDDSM}-2017)
57--64
In this work, we introduce a novel feature engineering approach named {\textquotedblleft}algebraic invariance{\textquotedblright} to identify discriminative patterns for learning relation pair features for the chemical-disease relation (CDR) task of BioCreative V. Our method exploits the existing structural similarity of the key concepts of relation descriptions from the CDR corpus to generate robust linguistic patterns for SVM tree kernel-based learning. Preprocessing of the training data classifies the entity pairs as either related or unrelated to build instance types for both inter-sentential and intra-sentential scenarios. An invariant function is proposed to process and optimally cluster similar patterns for both positive and negative instances. The learning model for CDR pairs is based on the SVM tree kernel approach, which generates feature trees and vectors and is modeled on suitable invariance based patterns, bringing brevity, precision and context to the identifier features. Results demonstrate that our method outperformed other compared approaches, achieved a high recall rate of 85.08{\%}, and averaged an F1-score of 54.34{\%} without the use of any additional knowledge bases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,687
inproceedings
winder-etal-2017-ntucle
{NTUCLE}: Developing a Corpus of Learner {E}nglish to Provide Writing Support for Engineering Students
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5901/
Winder, Roger Vivek Placidus and MacKinnon, Joseph and Li, Shu Yun and Lin, Benedict Christopher Tzer Liang and Heah, Carmel Lee Hah and Morgado da Costa, Lu{\'i}s and Kuribayashi, Takayuki and Bond, Francis
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
1--11
This paper describes the creation of a new annotated learner corpus. The aim is to use this corpus to develop an automated system for corrective feedback on students' writing. With this system, students will be able to receive timely feedback on language errors before they submit their assignments for grading. A corpus of assignments submitted by first year engineering students was compiled, and a new error tag set for the NTU Corpus of Learner English (NTUCLE) was developed based on that of the NUS Corpus of Learner English (NUCLE), as well as marking rubrics used at NTU. After a description of the corpus, error tag set and annotation process, the paper presents the results of the annotation exercise as well as follow up actions. The final error tag set, which is significantly larger than that for the NUCLE error categories, is then presented before a brief conclusion summarising our experience and future plans.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,689
inproceedings
jiang-lee-2017-carrier
{C}arrier Sentence Selection for Fill-in-the-blank Items
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5903/
Jiang, Shu and Lee, John
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
17--22
Fill-in-the-blank items are a common form of exercise in computer-assisted language learning systems. To automatically generate an effective item, the system must be able to select a high-quality carrier sentence that illustrates the usage of the target word. Previous approaches for carrier sentence selection have considered sentence length, vocabulary difficulty, the position of the target word and the presence of finite verbs. This paper investigates the utility of word co-occurrence statistics and lexical similarity as selection criteria. In an evaluation on generating fill-in-the-blank items for learning Chinese as a foreign language, we show that these two criteria can improve carrier sentence quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,691
inproceedings
redkar-etal-2017-hindi
{H}indi Shabdamitra: A {W}ordnet based {E}-Learning Tool for Language Learning and Teaching
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5904/
Redkar, Hanumant and Singh, Sandhya and Somasundaram, Meenakshi and Gorasia, Dhara and Kulkarni, Malhar and Bhattacharyya, Pushpak
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
23--28
In today's technology-driven digital era, the education domain is undergoing a transformation from traditional approaches to more learner-controlled and flexible methods of learning. This transformation has opened new avenues for interdisciplinary research in the field of educational technology and natural language processing in developing quality digital aids for learning and teaching. The tool presented here, Hindi Shabdamitra, developed using Hindi Wordnet for Hindi language learning, is one such e-learning tool. It has been developed as a teaching and learning aid suitable for a formal school-based curriculum and an informal setup for self-learning users. Besides vocabulary, it also provides word-based grammar along with images and pronunciation for better learning and retention. This aid demonstrates how a rich lexical resource like a wordnet can be systematically remodeled for practical usage in the educational domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,692
inproceedings
fung-etal-2017-nlptea
{NLPTEA} 2017 Shared Task {--} {C}hinese Spelling Check
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5905/
Fung, Gabriel and Debosschere, Maxime and Wang, Dingmin and Li, Bo and Zhu, Jia and Wong, Kam-Fai
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
29--34
This paper provides an overview along with our findings of the Chinese Spelling Check shared task at NLPTEA 2017. The goal of this task is to develop a computer-assisted system to automatically diagnose typing errors in traditional Chinese sentences written by students. We defined six types of errors which belong to two categories. Given a sentence, the system should detect where the errors are, and for each detected error determine its type and provide correction suggestions. We designed, constructed, and released a benchmark dataset for this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,693
inproceedings
yeh-etal-2017-chinese
{C}hinese Spelling Check based on N-gram and String Matching Algorithm
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5906/
Yeh, Jui-Feng and Chang, Li-Ting and Liu, Chan-Yi and Hsu, Tsung-Wei
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
35--38
This paper presents a Chinese spelling check approach based on language models combined with a string matching algorithm to treat the problems resulting from the influence of the Cantonese mother tongue. N-grams are first used to estimate the probability of the sentence constructed by the writer; then a string matching algorithm, the Knuth-Morris-Pratt (KMP) algorithm, is used to detect and correct the errors. According to the experimental results, the proposed approach can detect the errors and provide the corresponding corrections.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,694
inproceedings
zhao-etal-2017-n
N-gram Model for {C}hinese Grammatical Error Diagnosis
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5907/
Zhao, Jianbo and Liu, Hao and Bao, Zuyi and Bai, Xiaopeng and Li, Si and Lin, Zhiqing
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
39--44
Detection and correction of Chinese grammatical errors have been two of the major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in the NLPTEA 2017 task. The experimental results show that the proposed method is good at correction of Chinese grammatical errors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,695
inproceedings
horbach-etal-2017-influence
The Influence of Spelling Errors on Content Scoring Performance
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5908/
Horbach, Andrea and Ding, Yuning and Zesch, Torsten
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
45--53
Spelling errors occur frequently in educational settings, but their influence on automatic scoring is largely unknown. We therefore investigate the influence of spelling errors on content scoring performance using the example of the ASAP corpus. We conduct an annotation study on the nature of spelling errors in the ASAP dataset and utilize these findings in machine learning experiments that measure the influence of spelling errors on automatic content scoring. Our main finding is that scoring methods using both token and character n-gram features are robust against spelling errors up to the error frequency in ASAP.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,696
inproceedings
mizumoto-nagata-2017-analyzing
Analyzing the Impact of Spelling Errors on {POS}-Tagging and Chunking in Learner {E}nglish
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5909/
Mizumoto, Tomoya and Nagata, Ryo
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
54--58
Part-of-speech (POS) tagging and chunking have been used in tasks targeting learner English; however, to the best of our knowledge, few studies have evaluated their performance and no studies have revealed the causes of POS-tagging/chunking errors in detail. Therefore, we investigate performance and analyze the causes of failure. We focus on spelling errors that occur frequently in learner English. We demonstrate that spelling errors reduced POS-tagging performance by 0.23{\%}, and that a spell checker is not necessary for POS-tagging/chunking of learner English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,697
inproceedings
zampieri-etal-2017-complex
Complex Word Identification: Challenges in Data Annotation and System Performance
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5910/
Zampieri, Marcos and Malmasi, Shervin and Paetzold, Gustavo and Specia, Lucia
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
59--63
This paper revisits the problem of complex word identification (CWI) following up the SemEval CWI shared task. We use ensemble classifiers to investigate how well computational methods can discriminate between complex and non-complex words. Furthermore, we analyze the classification performance to understand what makes lexical complexity challenging. Our findings show that most systems performed poorly on the SemEval CWI dataset, and one of the reasons for that is the way in which human annotation was performed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,698
inproceedings
shioda-etal-2017-suggesting
Suggesting Sentences for {ESL} using Kernel Embeddings
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5911/
Shioda, Kent and Komachi, Mamoru and Ikeya, Rue and Mochihashi, Daichi
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
64--68
Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. The recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the lack of modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models the latent intent of the query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,699
inproceedings
bedi-etal-2017-event
Event Timeline Generation from History Textbooks
Tseng, Yuen-Hsien and Chen, Hsin-Hsi and Lee, Lung-Hao and Yu, Liang-Chih
dec
2017
Taipei, Taiwan
Asian Federation of Natural Language Processing
https://aclanthology.org/W17-5912/
Bedi, Harsimran and Patil, Sangameshwar and Hingmire, Swapnil and Palshikar, Girish
Proceedings of the 4th Workshop on Natural Language Processing Techniques for Educational Applications ({NLPTEA} 2017)
69--77
Event timeline serves as the basic structure of history, and it is used as a disposition of key phenomena in studying history as a subject in secondary school. In order to enable a student to understand a historical phenomenon as a series of connected events, we present a system for automatic event timeline generation from history textbooks. Additionally, we propose Message Sequence Chart (MSC) and time-map based visualization techniques to visualize an event timeline. We also identify key computational challenges in developing natural language processing based applications for history textbooks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,700
inproceedings
wang-etal-2017-group
Group Linguistic Bias Aware Neural Response Generation
Zhang, Yue and Sui, Zhifang
dec
2017
Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-6001/
Wang, Jianan and Wang, Xin and Li, Fang and Xu, Zhen and Wang, Zhuoran and Wang, Baoxun
Proceedings of the 9th {SIGHAN} Workshop on {C}hinese Language Processing
1--10
For practical chatbots, one of the essential factors for improving user experience is the capability of customizing the talking style of the agents, that is, to make chatbots provide responses meeting users' preferences on language styles, topics, etc. To address this issue, this paper proposes to incorporate linguistic biases, which are implicitly involved in the conversation corpora generated by human groups in Social Network Services (SNS), into the encoder-decoder based response generator. By attaching a specially designed neural component to dynamically control the impact of linguistic biases in response generation, a Group Linguistic Bias Aware Neural Response Generation (GLBA-NRG) model is eventually presented. The experimental results on the dataset from the Chinese SNS show that the proposed architecture outperforms current response generation models by producing both meaningful and vivid responses with customized styles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,702
inproceedings
bao-etal-2017-neural
Neural Regularized Domain Adaptation for {C}hinese Word Segmentation
Zhang, Yue and Sui, Zhifang
dec
2017
Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-6002/
Bao, Zuyi and Li, Si and Xu, Weiran and Gao, Sheng
Proceedings of the 9th {SIGHAN} Workshop on {C}hinese Language Processing
11--20
For Chinese word segmentation, large-scale annotated corpora mainly focus on newswire, and only a handful of annotated data is available in other domains such as patents and literature. Considering the limited amount of annotated target-domain data, it is a challenge for segmenters to learn domain-specific information while avoiding overfitting at the same time. In this paper, we propose a neural regularized domain adaptation method for Chinese word segmentation. The teacher networks trained in the source domain are employed to regularize the training process of the student network by preserving the general knowledge. In the experiments, our neural regularized domain adaptation method achieves better performance compared to previous methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,703
inproceedings
benajiba-etal-2017-sentimental
The Sentimental Value of {C}hinese Sub-Character Components
Zhang, Yue and Sui, Zhifang
dec
2017
Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-6003/
Benajiba, Yassine and Biran, Or and Weng, Zhiliang and Zhang, Yong and Sun, Jin
Proceedings of the 9th {SIGHAN} Workshop on {C}hinese Language Processing
21--29
Sub-character components of Chinese characters carry important semantic information, and recent studies have shown that utilizing this information can improve performance on core semantic tasks. In this paper, we hypothesize that in addition to semantic information, sub-character components may also carry emotional information, and that utilizing it should improve performance on sentiment analysis tasks. We conduct a series of experiments on four Chinese sentiment data sets and show that we can significantly improve the performance in various tasks over that of a character-level embeddings baseline. We then focus on qualitatively assessing multiple examples and trying to explain how the sub-character components affect the results in each case.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,704
inproceedings
li-etal-2017-chinese
{C}hinese Answer Extraction Based on {POS} Tree and Genetic Algorithm
Zhang, Yue and Sui, Zhifang
dec
2017
Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-6004/
Li, Shuihua and Zhang, Xiaoming and Li, Zhoujun
Proceedings of the 9th {SIGHAN} Workshop on {C}hinese Language Processing
30--36
Answer extraction is the most important part of a Chinese web-based question answering system. In order to enhance the robustness and adaptability of answer extraction to new domains and eliminate the influence of incomplete and noisy search snippets, we propose two new answer extraction methods. We utilize text patterns to generate Part-of-Speech (POS) patterns. In addition, a method is proposed to construct a POS tree by using these POS patterns. The POS tree is useful for candidate answer extraction in web-based question answering. To retrieve an efficient POS tree, the similarities between questions are used to select the question-answer pairs whose questions are similar to the unanswered question. Then, the POS tree is improved based on these question-answer pairs. In order to rank these candidate answers, the weights of the leaf nodes of the POS tree are calculated using a heuristic method. Moreover, the Genetic Algorithm (GA) is used to train the weights. The experimental results of 10-fold cross-validation show that the weighted POS tree trained by GA can improve the accuracy of answer extraction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,705
inproceedings
huang-etal-2017-learning-parenthetical
Learning from Parenthetical Sentences for Term Translation in Machine Translation
Zhang, Yue and Sui, Zhifang
dec
2017
Taiwan
Association for Computational Linguistics
https://aclanthology.org/W17-6005/
Huang, Guoping and Zhang, Jiajun and Zhou, Yu and Zong, Chengqing
Proceedings of the 9th {SIGHAN} Workshop on {C}hinese Language Processing
37--45
Terms exist extensively in specific domains, and term translation plays a critical role in domain-specific machine translation (MT) tasks. However, it is a challenging task to translate them correctly because of the huge number of pre-existing terms and the endless stream of new terms. To achieve better term translation quality, it is necessary to inject external term knowledge into the underlying MT system. Fortunately, there is plenty of term translation knowledge in parenthetical sentences on the Internet. In this paper, we propose a simple, straightforward and effective framework to improve term translation by learning from parenthetical sentences. This framework includes: (1) a focused web crawler; (2) a parenthetical sentence filter, acquiring parenthetical sentences including bilingual term pairs; (3) a term translation knowledge extractor, extracting bilingual term translation candidates; (4) a probability learner, generating the term translation table for MT decoders. The extensive experiments demonstrate that our proposed framework significantly improves the translation quality of terms and sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,706
inproceedings
kawahara-etal-2017-automatically
Automatically Acquired Lexical Knowledge Improves {J}apanese Joint Morphological and Dependency Analysis
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6301/
Kawahara, Daisuke and Hayashibe, Yuta and Morita, Hajime and Kurohashi, Sadao
Proceedings of the 15th International Conference on Parsing Technologies
1--10
This paper presents a joint model for morphological and dependency analysis based on automatically acquired lexical knowledge. This model takes advantage of rich lexical knowledge to simultaneously resolve word segmentation, POS, and dependency ambiguities. In our experiments on Japanese, we show the effectiveness of our joint model over conventional pipeline models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,723
inproceedings
yu-bohnet-2017-dependency
Dependency Language Models for Transition-based Dependency Parsing
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6302/
Yu, Juntao and Bohnet, Bernd
Proceedings of the 15th International Conference on Parsing Technologies
11--17
In this paper, we present an approach to improve the accuracy of a strong transition-based dependency parser by exploiting dependency language models that are extracted from a large parsed corpus. We integrated a small number of features based on the dependency language models into the parser. To demonstrate the effectiveness of the proposed approach, we evaluate our parser on standard English and Chinese data where the base parser could achieve competitive accuracy scores. Our enhanced parser achieved state-of-the-art accuracy on Chinese data and competitive results on English data. We gained a large absolute improvement of one point (UAS) on Chinese and 0.5 points for English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,724
inproceedings
falenska-cetinoglu-2017-lexicalized
Lexicalized vs. Delexicalized Parsing in Low-Resource Scenarios
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6303/
Falenska, Agnieszka and {\c{C}}etino{\u{g}}lu, {\"O}zlem
Proceedings of the 15th International Conference on Parsing Technologies
18--24
We present a systematic analysis of lexicalized vs. delexicalized parsing in low-resource scenarios, and propose a methodology to choose one method over another under certain conditions. We create a set of simulation experiments on 41 languages and apply our findings to 9 low-resource languages. Experimental results show that our methodology chooses the best approach in 8 out of 9 cases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,725
inproceedings
sagot-martinez-alonso-2017-improving
Improving neural tagging with lexical information
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6304/
Sagot, Beno{\^i}t and Mart{\'i}nez Alonso, H{\'e}ctor
Proceedings of the 15th International Conference on Parsing Technologies
25--31
Neural part-of-speech tagging has achieved competitive results with the incorporation of character-based and pre-trained word embeddings. In this paper, we show that a state-of-the-art bi-LSTM tagger can benefit from using information from morphosyntactic lexicons as additional input. The tagger, trained on several dozen languages, shows a consistent, average improvement when using lexical information, even when also using character-based embeddings, thus showing the complementarity of the different sources of lexical information. The improvements are particularly important for the smaller datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,726
inproceedings
madhyastha-etal-2017-prepositional
Prepositional Phrase Attachment over Word Embedding Products
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6305/
Madhyastha, Pranava Swaroop and Carreras, Xavier and Quattoni, Ariadna
Proceedings of the 15th International Conference on Parsing Technologies
32--43
We present a low-rank multi-linear model for the task of solving prepositional phrase attachment ambiguity (PP task). Our model exploits tensor products of word embeddings, capturing all possible conjunctions of latent embeddings. Our results on a wide range of datasets and task settings show that tensor products are the best compositional operation and that a relatively simple multi-linear model that uses only word embeddings of lexical features can outperform more complex non-linear architectures that exploit the same information. Our proposed model gives the current best reported performance on an out-of-domain evaluation and performs competitively on out-of-domain dependency parsing datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,727
inproceedings
lee-etal-2017-l1
{L}1-{L}2 Parallel Dependency Treebank as Learner Corpus
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6306/
Lee, John and Li, Keying and Leung, Herman
Proceedings of the 15th International Conference on Parsing Technologies
44--49
This opinion paper proposes the use of parallel treebank as learner corpus. We show how an L1-L2 parallel treebank {---} i.e., parse trees of non-native sentences, aligned to the parse trees of their target hypotheses {---} can facilitate retrieval of sentences with specific learner errors. We argue for its benefits, in terms of corpus re-use and interoperability, over a conventional learner corpus annotated with error tags. As a proof of concept, we conduct a case study on word-order errors made by learners of Chinese as a foreign language. We report precision and recall in retrieving a range of word-order error categories from L1-L2 tree pairs annotated in the Universal Dependency framework.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,728
inproceedings
lee-don-2017-splitting
Splitting Complex {E}nglish Sentences
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6307/
Lee, John and Don, J. Buddhika K. Pathirage
Proceedings of the 15th International Conference on Parsing Technologies
50--55
This paper applies parsing technology to the task of syntactic simplification of English sentences, focusing on the identification of text spans that can be removed from a complex sentence. We report the most comprehensive evaluation to-date on this task, using a dataset of sentences that exhibit simplification based on coordination, subordination, punctuation/parataxis, adjectival clauses, participial phrases, and appositive phrases. We train a decision tree with features derived from text span length, POS tags and dependency relations, and show that it significantly outperforms a parser-only baseline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,729
inproceedings
tanaka-etal-2017-hierarchical
Hierarchical Word Structure-based Parsing: A Feasibility Study on {UD}-style Dependency Parsing in {J}apanese
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6308/
Tanaka, Takaaki and Hayashi, Katsuhiko and Nagata, Masaaki
Proceedings of the 15th International Conference on Parsing Technologies
56--60
In applying word-based dependency parsing such as Universal Dependencies (UD) to Japanese, the uncertainty of word segmentation emerges for defining a word unit of the dependencies. We introduce the following hierarchical word structures to dependency parsing in Japanese: morphological units (a short unit word, SUW) and syntactic units (a long unit word, LUW). An SUW can be used to segment a sentence consistently, while it is too short to represent syntactic construction. An LUW is a unit including functional multiwords, and LUW-based analysis facilitates the capturing of syntactic structure and makes parsing results more precise than SUW-based analysis. This paper describes the results of a feasibility study on the ability and the effectiveness of parsing methods based on hierarchical word structure (LUW chunking+parsing) in comparison to single-layer word structure (SUW parsing). We also show that joint analysis of LUW-chunking and dependency parsing improves the performance of identifying predicate-argument structures, while there is not much difference between their overall results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,730
inproceedings
bhat-etal-2017-leveraging
Leveraging Newswire Treebanks for Parsing Conversational Data with Argument Scrambling
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6309/
Bhat, Riyaz A. and Bhat, Irshad and Sharma, Dipti
Proceedings of the 15th International Conference on Parsing Technologies
61--66
We investigate the problem of parsing conversational data of morphologically-rich languages such as Hindi where argument scrambling occurs frequently. We evaluate a state-of-the-art non-linear transition-based parsing system on a new dataset containing 506 dependency trees for sentences from Bollywood (Hindi) movie scripts and Twitter posts of Hindi monolingual speakers. We show that a dependency parser trained on a newswire treebank is strongly biased towards the canonical structures and degrades when applied to conversational data. Inspired by Transformational Generative Grammar (Chomsky, 1965), we mitigate the sampling bias by generating all theoretically possible alternative word orders of a clause from the existing (kernel) structures in the treebank. Training our parser on canonical and transformed structures improves performance on conversational data by around 9{\%} LAS over the baseline newswire parser.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,731
inproceedings
sogaard-2017-using
Using hyperlinks to improve multilingual partial parsers
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6310/
S{\o}gaard, Anders
Proceedings of the 15th International Conference on Parsing Technologies
67--71
Syntactic annotation is costly and not available for the vast majority of the world's languages. We show that sometimes we can make do with less labeled data by exploiting more readily available forms of mark-up. Specifically, we revisit an idea from Valentin Spitkovsky's work (2010), namely that hyperlinks typically bracket syntactic constituents or chunks. We strengthen his results by showing that not only can hyperlinks help in low-resource scenarios, exemplified here by Quechua, but learning from hyperlinks can also improve state-of-the-art NLP models for English newswire. We also present out-of-domain evaluation on English Ontonotes 4.0.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,732
inproceedings
kurtz-kuhlmann-2017-exploiting
Exploiting Structure in Parsing to 1-Endpoint-Crossing Graphs
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6312/
Kurtz, Robin and Kuhlmann, Marco
Proceedings of the 15th International Conference on Parsing Technologies
78--87
Deep dependency parsing can be cast as the search for maximum acyclic subgraphs in weighted digraphs. Because this search problem is intractable in the general case, we consider its restriction to the class of 1-endpoint-crossing (1ec) graphs, which has high coverage on standard data sets. Our main contribution is a characterization of 1ec graphs as a subclass of the graphs with pagenumber at most 3. Building on this we show how to extend an existing parsing algorithm for 1-endpoint-crossing trees to the full class. While the runtime complexity of the extended algorithm is polynomial in the length of the input sentence, it features a large constant, which poses a challenge for practical implementations.
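As a concrete aside on the graph class this entry studies, here is a brute-force checker of the 1-endpoint-crossing property over linearly ordered positions: for every edge, all edges crossing it must share a single endpoint. This is only a property test on a given graph, not the paper's parsing algorithm or its pagenumber characterization.

```python
# Sketch: check whether a graph over sentence positions is 1-endpoint-crossing.

def crosses(e, f):
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def is_1ec(edges):
    edges = [tuple(sorted(e)) for e in edges]
    for e in edges:
        crossing = [f for f in edges if crosses(e, f)]
        if not crossing:
            continue
        common = set(crossing[0])          # vertices shared by all crossers of e
        for f in crossing[1:]:
            common &= set(f)
        if not common:
            return False
    return True

if __name__ == "__main__":
    print(is_1ec({(1, 4), (2, 5), (3, 6)}))   # False: crossers of (1,4) share no vertex
    print(is_1ec({(1, 3), (2, 4), (2, 6)}))   # True: all crossings pivot on vertex 2
```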
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,734
inproceedings
kohita-etal-2017-effective
Effective Online Reordering with Arc-Eager Transitions
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6313/
Kohita, Ryosuke and Noji, Hiroshi and Matsumoto, Yuji
Proceedings of the 15th International Conference on Parsing Technologies
88--98
We present a new transition system with word reordering for unrestricted non-projective dependency parsing. Our system is based on decomposed arc-eager rather than arc-standard, which allows more flexible ambiguity resolution between a local projective and non-local crossing attachment. In our experiment on Universal Dependencies 2.0, we find our parser outperforms the ordinary swap-based parser particularly on languages with a large amount of non-projectivity.
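For orientation, the sketch below implements the plain arc-eager transition system that this work builds on; the paper's actual contribution (decomposed transitions plus on-line word reordering) is not reproduced here, and transition preconditions are omitted for brevity.

```python
# Sketch: plain arc-eager transitions.  Configuration = (stack, buffer, arcs);
# indices are 1-based word positions, 0 is the artificial root.

def arc_eager_apply(action, stack, buffer, arcs):
    if action == "SHIFT":
        stack.append(buffer.pop(0))
    elif action == "LEFT-ARC":                 # head = buffer front, dependent = popped stack top
        arcs.add((buffer[0], stack.pop()))
    elif action == "RIGHT-ARC":                # head = stack top, dependent = buffer front (then pushed)
        arcs.add((stack[-1], buffer[0]))
        stack.append(buffer.pop(0))
    elif action == "REDUCE":
        stack.pop()
    return stack, buffer, arcs

if __name__ == "__main__":
    # Toy sentence: economic(1) news(2) had(3) little(4) effect(5)
    stack, buffer, arcs = [0], [1, 2, 3, 4, 5], set()
    for a in ["SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC",
              "RIGHT-ARC", "SHIFT", "LEFT-ARC", "RIGHT-ARC"]:
        stack, buffer, arcs = arc_eager_apply(a, stack, buffer, arcs)
    print(sorted(arcs))   # (head, dependent) pairs
```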
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,735
inproceedings
de-lhoneux-etal-2017-arc
Arc-Hybrid Non-Projective Dependency Parsing with a Static-Dynamic Oracle
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6314/
de Lhoneux, Miryam and Stymne, Sara and Nivre, Joakim
Proceedings of the 15th International Conference on Parsing Technologies
99--104
In this paper, we extend the arc-hybrid system for transition-based parsing with a swap transition that enables reordering of the words and construction of non-projective trees. Although this extension breaks the arc-decomposability of the transition system, we show how the existing dynamic oracle for this system can be modified and combined with a static oracle only for the swap transition. Experiments on 5 languages show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,736
inproceedings
liu-zhang-2017-encoder
Encoder-Decoder Shift-Reduce Syntactic Parsing
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6315/
Liu, Jiangming and Zhang, Yue
Proceedings of the 15th International Conference on Parsing Technologies
105--114
Encoder-decoder neural networks have been used for many NLP tasks, such as neural machine translation. They have also been applied to constituent parsing by using bracketed tree structures as a target language, translating input sentences into syntactic trees. A more commonly used method to linearize syntactic trees is the shift-reduce system, which uses a sequence of transition-actions to build trees. We empirically investigate the effectiveness of applying the encoder-decoder network to transition-based parsing. On standard benchmarks, our system gives comparable results to the stack LSTM parser for dependency parsing, and significantly better results compared to the aforementioned parser for constituent parsing, which uses bracketed tree formats.
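The sketch below illustrates the linearization step this entry relies on: turning a constituent tree into a shift-reduce action sequence, i.e. the kind of target sequence an encoder-decoder can be trained to emit instead of a bracketed tree. The s-expression tree format and the two-action inventory are simplifications, not the paper's exact transition set.

```python
# Sketch: linearise a constituent tree into SHIFT / REDUCE-X actions.

def linearize(tree):
    """tree: ('NP', [child, ...]) for internal nodes, a word string for leaves."""
    if isinstance(tree, str):
        return ["SHIFT"]
    label, children = tree
    actions = []
    for child in children:
        actions += linearize(child)
    actions.append(f"REDUCE-{label}")
    return actions

if __name__ == "__main__":
    tree = ("S", [("NP", ["the", "cat"]),
                  ("VP", ["sat", ("PP", ["on", ("NP", ["the", "mat"])])])])
    print(linearize(tree))
    # ['SHIFT', 'SHIFT', 'REDUCE-NP', 'SHIFT', 'SHIFT', 'SHIFT', 'SHIFT',
    #  'REDUCE-NP', 'REDUCE-PP', 'REDUCE-VP', 'REDUCE-S']
```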
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,737
inproceedings
ballesteros-carreras-2017-arc
Arc-Standard Spinal Parsing with Stack-{LSTM}s
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6316/
Ballesteros, Miguel and Carreras, Xavier
Proceedings of the 15th International Conference on Parsing Technologies
115--121
We present a neural transition-based parser for spinal trees, a dependency representation of constituent trees. The parser uses Stack-LSTMs that compose constituent nodes with dependency-based derivations. In experiments, we show that this model adapts to different styles of dependency relations, but this choice has little effect for predicting constituent structure, suggesting that LSTMs induce useful states by themselves.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,738
inproceedings
teichmann-etal-2017-coarse
Coarse-To-Fine Parsing for Expressive Grammar Formalisms
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6317/
Teichmann, Christoph and Koller, Alexander and Groschwitz, Jonas
Proceedings of the 15th International Conference on Parsing Technologies
122--127
We generalize coarse-to-fine parsing to grammar formalisms that are more expressive than PCFGs and/or describe languages of trees or graphs. We evaluate our algorithm on PCFG, PTAG, and graph parsing. While we achieve the expected performance gains on PCFGs, coarse-to-fine does not help for PTAG and can even slow down parsing for graphs. We discuss the implications of this finding.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,739
inproceedings
do-rehbein-2017-evaluating
Evaluating {LSTM} models for grammatical function labelling
Miyao, Yusuke and Sagae, Kenji
sep
2017
Pisa, Italy
Association for Computational Linguistics
https://aclanthology.org/W17-6318/
Do, Bich-Ngoc and Rehbein, Ines
Proceedings of the 15th International Conference on Parsing Technologies
128--133
To improve grammatical function labelling for German, we augment the labelling component of a neural dependency parser with a decision history. We present different ways to encode the history, using different LSTM architectures, and show that our models yield significant improvements, resulting in a LAS for German that is close to the best result from the SPMRL 2014 shared task (without the reranker).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
55,740
inproceedings
pisarevskaya-etal-2017-deception
Deception Detection for the {R}ussian Language: Lexical and Syntactic Parameters
Makary, Mireille and Oakes, Michael
sep
2017
Varna, Bulgaria
INCOMA Inc.
https://aclanthology.org/W17-7701/
Pisarevskaya, Dina and Litvinova, Tatiana and Litvinova, Olga
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with {RANLP} 2017
1--10
The field of automated deception detection in written texts is methodologically challenging. Different linguistic levels (lexics, syntax and semantics) are basically used for different types of English texts to reveal if they are truthful or deceptive. Such parameters as POS tags and POS tags n-grams, punctuation marks, sentiment polarity of words, psycholinguistic features, fragments of syntactic structures are taken into consideration. The importance of different types of parameters was not compared for the Russian language before and should be investigated before moving to complex models and higher levels of linguistic processing. On the example of the Russian Deception Bank Corpus we estimate the impact of three groups of features (POS features including bigrams, sentiment and psycholinguistic features, syntax and readability features) on the successful deception detection and find out that POS features can be used for binary text classification, but the results should be double-checked and, if possible, improved.
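A minimal sketch of the best-performing feature group reported here (POS uni- and bigrams feeding a binary classifier). The tiny POS-tag sequences are made-up stand-ins for the Russian Deception Bank, and the scikit-learn pipeline is an illustrative choice, not the authors' setup.

```python
# Sketch: truthful/deceptive classification from POS-tag n-grams.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pos_sequences = [
    "PRON VERB NOUN ADP NOUN",      # truthful (toy)
    "NOUN VERB ADJ NOUN PUNCT",     # truthful (toy)
    "ADJ ADJ ADV VERB PRON PRON",   # deceptive (toy)
    "ADV ADJ VERB ADJ ADV PUNCT",   # deceptive (toy)
]
labels = [0, 0, 1, 1]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # POS uni- and bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(pos_sequences, labels)
print(model.predict(["ADJ ADV VERB PRON PUNCT"]))
```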
null
null
10.26615/978-954-452-038-0_001
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,008
inproceedings
cercel-etal-2017-oiqa
o{IQ}a: An Opinion Influence Oriented Question Answering Framework with Applications to Marketing Domain
Makary, Mireille and Oakes, Michael
sep
2017
Varna, Bulgaria
INCOMA Inc.
https://aclanthology.org/W17-7702/
Cercel, Dumitru-Clementin and Onose, Cristian and Trausan-Matu, Stefan and Pop, Florin
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with {RANLP} 2017
11--18
Understanding questions and answers in QA system is a major challenge in the domain of natural language processing. In this paper, we present a question answering system that influences the human opinions in a conversation. The opinion words are quantified by using a lexicon-based method. We apply Latent Semantic Analysis and the cosine similarity measure between candidate answers and each question to infer the answer of the chatbot.
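The following sketch illustrates the answer-selection step this abstract describes: project the question and candidate answers into an LSA space and pick the candidate with the highest cosine similarity. TruncatedSVD over TF-IDF stands in for the LSA step, and the candidate sentences are invented.

```python
# Sketch: LSA + cosine similarity for ranking candidate answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

candidates = [
    "This phone has an excellent camera and battery life.",
    "Delivery took three weeks and support never answered.",
    "The camera quality is praised by most reviewers.",
]
question = "How good is the camera on this phone?"

tfidf = TfidfVectorizer().fit(candidates + [question])
svd = TruncatedSVD(n_components=2, random_state=0)      # low rank for a toy corpus
vectors = svd.fit_transform(tfidf.transform(candidates + [question]))

scores = cosine_similarity(vectors[-1:], vectors[:-1])[0]
best = scores.argmax()
print(candidates[best], scores[best])
```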
null
null
10.26615/978-954-452-038-0_002
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,009
inproceedings
sanchan-etal-2017-automatic
Automatic Summarization of Online Debates
Makary, Mireille and Oakes, Michael
sep
2017
Varna, Bulgaria
INCOMA Inc.
https://aclanthology.org/W17-7703/
Sanchan, Nattapong and Aker, Ahmet and Bontcheva, Kalina
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with {RANLP} 2017
19--27
Debate summarization is one of the novel and challenging research areas in automatic text summarization which has been largely unexplored. In this paper, we develop a debate summarization pipeline to summarize key topics which are discussed or argued in the two opposing sides of online debates. We view that the generation of debate summaries can be achieved by clustering, cluster labeling, and visualization. In our work, we investigate two different clustering approaches for the generation of the summaries. In the first approach, we generate the summaries by applying purely term-based clustering and cluster labeling. The second approach makes use of X-means for clustering and Mutual Information for labeling the clusters. Both approaches are driven by ontologies. We visualize the results using bar charts. We think that our results are a smooth entry for users aiming to receive the first impression about what is discussed within a debate topic containing a vast number of argumentations.
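A rough sketch of the second pipeline mentioned above: cluster debate posts and label each cluster with high-mutual-information terms. KMeans stands in for X-means (scikit-learn has no X-means implementation), the four toy posts are invented, and the ontology-driven part of the paper is not reproduced.

```python
# Sketch: cluster posts, then label clusters by mutual information with terms.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

posts = [
    "nuclear power is a clean energy source",
    "nuclear energy risks outweigh the benefits",
    "renewable subsidies create green jobs",
    "wind and solar subsidies distort the market",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

terms = vec.get_feature_names_out()
for c in sorted(set(clusters)):
    in_cluster = (clusters == c).astype(int)
    # MI between "term occurs in post" and "post belongs to cluster c"
    mi = [mutual_info_score(in_cluster, (X[:, j].toarray().ravel() > 0).astype(int))
          for j in range(X.shape[1])]
    top = np.argsort(mi)[::-1][:3]
    print(f"cluster {c}:", [terms[j] for j in top])
```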
null
null
10.26615/978-954-452-038-0_003
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,010
inproceedings
salem-etal-2017-game
A Game with a Purpose for Automatic Detection of Children`s Speech Disabilities using Limited Speech Resources
Makary, Mireille and Oakes, Michael
sep
2017
Varna, Bulgaria
INCOMA Inc.
https://aclanthology.org/W17-7704/
Salem, Reem and Elmahdy, Mohamed and Abdennadher, Slim and Hamed, Injy
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with {RANLP} 2017
28--34
Speech therapists and researchers are becoming more concerned with the use of computer-based systems in the therapy of speech disorders. In this paper, we propose a computer-based game with a purpose (GWAP) for speech therapy of Egyptian speaking children suffering from Dyslalia. Our aim is to detect if a certain phoneme is pronounced correctly. An Egyptian Arabic speech corpus has been collected. A baseline acoustic model was trained using the Egyptian corpus. In order to benefit from existing large amounts of Modern Standard Arabic (MSA) resources, MSA acoustic models were adapted with the collected Egyptian corpus. An independent testing set that covers common speech disorders has been collected for Egyptian speakers. Results show that adapted acoustic models give better recognition accuracy which could be relied on in the game and that children show more interest in playing the game than in visiting the therapist. A noticeable progress in children Dyslalia appeared with the proposed system.
null
null
10.26615/978-954-452-038-0_004
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,011
inproceedings
lejtovicz-dorn-2017-connecting
Connecting people digitally - a semantic web based approach to linking heterogeneous data sets
Zervanou, Kalliopi and Osenova, Petya and Wandl-Vogt, Eveline and Cristea, Dan
sep
2017
Varna
INCOMA Inc.
https://aclanthology.org/W17-7801/
Lejtovicz, Katalin and Dorn, Amelie
Proceedings of the Workshop Knowledge Resources for the Socio-Economic Sciences and Humanities associated with {RANLP} 2017
1--8
In this paper we present a semantic enrichment approach for linking two distinct data sets: the {\"O}BL (Austrian Biographical Dictionary) and the DB{\"O} (Database of Bavarian Dialects in Austria). Although the data sets are different in their content and in the structuring of data, they contain similar common {\textquotedblleft}entities{\textquotedblright} such as names of persons. Here we describe the semantic enrichment process of how these data sets can be inter-linked through URIs (Uniform Resource Identifiers) taking person names as a concrete example. Moreover, we also point to societal benefits of applying such semantic enrichment methods in order to open and connect our resources to various services.
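A small sketch of the person-name linking step, assuming rdflib: mint owl:sameAs links between resources from the two collections whose normalised person names match. The URI namespaces, identifiers, and exact-match comparison are assumptions for illustration; real matching would need fuzzier name comparison.

```python
# Sketch: link persons across two data sets via owl:sameAs triples.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

OEBL = Namespace("http://example.org/oebl/person/")   # hypothetical namespaces
DBOE = Namespace("http://example.org/dboe/person/")

oebl_persons = {"grillparzer_franz": "Franz Grillparzer"}   # toy records
dboe_persons = {"p0042": "Franz Grillparzer"}

g = Graph()
for oebl_id, oebl_name in oebl_persons.items():
    for dboe_id, dboe_name in dboe_persons.items():
        if oebl_name.casefold() == dboe_name.casefold():    # exact-match baseline
            g.add((OEBL[oebl_id], OWL.sameAs, DBOE[dboe_id]))

print(g.serialize(format="turtle"))
```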
null
null
10.26615/978-954-452-040-3_001
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,013
inproceedings
colhon-etal-2017-multiform
A Multiform Balanced Dependency Treebank for {R}omanian
Zervanou, Kalliopi and Osenova, Petya and Wandl-Vogt, Eveline and Cristea, Dan
sep
2017
Varna
INCOMA Inc.
https://aclanthology.org/W17-7802/
Colhon, Mihaela and M{\u{a}}r{\u{a}}nduc, C{\u{a}}t{\u{a}}lina and Mititelu, C{\u{a}}t{\u{a}}lin
Proceedings of the Workshop Knowledge Resources for the Socio-Economic Sciences and Humanities associated with {RANLP} 2017
9--18
The UAIC-RoDia-DepTb is a balanced treebank, containing texts in non-standard language: 2,575 chats sentences, old Romanian texts (a Gospel printed in 1648, a codex of laws printed in 1818, a novel written in 1910), regional popular poetry, legal texts, Romanian and foreign fiction, quotations. The proportions are comparable; each of these types of texts is represented by subsets of at least 1,000 phrases, so that the parser can be trained on their peculiarities. The annotation of the treebank started in 2007, and it has classical tags, such as those in school grammar, with the intention of using the resource for didactic purposes. The classification of circumstantial modifiers is rich in semantic information. We present in this paper the development in progress of this resource which has been automatically annotated and entirely manually corrected. We try to add new texts, and to make it available in more formats, by keeping all the morphological and syntactic information annotated, and adding logical-semantic information. We will describe here two conversions, from the classic syntactic format into Universal Dependencies format and into a logical-semantic layer, which will be shortly presented.
null
null
10.26615/978-954-452-040-3_002
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,014
inproceedings
fokkens-etal-2017-grasp
{GR}a{SP}: Grounded Representation and Source Perspective
Zervanou, Kalliopi and Osenova, Petya and Wandl-Vogt, Eveline and Cristea, Dan
sep
2017
Varna
INCOMA Inc.
https://aclanthology.org/W17-7803/
Fokkens, Antske and Vossen, Piek and Rospocher, Marco and Hoekstra, Rinke and van Hage, Willem Robert
Proceedings of the Workshop Knowledge Resources for the Socio-Economic Sciences and Humanities associated with {RANLP} 2017
19--25
When people or organizations provide information, they make choices regarding what information they include and how they present it. The combination of these two aspects (the content and stance provided by the source) represents a perspective. Investigating differences in perspective can provide various useful insights in the reliability of information, the way perspectives change over time, shared beliefs among groups of a similar social or political background and contrasts between other groups, etc. This paper introduces GRaSP, a generic framework for modeling perspectives and their sources.
null
null
10.26615/978-954-452-040-3_003
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,015
inproceedings
stambolieva-2017-educational
Educational Content Generation for Business and Administration {FL} Courses with the {NBU} {PLT} Platform
Zervanou, Kalliopi and Osenova, Petya and Wandl-Vogt, Eveline and Cristea, Dan
sep
2017
Varna
INCOMA Inc.
https://aclanthology.org/W17-7804/
Stambolieva, Maria
Proceedings of the Workshop Knowledge Resources for the Socio-Economic Sciences and Humanities associated with {RANLP} 2017
26--30
The paper presents part of an ongoing project of the Laboratory for Language Technologies of New Bulgarian University {--} {\textquotedblleft}An e-Platform for Language Teaching (PLT){\textquotedblright} {--} the development of corpus-based teaching content for Business English courses. The presentation offers information on: 1/ corpus creation and corpus management with PLT; 2/ PLT corpus annotation; 3/ language task generation and the Language Task Bank (LTB); 4/ content transfer to the NBU Moodle platform, test generation and feedback on student performance.
null
null
10.26615/978-954-452-040-3_004
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,016
inproceedings
kazakov-etal-2017-machine
Machine Learning Models of Universal Grammar Parameter Dependencies
Zervanou, Kalliopi and Osenova, Petya and Wandl-Vogt, Eveline and Cristea, Dan
sep
2017
Varna
INCOMA Inc.
https://aclanthology.org/W17-7805/
Kazakov, Dimitar and Cordoni, Guido and Ceolin, Andrea and Irimia, Monica-Alexandrina and Kim, Shin-Sook and Michelioudakis, Dimitris and Radkevich, Nina and Guardiano, Cristina and Longobardi, Giuseppe
Proceedings of the Workshop Knowledge Resources for the Socio-Economic Sciences and Humanities associated with {RANLP} 2017
31--37
The use of parameters in the description of natural language syntax has to balance between the need to discriminate among (sometimes subtly different) languages, which can be seen as a cross-linguistic version of Chomsky`s (1964) descriptive adequacy, and the complexity of the acquisition task that a large number of parameters would imply, which is a problem for explanatory adequacy. Here we present a novel approach in which a machine learning algorithm is used to find dependencies in a table of parameters. The result is a dependency graph in which some of the parameters can be fully predicted from others. These empirical findings can be then subjected to linguistic analysis, which may either refute them by providing typological counter-examples of languages not included in the original dataset, dismiss them on theoretical grounds, or uphold them as tentative empirical laws worth of further study.
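A minimal sketch of the idea described above, under stated assumptions: given a languages-by-parameters binary table, test for each parameter whether it is predictable from the remaining ones, here with a shallow decision tree and leave-one-out cross-validation. The toy table, the choice of classifier, and the 0.9 accuracy threshold are all assumptions, not the authors' method.

```python
# Sketch: detect parameters that are predictable from the others.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

params = ["P1", "P2", "P3"]
table = np.array([          # rows = languages, columns = parameters (toy data)
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])

for j, name in enumerate(params):
    X = np.delete(table, j, axis=1)       # all other parameters as predictors
    y = table[:, j]
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    if acc >= 0.9:
        print(f"{name} is predictable from the other parameters (LOO acc {acc:.2f})")
```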
null
null
10.26615/978-954-452-040-3_005
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
56,017