Dataset schema (field name: dtype, observed range or cardinality):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
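Each row of the dataset holds one bibliographic record with the fields above, most of them empty (`null`) for any given entry. A minimal sketch of how such a row can be rendered back into a BibTeX entry is shown below; the `row_to_bibtex` helper and the example `row` dict are illustrative assumptions, not part of the dataset release.

```python
# Hypothetical sketch: turning one flattened row of this dataset back into
# a BibTeX entry string. Field names follow the schema above; null fields
# are skipped, and entry_type / citation_key / __index_level_0__ are
# treated as metadata rather than BibTeX fields.

def row_to_bibtex(row):
    """Render a row dict as a BibTeX entry, skipping null/empty fields."""
    meta_fields = {"entry_type", "citation_key", "__index_level_0__"}
    lines = ["@%s{%s," % (row["entry_type"], row["citation_key"])]
    for key, value in row.items():
        if key in meta_fields or value is None:
            continue
        lines.append('    %s = "%s",' % (key, value))
    lines.append("}")
    return "\n".join(lines)

# Abbreviated example row (most null fields omitted for brevity).
row = {
    "entry_type": "inproceedings",
    "citation_key": "grover-etal-2008-named",
    "title": "Named Entity Recognition for Digitised Historical Texts",
    "year": "2008",
    "pages": None,
    "__index_level_0__": 83642,
}
print(row_to_bibtex(row))
```

The same loop works for every record below: only the non-null fields survive into the entry body, which is exactly how the reconstructed entries in this file are laid out.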
@inproceedings{grover-etal-2008-named,
    title = "Named Entity Recognition for Digitised Historical Texts",
    author = "Grover, Claire and Givon, Sharon and Tobin, Richard and Ball, Julian",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1253/",
    abstract = "We describe and evaluate a prototype system for recognising person and place names in digitised records of British parliamentary proceedings from the late 17th and early 19th centuries. The output of an OCR engine is the input for our system and we describe certain issues and errors in this data and discuss the methods we have used to overcome the problems. We describe our rule-based named entity recognition system for person and place names which is implemented using the LT-XML2 and LT-TTT2 text processing tools. We discuss the annotation of a development and testing corpus and provide results of an evaluation of our system on the test corpus.",
}
% __index_level_0__: 83,642
@inproceedings{song-strassel-2008-entity,
    title = "Entity Translation and Alignment in the {ACE}-07 {ET} Task",
    author = "Song, Zhiyi and Strassel, Stephanie",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1254/",
    abstract = "Entities - people, organizations, locations and the like - have long been a central focus of natural language processing technology development, since entities convey essential content in human languages. For multilingual systems, accurate translation of named entities and their descriptors is critical. LDC produced Entity Translation pilot data to support the ACE ET 2007 Evaluation and the current paper delves more deeply into the entity alignment issue across languages, combining the automatic alignment techniques developed for ACE-07 with manual alignment. Altogether 84{\%} of the Chinese-English entity mentions and 74{\%} of the Arabic-English entity mentions are perfectly aligned. The results of this investigation offer several important insights. Automatic alignment algorithms predicted that perfect alignment for the ET corpus was likely to be no greater than 55{\%}; perfect alignment on the 15 pilot documents was predicted at 62.5{\%}. Our results suggest the actual perfect alignment rate is substantially higher (82{\%} average, 92{\%} for NAM entities). The careful analysis of alignment errors also suggests strategies for human translation to support the ET task; for instance, translators might be given additional guidance about preferred treatments of name versus nominal translation. These results can also contribute to refined methods of evaluating ET systems.",
}
% __index_level_0__: 83,643
@inproceedings{kiyota-etal-2008-automated,
    title = "Automated Subject Induction from Query Keywords through {W}ikipedia Categories and Subject Headings",
    author = "Kiyota, Yoji and Tamura, Noriyuki and Sakai, Satoshi and Nakagawa, Hiroshi and Masuda, Hidetaka",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1255/",
    abstract = "This paper addresses a novel approach that integrates two different types of information resources: the World Wide Web and libraries. This approach is based on a hypothesis: advantages and disadvantages of the Web and libraries are complemental. The integration is based on correspondent conceptual label names between the Wikipedia categories and subject headings of library materials. The method enables us to find locations of bookshelves in a library easily, using any query keywords. Any keywords which are registered as Wikipedia items are acceptable. The advantages of the method are: the integrative approach makes subject access of library resources have broader coverage than an approach which only uses subject headings; and the approach navigates us to reliable information resources. We implemented the proposed method into an application system, and are now operating the system at several university libraries in Japan. We are planning to evaluate the method based on the query logs collected by the system.",
}
% __index_level_0__: 83,644
@inproceedings{sellberg-jonsson-2008-using,
    title = "Using Random Indexing to improve Singular Value Decomposition for Latent Semantic Analysis",
    author = "Sellberg, Linus and J{\"o}nsson, Arne",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1256/",
    abstract = "In this paper we present results from using Random Indexing for Latent Semantic Analysis to handle Singular Value Decomposition tractability issues. In the paper we compare Latent Semantic Analysis, Random Indexing and Latent Semantic Analysis on Random Indexing reduced matrices. Our results show that Latent Semantic Analysis on Random Indexing reduced matrices provides better results on precision and recall than Random Indexing alone. Furthermore, computation time for Singular Value Decomposition on a Random Indexing reduced matrix is almost halved compared to Latent Semantic Analysis.",
}
% __index_level_0__: 83,645
@inproceedings{vintar-fiser-2008-harvesting,
    title = "Harvesting Multi-Word Expressions from Parallel Corpora",
    author = "Vintar, {\v{S}}pela and Fi{\v{s}}er, Darja",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1257/",
    abstract = "The paper presents a set of approaches to extend the automatically created Slovene wordnet with nominal multi-word expressions. In the first approach multi-word expressions from Princeton WordNet are translated with a technique that is based on word-alignment and lexico-syntactic patterns. This is followed by extracting new terms from a monolingual corpus using keywordness ranking and contextual patterns. Finally, the multi-word expressions are assigned a hypernym and added to our wordnet. Manual evaluation and comparison of the results shows that the translation approach is the most straightforward and accurate. However, it is successfully complemented by the two monolingual approaches which are able to identify more term candidates in the corpus that would otherwise go unnoticed. Some weaknesses of the proposed wordnet extension techniques are also addressed.",
}
% __index_level_0__: 83,646
@inproceedings{agili-etal-2008-integration,
    title = "Integration of a Multilingual Keyword Extractor in a Document Management System",
    author = "Agili, Andrea and Fabbri, Marco and Panunzi, Alessandro and Zini, Manuel",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1258/",
    abstract = "In this paper we present a new Document Management System called DrStorage. This DMS is multi-platform, JCR-170 compliant, supports WebDav, versioning, user authentication and authorization and the most widespread file formats (Adobe PDF, Microsoft Office, HTML,...). It is also easy to customize in order to enhance its search capabilities and to support automatic metadata assignment. DrStorage has been integrated with an automatic language guesser and with an automatic keyword extractor: these metadata can be assigned automatically to documents, because DrStorage’s server part has been modified so that metadata assignment takes place as documents are put in the repository. Metadata can greatly improve the search capabilities and result quality of a search engine. DrStorage’s client has been customized with two search results views: the first, called timeline view, shows temporal trends of queries as a histogram; the second, keyword cloud, shows which words are correlated, and how strongly, with the results of a particular day.",
}
% __index_level_0__: 83,647
@inproceedings{deksne-etal-2008-dictionary,
    title = "Dictionary of Multiword Expressions for Translation into highly Inflected Languages",
    author = "Deksne, Daiga and Skadi{\c{n}}{\v{s}}, Raivis and Skadi{\c{n}}a, Inguna",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1259/",
    abstract = "Treatment of Multiword Expressions (MWEs) is one of the most complicated issues in natural language processing, especially in Machine Translation (MT). The paper presents a dictionary of MWEs for an English-Latvian MT system, demonstrating how MWEs can be handled for inflected languages with rich morphology and rather free word order. The proposed dictionary of MWEs consists of two constituents: a lexicon of phrases and a set of MWE rules. The lexicon of phrases is rather similar to the translation lexicon of the MT system, while MWE rules describe the syntactic structure of the source and target sentences, allowing correct transformation of different MWE types into the target language and ensuring correct syntactic structure. The paper demonstrates this approach on different MWE types, starting from simple syntactic structures, followed by more complicated cases and including fully idiomatic expressions. Automatic evaluation shows that the described approach increases the quality of translation by 0.6 BLEU points.",
}
% __index_level_0__: 83,648
@inproceedings{vetulani-etal-2008-verb,
    title = "Verb-Noun Collocation {S}ynt{L}ex Dictionary: Corpus-Based Approach",
    author = "Vetulani, Grazyna and Vetulani, Zygmunt and Obr{\k{e}}bski, Tomasz",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1260/",
    abstract = "The project presented here is a part of a long term research program aiming at a full lexicon grammar for Polish (SyntLex). The main concern of this project is computer-assisted acquisition and morpho-syntactic description of verb-noun collocations in Polish. We present the methodology and resources obtained in three main project phases, which are: dictionary-based acquisition of a collocation lexicon, a feasibility study for the corpus-based lexicon enlargement phase, and corpus-based lexicon enlargement and collocation description. In this paper we focus on the results of the third phase. The corpus-based approach presented here allowed us to triple the size of the verb-noun collocation dictionary for Polish. In the paper we describe the SyntLex Dictionary of Collocations and outline future research intended as a separate continuation of the project.",
}
% __index_level_0__: 83,649
@inproceedings{qu-etal-2008-targeting,
    title = "Targeting {C}hinese Nominal Compounds in Corpora",
    author = "Qu, Weiruo and Ringlstetter, Christoph and Goebel, Randy",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1261/",
    abstract = "For compounding languages, a great part of the topical semantics is transported via nominal compounds. Various applications of natural language processing can profit from explicit access to these compounds, provided by a lexicon. The best way to acquire such a resource is to harvest corpora that represent the domain in question. For Chinese, a significant difficulty lies in the fact that the text comes as a string of characters, only segmented by sentence boundaries. Extraction algorithms that solely rely on context variety do not perform precisely enough. We propose a pipeline of filters that starts from a candidate set established by accessor variety and then employs several methods to improve precision. For the experiments the Xinhua part of the Chinese Gigaword Corpus was used. We extracted a random sample of 200 story texts with 119,509 Hanzi characters. All compound words of this evaluation corpus were tagged, segmented into their morphemes, and augmented with the POS-information of their segments. A cascade of filters applied to a preliminary set of compound candidates led to a very high precision of over 90{\%}, measured for the types. The result also holds for a small corpus where a solely contextual method introduces too much noise, even for the longer compounds. An introduction of MI into the basic candidacy algorithm led to a much higher recall with still reasonable precision for subsequent manual processing. Especially for the four-character compounds, that in our sample represent over 40{\%} of the target data, the method has sufficient efficacy to support the rapid construction of compound dictionaries from domain corpora.",
}
% __index_level_0__: 83,650
@inproceedings{ramos-etal-2008-using,
    title = "Using Semantically Annotated Corpora to Build Collocation Resources",
    author = "Ramos, Margarita Alonso and Rambow, Owen and Wanner, Leo",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1262/",
    abstract = "We present an experiment in extracting collocations from the FrameNet corpus, specifically, support verbs such as direct in Environmentalists directed strong criticism at world leaders. Support verbs do not contribute meaning of their own and the meaning of the construction is provided by the noun; the recognition of support verbs is thus useful in text understanding. Having access to a list of support verbs is also useful in applications that can benefit from paraphrasing, such as generation (where paraphrasing can provide variety). This paper starts with a brief presentation of how support verbs are treated in Meaning-Text Theory, where they fall under the notion of lexical function, and then discusses how relevant information is encoded in the FrameNet corpus. We describe the resource extracted from the FrameNet corpus.",
}
% __index_level_0__: 83,651
@inproceedings{kermanidis-etal-2008-eksairesis,
    title = "{E}ksairesis: A Domain-Adaptable System for Ontology Building from Unstructured Text",
    author = "Kermanidis, Katia Lida and Thanopoulos, Aristomenis and Maragoudakis, Manolis and Fakotakis, Nikos",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1263/",
    abstract = "This paper describes Eksairesis, a system for learning economic domain knowledge automatically from Modern Greek text. The knowledge is in the form of economic terms and the semantic relations that govern them. The entire process is based on the use of minimal language-dependent tools, no external linguistic resources, and merely free, unstructured text. The methodology is thereby easily portable to other domains and other languages. The text is pre-processed with basic morphological annotation, and semantic (named and other) entities are identified using supervised learning techniques. Statistical filtering, i.e. corpora comparison, is used to extract domain terms, and supervised learning is again employed to detect the semantic relations between pairs of terms. Advanced classification schemata, ensemble learning, and one-sided sampling are experimented with in order to deal with the noise in the data, which is unavoidable due to the low pre-processing level and the lack of sophisticated resources. An average 68.5{\%} f-score over all the classes is achieved when learning semantic relations. Bearing in mind the use of minimal resources and the highly automated nature of the process, classification performance is very promising, compared to results reported in previous work.",
}
% __index_level_0__: 83,652
@inproceedings{montero-etal-2008-conceptual,
    title = "Conceptual Modeling of Ontology-based Linguistic Resources with a Focus on Semantic Relations",
    author = "Montero, Francisco Alvarez and Sanchez, Antonio Vaquero and Perez, Fernando S{\'a}enz",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1264/",
    abstract = "Although ontologies and linguistic resources play a key role in applied AI and NLP, they have not been developed in a common and systematic way. The lack of a systematic methodology for their development has led to the production of resources that exhibit common flaws and that, at least when it comes to ontologies, negatively impact their results and reusability. In this paper, we introduce a software-engineering methodology for the construction of ontology-based linguistic resources, and present a sound conceptual schema that takes into account several considerations for the construction of software tools that allow the systematic and controlled construction of ontology-based linguistic resources.",
}
% __index_level_0__: 83,653
@inproceedings{buitelaar-eigner-2008-ontology,
    title = "Ontology Search with the {O}nto{S}elect Ontology Library",
    author = "Buitelaar, Paul and Eigner, Thomas",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1265/",
    abstract = "OntoSelect is a dynamic web-based ontology library that harvests, analyzes and organizes ontologies published on the Semantic Web. OntoSelect allows searching as well as browsing of ontologies according to size (number of classes, properties), representation format (DAML, RDFS, OWL), connectedness (score over the number of included and referring ontologies) and human languages used for class- and object property-labels. Ontology search in OntoSelect is based on a combined measure of coverage, structure and connectedness. Further, and in contrast to other ontology search engines, OntoSelect provides ontology search based on a complete web document instead of one or more keywords only.",
}
% __index_level_0__: 83,654
@inproceedings{trojahn-etal-2008-framework,
    title = "A Framework for Multilingual Ontology Mapping",
    author = "Trojahn, C{\'a}ssia and Quaresma, Paulo and Vieira, Renata",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1266/",
    abstract = "In the field of ontology mapping, multilingual ontology mapping is an issue that is not well explored. This paper proposes a framework for mapping of multilingual Description Logics (DL) ontologies. First, the DL source ontology is translated to the target ontology language, using a lexical database or a dictionary, generating a DL translated ontology. The target and the translated ontologies are then used as input for the mapping process. The mappings are computed by specialized agents using different mapping approaches. Next, these agents use argumentation to exchange their local results, in order to agree on the obtained mappings. Based on their preferences and confidence of the arguments, the agents compute their preferred mapping sets. The arguments in such preferred sets are viewed as the set of globally acceptable arguments. A DL mapping ontology is generated as result of the mapping process. In this paper we focus on the process of generating the DL translated ontology.",
}
% __index_level_0__: 83,655
@inproceedings{kassner-etal-2008-acquiring,
    title = "Acquiring a Taxonomy from the {G}erman {W}ikipedia",
    author = "Kassner, Laura and Nastase, Vivi and Strube, Michael",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1267/",
    abstract = "This paper presents the process of acquiring a large, domain independent, taxonomy from the German Wikipedia. We build upon a previously implemented platform that extracts a semantic network and taxonomy from the English version of the Wikipedia. We describe two accomplishments of our work: the semantic network for the German language in which isa links are identified and annotated, and an expansion of the platform for easy adaptation for a new language. We identify the platform’s strengths and shortcomings, which stem from the scarcity of free processing resources for languages other than English. We show that the taxonomy induction process is highly reliable - evaluated against the German version of WordNet, GermaNet, the resource obtained shows an accuracy of 83.34{\%}.",
}
% __index_level_0__: 83,656
@inproceedings{picca-etal-2008-lmm,
    title = "{LMM}: an {OWL}-{DL} {M}eta{M}odel to Represent Heterogeneous Lexical Knowledge",
    author = "Picca, Davide and Gliozzo, Alfio Massimiliano and Gangemi, Aldo",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1268/",
    abstract = "In this paper we present a Linguistic Meta-Model (LMM) allowing a semiotic-cognitive representation of knowledge. LMM is freely available and integrates the schemata of linguistic knowledge resources, such as WordNet and FrameNet, as well as foundational ontologies, such as DOLCE and its extensions. In addition, LMM is able to deal with multilinguality and to represent individuals and facts in an open domain perspective.",
}
% __index_level_0__: 83,657
@inproceedings{isahara-etal-2008-development,
    title = "Development of the {J}apanese {W}ord{N}et",
    author = "Isahara, Hitoshi and Bond, Francis and Uchimoto, Kiyotaka and Utiyama, Masao and Kanzaki, Kyoko",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1269/",
    abstract = "After a long history of compiling our own lexical resources, the EDR Japanese/English Electronic Dictionary, and discussions with major players on the development of various WordNets, the Japanese National Institute of Information and Communications Technology started developing the Japanese WordNet in 2006 and will publicly release the first version, which includes both the synsets in Japanese and the annotated Japanese corpus of SemCor, in June 2008. As the first step in compiling the Japanese WordNet, we added Japanese equivalents to synsets of the Princeton WordNet. Of course, we must also add some synsets which do not exist in the Princeton WordNet, and must modify synsets in the Princeton WordNet, in order to make the hierarchical structure of Princeton synsets represent thesaurus-like information found in the Japanese language; however, we will address these tasks in a future study. We then translated English sentences which are used in the SemCor annotation into Japanese and annotated them using our Japanese WordNet. This article describes the overview of our project to compile the Japanese WordNet and other resources which relate to our Japanese WordNet.",
}
% __index_level_0__: 83,658
@inproceedings{newbold-etal-2008-lexical,
    title = "Lexical Ontology Extraction using Terminology Analysis: Automating Video Annotation",
    author = "Newbold, Neil and Vrusias, Bogdan and Gillam, Lee",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1270/",
    abstract = "The majority of work described in this paper was conducted as part of the Recovering Evidence from Video by fusing Video Evidence Thesaurus and Video MetaData (REVEAL) project, sponsored by the UK’s Engineering and Physical Sciences Research Council (EPSRC). REVEAL is concerned with reducing the time-consuming, yet essential, tasks undertaken by UK Police Officers when dealing with terascale collections of video related to crime-scenes. The project is working towards technologies which will archive video that has been annotated automatically based on prior annotations of similar content, enabling rapid access to CCTV archives and providing capabilities for automatic video summarisation. This involves considerations of semantic annotation relating, amongst other things, to content and to temporal reasoning. In this paper, we describe the ontology extraction components of the system in development, and its use in REVEAL for automatically populating a CCTV ontology from analysis of expert transcripts of the video footage.",
}
% __index_level_0__: 83,659
@inproceedings{suktarachan-etal-2008-workbench,
    title = "Workbench with Authoring Tools for Collaborative Multi-lingual Ontological Knowledge Construction and Maintenance",
    author = "Suktarachan, Mukda and Thamvijit, Dussadee and Noikongka, Daoyos and Yongyuth, Panita and Mahasarakham, Puwarat Pavaputanont Na and Kawtrakul, Asanee and Kawtrakul, Asanee and Sini, Margherita",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1271/",
    abstract = "An ontological knowledge management system requires dynamic and encapsulating operation in order to share knowledge among communities. The key to success of knowledge sharing in the field of agriculture is using and sharing agreed terminologies such as ontological knowledge, especially in multiple languages. This paper proposes a workbench with three authoring tools for collaborative multilingual ontological knowledge construction and maintenance, in order to add value and support communities in the field of food and agriculture. The framework consists of the multilingual ontological knowledge construction and maintenance workbench platform, which comprises ontological knowledge management and user management, and three ontological knowledge authoring tools. The authoring tools used are two ontology extraction tools, ATOM and KULEX, and one ontology integration tool.",
}
% __index_level_0__: 83,660
inproceedings
shamsfard-2008-towards
Towards Semi Automatic Construction of a Lexical Ontology for {P}ersian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1272/
Shamsfard, Mehrnoush
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Lexical ontologies and semantic lexicons are important resources in natural language processing. They are used in various tasks and applications, especially where semantic processing is involved, such as question answering, machine translation, text understanding, information retrieval and extraction, content management, text summarization, knowledge acquisition and semantic search engines. Although there are a number of semantic lexicons for English and some other languages, Persian lacks such a complete resource for use in NLP work. In this paper we introduce an ongoing project on developing a lexical ontology for Persian called FarsNet. We exploited a hybrid semi-automatic approach to acquire lexical and conceptual knowledge from resources such as WordNet, bilingual dictionaries, mono-lingual corpora and morpho-syntactic and semantic templates. FarsNet is an ontology whose elements are lexicalized in Persian. It provides links between various types of words (cross-POS relations) and also between words and their corresponding concepts in other ontologies (cross-ontology relations). FarsNet aggregates the power of WordNet on nouns, the power of FrameNet on verbs and the wide range of conceptual relations from the ontology community.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,661
inproceedings
de-melo-weikum-2008-mapping
Mapping {R}oget`s Thesaurus and {W}ord{N}et to {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1273/
de Melo, Gerard and Weikum, Gerhard
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Roget’s Thesaurus and WordNet are very widely used lexical reference works. We describe an automatic mapping procedure that effectively produces French translations of the terms in these two resources. Our approach to the challenging task of disambiguation is based on structural statistics as well as measures of semantic relatedness that are utilized to learn a classification model for associations between entries in the thesaurus and French terms taken from bilingual dictionaries. By building and applying such models, we have produced French versions of Roget’s Thesaurus and WordNet with a considerable level of accuracy, which can be used for a variety of different purposes, by humans as well as in computational applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,662
inproceedings
jouis-bourdaillet-2008-representation
Representation of Atypical Entities in Ontologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1274/
Jouis, Christophe and Bourdaillet, Julien
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper is a contribution to the study of formal ontology. Some entities belong more or less to a class. In particular, some individual entities are attached to classes even though they do not satisfy all the properties of the class. To specify whether an individual entity belonging to a class is typical or not, we borrow the topological concepts of interior, border, closure, and exterior. We define a system of relations by adapting these topological operators. A scale of typicality, based on topology, is introduced. It makes it possible to define levels of typicality at which individual entities are more or less typical elements of a concept.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,663
inproceedings
chung-etal-2008-extracting
Extracting Concrete Senses of Lexicon through Measurement of Conceptual Similarity in Ontologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1275/
Chung, Siaw-Fong and Pr{\'e}vot, Laurent and Xu, Mingwei and Ahrens, Kathleen and Hsieh, Shu-Kai and Huang, Chu-Ren
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The measurement of conceptual similarity in a hierarchical structure has been proposed by studies such as Wu and Palmer (1994), which have been summarized and evaluated in Budanitsky and Hirst (2006). The present study applies the measurement of conceptual similarity to conceptual metaphor research by comparing the concreteness of ontological resource nodes to several prototypical concrete nodes selected by human subjects. Here, the purpose of comparing conceptual similarity between nodes is to select a concrete sense for a word which is used metaphorically. Through using a WordNet-SUMO interface such as SinicaBow (Huang, Chang and Lee, 2004), concrete senses of a lexical item will be selected once its SUMO nodes have been compared in terms of conceptual similarity with the prototypical concrete nodes. This study has strong implications for the interaction of the psycholinguistic and computational linguistic fields in conceptual metaphor research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,664
inproceedings
okamoto-etal-2008-contextual
A Contextual Dynamic Network Model for {WSD} Using Associative Concept Dictionary
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1276/
Okamoto, Jun and Uchiyama, Kiyoko and Ishizaki, Shun
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Many Japanese ideographs (Chinese characters) have several meanings. Such ambiguities should be resolved by using their contextual information. For example, one ideograph has two pronunciations, /hitai/ and /gaku/; the former means the forehead of the human body and the latter has two meanings, an amount of money and a picture frame. Conventional methods for such a disambiguation problem have used statistical methods based on the co-occurrence of words in context. In this research, a Contextual Dynamic Network Model is developed using the Associative Concept Dictionary, which includes semantic relations among concepts/words where the relations can be represented with quantitative distances. In this model, an interactive activation method is used to identify a word's meaning on the Contextual Semantic Network, where the activation on the network is calculated using the distances. The proposed method dynamically constructs the Contextual Semantic Network according to the input words that appear sequentially in the sentence containing an ambiguous word.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,665
inproceedings
loos-schwarten-2008-semantic
A Semantic Memory for Incremental Ontology Population
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1277/
Loos, Berenike and Schwarten, Lasse
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Generally, ontology learning and population is applied as a semi-automatic approach to knowledge acquisition in natural language understanding systems. That means that after the ontology is created or populated, an expert of the domain can still change or refine the newly acquired knowledge. In an incremental ontology learning framework (as applied, e.g., in open-domain dialog systems) this approach is not sufficient, as knowledge about the real world is dynamic and therefore has to be acquired and updated constantly. In this paper we propose storing newly acquired instances of an ontological concept in a separate database instead of integrating them directly into the system's knowledge base. The advantage is that possibly incorrect knowledge is not part of the system's ontology but stored aside. Furthermore, information about the confidence in the learned instances can be displayed and used for a final revision as well as for further automatic acquisition.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,666
inproceedings
vivaldi-etal-2008-turning
Turning a Term Extractor into a new Domain: first Experiences
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1278/
Vivaldi, Jorge and Joan, Anna and Lorente, Merc{\`e}
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Computational terminology has notably evolved since the advent of computers. Regarding the extraction of terms in particular, a large number of resources have been developed: from very general tools to much more specific acquisition methodologies. Such acquisition methodologies range from using simple linguistic patterns or frequency counting methods to much more evolved strategies combining morphological, syntactical, semantical and contextual information. Researchers usually develop a term extractor to be applied to a given domain and, in some cases, some testing of the tool's performance is also done. Afterwards, such tools may also be applied to other domains, though frequently no additional test is made in such cases. Usually, the application of a given tool to another domain does not require any tuning. Recently, some tools using semantic resources have been developed. In such cases, either a domain-specific or a generic resource may be used. In the latter case, some tuning may be necessary in order to adapt the tool to a new domain. In this paper, we present the task started in order to adapt YATE, a term extractor that uses a generic resource such as EWN and that was originally developed for the medical domain, to the economic one.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,667
inproceedings
anick-etal-2008-similar
Similar Term Discovery using Web Search
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1279/
Anick, Peter and Murthi, Vijay and Sebastian, Shaji
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present an approach to the discovery of semantically similar terms that utilizes a web search engine as both a source for generating related terms and a tool for estimating the semantic similarity of terms. The system works by associating with each document in the search engine’s index a weighted term vector comprising those phrases that best describe the document’s subject matter. Related terms for a given seed phrase are generated by running the seed as a search query and mining the result vector produced by averaging the weights of terms associated with the top documents of the query result set. The degree of similarity between the seed term and each related term is then computed as the cosine of the angle between their respective result vectors. We test the effectiveness of this approach for building a term recommender system designed to help online advertisers discover additional phrases to describe their product offering. A comparison of its output with that of several alternative methods finds it to be competitive with the best known alternative.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,668
inproceedings
kubo-etal-2008-temporal
Temporal Aspects of Terminology for Automatic Term Recognition: Case Study on Women`s Studies Terms
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1280/
Kubo, Junko and Tsuji, Keita and Sugimoto, Shigeo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The purpose of this paper is to clarify the temporal aspect of terminology, focusing on a dictionary's impact on terms. We used women's studies terms as data and examined the changes in their values on five automatic term recognition (ATR) measures before and after dictionary publication. The changes in precision and recall of extraction based on these measures were also examined. The measures are TFIDF, C-value, MC-value, Nakagawa's FLR, and simple document frequencies. We found that being listed in dictionaries gives longevity to terms and prevents them from losing the termhood that is represented by these ATR measures. The peripheral or relatively less important terms are more likely to be influenced by dictionaries, and their termhood increases after being listed in dictionaries. Among the aspects of termhood, the potential for word formation, which can be measured by Nakagawa's FLR, seemed to be influenced most, and terms gradually gained it after being listed in dictionaries.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,669
inproceedings
zhang-etal-2008-comparative
A Comparative Evaluation of Term Recognition Algorithms
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1281/
Zhang, Ziqi and Iria, Jose and Brewster, Christopher and Ciravegna, Fabio
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. From a large number of methodologies available in the literature only a few are able to handle both single and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches using two different corpora and show how the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well using the Genia corpus (a standard life science corpus). This indicates that choice and design of corpus has a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and occupy a fairly large proportion in certain domains. As a result, algorithms that ignore single-word terms may cause problems to tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and the structured aspects and this means information extraction techniques need to be integrated into the term recognition process.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,670
inproceedings
hoste-etal-2008-learning
Learning-based Detection of Scientific Terms in Patient Information
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1282/
Hoste, Veronique and Lefever, Els and Vanopstal, Klaar and Delaere, Isabelle
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we investigate the use of a machine-learning based approach to the specific problem of scientific term detection in patient information. Lacking lexical databases which differentiate between the scientific and popular nature of medical terms, we used local context, morphosyntactic, morphological and statistical information to design a learner which accurately detects scientific medical terms. This study is the first step towards the automatic replacement of a scientific term by its popular counterpart, which should have a beneficial effect on readability. We show an F-score of 84{\%} for the prediction of scientific terms in an English and Dutch EPAR corpus. Since recasting the term extraction problem as a classification problem leads to a large skew in the resulting data set, we rebalanced the data set through the application of some simple TF-IDF-based and Log-likelihood-based filters. We show that filtering indeed has a beneficial effect on the learner’s performance. However, the results of the filtering approach combined with the learning-based approach remain below those of the learning-based approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,671
inproceedings
pociello-etal-2008-wnterm
{WNTERM}: Enriching the {MCR} with a Terminological Dictionary
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1283/
Pociello, Eli and Gurrutxaga, Antton and Agirre, Eneko and Aldezabal, Izaskun and Rigau, German
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we describe the methodology and the first steps for the creation of WNTERM (from WordNet and Terminology), a specialized lexicon produced from the merger of the EuroWordNet-based Multilingual Central Repository (MCR) and the Basic Encyclopaedic Dictionary of Science and Technology (BDST). As an example, the ecology domain has been used. The final result is a multilingual (Basque and English) light-weight domain ontology, including taxonomic and other semantic relations among its concepts, which is tightly connected to other wordnets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,672
inproceedings
marinelli-etal-2008-encoding
Encoding Terms from a Scientific Domain in a Terminological Database: Methodology and Criteria
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1284/
Marinelli, Rita and Tiberi, Melissa and Bindi, Remo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper reports on the main phases of a research project that aims at enhancing a maritime terminological database with a set of terms belonging to meteorology. The structure of the terminological database, which follows the EuroWordNet/ItalWordNet model, is described; the criteria used to build corpora of specialized texts are explained, as well as the use of the corpora as a source for term selection and extraction. The contribution of the semantic databases is taken into account: on the one hand, the most recent version of the Princeton WordNet has been exploited as a reference for comparing and evaluating synsets; on the other hand, the Italian WordNet has been employed as a source for exporting synsets to be coded in the terminological resource. The set of semantic relations useful for codifying new terms belonging to the discipline of meteorology is examined, revising the semantic relations provided by the IWN model and introducing new relations which are more suitably tailored to specific requirements, either scientific or pragmatic. The need for a particular relation is highlighted to represent the mental association which is made when a term intuitively recalls another term, but the two are neither synonyms nor connected by means of a hyperonymy/hyponymy relation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,673
inproceedings
mandl-etal-2008-evaluation
An Evaluation Resource for Geographic Information Retrieval
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1285/
Mandl, Thomas and Gey, Fredric and Di Nunzio, Giorgio and Ferro, Nicola and Sanderson, Mark and Santos, Diana and Womser-Hacker, Christa
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present an evaluation resource for geographic information retrieval developed within the Cross Language Evaluation Forum (CLEF). The GeoCLEF track is dedicated to the evaluation of geographic information retrieval systems. The resource encompasses more than 600,000 documents, 75 topics so far, and more than 100,000 relevance judgments for these topics. Geographic information retrieval requires an evaluation resource which represents realistic information needs and which is geographically challenging. Some experimental results and analyses are reported.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,674
inproceedings
civera-juan-ciscar-2008-bilingual
Bilingual Text Classification using the {IBM} 1 Translation Model
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1286/
Civera, Jorge and Juan-C{\'i}scar, Alfons
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Manual categorisation of documents is a time-consuming task that has been significantly alleviated with the deployment of automatic and machine-aided text categorisation systems. However, the proliferation of multilingual documentation has become a common phenomenon in many international organisations, while most of the current systems have focused on the categorisation of monolingual text. It has been recently shown that the inherent redundancy in bilingual documents can be effectively exploited by relatively simple, bilingual naive Bayes (multinomial) models. In this work, we present a refined version of these models in which this redundancy is explicitly captured by a combination of a unigram (multinomial) model and the well-known IBM 1 translation model. The proposed model is evaluated on two bilingual classification tasks and compared to previous work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,675
inproceedings
shinnou-sasaki-2008-ping
Ping-pong Document Clustering using {NMF} and Linkage-Based Refinement
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1287/
Shinnou, Hiroyuki and Sasaki, Minoru
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper proposes a ping-pong document clustering method that alternates between NMF and linkage-based refinement, in order to improve the clustering result of NMF. The use of NMF in the ping-pong strategy can be expected to be effective for document clustering. However, NMF in the ping-pong strategy often worsens performance because NMF often fails to improve the clustering result given as the initial values. Our method handles this problem with a stop condition on the ping-pong process. In the experiment, we compared our method with k-means and NMF using 16 document data sets. Our method improved the clustering result of NMF significantly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,676
inproceedings
shinnou-sasaki-2008-spectral
Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1288/
Shinnou, Hiroyuki and Sasaki, Minoru
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Spectral clustering is a powerful clustering method for document data sets. However, spectral clustering needs to solve an eigenvalue problem for the matrix converted from the similarity matrix corresponding to the data set. Therefore, it is not practical to use spectral clustering for a large data set. To overcome this problem, we propose a method to reduce the similarity matrix size. First, using k-means, we obtain a clustering result for the given data set. From each cluster, we pick up some data points that are near the center of the cluster and treat them as a single data point, which we call a “committee”. Data outside the committees remain individual data points. For these data, we construct the similarity matrix. The size of this similarity matrix is thus reduced enough that we can perform spectral clustering using the reduced similarity matrix.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,677
inproceedings
damljanovic-etal-2008-text
A Text-based Query Interface to {OWL} Ontologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1289/
Damljanovic, Danica and Tablan, Valentin and Bontcheva, Kalina
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Accessing structured data in the form of ontologies requires training and learning formal query languages (e.g., SeRQL or SPARQL) which poses significant difficulties for non-expert users. One of the ways to lower the learning overhead and make ontology queries more straightforward is through a Natural Language Interface (NLI). While there are existing NLIs to structured data with reasonable performance, they tend to require expensive customisation to each new domain or ontology. Additionally, they often require specific adherence to a pre-defined syntax which, in turn, means that users still have to undergo training. In this paper we present Question-based Interface to Ontologies (QuestIO) - a tool for querying ontologies using unconstrained language-based queries. QuestIO has a very simple interface, requires no user training and can be easily embedded in any system or used with any ontology or knowledge base without prior customisation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,678
inproceedings
ren-etal-2008-research
A Research on Automatic {C}hinese Catchword Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1290/
Ren, Han and Ji, Donghong and Han, Lei
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Catchwords refer to popular words or phrases within certain area in certain period of time. In this paper, we propose a novel approach for automatic Chinese catchwords extraction. At the beginning, we discuss the linguistic definition of catchwords and analyze the features of catchwords by manual evaluation. According to those features of catchwords, we define three aspects to describe Popular Degree of catchwords. To extract terms with maximum meaning, we adopt an effective ATE algorithm for multi-character words and long phrases. Then we use conic fitting in Time Series Analysis to build Popular Degree Curves of extracted terms. To calculate Popular Degree Values of catchwords, a formula is proposed which includes values of Popular Trend, Peak Value and Popular Keeping. Finally, a ranking list of catchword candidates is built according to Popular Degree Values. Experiments show that automatic Chinese catchword extraction is effective and objective in comparison with manual evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,679
inproceedings
councill-etal-2008-parscit
{P}ars{C}it: an Open-source {CRF} Reference String Parsing Package
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1291/
Councill, Isaac and Giles, C. Lee and Kan, Min-Yen
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,680
inproceedings
kozawa-etal-2008-automatic
Automatic Acquisition of Usage Information for Language Resources
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1292/
Kozawa, Shunsuke and Tohyama, Hitomi and Uchimoto, Kiyotaka and Matsubara, Shigeki
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Recently, language resources (LRs) are becoming indispensable for linguistic research. Unfortunately, it is not easy to find their usages by searching the web, even though they must be described on the Internet or in academic articles. This indicates that the intrinsic value of LRs is not recognized very well. In this research, therefore, we extract a list of usage information for each LR to promote the efficient utilization of LRs. In this paper, we propose a method for extracting a list of usage information from academic articles by using rules based on syntactic information. The rules are generated by focusing on the syntactic features that are observed in the sentences describing usage information. As a result of experiments, we achieved 72.9{\%} recall and 78.4{\%} precision for the closed test and 60.9{\%} recall and 72.7{\%} precision for the open test.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,681
inproceedings
wiegand-etal-2008-cost
Cost-Sensitive Learning in Answer Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1293/
Wiegand, Michael and Leidner, Jochen L. and Klakow, Dietrich
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
One problem of data-driven answer extraction in open-domain factoid question answering is that the class distribution of labeled training data is fairly imbalanced. In an ordinary training set, there are far more incorrect answers than correct answers. The class-imbalance is, thus, inherent to the classification task. It has a deteriorating effect on the performance of classifiers trained by standard machine learning algorithms. They usually have a heavy bias towards the majority class, i.e. the class which occurs most often in the training set. In this paper, we propose a method to tackle class imbalance by applying some form of cost-sensitive learning which is preferable to sampling. We present a simple but effective way of estimating the misclassification costs on the basis of class distribution. This approach offers three benefits. Firstly, it maintains the distribution of the classes of the labeled training data. Secondly, this form of meta-learning can be applied to a wide range of common learning algorithms. Thirdly, this approach can be easily implemented with the help of state-of-the-art machine learning software.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,682
inproceedings
degorski-etal-2008-definition
Definition Extraction Using a Sequential Combination of Baseline Grammars and Machine Learning Classifiers
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1294/
Deg{\'o}rski, {\L}ukasz and Marci{\'n}czuk, Micha{\l} and Przepi{\'o}rkowski, Adam
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The paper deals with the task of definition extraction from a small and noisy corpus of instructive texts. Three approaches are presented: Partial Parsing, Machine Learning and a sequential combination of both. We show that applying ML methods with the support of a trivial grammar gives results better than a relatively complicated partial grammar, and much better than a pure ML approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,683
inproceedings
fallucchi-zanzotto-2008-yet
Yet another Platform for Extracting Knowledge from Corpora
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1295/
Fallucchi, Francesca and Zanzotto, Fabio Massimo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The research field of “extracting knowledge bases from text collections” seems to be mature: its target and its working hypotheses are clear. In this paper we propose a platform, YAPEK, i.e., Yet Another Platform for Extracting Knowledge from corpora, intended to serve as a base for collecting the majority of algorithms for extracting knowledge bases from corpora. The idea is that, when many knowledge extraction algorithms are collected under the same platform, relative comparisons are clearer and many algorithms can be leveraged to extract more valuable knowledge for final tasks such as Textual Entailment Recognition. As we want to collect many knowledge extraction algorithms, YAPEK is based on the three working hypotheses of the area: the basic hypothesis, the distributional hypothesis, and the point-wise assertion patterns. In YAPEK, these three hypotheses define two spaces: the space of the target textual forms and the space of the contexts. This platform guarantees the possibility of rapidly implementing many models for extracting knowledge from corpora as the platform gives clear entry points to model what is really different in the different algorithms: the feature spaces, the distances in these spaces, and the actual algorithm.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,684
inproceedings
yankova-etal-2008-framework
A Framework for Identity Resolution and Merging for Multi-source Information Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1296/
Yankova, Milena and Saggion, Horacio and Cunningham, Hamish
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In the context of ontology-based information extraction, identity resolution is the process of deciding whether an instance extracted from text refers to a known entity in the target domain (e.g. the ontology). We present an ontology-based framework for identity resolution which can be customized to different application domains and extraction tasks. Rules for identity resolution, which compute similarities between target and source entities based on class information and instance properties and values, can be defined for each class in the ontology. We present a case study of the application of the framework to the problem of multi-source job vacancy extraction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,685
inproceedings
karlgren-etal-2008-experiments
Experiments to Investigate the Connection between Case Distribution and Topical Relevance of Search Terms in an Information Retrieval Setting
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1297/
Karlgren, Jussi and Dalianis, Hercules and Jongejan, Bart
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We have performed a set of experiments made to investigate the utility of morphological analysis to improve retrieval of documents written in languages with relatively large morphological variation in a practical commercial setting, using the SiteSeeker search system developed and marketed by Euroling Ab. The objective of the experiments was to evaluate different lemmatisers and stemmers to determine which would be the most practical for the task at hand: highly interactive, relatively high precision web searches in commercial customer-oriented document collections. This paper gives an overview of some of the results for Finnish and German, and describes specifically one experiment designed to investigate the case distribution of nouns in a highly inflectional language (Finnish) and the topicality of the nouns in target texts. We find that topical nouns taken from queries are distributed differently over relevant and non-relevant documents depending on their grammatical case.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,686
inproceedings
ibekwe-sanjuan-etal-2008-identifying
Identifying Strategic Information from Scientific Articles through Sentence Classification
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1298/
Ibekwe-SanJuan, Fidelia and Chen, Chaomei and Pinho, Roberto
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We address here the need to assist users in rapidly accessing the most important or strategic information in the text corpus by identifying sentences carrying specific information. More precisely, we want to identify contributions of authors of scientific papers through a categorization of sentences using rhetorical and lexical cues. We built local grammars to annotate sentences in the corpus according to their rhetorical status: objective, new things, results, findings, hypotheses, conclusion, related{\_}word, future work. The annotation is automatically projected onto two other corpora to test the grammars' portability across several domains. The local grammars are implemented in the Unitex system. After sentence categorization, the annotated sentences are clustered and users can navigate the result by accessing specific information types. The results can be used for advanced information retrieval purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,687
inproceedings
azeredo-etal-2008-keywords
Keywords, k-{NN} and Neural Networks: a Support for Hierarchical Categorization of Texts in {B}razilian {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1299/
Azeredo, Susana and Moraes, Silvia and Lima, Vera
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
A frequent problem in automatic categorization applications involving Portuguese language is the absence of large corpora of previously classified documents, which permit the validation of experiments carried out. Generally, the available corpora are not classified or, when they are, they contain a very reduced number of documents. The general goal of this study is to contribute to the development of applications which aim at text categorization for Brazilian Portuguese. Specifically, we point out that keywords selection associated with neural networks can improve results in the categorization of Brazilian Portuguese texts. The corpus is composed of 30 thousand texts from the Folha de S{\~a}o Paulo newspaper, organized in 29 sections. In the process of categorization, the k-Nearest Neighbor (k-NN) algorithm and the Multilayer Perceptron neural networks trained with the backpropagation algorithm are used. It is also part of our study to test the identification of keywords starting from the log-likelihood statistical measure and to use them as features in the categorization process. The results clearly show that the precision is better when using neural networks than when using the k-NN.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,688
inproceedings
ibrahim-etal-2008-automatic
Automatic Extraction of Textual Elements from News Web Pages
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1300/
Ibrahim, Hossam and Darwish, Kareem and Madany, Abdel-Rahim
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present an algorithm for automatic extraction of textual elements, namely titles and full text, associated with news stories in news web pages. We propose a supervised machine learning classification technique based on the use of a Support Vector Machine (SVM) classifier to extract the desired textual elements. The technique uses internal structural features of a webpage without relying on the Document Object Model to which many content authors fail to adhere. The classifier uses a set of features which rely on the length of text, the percentage of hypertext, etc. The resulting classifier is nearly perfect on previously unseen news pages from different sites. The proposed technique is successfully employed in Alzoa.com, which is the largest Arabic news aggregator on the web.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,689
inproceedings
yamamoto-etal-2008-extraction
Extraction of Informative Expressions from Domain-specific Documents
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1301/
Yamamoto, Eiko and Isahara, Hitoshi and Terada, Akira and Abe, Yasunori
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
What kinds of lexical resources are helpful for extracting useful information from domain-specific documents? Although domain-specific documents contain much useful knowledge, it is not obvious how to extract such knowledge efficiently from the documents. We need to develop techniques for extracting hidden information from such domain-specific documents. These techniques do not necessarily use state-of-the-art technologies and achieve deep and accurate language understanding, but are based on huge amounts of linguistic resources, such as domain-specific lexical databases. In this paper, we introduce two techniques for extracting informative expressions from documents: the extraction of related words that are not only taxonomically related but also thematically related, and the acquisition of salient terms and phrases. With these techniques we then attempt to automatically and statistically extract domain-specific informative expressions in aviation documents as an example and evaluate the results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,690
inproceedings
saetre-etal-2008-connecting
Connecting Text Mining and Pathways using the {P}ath{T}ext Resource
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1302/
S{\ae}tre, Rune and Kemper, Brian and Oda, Kanae and Okazaki, Naoaki and Matsuoka, Yukiko and Kikuchi, Norihiro and Kitano, Hiroaki and Tsuruoka, Yoshimasa and Ananiadou, Sophia and Tsujii, Jun{'}ichi
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Many systems have been developed in the past few years to assist researchers in the discovery of knowledge published as English text, for example in the PubMed database. At the same time, higher level collective knowledge is often published using a graphical notation representing all the entities in a pathway and their interactions. We believe that these pathway visualizations could serve as an effective user interface for knowledge discovery if they can be linked to the text in publications. Since the graphical elements in a Pathway are of a very different nature than their corresponding descriptions in English text, we developed a prototype system called PathText. The goal of PathText is to serve as a bridge between these two different representations. In this paper, we first describe the overall architecture and the interfaces of the PathText system, and then provide some details about the core Text Mining components.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,691
inproceedings
pomikalek-rychly-2008-detecting
Detecting Co-Derivative Documents in Large Text Collections
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1303/
Pomik{\'a}lek, Jan and Rychl{\'y}, Pavel
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We have analyzed the SPEX algorithm by Bernstein and Zobel (2004) for detecting co-derivative documents using duplicate n-grams. Although we totally agree with the claim that not using unique n-grams can greatly increase the efficiency and scalability of the process of detecting co-derivative documents, we have found serious bottlenecks in the way SPEX finds the duplicate n-grams. While the memory requirements for computing co-derivative documents can be reduced to up to 1{\%} by only using duplicate n-grams, SPEX needs about 40 times more memory for computing the list of duplicate n-grams itself. Therefore the memory requirements of the whole process are not reduced enough to make the algorithm practical for very large collections. We propose a solution for this problem using an external sort with the suffix array in-memory sorting and temporary file compression. The proposed algorithm for computing duplicate n-grams uses a fixed amount of memory for any input size.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,692
inproceedings
lemnitzer-monachesi-2008-extraction
Extraction and Evaluation of Keywords from Learning Objects: a Multilingual Approach
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1304/
Lemnitzer, Lothar and Monachesi, Paola
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We report about a project which brings together Natural Language Processing and eLearning. One of the functionalities developed within this project is the possibility to annotate learning objects semi-automatically with keywords. To this end, a keyword extractor has been created which is able to handle documents in 8 languages. The approach employed is based on a linguistic processing step which is followed by a filtering step of candidate keywords and their subsequent ranking based on frequency criteria. Three tests have been carried out to provide a rough evaluation of the performance of the tool, to measure inter annotator agreement in order to determine the complexity of the task and to evaluate the acceptance of the proposed keywords by users.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,693
inproceedings
zhang-etal-2008-exploiting
Exploiting the Role of Position Feature in {C}hinese Relation Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1305/
Zhang, Peng and Li, Wenjie and Wei, Furu and Lu, Qin and Hou, Yuexian
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Relation extraction is the task of finding pre-defined semantic relations between two entities or entity mentions from text. Many methods, such as feature-based and kernel-based methods, have been proposed in the literature. Among them, feature-based methods draw much attention from researchers. However, to the best of our knowledge, existing feature-based methods did not explicitly incorporate the position feature and no in-depth analysis was conducted in this regard. In this paper, we define and exploit nine types of position information between two named entity mentions and then use it along with other features in a multi-class classification framework for Chinese relation extraction. Experiments on the ACE 2005 data set show that the position feature is more effective than the other recognized features like entity type/subtype and character-based N-gram context. Most important, it can be easily captured and does not require as much effort as applying deep natural language processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,694
inproceedings
allison-guthrie-2008-authorship
Authorship Attribution of {E}-Mail: Comparing Classifiers over a New Corpus for Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1306/
Allison, Ben and Guthrie, Louise
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The release of the Enron corpus provided a unique resource for studying aspects of email use, because it is largely unfiltered, and therefore presents a relatively complete collection of emails for a reasonably large number of correspondents. This paper describes a newly created subcorpus of the Enron emails which we suggest can be used to test techniques for authorship attribution, and further shows the application of three different classification methods to this task to present baseline results. Two of the classifiers used are standard, and have been shown to perform well in the literature, and one of the classifiers is novel and based on concurrent work that proposes a Bayesian hierarchical distribution for word counts in documents. For each of the classifiers, we present results using six text representations, including use of linguistic structures derived from a parser as well as lexical information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,695
inproceedings
kaisser-lowe-2008-creating
Creating a Research Collection of Question Answer Sentence Pairs with {A}mazon`s {M}echanical {T}urk
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1307/
Kaisser, Michael and Lowe, John
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Each year NIST releases a set of question, document id, answer-triples for the factoid questions used in the TREC Question Answering track. While this resource is widely used and proved itself useful for many purposes, it also is too coarse a grain-size for a lot of other purposes. In this paper we describe how we have used Amazon’s Mechanical Turk to have multiple subjects read the documents and identify the sentences themselves which contain the answer. For most of the 1911 questions in the test sets from 2002 to 2006 and each of the documents said to contain an answer, the Question-Answer Sentence Pairs (QASP) corpus introduced in this paper contains the identified answer sentences. We believe that this corpus, which we will make available to the public, can further stimulate research in QA, especially linguistically motivated research, where matching the question to the answer sentence by either syntactic or semantic means is a central concern.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,696
inproceedings
xu-etal-2008-adaptation
Adaptation of Relation Extraction Rules to New Domains
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1308/
Xu, Feiyu and Uszkoreit, Hans and Li, Hong and Felger, Niko
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents various strategies for improving the extraction performance of less prominent relations with the help of the rules learned for similar relations, for which large volumes of data are available that exhibit suitable data properties. The rules are learned via a minimally supervised machine learning system for relation extraction called DARE. Starting from semantic seeds, DARE extracts linguistic grammar rules associated with semantic roles from parsed news texts. The performance analysis with respect to different experiment domains shows that the data property plays an important role for DARE. Especially the redundancy of the data and the connectivity of instances and pattern rules have a strong influence on recall. However, most real-world data sets do not possess the desirable small-world property. Therefore, we propose three scenarios to overcome the data property problem of some domains by exploiting a similar domain with better data properties. The first two strategies stay with the same corpus but try to extract new similar relations with learned rules. The third strategy adapts the learned rules to a new corpus. All three strategies show that frequently mentioned relations can help in the detection of less frequent relations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,697
inproceedings
sumida-etal-2008-boosting
Boosting Precision and Recall of Hyponymy Relation Acquisition from Hierarchical Layouts in {W}ikipedia
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1309/
Sumida, Asuka and Yoshinaga, Naoki and Torisawa, Kentaro
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper proposes an extension of Sumida and Torisawa’s method of acquiring hyponymy relations from hierarchical layouts in Wikipedia (Sumida and Torisawa, 2008). We extract hyponymy relation candidates (HRCs) from the hierarchical layouts in Wikipedia by regarding all subordinate items of an item x in the hierarchical layouts as x’s hyponym candidates, while Sumida and Torisawa (2008) extracted only direct subordinate items of an item x as x’s hyponym candidates. We then select plausible hyponymy relations from the acquired HRCs by running a filter based on machine learning with novel features, which even improve the precision of the resulting hyponymy relations. Experimental results show that we acquired more than 1.34 million hyponymy relations with a precision of 90.1{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,698
inproceedings
mieskes-strube-2008-parameters
Parameters for Topic Boundary Detection in Multi-Party Dialogues
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1310/
Mieskes, Margot and Strube, Michael
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present a topic boundary detection method that searches for connections between sequences of utterances in multi-party dialogues. The connections are established based on word identity. We compare our method to a state-of-the-art automatic topic boundary detection method that was also used on multi-party dialogues. We checked various methods of preprocessing of the data, including stemming, lemmatization and stopword filtering with text-based as well as speech-based stopword lists. Using standard evaluation methods we found that our method outperformed the state-of-the-art method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,699
inproceedings
picchi-etal-2008-semantic
Semantic Press
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1311/
Picchi, Eugenio and Sassolini, Eva and Cucurullo, Sebastiana and Bertagna, Francesca and Baroni, Paola
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper Semantic Press, a tool for the automatic press review, is introduced. It is based on Text Mining technologies and is tailored to meet the needs of the eGovernment and eParticipation communities. First, a general description of the application demands emerging from the eParticipation and eGovernment sectors is offered. Then, an introduction to the framework of the automatic analysis and classification of newspaper content is provided, together with a description of the technologies underlying it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,700
inproceedings
xia-iria-2008-approach
An Approach to Modeling Heterogeneous Resources for Information Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1312/
Xia, Lei and Iria, Jos{\'e}
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we describe an approach that aims to model heterogeneous resources for information extraction. Documents are modeled in a graph representation that enables better understanding of multi-media documents and their structure, which ultimately could result in better cross-media information extraction. We also describe our proposed algorithm that segments documents based on the document modeling approach described in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,701
inproceedings
dinu-2008-classifying
On Classifying Coherent/Incoherent {R}omanian Short Texts
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1313/
Dinu, Anca
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present and discuss the results of a text coherence experiment performed on a small corpus of Romanian text from a number of alternative high school manuals. During the last 10 years, an abundance of alternative manuals for high school was produced and distributed in Romania. Due to the large amount of material and to the relative short time in which it was produced, the question of assessing the quality of this material emerged; this process relied mostly on subjective human personal opinion, given the lack of automatic tools for Romanian. Debates and claims of poor quality of the alternative manuals resulted in a number of examples of incomprehensible / incoherent paragraphs extracted from such manuals. Our goal was to create an automatic tool which may be used as an indication of poor quality of such texts. We created a small corpus of representative texts from Romanian alternative manuals. We manually classified the chosen paragraphs from such manuals into two categories: comprehensible/coherent text and incomprehensible/incoherent text. We then used different machine learning techniques to automatically classify them in a supervised manner. Our approach is rather simple, but the results are encouraging.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,702
inproceedings
goeuriot-etal-2008-characterization
Characterization of Scientific and Popular Science Discourse in {F}rench, {J}apanese and {R}ussian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1314/
Goeuriot, Lorraine and Grabar, Natalia and Daille, B{\'e}atrice
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We aim to characterize the comparability of corpora; we address this issue in a trilingual context through the distinction between expert and non-expert documents. We work separately with corpora composed of documents from the medical domain in three languages (French, Japanese and Russian) which present an important linguistic distance between them. In our approach, documents are characterized in each language by their topic and by a discursive typology positioned at three levels of document analysis: structural, modal and lexical. The document typology is implemented with two learning algorithms (SVMlight and C4.5). Evaluation of the results shows that the proposed discursive typology can be transposed from one language to another, as it indeed allows us to distinguish the two targeted discourses (science and popular science). However, we observe that performance varies considerably according to languages, algorithms and types of discursive characteristics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,703
inproceedings
maleki-ahrenberg-2008-converting
Converting {R}omanized {P}ersian to the {A}rabic Writing Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1315/
Maleki, Jalal and Ahrenberg, Lars
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a syllabification based conversion method for converting romanized Persian text to the traditional Arabic-based writing system. The system is implemented in Xerox XFST and relies on rule based conversion of words rather than using morphological analysis. The paper presents a brief evaluation of the accuracy of the transcriptions generated by the method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,704
inproceedings
abouzakhar-etal-2008-unsupervised
Unsupervised Learning-based Anomalous {A}rabic Text Detection
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1316/
Abouzakhar, Nasser and Allison, Ben and Guthrie, Louise
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The growing dependence of modern society on the Web as a vital source of information and communication has become inevitable. However, the Web has become an ideal channel for various terrorist organisations to publish their misleading information and send unintelligible messages to communicate with their clients as well. The increase in the number of published anomalous misleading information on the Web has led to an increase in security threats. The existing Web security mechanisms and protocols are not appropriately designed to deal with such recently developed problems. Developing technology to detect anomalous textual information has become one of the major challenges within the NLP community. This paper introduces the problem of anomalous text detection by automatically extracting linguistic features from documents and evaluating those features for patterns of suspicious and/or inconsistent information in Arabic documents. In order to achieve that, we defined specific linguistic features that characterise various Arabic writing styles. Also, the paper introduces the main challenges in Arabic processing and describes the proposed unsupervised learning model for detecting anomalous Arabic textual information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,705
inproceedings
prokopidis-etal-2008-condensing
Condensing Sentences for Subtitle Generation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1317/
Prokopidis, Prokopis and Karra, Vassia and Papagianopoulou, Aggeliki and Piperidis, Stelios
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Text condensation aims at shortening the length of an utterance without losing essential textual information. In this paper, we report on the implementation and preliminary evaluation of a sentence condensation tool for Greek using a manually constructed table of 450 lexical paraphrases, and a set of rules that delete syntactic subtrees carrying minor semantic information. Evaluation on two sentence sets shows promising results regarding the grammaticality and semantic acceptability of the compressed versions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,706
inproceedings
mille-wanner-2008-making
Making Text Resources Accessible to the Reader: the Case of Patent Claims
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1318/
Mille, Simon and Wanner, Leo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Hardly any other kind of text structure is as notoriously difficult to read as patents. This is first of all due to their abstract vocabulary and their very complex syntactic constructions. The claims in a patent are especially challenging: in accordance with international patent writing regulations, each claim must be rendered as a single sentence. As a result, sentences with more than 200 words are not uncommon. Therefore, paraphrasing of the claims in terms the user can understand is in high demand. We present a rule-based paraphrasing module that realizes paraphrasing of patent claims in English as a rewriting task. Prior to the rewriting proper, the module comprises the stages of simplification and of discourse and syntactic analysis. The rewriting makes use of a full-fledged text generator and consists of a number of genuine generation tasks such as aggregation, selection of referring expressions, choice of discourse markers and syntactic generation. As generator, we use the MATE workbench, which is based on the Meaning-Text Theory of linguistics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,707
inproceedings
halpern-2008-exploiting
Exploiting Lexical Resources for Disambiguating {CJK} and {A}rabic Orthographic Variants
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1319/
Halpern, Jack
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The orthographical complexities of Chinese, Japanese, Korean (CJK) and Arabic pose a special challenge to developers of NLP applications. These difficulties are exacerbated by the lack of a standardized orthography in these languages, especially the highly irregular Japanese orthography and the ambiguities of the Arabic script. This paper focuses on CJK and Arabic orthographic variation and provides a brief analysis of the linguistic issues. The basic premise is that statistical methods by themselves are inadequate, and that linguistic knowledge supported by large-scale lexical databases should play a central role in achieving high accuracy in disambiguating and normalizing orthographic variants.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,708
inproceedings
newbold-gillam-2008-automatic
Automatic Document Quality Control
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1320/
Newbold, Neil and Gillam, Lee
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper focuses on automatically improving the readability of documents. We explore mechanisms relating to content control that could be used (i) by authors to improve the quality and consistency of the language used in authoring; and (ii) to find a means to demonstrate this to readers. To achieve this, we implemented and evaluated a number of software components, including those of the University of Surrey Department of Computing’s content analysis applications (System Quirk). The software integrates these components within the commonly available GATE software and incorporates language resources considered useful within the standards development process: a Plain English thesaurus; lookup of ISO terminology provided from a terminology management system (TMS) via ISO 16642; automatic terminology discovery using statistical and linguistic techniques; and readability metrics. Results lead us to the development of an assistive tool, initially for authors of standards but not considered to be limited only to such authors, and also to a system that provides automatic annotation of texts to help readers to understand them. We describe the system developed and made freely available under the auspices of the EU eContent project LIRICS.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,709
inproceedings
supnithi-etal-2008-openccg
{O}pen{CCG} Workbench and Visualization Tool
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1321/
Supnithi, Thepchai and Singh, Suchinder and Ruangrajitpakorn, Taneth and Boonkwan, Prachya and Boriboon, Monthika
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Combinatory Categorial Grammar (CCG) is a lexicalized grammar formalism expressed through syntactic categories and a logical form representation. It is difficult to work with CCG without any visualization tools. This paper presents the design framework of the OpenCCG Workbench and visualization tool, which enables linguists to develop CCG-based lexicons more easily. Our research aims to resolve these gaps by developing a user-friendly tool. The OpenCCG Workbench, an open-source web-based environment, was developed to enable multiple users to visually create and update grammars for use with the OpenCCG library. It was designed to streamline and speed up the lexicon building process, and to free linguists from writing XML files, which is both cumbersome and error-prone. The system consists of three sub-systems: a grammar management system, a grammar validator system, and a concordance retrieval system. In this paper we mainly discuss the most important parts, the grammar management and validation systems, which are directly related to CCG lexicon construction. We support users at three levels: expert linguists who act as lexical entry designers, ordinary linguists who add or edit lexicon entries, and guests who require access to the lexicon for use in their applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,710
inproceedings
hermet-etal-2008-using
Using the Web as a Linguistic Resource to Automatically Correct Lexico-Syntactic Errors
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1322/
Hermet, Matthieu and D{\'e}silets, Alain and Szpakowicz, Stan
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents an algorithm for correcting language errors typical of second-language learners. We focus on preposition errors, which are very common among second-language learners but are not addressed well by current commercial grammar correctors and editing aids. The algorithm takes as input a sentence containing a preposition error (and possibly other errors as well), and outputs the correct preposition for that particular sentence context. We use a two-phase hybrid rule-based and statistical approach. In the first phase, rule-based processing is used to generate a short expression that captures the context of use of the preposition in the input sentence. In the second phase, Web searches are used to evaluate the frequency of this expression, when alternative prepositions are used instead of the original one. We tested this algorithm on a corpus of 133 French sentences written by intermediate second-language learners, and found that it could address 69.9{\%} of those cases. In contrast, we found that the best French grammar and spell checker currently on the market, Antidote, addressed only 3{\%} of those cases. We also showed that performance degrades gracefully when using a corpus of frequent n-grams to evaluate frequencies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,711
inproceedings
quixal-etal-2008-user
User-Centred Design of Error Correction Tools
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1324/
Quixal, Mart{\'i} and Badia, Toni and Benavent, Francesc and Boullosa, Jose R. and Domingo, Judith and Grau, Bernat and Mass{\'o}, Guillem and Valent{\'i}n, Oriol
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a methodology for the design and implementation of user-centred language checking applications. The methodology is based on the separation of three critical aspects of this kind of application: functional purpose (educational or corrective goal), types of warning messages, and the linguistic resources and computational techniques used. We argue that to ensure a user-centred design there must be a clear-cut division between the “error” typology underlying the system and the software architecture. The methodology described has been used to implement two different user-driven spell, grammar and style checkers for Catalan. We argue that this is an issue often neglected in commercial applications, and highlight the benefits of such a methodology for the scalability of language checking applications. We evaluate our application in terms of recall, precision and noise, and compare it to the only other existing grammar checker for Catalan, to our knowledge.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,713
inproceedings
liu-etal-2008-professor
Professor or Screaming Beast? Detecting Anomalous Words in {C}hinese
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1325/
Liu, Wei and Allison, Ben and Guthrie, Louise
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The Internet has become the most popular platform for communication. However, because most modern computer keyboards are Latin-based, the characters (Hanzi) of Asian languages such as Chinese cannot be input directly with these keyboards. As a result, methods for representing Chinese characters using Latin alphabets were introduced. The most popular among these is the Pinyin input system. Pinyin is also called “Romanised” Chinese in that it phonetically resembles the corresponding Chinese characters. Due to the highly ambiguous mapping from Pinyin to Chinese characters, word misuses can occur when using a standard computer keyboard, and more commonly so in internet chat-rooms or instant messengers where the language used is less formal. In this paper we aim to develop a system that can automatically identify such anomalies, whether they are simple typos or whether they are intentional. After identifying them, the system should suggest the correct word to be used.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,714
inproceedings
alegria-etal-2008-spelling
Spelling Correction: from Two-Level Morphology to Open Source
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1326/
Alegria, I{\~n}aki and Ceberio, Klara and Ezeiza, Nerea and Soroa, Aitor and Hernandez, Gregorio
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Basque is a highly inflected and agglutinative language (Alegria et al., 1996). Two-level morphology has been applied successfully to this kind of languages and there are two-level based descriptions for very different languages. After doing the morphological description for a language, it is easy to develop a spelling checker/corrector for this language. However, what happens if we want to use the speller in the “free world” (OpenOffice, Mozilla, emacs, LaTeX, etc.)? Ispell and similar tools (aspell, hunspell, myspell) are the usual mechanisms for these purposes, but they do not fit the two-level model. In the absence of two-level morphology based mechanisms, an automatic conversion from two-level description to hunspell is described in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,715
inproceedings
hallett-hardcastle-2008-automatic
Automatic Rewriting of Patient Record Narratives
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1327/
Hallett, Catalina and Hardcastle, David
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Patients require access to Electronic Patient Records, however medical language is often too difficult for patients to understand. Explaining records to patients is a time-consuming task, which we attempt to simplify by automating the translation procedure. This paper introduces a research project dealing with the automatic rewriting of medical narratives for the benefit of patients. We are looking at various ways in which technical language can be transposed into patient-friendly language by means of a comparison with patient information materials. The text rewriting procedure we describe could potentially have an impact on the quality of information delivered to patients. We report on some preliminary experiments concerning rewriting at lexical and paragraph level. This is an ongoing project which currently addresses a restricted number of issues, including target text modelling and text rewriting at lexical level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,716
inproceedings
versley-etal-2008-bart-modular
{BART}: A modular toolkit for coreference resolution
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1328/
Versley, Yannick and Ponzetto, Simone and Poesio, Massimo and Eidelman, Vladimir and Jern, Alan and Smith, Jason and Yang, Xiaofeng and Moschitti, Alessandro
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Developing a full coreference system able to run all the way from raw text to semantic interpretation is a considerable engineering effort. Accordingly, there is very limited availability of off-the shelf tools for researchers whose interests are not primarily in coreference or others who want to concentrate on a specific aspect of the problem. We present BART, a highly modular toolkit for developing coreference applications. In the Johns Hopkins workshop on using lexical and encyclopedic knowledge for entity disambiguation, the toolkit was used to extend a reimplementation of Soon et al.’s proposal with a variety of additional syntactic and knowledge-based features, and experiment with alternative resolution processes, preprocessing tools, and classifiers. BART has been released as open source software and is available from \url{http://www.sfs.uni-tuebingen.de/~versley/BART}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,717
inproceedings
poesio-etal-2008-anawiki
{ANAWIKI}: Creating Anaphorically Annotated Resources through Web Cooperation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1329/
Poesio, Massimo and Kruschwitz, Udo and Chamberlain, Jon
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The ability to make progress in Computational Linguistics depends on the availability of large annotated corpora, but creating such corpora by hand annotation is very expensive and time consuming; in practice, it is unfeasible to think of annotating more than one million words. However, the success of Wikipedia and other projects shows that another approach might be possible: take advantage of the willingness of Web users to contribute to collaborative resource creation. AnaWiki is a recently started project that will develop tools to allow and encourage large numbers of volunteers over the Web to collaborate in the creation of semantically annotated corpora (in the first instance, of a corpus annotated with information about anaphora).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,718
inproceedings
goecke-etal-2008-influence
Influence of Text Type and Text Length on Anaphoric Annotation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1330/
Goecke, Daniela and St{\"u}hrenberg, Maik and Witt, Andreas
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We report the results of a study that investigates the agreement of anaphoric annotations. The study focuses on the influence of the factors text length and text type on a corpus of scientific articles and newspaper texts. In order to measure inter-annotator agreement we compare existing approaches and we propose to measure each step of the annotation process separately instead of measuring the resulting anaphoric relations only. A total amount of 3,642 anaphoric relations has been annotated for a corpus of 53,038 tokens (12,327 markables). The results of the study show that text type has more influence on inter-annotator agreement than text length. Furthermore, the definition of well-defined annotation instructions and coder training is a crucial point in order to receive good annotation results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,719
inproceedings
navarretta-olsen-2008-annotating
Annotating Abstract Pronominal Anaphora in the {DAD} Project
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1331/
Navarretta, Costanza and Olsen, Sussi
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present an extension of the MATE/GNOME annotation scheme for anaphora (Poesio, 2004) which accounts for abstract anaphora in Danish and Italian. By abstract anaphora we here mean pronouns whose linguistic antecedents are verbal phrases, clauses or discourse segments. The extended scheme, which we call the DAD annotation scheme, allows annotating information about abstract anaphora which is important for investigating their use, see i.a. (Webber, 1988; Gundel et al., 2003; Navarretta, 2004; Navarretta, 2007), and which can influence their automatic treatment. Intercoder agreement scores obtained by applying the DAD annotation scheme to texts and dialogues in the two languages are given and show that the information proposed in the scheme can be recognised in a reliable way.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,720
inproceedings
williams-power-2008-deriving
Deriving Rhetorical Complexity Data from the {RST}-{DT} Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1332/
Williams, Sandra and Power, Richard
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a study of the levels at which different rhetorical relations occur in rhetorical structure trees. In a previous empirical study (Williams and Reiter, 2003) of the RST-DT (Rhetorical Structure Theory Discourse Treebank) Corpus (Carlson et al., 2003), we noticed that certain rhetorical relations tended to occur more frequently at higher levels in a rhetorical structure tree, whereas others seemed to occur more often at lower levels. The present study takes a closer look at the data, partly to test this observation, and partly to investigate related issues such as the relative complexity of satellite and nucleus for each type of relation. One practical application of this investigation would be to guide discourse planning in Natural Language Generation (NLG), so that it reflects more accurately the structures found in documents written by human authors. We present our preliminary findings and discuss their relevance for discourse planning.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,721
inproceedings
mihaltz-2008-knowledge
Knowledge-based Coreference Resolution for {H}ungarian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1333/
Mih{\'a}ltz, M{\'a}rton
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present a knowledge-based coreference resolution system for noun phrases in Hungarian texts. The system is used as a module in an automated psychological text processing project. Our system uses rules that rely on knowledge from the morphological, syntactic and semantic output of a deep parser and semantic relations from the Hungarian WordNet ontology. We also use rules that rely on Binding Theory, research results in Hungarian psycholinguistics, current research on proper name coreference identification and our own heuristics. We describe in detail the constraints-and-preferences algorithm that attempts to find coreference information for proper names, common nouns, pronouns and zero pronouns in texts. We present evaluation results for our system on a corpus manually annotated with coreference relations. Precision of the resolution of various coreference types reaches up to 80{\%}, while overall recall is 63{\%}. We also present an investigation of the various error types our system produced along with an analysis of the results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,722
inproceedings
nissim-perboni-2008-italian
The {I}talian Particle {\textquotedblleft}ne{\textquotedblright}: Corpus Construction and Analysis
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1334/
Nissim, Malvina and Perboni, Sara
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The Italian particle “ne” exhibits interesting anaphoric properties that have not yet been explored in depth from a corpus and computational linguistic perspective. We provide: (i) an overview of the phenomenon; (ii) a set of annotation schemes for marking up occurrences of “ne”; (iii) the description of a corpus annotated for this phenomenon; (iv) a first assessment of the resolution task. We show that the schemes we developed are reliable, and that the actual distribution of partitive and non-partitive uses of “ne” is inversely proportional to the amount of attention that the two different uses have received in the linguistic literature. As an assessment of the complexity of the resolution task, we find that a recency-based baseline yields an accuracy of less than 30{\%} on both development and test data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,723
inproceedings
knight-tennent-2008-introducing
Introducing {DRS} (The Digital Replay System): a Tool for the Future of Corpus Linguistic Research and Analysis
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1335/
Knight, Dawn and Tennent, Paul
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper outlines the new resource technologies, products and applications that have been constructed during the development of a multi-modal (MM hereafter) corpus tool on the DReSS project (Understanding New Forms of the Digital Record for e-Social Science), based at the University of Nottingham, England. The paper provides a brief outline of the DRS (Digital Replay System, the software tool at the heart of the corpus), highlighting its facility to display synchronised video, audio and textual data and, most relevantly, a concordance tool capable of interrogating data constructed from textual transcriptions anchored to video or audio, and from coded annotations of specific features of gesture-in-talk. This is complemented by a real-time demonstration of the DRS interface in use as part of the LREC 2008 conference. This will serve to show the manner in which a system such as the DRS can be used to facilitate the assembly, storage and analysis of multi-modal corpora, supporting both qualitative and quantitative approaches to the analysis of collected data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,724
inproceedings
atterer-schutze-2008-inverted
An Inverted Index for Storing and Retrieving Grammatical Dependencies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1336/
Atterer, Michaela and Sch{\"u}tze, Hinrich
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Web count statistics gathered from search engines have been widely used as a resource in a variety of NLP tasks. For some tasks, however, the information they exploit is not fine-grained enough. We propose an inverted index over grammatical relations as a fast and reliable resource to access more general and also more detailed frequency information. To build the index, we use a dependency parser to parse a large corpus. We extract binary dependency relations, such as he-subj-say (“he” is the subject of “say”) as index terms and construct the index using publicly available open-source indexing software. The unit we index over is the sentence. The index can be used to extract grammatical relations and frequency counts for these relations. The framework also provides the possibility to search for partial dependencies (say, the frequency of “he” occurring in subject position), words, strings and a combination of these. One possible application is the disambiguation of syntactic structures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,725
inproceedings
nilsson-nivre-2008-malteval
{M}alt{E}val: an Evaluation and Visualization Tool for Dependency Parsing
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1337/
Nilsson, Jens and Nivre, Joakim
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a freely available evaluation tool for dependency parsing: MaltEval (\url{http://w3.msi.vxu.se/users/jni/malteval}). It is flexible and extensible, and provides functionality for both quantitative evaluation and visualization of dependency structure. The quantitative evaluation is compatible with other standard evaluation software for dependency structure which does not produce visualization of dependency structure, and can output more details as well as new types of evaluation metrics. In addition, MaltEval has generic support for confusion matrices. It can also produce statistical significance tests when more than one parsed file is specified. The visualization module also has the ability to highlight discrepancies between the gold-standard files and the parsed files, and it comes with an easy to use GUI functionality to search in the dependency structure of the input files.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,726
inproceedings
sato-2008-new
New Functions of {F}rame{SQL} for Multilingual {F}rame{N}ets
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1338/
Sato, Hiroaki
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The Berkeley FrameNet Project (BFN) is building an English lexical database called FrameNet, which describes syntactic and semantic properties of an English lexicon extracted from large electronic text corpora (Baker et al., 1998). Other projects dealing with Spanish, German and Japanese follow a similar approach and annotate large corpora. FrameSQL is a web-based application developed by the author, and it allows the user to search the BFN database in a variety of ways (Sato, 2003). FrameSQL shows a clear view of the headword’s grammar and combinatorial properties offered by the FrameNet database. FrameSQL has continued to develop, and new functions were implemented for processing the Spanish FrameNet data (Subirats and Sato, 2004). FrameSQL is also in the process of incorporating the data of the Japanese FrameNet Project (Ohara et al., 2003) and that of the Saarbr{\"u}cken Lexical Semantics Acquisition Project (Erk et al., 2003) into the database and will offer the same user-interface for searching these lexical data. This paper describes new functions of FrameSQL, showing how FrameSQL deals with the lexical data of English, Spanish, Japanese and German seamlessly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,727
inproceedings
shinnou-sasaki-2008-division
Division of Example Sentences Based on the Meaning of a Target Word Using Semi-Supervised Clustering
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1339/
Shinnou, Hiroyuki and Sasaki, Minoru
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we describe a system that divides example sentences (data set) into clusters, based on the meaning of the target word, using a semi-supervised clustering technique. In this task, the estimation of the cluster number (the number of the meanings) is critical. Our system primarily concentrates on this aspect. First, a user assigns the system an initial cluster number for the target word. The system then performs general clustering on the data set to obtain small clusters. Next, using constraints given by the user, the system integrates these clusters to obtain the final clustering result. Our system performs this entire procedure with high precision, requiring only a few constraints. In the experiment, we tested the system for 12 Japanese nouns used in the SENSEVAL2 Japanese dictionary task. The experiment proved the effectiveness of our system. In the future, we will improve sentence similarity measurements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,728
inproceedings
saito-etal-2008-japanese
The {J}apanese {F}rame{N}et Software Tools
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1340/
Saito, Hiroaki and Kuboya, Shunta and Sone, Takaaki and Tagami, Hayato and Ohara, Kyoko
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes an ongoing project “Japanese FrameNet (JFN)”, a corpus-based lexicon of Japanese in the FrameNet style. This paper focuses on the set of software tools tailored for the JFN annotation process. As the first step in the annotation, annotators select target sentences from the JFN corpus using the JFN kwic search tool, where they can specify cooccurring words and/or the part of speech of collocates. Our search tool is capable of displaying the parsed tree of a target sentence and its neighbouring sentences. The JFN corpus mainly consists of the balanced and copyright-free “Japanese Corpus” which is being built as a national project. After the sentence to be annotated is chosen, the annotator assigns syntactic and semantic tags to the appropriate phrases in the sentence. This work is performed on an annotation platform called JFNDesktop, in which functions for labeling assistance and consistency checking of annotations are available. Preliminary evaluation of our platform shows such functions accelerate the annotation process.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,729
inproceedings
pazienza-etal-2008-jmwnl
{JMWNL}: an Extensible Multilingual Library for Accessing Wordnets in Different Languages
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1341/
Pazienza, Maria Teresa and Stellato, Armando and Tudorache, Alexandra
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present JMWNL, a multilingual extension of the JWNL java library, which was originally developed for accessing Princeton WordNet dictionaries. JMWNL broadens the range of JWNL’s accessible resources by covering also dictionaries produced inside the EuroWordNet project. Specific resources, such as language-dependent algorithmic stemmers, have been adopted to cover the diversities in the morphological nature of words in the addressed idioms. New semantic and lexical relations have been included to maximize compatibility with new versions of the original Princeton WordNet and to include the whole range of relations from EuroWordNet. Relations from Princeton WordNet on one side and EuroWordNet on the other one have in some cases been mapped to provide a uniform reference for coherent cross-linguistic use of the library.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,730
inproceedings
maynard-2008-benchmarking
Benchmarking Textual Annotation Tools for the Semantic Web
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1342/
Maynard, Diana
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper investigates the state of the art in automatic textual annotation tools, and examines the extent to which they are ready for use in the real world. We define some benchmarking criteria for measuring the usability of annotation tools, and examine those factors which are particularly important for a real user to be able to determine which is the most suitable tool for their use. We discuss factors such as usability, accessibility, interoperability and scalability, and evaluate a set of annotation tools according to these factors. Finally, we draw some conclusions about the current state of research in annotation and make some suggestions for the future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,731
inproceedings
dinu-etal-2008-authorship
Authorship Identification of {R}omanian Texts with Controversial Paternity
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1343/
Dinu, Liviu and Popescu, Marius and Dinu, Anca
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this work we propose a new strategy for the authorship identification problem and we test it on an example from Romanian literature: did Radu Albala find the continuation of Mateiu Caragiale’s novel Sub pecetea tainei, or did he write the respective continuation himself? The proposed strategy is based on the similarity of rankings of function words; we compare the obtained results with the results obtained by a learning method (namely Support Vector Machines -SVM- with a string kernel).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,732
inproceedings
kemps-snijders-etal-2008-ensuring
Ensuring Semantic Interoperability on Lexical Resources
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1344/
Kemps-Snijders, Marc and Zinn, Claus and Ringersma, Jacquelijn and Windhouwer, Menzo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we describe a unifying approach to tackle data heterogeneity issues for lexica and related resources. We present LEXUS, our software that implements the Lexical Markup Framework (LMF) to uniformly describe and manage lexica of different structures. LEXUS also makes use of a central Data Category Registry (DCR) to address terminological issues with regard to linguistic concepts as well as the handling of working and object languages. Finally, we report on ViCoS, a LEXUS extension, providing support for the definition of arbitrary semantic relations between lexical entries or parts thereof.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,733
inproceedings
finthammer-cramer-2008-exploring
Exploring and Navigating: Tools for {G}erma{N}et
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1345/
Finthammer, Marc and Cramer, Irene
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
GermaNet is regarded to be a valuable resource for many German NLP applications, corpus research, and teaching. This demo presents three GUI-based tools meant to facilitate the exploration of and navigation through GermaNet. The GermaNet Explorer exhibits various retrieval, sort, filter and visualization functions for words/synsets and also provides an insight into the modeling of GermaNet’s semantic relations as well as its representation as a graph. The GermaNet-Measure-API and GermaNet Pathfinder offer methods for the calculation of semantic relatedness based on GermaNet as a resource and the visualization of (semantic) paths between words/synsets. The GermaNet-Measure-API furthermore features a flexible interface, which facilitates the integration of all relatedness measures provided into user-defined applications. We have already used the three tools in our research on thematic chaining and thematic indexing, as a tool for the manual annotation of lexical chains, and as a resource in our courses on corpus linguistics and semantics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,734
inproceedings
santaholma-chatzichrisafis-2008-knowledge
A Knowledge-Modeling Approach for Multilingual Regulus Lexica
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1346/
Santaholma, Marianne and Chatzichrisafis, Nikos
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Development of lexical resources is, along with grammar development, one of the main efforts when building multilingual NLP applications. In this paper, we present a tool-based approach for more efficient manual lexicon development for a spoken language translation system. The approach in particular addresses the common problems of multilingual lexica, including the redundancy of encoded information and inconsistency between lexica of different languages. The general benefits of this practical tool-based approach are a clear and user-friendly lexicon structure, inheritance of information within a language and between different system languages, and transparency and consistency of coverage between system languages. The visual tool-based approach is user-friendly to linguistic informants who have no previous experience of lexicon development, while at the same time it is still a powerful tool for expert system developers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,735
inproceedings
rosner-2008-odl
{ODL}: an Object Description Language for Lexical Information
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1347/
Rosner, Michael
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes ODL, a description language for lexical information that is being developed within the context of a national project called MLRS (Maltese Language Resource Server) whose goal is to create a national corpus and computational lexicon for the Maltese language. The main aim of ODL is to make the task of the lexicographer easier by allowing lexical specifications to be set out formally so that actual entries will conform to them. The paper describes some of the background motivation, the ODL language itself, and concludes with a short example of how lexical values expressed in ODL can be mapped to an existing tagset together with some speculations about future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,736
inproceedings
cristea-etal-2008-evaluate
How to Evaluate and Raise the Quality in a Collaborative Lexicographic Approach
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1348/
Cristea, Dan and For{\u{a}}scu, Corina and R{\u{a}}schip, Marius and Zock, Michael
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper focuses on different aspects of collaborative work used to create the electronic version of a dictionary in paper format, edited and printed by the Romanian Academy during the last century. In order to ensure accuracy in a reasonable amount of time, collaborative proofreading of the scanned material, through an on-line interface, has been initiated. The paper details the activities and the heuristics used to maximize accuracy, and to evaluate the work of anonymous contributors with diverse backgrounds. Observing the behaviour of the enterprise for a period of 6 months allows estimating the feasibility of the approach until the end of the project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,737
inproceedings
pedersen-etal-2008-merging
Merging a Syntactic Resource with a {W}ord{N}et: a Feasibility Study of a Merge between {STO} and {D}an{N}et
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1349/
Pedersen, Bolette Sandford and Braasch, Anna and Henriksen, Lina and Olsen, Sussi and Povlsen, Claus
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a feasibility study of a merge between SprogTeknologisk Ordbase (STO), which contains morphological and syntactic information, and DanNet, which is a Danish WordNet containing semantic information in terms of synonym sets and semantic relations. The aim of the merge is to develop a richer, composite resource which we believe will have a broader usage perspective than the two seen in isolation. In STO, the organizing principle is based on the observable syntactic features of a lemma’s near context (labeled syntactic units or SynUs). In contrast, the basic unit in DanNet is constituted by semantic senses or - in wordnet terminology - synonym sets (synsets). The merge of the two resources is thus basically to be understood as a linking between SynUs and synsets. In the paper we discuss which parts of the merge can be performed semi-automatically and which parts require manual linguistic matching procedures. We estimate that this manual work will amount to approx. 39{\%} of the lexicon material.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,738
inproceedings
rizov-2008-hydra
{H}ydra: a Modal Logic Tool for {W}ordnet Development, Validation and Exploration
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1350/
Rizov, Borislav
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a multipurpose system for wordnet (WN) development, named Hydra. Hydra is an application for data editing and validation, as well as for data retrieval and synchronization between wordnets for different languages. The use of modal language for wordnet, the representation of wordnet as a relational database and the concurrent access are among its main advantages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,739
inproceedings
lujan-etal-2008-evaluation
Evaluation of several Maximum Likelihood Linear Regression Variants for Language Adaptation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1351/
Luj{\'a}n, M{\'i}riam and Mart{\'i}nez, Carlos D. and Alabau, Vicent
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Multilingual Automatic Speech Recognition (ASR) systems are of great interest in multilingual environments. We studied the case of the Comunitat Valenciana where the two official languages are Spanish and Valencian. These two languages share most of their phonemes, and their syntax and vocabulary are also quite similar since they have influenced each other for many years. We constructed a system, and trained its acoustic models with a small corpus of Spanish and Valencian, which has produced poor results due to the lack of data. Adaptation techniques can be used to adapt acoustic models that are trained with a large corpus of a language in order to obtain acoustic models for a phonetically similar language. This process is known as language adaptation. The Maximum Likelihood Linear Regression (MLLR) technique has commonly been used in speaker adaptation; however we have used MLLR in language adaptation. We compared several MLLR variants (mean square, diagonal matrix and full matrix) for language adaptation in order to choose the best alternative for our system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,740
inproceedings
sitbon-etal-2008-evaluation
Evaluation of Lexical Resources and Semantic Networks on a Corpus of Mental Associations
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1352/
Sitbon, Laurianne and Bellot, Patrice and Blache, Philippe
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
When a user cannot find a word, he may think of semantically related words that could be used in an automatic process to help him. This paper presents an evaluation of lexical resources and semantic networks for modelling mental associations. A corpus of associations has been constructed for its evaluation. It is composed of 20 low frequency target words each associated 5 times by 20 users. In the experiments we look for the target word in propositions made from the associated words thanks to 5 different resources. The results show that even if each resource has a useful specificity, the global recall is low. An experiment to extract common semantic features of several associations showed that we cannot expect to see the target word below a rank of 20 propositions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,741
inproceedings
bieler-dipper-2008-measures
Measures for Term and Sentence Relevances: an Evaluation for {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1353/
Bieler, Heike and Dipper, Stefanie
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Terms, term relevances, and sentence relevances are concepts that figure in many NLP applications, such as Text Summarization. These concepts are implemented in various ways, though. In this paper, we want to shed light on the impact that different implementations can have on the overall performance of the systems. In particular, we examine the interplay between term definitions and sentence-scoring functions. For this, we define a gold standard that ranks sentences according to their significance and evaluate a range of relevant parameters with respect to the gold standard.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,742