{
"paper_id": "S17-2015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:28:32.753191Z"
},
"title": "FCICU at SemEval-2017 Task 1: Sense-Based Language Independent Semantic Textual Similarity Approach",
"authors": [
{
"first": "Basma",
"middle": [],
"last": "Hassan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fayoum University",
"location": {
"settlement": "Fayoum",
"country": "Egypt"
}
},
"email": "[email protected]"
},
{
"first": "Samir",
"middle": [],
"last": "Abdelrahman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cairo University",
"location": {
"settlement": "Giza",
"country": "Egypt"
}
},
"email": "[email protected]"
},
{
"first": "Reem",
"middle": [],
"last": "Bahgat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cairo University",
"location": {
"settlement": "Giza",
"country": "Egypt"
}
},
"email": "[email protected]"
},
{
"first": "Ibrahim",
"middle": [],
"last": "Farag",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cairo University",
"location": {
"settlement": "Giza",
"country": "Egypt"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the FCICU team systems that participated in the SemEval-2017 Semantic Textual Similarity task (Task1) for monolingual and cross-lingual sentence pairs. A sense-based, language-independent textual similarity approach is presented, in which a proposed alignment similarity method is coupled with a new usage of a semantic network (BabelNet). Additionally, a previously proposed integration between sense-based and surface-based semantic textual similarity approaches is applied together with our proposed approach. For all tracks in Task1, Run1 is a string kernel with an alignments metric and Run2 is a sense-based alignment similarity method. The first run ranked 10th and the second ranked 12th in the primary track, with correlations of 0.619 and 0.617, respectively.",
"pdf_parse": {
"paper_id": "S17-2015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the FCICU team systems that participated in the SemEval-2017 Semantic Textual Similarity task (Task1) for monolingual and cross-lingual sentence pairs. A sense-based, language-independent textual similarity approach is presented, in which a proposed alignment similarity method is coupled with a new usage of a semantic network (BabelNet). Additionally, a previously proposed integration between sense-based and surface-based semantic textual similarity approaches is applied together with our proposed approach. For all tracks in Task1, Run1 is a string kernel with an alignments metric and Run2 is a sense-based alignment similarity method. The first run ranked 10th and the second ranked 12th in the primary track, with correlations of 0.619 and 0.617, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic Textual Similarity (STS) is the task of measuring how semantically similar two short texts are. STS is very important because a wide range of Natural Language Processing (NLP) applications rely heavily on it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes our participation in the STS task (Task1) at SemEval 2017 in all six monolingual and cross-lingual tracks (Cer et al., 2017). The STS task seeks to calculate a graded similarity score from 0 to 5 between two sentences according to their meaning. The monolingual tracks are Arabic, English, and Spanish sentence pairs (track1, track3, and track5, respectively), while the cross-lingual tracks are Arabic, Spanish, and Turkish sentences paired with English sentences (track2, track4a-4b, and track6, respectively). An additional Primary track presents the mean score of the results of all the other tracks.",
"cite_spans": [
{
"start": 131,
"end": 149,
"text": "(Cer et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The similarity between two natural language sentences can be inferred from the quantity and quality of aligned constituents in the two sentences. Such alignments provide valuable information about how, and to what extent, the two sentences are related or semantically similar: semantically equivalent text pairs are likely to have a successful alignment between their words. Our proposed sense-based approach exploits this property to calculate the similarity between sentence pairs regardless of their language. This is achieved through a proposed word-sense aligner that relies mainly on a new usage of the semantic network BabelNet. Using BabelNet eliminates the need for a machine translation module, which is most commonly used to reduce cross-lingual STS to the monolingual case. In addition, the proposed sense-based similarity score is combined with a surface-based similarity score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. Section 2 explains our main multilingual sense-based aligner. Section 3 describes our system that participated in all tracks. Section 4 shows the experiments conducted and analyzes the results achieved. Section 5 concludes the paper and mentions some future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Highly semantically similar sentences should also have a high degree of conceptual alignment between their semantic units: words, tokens, phrases, etc. Several STS methods that use alignments in their calculations have been proposed in the literature. Many of those methods were very successful and were among the top-performing methods in SemEval 2013-2016 (Han et al., 2013; Han et al., 2015; H\u00e4nig et al., 2015; Sultan et al., 2014a; Sultan et al., 2014b; Sultan et al., 2015).",
"cite_spans": [
{
"start": 376,
"end": 394,
"text": "(Han et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 395,
"end": 412,
"text": "Han et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 413,
"end": 432,
"text": "H\u00e4nig et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 433,
"end": 454,
"text": "Sultan et al., 2014a;",
"ref_id": "BIBREF11"
},
{
"start": 455,
"end": 476,
"text": "Sultan et al., 2014b;",
"ref_id": "BIBREF12"
},
{
"start": 477,
"end": 497,
"text": "Sultan et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Sense-Based Aligner",
"sec_num": "2"
},
{
"text": "Building on this idea, we present a sense-based STS approach that produces a similarity score between texts by means of a multilingual word-sense aligner. The following subsections describe in detail the main resource utilized in our STS approach, namely BabelNet (subsection 2.1), and the proposed word-sense aligner on which our sense-based similarity method relies (subsection 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Sense-Based Aligner",
"sec_num": "2"
},
{
"text": "BabelNet 1 is a rich semantic knowledge resource that covers a wide range of concepts and named entities connected with large numbers of semantic relations (Navigli and Ponzetto, 2010). Concepts and relations are gathered from different lexical resources, such as WordNet, Wikipedia, Wikidata, Wiktionary, FrameNet, ImageNet, and others.",
"cite_spans": [
{
"start": 156,
"end": 184,
"text": "(Navigli and Ponzetto, 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BabelNet",
"sec_num": "2.1"
},
{
"text": "BabelNet is made up of about 14 million entries called Babel synsets. Each Babel synset is a set of multilingual lexicalizations (each being a Babel sense) that represents a given meaning, either a concept or a named entity, and contains all the synonyms that express that meaning in a range of different languages. For example, the concept 'A motor vehicle with four wheels' is represented by the synset {car_EN, auto_EN, automobile_EN, automobile_FR, voiture_FR, auto_FR, autom\u00f3vil_ES, auto_ES, coche_ES, otomobil_TR, araba_TR, \u0633\u064a\u0627\u0631\u0629_AR, \u0645\u0631\u0643\u0628\u0629_AR, \u0639\u0631\u0628\u0629_AR} 2 ; this synset contains synonyms in the English (EN), French (FR), Spanish (ES), Turkish (TR), and Arabic (AR) languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BabelNet",
"sec_num": "2.1"
},
{
"text": "BabelNet's semantic knowledge is encoded as a labeled directed graph, where vertices are Babel synsets (concepts or named entities), and edges connect pairs of synsets with a label indicating the type of semantic relation between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BabelNet",
"sec_num": "2.1"
},
{
"text": "Alignment is the task of discovering and pairing similar semantic units in two sentences expressed in natural language. Our proposed multilingual aligner aligns tokens across two sentences based on the similarity of their corresponding Babel synsets. A token can be a single word or a multi-word token. When alignment of a single-word token fails, its multi-word synonyms are retrieved from BabelNet. The proposed aligner aligns only tokens that are neither stop words nor punctuation marks. Figure 1 shows an example of alignments between an English monolingual sentence pair produced by our aligner. In this figure, the idiom \"kicked the bucket\" is treated as a single token of multiple words, and it is successfully aligned with the token \"died\" in the other sentence because the two tokens are synonyms in BabelNet. Figure 2 illustrates an example of direct token alignments between an English-Arabic cross-lingual sentence pair without using any machine translation module to translate one sentence's language into the other.",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 854,
"end": 862,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Word-Sense Aligner",
"sec_num": "2.2"
},
{
"text": "Token pairs are aligned one-to-one in decreasing order of their Babel synset similarity score s, computed using Equation (1). The most commonly used Babel synset of each token is selected. AlS1,S2 = {(t, t', s) : t \u2208 T1, t' \u2208 T2, and s > \u03b4};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Sense Aligner",
"sec_num": "2.2"
},
{
"text": "where Ti is the set of tokens of sentence i, and \u03b4 is a threshold parameter for the alignment score (\u03b4 = 0.5) 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-Sense Aligner",
"sec_num": "2.2"
},
{
"text": "Finding the similarity between synsets is a fundamental part of our aligner. Hence, we propose a synset similarity measure based on the hypothesis that highly semantically similar concepts have a high degree of common neighbor synsets. From this standpoint, the measure calculates the similarity between a Babel synset pair (bsi, bsj) based on the overlap between their directly connected synsets. The overlap coefficient is used, which is defined as the size of the intersection divided by the size of the smaller of the two sets. That is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset Similarity Measure",
"sec_num": "2.3"
},
{
"text": "sim_synset(bsi, bsj) = |NSi \u2229 NSj| / min(|NSi|, |NSj|) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset Similarity Measure",
"sec_num": "2.3"
},
{
"text": "where NSi and NSj are the sets of all neighbor Babel synsets that have a connecting edge with bsi and bsj in the BabelNet network, respectively. Since synonyms belong to the same synset, their similarity score is equal to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synset Similarity Measure",
"sec_num": "2.3"
},
{
"text": "Our systems are based on the successful integrated architecture of sense-based and surface-based similarity functions presented in our SemEval-2015 system (Hassan et al., 2015). We use the integration from the latter system unchanged (Equation 2), where the final results are obtained by taking the arithmetic mean of: 1) sim_proposed, a proposed sentence-pair semantic similarity score (which differs in each Run; details in subsection 3.2), and 2) sim_SC, the surface-based similarity function proposed by Jimenez et al. (2012)",
"cite_spans": [
{
"start": 155,
"end": 176,
"text": "(Hassan et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 501,
"end": 522,
"text": "Jimenez et al. (2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": ". Hence, sim(S1, S2) = (sim_proposed(S1, S2) + sim_SC(S1, S2)) / 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "The approach presented in (Jimenez et al., 2012) represents sentence words as sets of q-grams and measures semantic similarity based on soft cardinality computed from sentence q-gram similarities. Our system employs this approach with the following parameter setup: p = 2, bias = 0, and \u03b1 = 0.5.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Jimenez et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "In this section, the text preprocessing is first explained in subsection 3.1, and then each submitted Run is described in subsection 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "The given multilingual input sentences are first preprocessed to map the raw natural language text into a structured representation that can be processed. This process includes four tasks: (1) tokenization, (2) stopword removal, (3) lemmatization, and (4) sense tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "Tokenization: is carried out using Stanford CoreNLP 4 (Manning et al., 2014), in which the raw input sentence text, in any language, is broken down into a set of tokens.",
"cite_spans": [
{
"start": 54,
"end": 76,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "Stopword removal: is the task of removing all tokens that are either a stop word or a punctuation mark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "Lemmatization: is a language-dependent task, in which each token is annotated with its lemma. English tokens are lemmatized using Stanford CoreNLP (Manning et al., 2014). Spanish tokens are lemmatized using a freely available lemma-token pairs dataset 5. Arabic tokens are lemmatized using Madamira 6 (Pasha et al., 2014). For Turkish tokens, lemmatization is not carried out.",
"cite_spans": [
{
"start": 147,
"end": 169,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 302,
"end": 322,
"text": "(Pasha et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "Sense tagging: is the task of attaching Babel synsets (bs) to each sentence token (t). It is achieved by retrieving all the Babel synsets of the token's lemma.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "On completion of the text preprocessing phase, each sentence is represented by a set of tokens (T), in which each token (t) is annotated with its original word (t_word), lemma (t_lemma), and a set of Babel synsets (bs_t). This structured representation is then used as input to our proposed aligner (subsection 2.2), from which the set of aligned tokens across two sentences S1 and S2 is formed (AlS1,S2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.1"
},
{
"text": "We made two system submissions, named Run1 and Run2, to participate in all the provided monolingual and cross-lingual tracks. Each run proposes a different sense-based similarity method between sentence pairs. The proposed similarity score is then applied in Equation (2) as sim_proposed, resulting in the final similarity score between two sentences in each run. In the following, each of the two runs is described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": "3.2"
},
{
"text": "A kernel can be interpreted as a similarity measure between two sentences; it is a simple way of computing the inner product of two data points in a feature space directly as a function of their original space variables (Liang et al., 2011). At SemEval 2015, a string kernel was presented, which relied on the hypothesis that the greater the similarity of word senses between two texts, the higher their semantic equivalence will be (Hassan et al., 2015). Accordingly, this run employs the string kernel presented in (Hassan et al., 2015), in which the alignments obtained from our proposed aligner are used to map a sentence to the feature space. The modified kernel mapping function is given by:",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Liang et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 434,
"end": 455,
"text": "(Hassan et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 519,
"end": 540,
"text": "(Hassan et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "\u03c6t(S) = max_{1\u2264i\u2264n} sim(t, ti) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "where sim(t, ti) is the alignment score s of the two tokens if (t, ti, s) \u2208 AlS1,S2, and is equal to 0 otherwise, and n is the number of tokens contained in sentence S, i.e., |T|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "The normalized string kernel between two sentences S1 and S2 is calculated as follows (Shawe-Taylor and Cristianini, 2004):",
"cite_spans": [
{
"start": 86,
"end": 122,
"text": "(Shawe-Taylor and Cristianini, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "\u03baNS(S1, S2) = \u03ba(S1, S2) / \u221a(\u03ba(S1, S1) \u00b7 \u03ba(S2, S2)) (4), where \u03ba(S1, S2) = \u27e8\u03c6(S1), \u03c6(S2)\u27e9 = \u03a3_{t\u2208T} \u03c6t(S1) \u00b7 \u03c6t(S2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "where T is the set of all tokens in both S1 and S2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "Given two sentences S1 and S2, the similarity score proposed by this run is the value of the normalized string kernel function between the two sentences (Equation 4). That is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "sim_proposed(S1, S2) = \u03baNS(S1, S2) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run1: String Kernel with Alignments",
"sec_num": null
},
{
"text": "Alignment-based semantic similarity approaches presented in (Sultan et al., 2014a; Sultan et al., 2014b; Sultan et al., 2015) relied only on the proportion of aligned content words in the two sentences. We hypothesized that alignments are not all of the same importance: an alignment of synonym tokens with alignment score 1 is not the same as an alignment of two semantically related tokens with score 0.5. Hence, the similarity score between S1 and S2 proposed for this run is based on the alignment scores as well as their proportion to the number of tokens in both sentences. It is given by:",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Sultan et al., 2014a;",
"ref_id": "BIBREF11"
},
{
"start": 83,
"end": 104,
"text": "Sultan et al., 2014b;",
"ref_id": "BIBREF12"
},
{
"start": 105,
"end": 125,
"text": "Sultan et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Run2: Alignment-Based Similarity Metric",
"sec_num": null
},
{
"text": "sim_proposed(S1, S2) = (2 \u00b7 \u03a3_{al \u2208 AlS1,S2} al.s) / (|T1| + |T2|) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run2: Alignment-Based Similarity Metric",
"sec_num": null
},
{
"text": "where Ti is the set of tokens in sentence i, and al.s is the score calculated for the alignment al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Run2: Alignment-Based Similarity Metric",
"sec_num": null
},
{
"text": "The main evaluation measure selected by the task organizers was the Pearson correlation between the system scores and the gold standard scores. Table 1 presents the official results of our submissions to SemEval-2017 Task 1 for both Run1 and Run2 in the six tracks as well as the primary track. The best score obtained in each track is included as well, alongside the baseline system results announced by the task organizers.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "Our best system (Run1) achieved a correlation of 0.619, ranking 10th out of 84 runs and 5th out of 31 teams. Although the performance of the two runs differs only slightly, it is noticeable from the table that Run1 (Kernel) performs better on cross-lingual sentence pairs, while Run2 (Alignments) performs better on monolingual sentence pairs. Hence, relying only on aligned tokens is insufficient for cross-lingual sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "Experimental results showed that, although our proposed simple unsupervised approach relies only on BabelNet and token alignments, it is capable of assessing the semantic similarity between two sentences in different languages with good performance (ranked 10th among runs and 5th among teams). The proposed approach also demonstrates the effectiveness and usefulness of the BabelNet semantic network for solving the STS task. Potential future work includes enhancing our proposed synset similarity method and exploiting the extraction of promising content words in the given sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future work",
"sec_num": "5"
},
{
"text": "http://babelnet.org/ 2 Each word is a Babel sense in the subscripted language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the experiments we conducted, we found that the best value for this threshold is 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/corenlp.shtml 5 http://www.lexiconista.com/datasets/lemmatization/ 6 http://camel.abudhabi.nyu.edu/madamira/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation",
"authors": [],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), pages 1-14, Vancouver, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "UMBC EBIQUITY-CORE: Semantic textual similarity systems",
"authors": [
{
"first": "Lushan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Abhay",
"middle": [],
"last": "Kashyap",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lushan Han, Abhay Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic textual similarity systems. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics, pages 44-52, Atlanta, Georgia, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Samsung: Align-anddifferentiate approach to semantic textual similarity",
"authors": [
{
"first": "Lushan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Martineau",
"suffix": ""
},
{
"first": "Doreen",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "172--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lushan Han, Justin Martineau, Doreen Cheng, and Christopher Thomas. 2015. Samsung: Align-and-differentiate approach to semantic textual similarity. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 172-177, Denver, Colorado, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ExB Themis: Extensive feature extraction from word alignments for semantic textual similarity",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "H\u00e4nig",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Xose De La",
"middle": [],
"last": "Puente",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "264--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian H\u00e4nig, Robert Remus, and Xose De La Puente. 2015. ExB Themis: Extensive feature extraction from word alignments for semantic textual similarity. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 264-268, Denver, Colorado, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "FCICU: The Integration between Sense-Based Kernel and Surface-Based Methods to Measure Semantic Textual Similarity",
"authors": [
{
"first": "Basma",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Samir",
"middle": [],
"last": "Abdelrahman",
"suffix": ""
},
{
"first": "Reem",
"middle": [],
"last": "Bahgat",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "154--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basma Hassan, Samir AbdelRahman, and Reem Bahgat. 2015. FCICU: The Integration between Sense-Based Kernel and Surface-Based Methods to Measure Semantic Textual Similarity. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 154-158, Denver, Colorado, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Soft Cardinality: A Parameterized Similarity Function for Text Comparison",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Becerra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "",
"issue": "",
"pages": "449--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, Claudia Becerra, and Alexander Gelbukh. 2012. Soft Cardinality: A Parameterized Similarity Function for Text Comparison. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 449-453, Montreal, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Support Vector Machines and Their Application in Chemistry and Biotechnology",
"authors": [
{
"first": "Yizeng",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Qing-Song",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hong-Dong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dong-Sheng",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizeng Liang, Qing-Song Xu, Hong-Dong Li, and Dong-Sheng Cao. 2011. Support Vector Machines and Their Application in Chemistry and Biotechnology. CRC Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Stanford CoreNLP Natural Language Processing Toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BabelNet: Building a Very Large Multilingual Semantic Network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "216--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a Very Large Multilingual Semantic Network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 216-225, Uppsala, Sweden.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic",
"authors": [
{
"first": "Arfath",
"middle": [],
"last": "Pasha",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Al-Badrashiny",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Pooleery",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "1094--1101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arfath Pasha, Mohamed Al-Badrashiny, Mona Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic. In Proceedings of the 9th International Conference on Language Resources and Evalua- tion (LREC'14), pages 1094-1101, Reykjavik, Ice- land.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Back to Basics for Monolingual Alignment: Exploiting Word Similarity and Contextual Evidence",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Md Arafat Sultan",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "219--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2014a. Back to Basics for Monolingual Alignment: Exploiting Word Similarity and Con- textual Evidence. Transactions of the Association for Computational Linguistics, 2:219-230.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "DLS@CU: Sentence Similarity from Word Alignment",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Md Arafat Sultan",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2014b. DLS@CU: Sentence Similarity from Word Alignment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 241-246, Dublin, Ireland.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DLS@CU: Sentence Similarity from Word Alignment and Semantic Vector Composition",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Md Arafat Sultan",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "148--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2015. DLS@CU: Sentence Similarity from Word Alignment and Semantic Vector Com- position. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 148-153, Denver, Colorado, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Kernel Methods for Pattern Analysis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Shawe",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Shawe-Taylor and Nello Cristianini. 2004. Ker- nel Methods for Pattern Analysis. Cambridge Uni- versity Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Token alignments using our aligner between monolingual English -English sentence pair example."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Token alignments using our aligner between cross-lingual English -Arabic sentence pair from SemEval 2017-Track2 dataset."
}
}
}
}