{
"paper_id": "S16-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:25:22.810258Z"
},
"title": "Know-Center at SemEval-2016 Task 5: Using Word Vectors with Typed Dependencies for Opinion Target Expression Extraction",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Falk",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Andi",
"middle": [],
"last": "Rexha",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Roman",
"middle": [],
"last": "Kern",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge demands to find domain specific target expressions on sentence level that refer to reviewed entities. The detection of target words is achieved by using word vectors and their grammatical dependency relationships to classify each word in a sentence into target or non-target. A heuristic based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% for this task.",
"pdf_parse": {
"paper_id": "S16-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge demands to find domain specific target expressions on sentence level that refer to reviewed entities. The detection of target words is achieved by using word vectors and their grammatical dependency relationships to classify each word in a sentence into target or non-target. A heuristic based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Nowadays, modern technologies allow us to collect customer reviews and opinions in a way that changed the sheer amount of information available to us. For that matter the requirement to extract useful knowledge from this data rose up to a point where machine learning algorithms can help to accomplish this much faster and easier than humanly possible. Natural language processing (NLP) emerges as an interfacing tool between human natural language and many technical fields such as machine learning and information extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This article describes our approach towards Opinion Target Expression (OTE) extraction as defined by Task 5 for Subtask 1, Slot 2 of the SemEval-2016 (Pontiki et al., 2016) challenge. The core goal behind Slot 2 in Subtask 1 of Task 5 is to extract consecutive words which, by means of a natural language, represent the opinion target expression. The opinion target expression is that part of a sentence which stands for the entity towards which an opinion is being expressed. An example could be the word \"waitress\" in the sentence \"The waitress was very nice and courteous the entire evening.\".",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Pontiki et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The evaluation for Slot 2 fell into evaluation phase A, where provided systems were tested in order to return a list of target expressions for each given sentence in a review text. Each target expression was an annotation composed of the index of the starting and end character of the particular expression as well as its corresponding character string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For our system we decided to used word vectors (Mikolov et al., 2013a; Mikolov et al., 2013b) . Word vectors (Bengio et al., 2003) are distributed representations which are designed to carry contextual information of words if their training meets certain criteria. We also used typed grammatical dependencies to extract structural information from sentences. Furthermore we used a sentiment parser to determine the polarity of words.",
"cite_spans": [
{
"start": 47,
"end": 70,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF4"
},
{
"start": 71,
"end": 93,
"text": "Mikolov et al., 2013b)",
"ref_id": "BIBREF5"
},
{
"start": 109,
"end": 130,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system uses Stanford dependencies (Chen and Manning, 2014) and utilizes the Stanford Sentiment Treebank (Socher et al., 2013) for sentiment word detection.",
"cite_spans": [
{
"start": 38,
"end": 62,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF1"
},
{
"start": 108,
"end": 129,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Resources",
"sec_num": "2"
},
{
"text": "For the Opinion Target Extraction (OTE) task, in order to extract different features, we followed a supervised approach. We train and test different combinations of these features first at the word level and following on the provided training data 1 on sentence level before using our classifier for the final evaluation. There are two essential steps performed by our system to correctly annotate opinion target expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System for Slot 2: Opinion Target Extraction",
"sec_num": "3"
},
{
"text": "1. Classify each word of a sentence as either target or non-target 2. Given each target word, find the full target phrase",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System for Slot 2: Opinion Target Extraction",
"sec_num": "3"
},
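The two-step pipeline above can be illustrated with a minimal sketch (the trivial classifier and the run-based phrase expansion below are hypothetical placeholders, not the authors' trained SVM or their compound-based heuristic):

```python
# Sketch of the two-step OTE pipeline: (1) classify each word as
# target/non-target, (2) expand target words to full phrases.

def classify_words(tokens, is_target):
    """Step 1: label each token as target (True) or non-target (False)."""
    return [is_target(tok) for tok in tokens]

def expand_to_phrase(tokens, labels):
    """Step 2: grow each target word to a maximal run of target tokens."""
    phrases, i = [], 0
    while i < len(tokens):
        if labels[i]:
            j = i
            while j < len(tokens) and labels[j]:
                j += 1                      # extend over adjacent targets
            phrases.append(" ".join(tokens[i:j]))
            i = j
        else:
            i += 1
    return phrases

tokens = ["The", "spicy", "tuna", "roll", "was", "great"]
labels = classify_words(tokens, lambda t: t in {"tuna", "roll"})
print(expand_to_phrase(tokens, labels))  # ['tuna roll']
```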
{
"text": "For classification we use a L2-regularized L2-loss support vector dual classification 2 provided by the LIBLINEAR (Fan et al., 2008) library. In the second step we use heuristics, based on observations and statistical information we extracted from the training data. They key obversvation is that target expressions are usually composed of noun phrases and/or proper nouns. In all trials we allow only certain Part of Speech (PoS) tags for target words which are NN, NNS, NNP, NNPS and FW from the Penn Treebank (Marcus et al., 1993) ",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "(Fan et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 512,
"end": 533,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System for Slot 2: Opinion Target Extraction",
"sec_num": "3"
},
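The PoS restriction can be sketched as a simple candidate filter over tagged tokens (the helper name and the toy tagged sentence are illustrative, not the authors' implementation):

```python
# Only words with these Penn Treebank tags are allowed as target words:
# common/proper nouns (singular and plural) and foreign words.
ALLOWED_TAGS = {"NN", "NNS", "NNP", "NNPS", "FW"}

def target_candidates(tagged_sentence):
    """Keep only words whose PoS tag is allowed for target words."""
    return [word for word, tag in tagged_sentence if tag in ALLOWED_TAGS]

tagged = [("The", "DT"), ("waitress", "NN"), ("was", "VBD"),
          ("very", "RB"), ("nice", "JJ")]
print(target_candidates(tagged))  # ['waitress']
```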
{
"text": "In this section we describe the different set of features we evaluated and how they can be extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "We obtain tokens by using the Stanford Parser and extract all tokens from the available reviews used for training. We are then able to use tokens as a feature for the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token",
"sec_num": "4.1"
},
{
"text": "As another feature for words we are using the pretrained word vectors of Google News dataset 3 . Each 1 Using the English data set 2 Implementation of a Support Vector Regression Machine 3 https://code.google.com/archive/p/word2vec/ word vector is a 300-dimensional, real-valued vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Vector Feature",
"sec_num": "4.2"
},
{
"text": "Using Stanford dependencies, we extract for each word in a sentence its typed dependencies to other words in the sentence. Given the sentence \"Machine learning is fun!\", the feature for \"learning\" is compound;nsubj which are the present relations for this word. We extract all typed dependency combinations from all provided words in the training set and use these in a Bag of Words (BoW) sparse vector model. In order to normalize this feature we order the relations alphabetically and remove duplicates. For example det;amod;amod gets normalized to amod;det.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Typed Dependencies Feature",
"sec_num": "4.3"
},
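The normalization described above (order alphabetically, remove duplicates) can be written compactly; a minimal sketch with a hypothetical helper name:

```python
def normalize_relations(feature):
    """Order typed-dependency relations alphabetically and drop duplicates,
    so e.g. 'det;amod;amod' and 'amod;det' map to the same feature string."""
    return ";".join(sorted(set(feature.split(";"))))

print(normalize_relations("det;amod;amod"))   # amod;det
print(normalize_relations("compound;nsubj"))  # compound;nsubj
```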
{
"text": "Another approach is to look at the dependencies individually. We use the set of present grammatical relations as feature vector and set corresponding fields to 1 if the word does own such a relation and 0 otherwise. We are testing the two possible options of directed and undirected dependencies to see if this additional information has an impact on the end result. A short overview of a textual representation of these features can be seen in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Individual Typed Dependencies Feature",
"sec_num": "4.4"
},
{
"text": "In the undirected approach we extract the relations of each word from the data and use the resulting set of present relations as feature vector. From the training set we extracted 105 different undirected relations. Here the directional information of the grammatical dependency is lost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Undirected",
"sec_num": "4.4.1"
},
{
"text": "For the directed approach we preserve the direction in terms of incoming or outgoing relations for each grammatical relation. As an example, the word \"learning\" from Figure 1 has an outgoing relation compound+ and an incoming relation nsubjwhere + depicts the outgoing relation andthe incoming respectively. This way we found 164 different relations in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 174,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Directed",
"sec_num": "4.4.2"
},
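Assuming dependencies are available as (head, relation, dependent) triples, the directed features with their +/- markers could be collected as follows (a sketch; the triple representation is an assumption, not the authors' data structure):

```python
from collections import defaultdict

def directed_features(dependencies):
    """Map each word to its directed relation set: '+' marks an outgoing
    relation (the word is the head), '-' an incoming one (the word is the
    dependent)."""
    feats = defaultdict(set)
    for head, rel, dep in dependencies:
        feats[head].add(rel + "+")   # outgoing from the head
        feats[dep].add(rel + "-")    # incoming at the dependent
    return feats

# "Machine learning is fun!": learning heads a compound relation to
# Machine and is the nominal subject of fun.
deps = [("learning", "compound", "Machine"),
        ("fun", "nsubj", "learning"),
        ("fun", "cop", "is")]
feats = directed_features(deps)
print(sorted(feats["learning"]))  # ['compound+', 'nsubj-']
```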
{
"text": "Feature no learning coumpund;subj; yes learning coumpund-;subj+; ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typed Dependency Features Directed Word",
"sec_num": null
},
{
"text": "For a given word we determine whether it has a grammatical relation to a sentiment word. A sentiment word is a word that can have a positive or negative meaning for example \"breathtaking\" in \"The food was breathtaking!\". We are not considering a directional approach which makes this a binary feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentimend Dependency Feature",
"sec_num": "4.5"
},
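This binary feature can be sketched as below; the toy sentiment lexicon stands in for the Stanford Sentiment Treebank lookup the system actually uses:

```python
SENTIMENT_WORDS = {"breathtaking", "nice", "terrible"}  # toy lexicon

def has_sentiment_relation(word, dependencies):
    """1 if the word has any grammatical relation to a sentiment word
    (direction ignored), else 0."""
    for head, _rel, dep in dependencies:
        if word == head and dep in SENTIMENT_WORDS:
            return 1
        if word == dep and head in SENTIMENT_WORDS:
            return 1
    return 0

# "The food was breathtaking!": nsubj(breathtaking, food)
deps = [("breathtaking", "nsubj", "food"), ("breathtaking", "cop", "was")]
print(has_sentiment_relation("food", deps))  # 1
print(has_sentiment_relation("The", deps))   # 0
```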
{
"text": "This section describes the results we achieved on the restaurant domain of the SemEval-2016 aspect based sentiment analysis (ABSA) on Task 5, Slot 2. It also explains how we trained and tested our system only on the provided training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We determine how well our different features are performing by splitting the train data available and using 80% training and 20% test data. In Table 3 the performance on the target-word class of the individual features are shown depicting the performance of classifying single words as targets or non-targets. The results for the similarly token-based approach outperforms the other approaches. The weighted average for Token settles at 0.696 and very similar Token + combined typed dependencies at 0.697. None of the word vector approaches outperforms these two.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Word-Level Feature Evaluation",
"sec_num": "5.1"
},
{
"text": "To test our features we use the same training/testing split of the SemEval-2016 training data and utilize it to train the classifier and run the SemEval-2016 evaluation tool respectively. In order to annotate the Opinion Target Expressions (OTE) our system first classifies single tokens of a sentence into target or non-target and further tries to complete the target expression. The completion of the target expression is heuristic based and looks at existing incoming or outgoing compound relations using Stanford dependencies (Chen and Manning, 2014) . Each compound relation is added to the target phrase and correspondingly extended. In Table 4 we can see the results for the evaluation. It shows that despite having a better result on word-level, the token-based approach falls behind the word vector approach. It is interesting to see, that adding the undirected grammatical relations as feature does not improve the F1 score but performs even worse than the pure w2v approach. However, taking directed dependencies into account does improve the results again. We can see that for directed dependencies the recall improves but in contradiction the precision declines resulting in a higher missclassification rate and thus in a lower F1 score than we were hoping to see.",
"cite_spans": [
{
"start": 530,
"end": 554,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Testing Features",
"sec_num": "5.2"
},
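The compound-based phrase completion could be sketched as follows (the index-based token and dependency representation is a hypothetical simplification of the Stanford dependency output):

```python
def complete_target(target_index, tokens, dependencies):
    """Expand a classified target word to the full phrase by transitively
    following incoming/outgoing 'compound' relations, then emit the
    contiguous token span covering all collected words."""
    in_phrase = {target_index}
    changed = True
    while changed:                      # follow compounds until closure
        changed = False
        for head, rel, dep in dependencies:
            if rel != "compound":
                continue
            if head in in_phrase and dep not in in_phrase:
                in_phrase.add(dep); changed = True
            if dep in in_phrase and head not in in_phrase:
                in_phrase.add(head); changed = True
    lo, hi = min(in_phrase), max(in_phrase)
    return " ".join(tokens[lo:hi + 1])

# "The chicken tikka masala was great": masala (index 3) heads two
# compound relations, to chicken (1) and tikka (2).
tokens = ["The", "chicken", "tikka", "masala", "was", "great"]
deps = [(3, "compound", 1), (3, "compound", 2)]
print(complete_target(3, tokens, deps))  # chicken tikka masala
```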
{
"text": "Our submitted system is using the individual (directed) typed dependencies and the sentiment information combined with word vectors as features. The official results for participating unconstrained systems for Slot 2: Opinion Target Extraction can be seen in Table 5 . The table shows the F1-score for all participating unconstrained systems. Our system was able to outperform the baseline and a few others. Considering only unconstrained systems, Know-Center reached rank 6 out of 10 (excluding the baseline results). ",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Official Evaluation Results: Restaurant domain",
"sec_num": "5.3"
},
{
"text": "In this paper, we presented our approach for SemEval-2016 Task 5 for Subtask 1, Slot 2 in order to introduce ourselves to this particular evaluation task. Our solution might have potential for improvement and might be able to reach a much better ranking than what it achieved in the course of this challenge. Therefore, we will continue our work by focusing on finding the correct target phrase annotation given one or more target words. A drawback of our solution is the heuristic based selection of the full target phrase and we are curious about how we can improve our results with more sophisticated techniques for target phrase labelling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The Know-Center GmbH Graz is funded within the Austrian COMET Program -Competence Centers for Excellent Technologies -under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. J. Mach. Learn. Res., 3:1137-1155, March.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Fast and Accurate Dependency Parser using Neural Networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBLINEAR: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A li- brary for large linear classification. Journal of Ma- chine Learning Research, 9:1871-1874.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "COMPUTA-TIONAL LINGUISTICS",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. COMPUTA- TIONAL LINGUISTICS, 19(2):313-330.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their compositional- ity. CoRR, abs/1310.4546.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitrios",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Al-",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Smadi",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Orph\u00e9e",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph\u00e9e De Clercq, V\u00e9ronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeny Kotelnikov, Nuria Bel, Salud Mar\u00eda Jim\u00e9nez- Zafra, and G\u00fcl\u015fen Eryigit. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evalua- tion, SemEval '16, San Diego, California, June. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christo- pher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1631- 1642, Stroudsburg, PA, October. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Shown are typed dependencies from Stanford dependencies visualized with grammarscope.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "",
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"html": null,
"text": "listed in Table1.",
"content": "<table><tr><td colspan=\"2\">PoS-Tag Name NN Noun, singular or mass</td></tr><tr><td>NNS</td><td>Noun, plural</td></tr><tr><td>NNP</td><td>Proper noun, singular</td></tr><tr><td>NNPS</td><td>Proper noun, plural</td></tr><tr><td>FW</td><td>Foreign word</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "The table shows two example of undirected and di-",
"content": "<table><tr><td>rected typed dependency features for the word \"learning\" in</td></tr><tr><td>the sentence \"Machine learning is fun!\".</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "The resulting F1 scores for the target-word class using",
"content": "<table><tr><td>different features on word-level over a 80/20 training/test split</td></tr><tr><td>of the provided training data.</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "Shown are evaluation F1 scores given by the SemEval-",
"content": "<table><tr><td>2016 evaluation tool for different features and feature combina-</td></tr><tr><td>tions used for training on a 80/20 training/test split of the pro-</td></tr><tr><td>vided training data.</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"text": "Shown are the official evaluation results for Subtask 1, Slot 2 of Task 5 from the SemEval-2016 challenge for the Restaurant domain. The table shows only results for unconstrained systems.",
"content": "<table/>"
}
}
}
}