|
{ |
|
"paper_id": "N10-1036", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:50:09.281050Z" |
|
}, |
|
"title": "", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe a utility evaluation to determine whether cross-document information extraction (IE) techniques measurably improve user performance in news summary writing. Two groups of subjects were asked to perform the same time-restricted summary writing tasks, reading news under different conditions: with no IE results at all, with traditional singledocument IE results, and with cross-document IE results. Our results show that, in comparison to using source documents only, the quality of summary reports assembled using IE results, especially from cross-document IE, was significantly better and user satisfaction was higher. We also compare the impact of different user groups on the results.", |
|
"pdf_parse": { |
|
"paper_id": "N10-1036", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe a utility evaluation to determine whether cross-document information extraction (IE) techniques measurably improve user performance in news summary writing. Two groups of subjects were asked to perform the same time-restricted summary writing tasks, reading news under different conditions: with no IE results at all, with traditional singledocument IE results, and with cross-document IE results. Our results show that, in comparison to using source documents only, the quality of summary reports assembled using IE results, especially from cross-document IE, was significantly better and user satisfaction was higher. We also compare the impact of different user groups on the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
|
{ |
|
"text": "Information Extraction (IE) is a task of identifying 'facts' (entities, relations and events) within unstructured documents, and converting them into structured representations (e.g., databases). IE techniques have been effectively applied to different domains (e.g. daily news, Wikipedia, biomedical reports, financial analysis and legal documentations) and different languages. Recently we described a new cross-document IE task to extract events across-documents and track them on a time line. Compared to traditional single-document IE, this new task can extract more salient, accurate and concise event information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, a significant question remains: will the events extracted by IE, especially this new cross-document IE task, actually help end-users to make better use of the large volumes of news? In order to investigate whether we have reached this goal, we performed an extrinsic utility (i.e., usefulness) and usability evaluation on IE results. Two groups of subjects were asked to perform the same time-restricted summary writing tasks, reading news under different conditions: with no IE results at all, with traditional single-document IE results, and with cross-document IE results. Our results show that, in comparison to using source documents only, the quality of summary reports assembled using IE techniques, especially from cross-document IE, was significantly better. Also, as extraction quality increases from no IE at all to single-document IE and then to cross-document IE, user satisfaction increases. We also compare the impact of different user groups on the results. To the best of our knowledge, this is the first systematic evaluation of cross-document IE from a usability perspective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We applied the English single-document IE system (Ji and Grishman, 2008) and cross-document IE system presented in . Both systems were developed for the ACE program 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "(Ji and Grishman, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of IE Systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The single-document IE system can extract events from individual documents. The core stages include entity extraction, time expression extraction and normalization, relation extraction and event extraction. Events include the 33 distinct types defined in ACE05. The extraction results are presented in tabular form.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of IE Systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The cross-document IE system can identify important person entities which are frequently in-volved in events as 'centroid entities'; and then for each centroid entity, link and order the events centered around it on a time line and associate them to a geographical map. The event chains are presented in a user-friendly graphical interface . Both systems link the events back to their context documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of IE Systems", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our measurement challenge is to assess how IE techniques affect users' abilities to perform realworld tasks. We followed the summary writing task described in the Integrated Feasibility Experiment of the DARPA TIDES program (Colbath and Kubala, 2003) and the daily task conducted by intelligence analysts (Bodnar, 2003) . Each task in our evaluation is based on writing a summary of ACE-type events involving a specific centroid entity, using one of three levels of support:", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 250, |
|
"text": "(Colbath and Kubala, 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 319, |
|
"text": "(Bodnar, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Study Execution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Level (I): Read the news articles, with assistance of keyword based sentence search; \u2022 Level (II): (I) + with assistance from singledocument IE results; \u2022 Level (III): (I) + with assistance from crossdocument IE results. The summary writing task for each entity using any level should be finished in 10 minutes. The users can choose to trust the IE results to create new sentences or select relevant sentences from the source documents. The IE systems were applied to a corpus of 106 articles from ACE 2005 training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Study Execution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We measure user responses in three aspects:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary Scoring", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Observer-based Quantity --How many sentences are extracted in each summary? How many of them are uniquely correct? \u2022 Observer-based Quality--How fluent and coherent are the sentences in each summary? \u2022 User-based Usability --How does the user feel about the system?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary Scoring", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We selected user groups based on the principles that we should run as many tests as we can afford (Nielsen, 1994) , and at least 5 to insure that we detect any major usability problems (Faulkner, 2003) . Two different groups of users were asked to conduct the evaluation:", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 113, |
|
"text": "(Nielsen, 1994)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 201, |
|
"text": "(Faulkner, 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Group Selection", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(1) Hallway Evaluation We chose the first group of users with a \"Hallway Testing\" user-study method described in (Nielsen, 1994) . We randomly asked 11 PhD students in the field of natural language processing to conduct the evaluation. In order to evaluate these three levels independently, each student was asked to write at most one summary, using one of the three levels, for any single centroid entity. To avoid the impact of diverse text comprehension abilities, each student was involved in all of these three levels for different centroid entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 128, |
|
"text": "(Nielsen, 1994)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Group Selection", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "(2) Remote Evaluation An effective utility evaluation will require users with a diversity of prior knowledge and computer experience. Therefore we asked the second group of 11 users in a remote usability testing mode (Hammontree et al., 1994) . We sent out the request to university-wide undergraduate student mailing lists and found 11 users to work on the evaluation. The evaluation procedure follows the Hallway Testing method, except that the tests are carried out in the user's own environment (rather than labs) helping further simulate real-life scenario testing. Also the users didn't meet with the observers and thus they were not aware of any expectations for results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 242, |
|
"text": "(Hammontree et al., 1994)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Group Selection", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this section we will focus on reporting the results from Hallway Evaluation, while providing comparisons with Remote Evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The summaries were judged by two annotators and the judgements reconciled. A summary sentence is judged as uniquely correct if it: (1) includes relevant events involving the centroid entity; and (2) the same information was not included in previous sentences in the current summary. This metric can be considered as an approximate com bination of the \"content responsiveness\", \"nonredundancy\"and \"focus\" criteria in the NIST TAC summarization track 2 . ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Bush 3/1/0 5/1/2 6/0/0 Al-douri 4/3/3 4/2/0 6/0/1 Ba'asyir 3/1/0 3/0/0 5/0/0 Ibrahim 4/0/1 5/0/0 8/0/0 Giuliani 2/0/0 3/2/0 5/0/0 Erdogan 1/0/1 4/0/0 4/0/0 Toefting 0/0/0 7/1/0 4/0/0 Blair 2/0/1 3/0/0 5/0/0 Diller 3/0/0 4/1/0 3/0/0 Putin 2/1/0 4/3/2 7/1/1 Pasko 3/0/0 3/0/0 2/0/0 Overall 27/6/6 45/10/5 55/1/2 Table 1 . # (uniquely correct sentences)/ #(redundant correct sentences)/ #(spurious sentences) in a summary in Hallway Evaluation quantified Hallway Testing results for each centroid separately and the overall score. It shows that overall Level (II) contained 18 more correct sentences than the baseline (I), while (III) achieved 11 further correct sentences. (I) obtained significantly fewer sentences without assistance from IE tools. We conducted the Wilcoxon Matched-Pairs Signed-Ranks Test on a query entity basis for accuracy -number of (uniquely correct sentences)/number of (total extracted sentences in a summary). The results show that (III) is significantly better than (I) at a 99.2% confidence level, and better than (II) at a 96.9% confidence level. (II) is not significantly better than (I). We can also see that for some centroid entities such as \"Putin\", \"Al-douri\" and \"Giuliani\", (II) generated more sentences but also introduced more redundant information. The user feedback has indicated that they did not have enough time to remove redundancy. In contrast, (III) yielded much less redundant information. In fact, the average time the users spent using (III) was only about 7.2 minutes. Therefore we can conclude that crossdocument IE can produce more informative summaries in a more efficient way.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 317, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Error analysis showed that the major error types propagated from IE to summaries are as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. Event time errors. For example, the summary sentence \"Toefting was convicted in September 2001 of assaulting a pair of restaurant workers in the capital\" was judged as incorrect because the time argument should be \"October 2002\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "2. Pronoun resolution errors. When a pronoun is mistakenly linked to an entity, incorrect event arguments will be included in the summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "3. Event type errors. When an event is misclassified, the users tend to use incorrect templates and thus generate wrong summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "4. Negative events. Sometimes the event attribute classifier makes mistakes and the users include negative events in the summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observer-based Quantity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the Remote Testing, the accuracy results from the three levels are as follows: 21/37, 28/37 and 31/36. Thus both user groups benefited from using IE techniques, but the enhancements vary a lot. In the Hallway Testing, the users were better trained and more familiar with IE tools (including the graphical interface of cross-document IE); and thus they can benefit more from the IE techniques. In contrast, in the Remote Evaluation, the users had quite diverse knowledge backgrounds. For example, one remote user was only able to find 1-2 sentences using any of the three levels; while another, more skilled remote user found more than 5 sentences with any level. However the Remote Evaluation is important to gather the feedback of the more subjective usability evaluation in section 4.4. Because the users in Hallway Testing may be aware of the observations that the observer is hoping to achieve, they may provide potentially biased feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Impact of User Groups", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The evaluation also showed that (III) produced summaries with better quality. We asked the observers to give a score between [1, 10] to each summary according to the following TAC summarization quality criteria: Readability/Fluency, Referential Clarity and Structure/Coherence. Table 2 shows the evaluation results for the three different methods. In their detailed feedback, the users indicated that (III) has the following advantages: (1) Better pronoun resolution; (2) More complete and accurate temporal order because (III) Can recover unknown time arguments using cross-document inference. (3) Can generate abstractive summaries. For the biographical events (e.g. employment), some users were able to use specific templates such as \"PER was hired by ORG at TIME\" to write summaries. For example, a sentence \"Bush and Blair met at Camp David and the UK three times in March 2003\" was derived from three different \"Contact-Meeting\" events in the event chains. (4) Can connect related events into more concise summaries. For example, several events were connected to generate the following sentences \"Pasko was appealed for treason crime on April 16, 2003 and then released on June 15, 2003\". The readability scores in Table 2 also indicate that a more effective template generation method should be developed to produce more fluent summaries based on IE results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1221, |
|
"end": 1228, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Observer-based Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The user feedback from both evaluations also showed that (II) and (III) results were trusted almost equally, and (III) was claimed to provide the most useful functions. The positive comments about (III) include \"Temporal Linking allows logical reasoning and generalization\", \"Centroid search helps to focus immediately\", \"Spatial Linking allows to browse all the places which a person has visited\", \"Name disambiguation helps to filter irrelevant information\", \"Can find key information from event chains\", \"Timeline helps correlate events\"; and the negative comments include \"Sometimes IE errors mislead locating the sentences\", \"No support of name pair search for meeting events\", \"No color emphasis of events on the original documents\" and \"No suggestions of templates to compose summary sentences\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User-based Usability", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Through a utility evaluation on summary writing we have proved that IE techniques, especially cross-document IE, can aid news browsing, search and analysis. In particular, temporal event tracking across documents helps users perform better at fact-gathering than they do without IE. Users also produced more informative summaries with crossdocument IE than with traditional single-document IE. We also compared and analyzed the differences between two user groups. Such measures of the benefits to the eventual end users also provided feedback on what works well and identified additional research problems, such as to expand the centroid to a pair of entities and to provide confidence metrics in the interface. In the future we aim to set up an online news article analysis system and perform larger and regular utility evaluations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://www.itl.nist.gov/iad/mig/tests/ace/2005/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.nist.gov/tac/2009/Summarization/update.su mm.09.guidelines.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the U.S. NSF CAREER Award under Grant IIS-0953149, the U.S. Army Research Laboratory under Cooperative Agreement Number W911NF-09-2-0053, Google, Inc., CUNY Research Enhancement Program, Faculty Publication Program and GRTI Program. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Warning Analysis for the Information Age: Rethinking the Intelligence Process", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Bodnar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Center for Strategic Intelligence Research, Joint Military Intelligence College", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John W. Bodnar. 2003. Warning Analysis for the In- formation Age: Rethinking the Intelligence Process. Center for Strategic Intelligence Research, Joint Military Intelligence College, Washington, D.C.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "TAP-XL: An Automated Analyst's Assistant", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Colbath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Kubala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. HLT-NAACL 2003 (demonstrations)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Colbath and Francis Kubala. 2003. TAP-XL: An Automated Analyst's Assistant. Proc. HLT-NAACL 2003 (demonstrations).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Beyond the five-user assumption: Benefits of increased sample sizes in usability testing", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Faulkner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Behavior Research Methods Instruments and Computers", |
|
"volume": "35", |
|
"issue": "3", |
|
"pages": "379--383", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Faulkner. 2003. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods Instruments and Com- puters 35(3), 379-383.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cross-document Temporal and Spatial Person Tracking System Demonstration", |
|
"authors": [ |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heng Ji and Zheng Chen. 2009. Cross-document Tem- poral and Spatial Person Tracking System Demon- stration. Proc. HLT-NAACL 2009.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Cross-document Event Extraction, Ranking and Tracking", |
|
"authors": [ |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prashant", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heng Ji, Ralph Grishman, Zheng Chen and Prashant Gupta. 2009. Cross-document Event Extraction, Ranking and Tracking. Proc. Recent Advances in Natural Language Processing 2009.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Usability Engineering", |
|
"authors": [ |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Nielsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakob Nielsen. 1994. Usability Engineering. Morgan Kaufmann Publishers.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"content": "<table><tr><td>presents the</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |