|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:31:43.892753Z" |
|
}, |
|
"title": "Handling Comments in Collaborative Documents through Interactions", |
|
"authors": [ |
|
{ |
|
"first": "Madison", |
|
"middle": [], |
|
"last": "Anzelc", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Burkhart", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xuankai", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Wangyou", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yanmin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Qian", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shinji", |
|
"middle": [], |
|
"last": "Roux", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Churchill", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Trevor", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sara", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Les", |
|
"middle": [], |
|
"last": "Bly", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Davor", |
|
"middle": [], |
|
"last": "Nelson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cubranic", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Comments are widely used by users in collaborative documents every day. The documents' comments enable collaborative editing and review dynamics, transforming each document into a context-sensitive communication channel. Understanding the role of comments in communication dynamics within documents is the first step towards automating their management. In this paper we propose the first ever taxonomy for different types of in-document comments based on analysis of a large-scale dataset of public documents from the web. We envision that the next generation of intelligent collaborative document experiences will allow interactive creation and consumption of content. We also introduce the components necessary for developing novel tools that automate the handling of comments through natural language interaction with the documents. We identify the commands that users would use to respond to various types of comments. We train machine learning algorithms to recognize the different types of comments and assess their feasibility. We conclude by discussing some of the implications for the design of automatic document management tools. 1 Introduction Comments on collaborative documents serve as a communication channel. This type of context-specific communication enables review and editing dynamics within the document. Collaborative text editors have visual components that allow users to associate a comment with a specific part of the content. This provides additional context in situations where the conversation focuses on a specific part of the document (Churchill et al., 2000). As we can see, the amount of contextualization in communication that document comments permit is too complex and costly to recreate in other communication means outside of a document. For example, a request for changing a certain part of a document's content (e.g. a paragraph's sentence) through email would require much additional information to be provided about all of the context before requesting the change. In this paper, we present a novel taxonomy of the types of comments detected in a collection of public documents. We detect three main categories of intents for comments: Modification, Information Exchange, and Social Communication. We show that supervised models can successfully be trained to identify the type",
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Comments are widely used by users in collaborative documents every day. The documents' comments enable collaborative editing and review dynamics, transforming each document into a context-sensitive communication channel. Understanding the role of comments in communication dynamics within documents is the first step towards automating their management. In this paper we propose the first ever taxonomy for different types of in-document comments based on analysis of a large-scale dataset of public documents from the web. We envision that the next generation of intelligent collaborative document experiences will allow interactive creation and consumption of content. We also introduce the components necessary for developing novel tools that automate the handling of comments through natural language interaction with the documents. We identify the commands that users would use to respond to various types of comments. We train machine learning algorithms to recognize the different types of comments and assess their feasibility. We conclude by discussing some of the implications for the design of automatic document management tools. 1 Introduction Comments on collaborative documents serve as a communication channel. This type of context-specific communication enables review and editing dynamics within the document. Collaborative text editors have visual components that allow users to associate a comment with a specific part of the content. This provides additional context in situations where the conversation focuses on a specific part of the document (Churchill et al., 2000). As we can see, the amount of contextualization in communication that document comments permit is too complex and costly to recreate in other communication means outside of a document. For example, a request for changing a certain part of a document's content (e.g. a paragraph's sentence) through email would require much additional information to be provided about all of the context before requesting the change. In this paper, we present a novel taxonomy of the types of comments detected in a collection of public documents. We detect three main categories of intents for comments: Modification, Information Exchange, and Social Communication. We show that supervised models can successfully be trained to identify the type",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "text, such as the selected text, the paragraph text, and the comment text. In this work, we study intent detection models that use multiple elements of the context.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using voice as an input interface is not something novel. In 1976, Reddy reviewed the effectiveness of acoustic, phonetic, syntactic, and semantic subsystems (Reddy, 1976).",
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 171, |
|
"text": "(Reddy, 1976)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "135", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some pioneering work on detecting commands from audio includes techniques where sequences of phonemes (Halle and Stevens, 1962) and prosodemes (Peterson, 1961) were interpreted as commands. The human voice is especially challenging to process because of the variability among individuals (Radha and Vimala, 2012).",
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 123, |
|
"text": "(Halle and Stevens, 1962)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 155, |
|
"text": "(Peterson, 1961)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 307, |
|
"text": "(Radha and Vimala, 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "135", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Early work in human voice processing was constrained to a limited set of words (Pieraccini and Director, 2012). Feature engineering techniques over audio help to identify descriptors that characterize words. Some toolkits that extract a variety of those features emerged, such as SMILE (Eyben et al., 2010). These enabled approaches based on classic machine learning techniques such as Support Vector Machines (Kanth and Saraswathi, 2015).",
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 110, |
|
"text": "(Pieraccini and Director, 2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 311, |
|
"text": "(Eyben et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "135", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The major change in performance and efficiency happened when neural networks were fed large amounts of data. Some early neural network approaches used Hidden Markov Models to detect words in English (Aldarmaki et al., 2021). The latest work in this field uses Transformers for multi-speaker speech recognition (Chang et al., 2020).",
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 223, |
|
"text": "English (Aldarmaki et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "135", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Assistance with document writing is an ancient practice. Scribes were people who made copies and wrote letters on behalf of others, not only to spare them the need to write for themselves but also because of illiteracy (Anzelc et al., 2021).",
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 234, |
|
"text": "(Anzelc et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Assisted Document Management", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "This section explains the details of our process for preparing the document comment dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "177", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We have curated a set of documents that contain multiple comments from public sources available on the web. To our knowledge, there are currently no datasets available that have been curated for the investigation of in-document comments, and such a dataset is needed for research. Although it is possible to investigate comments on public pages such as Wikipedia, Reddit, Twitter, YouTube, or other web forums, the use of comments on these forums is inherently very different from in-document comments used for collaborative authoring. In-document comments are interactive and conversational and commonly request and result in changes and updates to the content of the shared document. In-document comments are intended to be carefully reviewed by their recipients, and authors and reviewers tend to resolve and remove them prior to releasing documents to public readers. This practice makes it very difficult to come across in-document comments in mature public documents. Private files that are earlier in the editing life-cycle are more likely to have threads of comments. We use public documents because releasing private files is not possible due to copyright and privacy concerns. In addition to the challenges mentioned, we observed that only a certain percentage of Word documents from recent years (after 2003) support comments, and we were able to extract comments only from those.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "177", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We used an initial index of 1,000,000 Word documents from the web through CommonCrawl (Com, a) and filtered them based on language to obtain English 'en' documents from the index. We also filtered this collection to include only Microsoft Word documents with the '.docx' extension. The reasons behind the decision to use only .docx files were that 1) the non-binary nature of the XML files contained in the .docx bundle makes data extraction easy with common XML tools; and 2) in 2003 (the same year that the .docx format was introduced) comments were integrated into the document interface.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We observed that some files were duplicates of one another even though they were indexed at different addresses and had different filenames and URLs. In some instances, this was because of changes between CommonCrawl index batches. In order to detect duplicates of files and prevent duplicates from reappearing in our dataset, we compared their",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Collection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "MD5 hashes with one another. We then addressed the issue for files that were not duplicates of one another but rather incremental versions; in those cases, we kept the document with the higher number of comments.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "246", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Selected text: The text to which the comment refers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "332", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Paragraph text: The text of the paragraph where the comment belongs.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "334", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Thread text: The comments that precede the comment to be evaluated.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "336", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The training of the models was carried out at different hierarchical levels of categories. For each model, the text of the comment was evaluated as well as texts located in other regions of the document that correspond to the context. Table 3 shows the top category level performance metrics over all the data across models. From the results, we can see that the Transformer models had a similar overall performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The models were trained with a combination of context elements. Table 4 shows that the combinations of context elements produced no major changes in the classification performance for comments.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 71, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Transformer-based models accomplished this task with similar results across all models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Performance across categories may vary depending on the hierarchy level of each category. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The study of how users would interact with an interface for managing document comments in real settings requires the collection of real documents and the implementation of tools in the workplace. In this section, we explain the process, from document data collection to the collection of the interactions of participants in the field study.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The interaction over documents with comments is not the same for different types of comments (Dabbish et al., 2005). We then proceeded to label the data via KarmaHub crowd workers. The comment is a request for change, a commitment to making a change, or an acknowledgment of a change that was already performed.",
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 115, |
|
"text": "Dabbish et al., 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scenarios", |
|
"sec_num": "6.1.1" |
|
}, |
|
{ |
|
"text": "Please write the answer in your own words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scenarios", |
|
"sec_num": "6.1.1" |
|
}, |
|
{ |
|
"text": "Asking for a change. I would add it as context for the pre-sales resource (in pink text).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MODIFICATION REQUESTED",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The modification is related to the content. This could be rephrased to something like 'Once a study guide is available, all test candidates will be notified' FORMAT MODIFICATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONTENT MODIFICATION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The modification requires a change in formatting. Should be centered throughout the doc. EXPLICIT: The things to be changed are explicitly defined in the comment. We should remove this part of the statement. I think this is a good point.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONTENT MODIFICATION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The author is acknowledging a comment from a reviewer. I see.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ACKNOWLEDGMENT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The comment is part of a conversation. I'm glad there is an ongoing discussion. FEEDBACK: The reviewer gives feedback to the author. Great start to this unit.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DISCUSSION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The comment is related to the content. Providing a basic statement of why we're prioritizing these over others will help us negotiate when folks come to us with requests outside of this scope.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONTENT RELATED", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The comment is related to the comment thread. Feel free to add/edit to ensure this point is highlighted throughout the doc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "THREAD RELATED", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "After addressing it, it may lead to a change in the document. Who is he?",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POTENTIAL CHANGE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NOT POTENTIAL CHANGE It does not cause any change in the document after addressing it. We conducted a crowd-sourced field study on",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POTENTIAL CHANGE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "KarmaHub. We iterated the instructions with the crowd-sourcing provider on three pilots to verify that the goals of the task were understood.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "470", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We asked 50 participants to complete six scenarios each. We got three samples per scenario.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "473", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Participants were asked to give a voice command first and then execute it in the interface.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "475", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We collected voice samples and telemetry samples that were part of the contextual information. The words in the comment were used more often (up to 23% of the words in the command text). Most of the words (from 62% to 72%)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "477", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "were unique and were not present in the context. Table ?? shows the top ten trigrams detected for each type of comment. We can see that the most common trigrams correspond to phrases used to handle the comment box rather than to phrases used to perform the requested edits. Table 8 shows the duration in seconds of each voice command. The median voice command ranged from 5 to 7 seconds. Table 9 shows the metrics obtained by analyzing the user actions in the experimentation platform. We can observe that participants spent a median of 7 to 20 seconds across conditions. Participants selected more text than they typed. Not all participants interacted with the comment box; the scenario with the most interaction was social communication, at 26%.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table ?", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 281, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 400, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "477", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We identified that the execution of the command often took longer than the time to say the command, ranging from 1 to 15 seconds. The number of selected words was larger than the number of words dictated by the users; this can be explained by the use of ranges in the voice commands. Often users mentioned the first and last words of a sentence to mark the span of text to highlight.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Modal Analysis", |
|
"sec_num": "6.2.4" |
|
}, |
|
{ |
|
"text": "After the command collection, the commands were separated into editing commands and comment-management commands. We can see how the assistant was personified: most participants were polite, saying \"please\" before the commands, i.e. \"Please remove the text starting from [...] ,\" \"Please remove the text [...] .\" Some other users did not mention that they wanted to delete or resolve a comment; they only said, \"Done.\" We identified some participants who delegated tasks to the agent instead of retrieving and dictating manually: \"Please add the two journal titles that the co-author is asking for.\" (1) Request for deleting, replying, or marking the comment as done;",
|
"cite_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 281, |
|
"text": "[...]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 314, |
|
"text": "[...]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "6.2.5" |
|
}, |
|
{ |
|
"text": "(2) Dictation: when the action was \"reply,\" users started dictating the text to reply with.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "6.2.5" |
|
}, |
|
{ |
|
"text": "The findings of this work can help platform designers to enable assistants in the text editors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Comment Management", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "From our results, we can observe that the time spent in dictation and in actually performing the task was similar. The main goal of those tools might not be to improve productivity but to offer hands-free solutions to manage collaborative documents. Tools can also help users triage their comments depending on the type of comment. The data can also be used to infer in which cases the users prefer to delete or to keep the comment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Comment Management", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "The field study was conducted with crowd workers asked to resolve comments in documents that were not of their authorship and with comments left by strangers. The behavior of users who own the document and collaborate with people they know might yield different results.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "The participants did not work in a familiar text editor; this might cause delays in their executions due to the lack of familiarity with the tool. manually to map the telemetry with commands.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "We identified the main commands used while ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "634", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "were unable to identify the intent (i.e., a mul-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "speech-to-text transcription was performed via", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Microsoft Cognitive Services (Cog). In case a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work shed light on the required steps to au-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cognitive services-apis for ai solutions | microsoft azure. https: conference on Human factors in computing systems",
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "454--461", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cognitive services-apis for ai solutions | microsoft azure. https: conference on Human factors in computing systems, pages 454-461.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Understanding email use: predicting action on a message", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Fussell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kiesler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the SIGCHI conference on Human factors in computing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "691--700", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fussell, and Sara Kiesler. 2005. Understand- ing email use: predicting action on a message. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 691-700.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Opensmile: the munich versatile and fast open-source audio feature extractor", |
|
"authors": [ |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Eyben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "W\u00f6llmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bj\u00f6rn", |
|
"middle": [], |
|
"last": "Schuller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 18th ACM international conference on Multimedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1459--1462", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Florian Eyben, Martin W\u00f6llmer, and Bj\u00f6rn Schuller. 2010. Opensmile: the munich versa- tile and fast open-source audio feature extractor. In Proceedings of the 18th ACM international conference on Multimedia, pages 1459-1462.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic speech recognition and transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Anil", |
|
"middle": [], |
|
"last": "Kumar Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankita", |
|
"middle": [], |
|
"last": "Kumari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachna", |
|
"middle": [], |
|
"last": "Somkunwar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anil Kumar Gupta, Ankita Kumari, and Rachna Somkunwar. Automatic speech recog- nition and transliteration.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Speech recognition: A model and a program for research. IRE transactions on information theory", |
|
"authors": [ |
|
{ |
|
"first": "Morris", |
|
"middle": [], |
|
"last": "Halle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1962, |
|
"venue": "", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "155--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morris Halle and Kenneth Stevens. 1962. Speech recognition: A model and a program for research. IRE transactions on information theory, 8(2):155-159.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficient speech emotion recognition using binary support vector machines & multiclass svm", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "N Ratna Kanth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Saraswathi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "N Ratna Kanth and S Saraswathi. 2015. Ef- ficient speech emotion recognition using bi- nary support vector machines & multiclass svm. In 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pages 1-6. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Muhammad Zubair Asghar, Fazli Subhan, Imran Razzak, and Ammara Habib. 2021. Applying deep neural networks for user intention identification", |
|
"authors": [ |
|
{ |
|
"first": "Asad", |
|
"middle": [], |
|
"last": "Khattak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anam", |
|
"middle": [], |
|
"last": "Habib", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Soft Computing", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "2191--2220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asad Khattak, Anam Habib, Muham- mad Zubair Asghar, Fazli Subhan, Imran Razzak, and Ammara Habib. 2021. Applying deep neural networks for user intention identi- fication. Soft Computing, 25(3):2191-2220.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "What is web 2.0? XRDS: Crossroads, The ACM Magazine for Students", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "3--3", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Lewis. 2006. What is web 2.0? XRDS: Crossroads, The ACM Magazine for Students, 13(1):3-3.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bart: Denoising sequence-tosequence pre-training for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{

"first": "Naman",

"middle": [],

"last": "Goyal",

"suffix": ""

},

{

"first": "Marjan",

"middle": [],

"last": "Ghazvininejad",

"suffix": ""

},

{

"first": "Abdelrahman",

"middle": [],

"last": "Mohamed",

"suffix": ""

},
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ves", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.13461" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettle- moyer. 2019. Bart: Denoising sequence-to- sequence pre-training for natural language gen- eration, translation, and comprehension. arXiv preprint arXiv:1910.13461.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automatic speech recognition procedures", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gordon E Peterson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1961, |
|
"venue": "Language and Speech", |
|
"volume": "4", |
|
"issue": "4", |
|
"pages": "200--219", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gordon E Peterson. 1961. Automatic speech recognition procedures. Language and Speech, 4(4):200-219.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "From audrey to siri. Is speech recognition a solved problem", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Pieraccini", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Pieraccini and ICSI Director. 2012. From audrey to siri. Is speech recognition a solved problem, 23.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A review on speech recognition challenges and approaches", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Radha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Vimala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "doaj. org", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V Radha and C Vimala. 2012. A review on speech recognition challenges and approaches. doaj. org, 2(1):1-7.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Speech recognition by machine: A review", |
|
"authors": [ |
|
{ |
|
"first": "D Raj", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "64", |
|
"issue": "4", |
|
"pages": "501--531", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D Raj Reddy. 1976. Speech recognition by machine: A review. Proceedings of the IEEE, 64(4):501-531.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.01108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chau- mond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{

"first": "Aidan",

"middle": [

"N"

],

"last": "Gomez",

"suffix": ""

},

{

"first": "Lukasz",

"middle": [],

"last": "Kaiser",

"suffix": ""

},

{

"first": "Illia",

"middle": [],

"last": "Polosukhin",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.03762" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Identifying semantic edit intentions from revisions in wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2000--2010", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy. 2017. Identifying semantic edit intentions from revisions in wikipedia. In Pro- ceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2000-2010.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "management of document com-132 ments requires a reliable interpretation of voice 133 commands and a clear understanding of user 134", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "The rules that humans use to transcribe text are often implicit and subjective. The automation of this process requires a first standardization effort; this explains why some speech-to-text tools include a commands sheet. There is a trend that dictation tools recognize more and more natural language. The latest approaches in automatic transcription (Gupta et al.) have moved away from providing a list of commands and now try to infer based on context. Nowadays, editing tools are not only designed to share information but also promote collaboration. Exchanging comments in a document is a communication channel widely used in companies and at a personal level. Our work extends on previous work that has enabled mechanisms to understand commands from natural language applied to document comments management.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "via XML parser to extract the 260 Microsoft Word meta-information about the 261 document and each comment. For each com-262 ment, we extracted the information of its an-263 chored paragraph, text selection, comment con-264 tent, and responses to the comments. We once 265 again filtered the documents using the inferred 266 language provided by Microsoft Office to en-267 sure they were in English. We preferred not to 268 have to translate to prevent change in context 269 and meaning through automatic translations.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"text": "three coders. The an-285 notators were sourced through the company ments is depicted in Figure 1. The green text 288 highlights a sentence in the comment to be an-289 notated, while the yellow text highlights the se-290 lected text associated with the comment. Two 291 annotators selected intents and sub-intents for 292", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"text": "Annotation interface that shows the sentence to annotate (green) and its associated text (yellow). Annotators chose intents and sub-intents in the annotation area (orange).each message, and a third annotator served as a tiebreaker, selecting the most accurate labels in cases of disagreement. We obtain a significant Kappa score of 0.65 for the agreement 296 between annotators. The distribution of com-297 ments across sub-intents in the dataset is shown 298 in Table 1. tilingual comment) or when the comment con-302 tained an intent not defined in our list. Only 303 297 (5.9 percent) of comments were classified as deep learning for the train-315 ing of models that can classify intents. For 316 the evaluation of classical models, we use the 317 Supported Vector Machine (SVM) and Logistic 318 Regression (LR) models. Additionally, we im-319 plemented classification models based on the 320 Transformers (Vaswani et al., 2017) architec-321 ture. The distilled versions of BERT (Sanh 322 et al., 2019) RoBERTa (Liu et al., 2019), and 323 BART (Lewis et al., 2019) were fine-tuned with 324", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"text": "Document comment management user interface.workers (Kar). A random sample of 5,000 com-433 ments was labeled by three workers. The inter-434 rater reliability Cohen's kappa value was 0.65, was displayed at the right side of the paragraph. The commands side bar is a collection of transcribed voice commands. The voice command was wrongly transcribed, the 466 participant had the capability to edit in the com-", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF10": { |
|
"text": "that place the cursor or 560 identify the text to be formatted, deleted, or 561 replaced (i.e., \"At the end of the passage [...]\", 562 \"[...] after the word [...]\"); (2) Action Com-563 mand, referees to a command that triggers an 564 action such as format, add, replace, or delete 565 part of the content (i.e., \"Please delete the text 566 [...]\", \"Insert the word [...]\"); (3) Parameter 567 Command, this works as the parameter of the 568 performed action (i.e., \"Replace the highlighted 569 text with Dr. John Smith\", \"please use the word 570 reps instead of representatives\").", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF11": { |
|
"text": "The comment management commands had low 572 variability in the structure; we identified this573 common structure: (1) Action Command, a 574", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF12": { |
|
"text": "interacting with the tool via voice, as well as636 the time spent on resolving each type of com-637 ment. We aim that the findings of this work can 638 empower tools to support document comments 639 management.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>Exchange and their subcategories maintained a similar performance, while the categories of social communication obtained a lower perfor-mance.</td></tr><tr><td>6 Case Study 2 -Voice Commands</td></tr><tr><td>Interacting with documents via voice is not something novel. Voice has enabled for years hands-free interactions while consuming or editing documents. Its usage is not limited to performance or accessibility scenarios; the emergence of virtual voice assistants has en-abled new multi-device and multi-modal inter-actions.</td></tr><tr><td>Using voice to express ideas is a natural interac-tion between humans, but it adds extra complex-ity to machines. Peripherical input devices as keyboards convert electrical impulses to single characters; it reduces errors to user motricity or device mechanical-related issues. Machines rely on speech recognition algorithms to get accurate input from the voice. Even today, with sophisticated algorithms and huge volumes of data, the results are far from perfect. Being able to develop voice-based solutions implies dealing with uncertain information-the vari-ability of ways to express the same concept help applications to be resilient to unexpected inputs.</td></tr><tr><td>Document dictation is one of the tasks that speech recognition enables. Dictation implies transcribing what is said to the document. To get syntactically correct results, these tools</td></tr><tr><td>4 179</td></tr></table>", |
|
"text": "shows the results of how the models perform in the two top levels. The results show that the categories of Modification and Information", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Main MODIFICATION (1883, 37.7%) INFORMATION EXCHANGE (2477, 49.7%) SOCIAL COMMUNI-CATION (343, 6.8%)</td><td>Level 1 REQUEST (1611, 85.5%) EXECUTION STATUS (272, 14.5%) PROVIDED (1771, 71.5%) REQUESTED (706, 28.5%) ACKNOWLEDGMENT (25, 7.2%) DISCUSSION (143, 41.6%) / FEEDBACK (175, 51.2%)</td><td>Level CONTENT (1209, 75%) / FORMAT (402, 25%) DONE (254, 93.3%) / PROMISE (18, 6.7%) CONTEXT (1420, 80.1%) / REF-ERENCE (351, 19.9%) ASKING DETAILS (554, 78.4%) / REQUESTING CONFIRMATION (152, 21.6%) CONTENT (174, 50.7%) / THREAD (144, 49.3%)</td><td>Level 3 EXPLICIT (1519, 94.2%) / NOT EXPLICIT (92, 5.8%) POTENTIAL CHANGE (1104, 62.3%) / NOT POTENTIAL CHANGE (667, 37.7%) POTENTIAL CHANGE (600, 84.9%) / NOT POTENTIAL CHANGE (106, 15.1%) POTENTIAL CHANGE (117, 36.7%) / NOT POTENTIAL CHANGE (201, 63.3%)</td><td>Level 4 ADD (835, 51.9%) / CHANGE (583, 36.1%) / DELETE (193, 12%)</td></tr><tr><td colspan=\"3\">have to identify punctuation mark words and replace them with symbols. The dictation tools detect the special words as commands and exe-cute specific actions over each command. Users of these tools have learned over the years the available commands of each tool before using it. Although the commands nowadays usually take into account minor variants, they are not usually used for complex instructions due to their main transcription function. Mechanisms that switch from merely transcribing text and executing word-specific commands to incor-porate in-context dialog with the assistant are required to have rich interactions.</td><td/><td/></tr></table>", |
|
"text": "Document comment taxonomy.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>Category MODIFICATION</td><td>Description</td><td>Example</td></tr></table>", |
|
"text": "Intents and sub-intents", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>LR SVM RoBERTa DeBERTa BART 0.75 0.74 0.85 0.84 0.85 Information Exchange 0.81 0.81 0.80 Modification 0.81 0.82 Social Communication 0.45 0.43 0.69 0.67 0.68 All 0.76 0.75 0.82 0.82 0.82</td></tr></table>", |
|
"text": "Comparing F1 scores over the main level.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table><tr><td/><td>LR SVM RoBERTa DeBERTa BART 0.72 0.70 0.77 0.76 0.76 0.68 0.65 0.70 0.74 0.71 0.76 0.67 0.75 0.77 0.76 0.67 0.63 0.74 0.74 0.76 0.69 0.68 0.80 0.79 0.81 0.76 0.70 0.78 0.78 0.79 0.73 0.73 0.73 0.81 0.79 0.75 0.73 0.81 0.79 0.79 0.75 0.75 0.82 0.82 0.82 0.78 0.75 0.77 0.79 0.80 Sen. & Com. + Paragraph text 0.79 0.76 0.80 Sentences only Sen. + Selected text Sen. + Paragraph text Sen. + Thread text Comments only Com. + Selected text Com. + Paragraph text Com. + Thread text Sentences and Comments Sen. & Com. + Selected text 0.79 0.79 Sen. & Com. + Thread text 0.74 0.75 0.81 0.80 0.80</td></tr><tr><td>469</td><td>6.1.3 Field Study</td></tr></table>", |
|
"text": "Classification F1 results of the main level comparing sentence, comment, and their context.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td>485 486 487</td><td>shows metrics of how voice commands are composed. We found that most of the com-mands are short, and the mean range from 12 to 15 words across comment types. We detected</td></tr></table>", |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td>LR SVM RoBERTa DeBERTa BART 0.73 0.72 0.75 0.75 0.75 Modification -Execution 0.66 0.48 0.79 Modification -Request 0.79 0.79 Info. Exch. -Request 0.64 0.61 0.80 0.79 0.82 Info. Exch. -Provide 0.71 0.70 0.76 0.75 0.77 Social Com. -Feedback 0.44 0.28 0.57 0.62 0.53 Social Com. -Acknow. 0.50 0.50 0.22 0.18 0.80 Social Com. -Discuss. 0.18 0.13 0.29 0.32 0.21 All 0.68 0.66 0.74 0.74 0.75</td></tr></table>", |
|
"text": "Comparing F1 scores over the main level intents and level one sub intents.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF10": { |
|
"content": "<table><tr><td>Scenarios Category 1-5 MODIFICATION & REQUESTED & CONTENT & EXPLICIT & ADD 6-10 MODIFICATION & REQUESTED & CONTENT & EXPLICIT & CHANGE 11-15 MODIFICATION & REQUESTED & CONTENT & EXPLICIT & DELETE 16-20 MODIFICATION & REQUESTED & CONTENT & NOT EXPLICIT & ADD 21-25 MODIFICATION & REQUESTED & CONTENT & NOT EXPLICIT & CHANGE 26-30 MODIFICATION & REQUESTED & CONTENT & NOT EXPLICIT & DELETE 31-35 MODIFICATION & REQUESTED & FORMAT & ADD 36-40 MODIFICATION & REQUESTED & FORMAT & CHANGE 41-45 MODIFICATION & REQUESTED & FORMAT & DELETE 46-50 MODIFICATION & EXECUTION & DONE 51-55 MODIFICATION & EXECUTION & PROMISE 56-60 INFORMATION EXCHANGE & PRO-VIDED CONTEXT 61-65 INFORMATION EXCHANGE & PRO-VIDED REFERENCE 66-70 INFORMATION EXCHANGE & RE-QUESTED & ASKING DETAILS 71-75 INFORMATION EXCHANGE & RE-QUESTED & REQUESTING CONFIR-MATION 76-80 SOCIAL COMMUNICATION & AC-KNOWLEDGMENT 81-85 SOCIAL COMMUNICATION & DIS-CUSSION & CONTENT 86-90 SOCIAL COMMUNICATION & DIS-CUSSION & THREAD 91-95 SOCIAL COMMUNICATION & FEEDBACK & CONTENT 96-100 SOCIAL COMMUNICATION & FEEDBACK THREAD</td><td>Description Comment requesting an explicit addition to the document Comment requesting an explicit change to the document Comment requesting a deletion in the document Comment suggesting something that im-plied the addition of content Comment with a suggestion that can de-rive to a change in the document Comment that suggests that something in the document is not required Comment that asks to add formatting Comment that requests a change in the format Comment that asks to remove some for-matting Comment that confirms that something was done Comment that commits the author to per-form a change Comment that adds context to the select text in the document Comment that adds references to the text See my previous comments on the Team Example please insert \"and the projects added or retired\" between \"baseline\" and \"be-yond\" Change UNIT PRICE to LUMP SUM if appropriate. 
Delete all document reference red or yel-low highlighted text. Type an introductory sentence to this sec-tion of the report. Not clear. . . please rephrase. Delete what is not applicable All URLs should be live links for the convenience of the reader. Should be in bold You should not use bold for the title of your thesis/dissertation Changed from 6 grades per nine weeks to 10 As you allowed, I will delete this text. Fully agreed. Delivery of all deliverables required by the contract is usually a key requirement for revenue recognition. discussion board Open question to the author What is the border after this paragraph for? Is that a new subsection? Question that requires the author to con-firm something I added this; does that make sense to include as a step? Comment that acknowledges that was Thank you for completing read Comment that is part of a discussion that talk about the content Further work on this to be discussed at the next meeting of AHIEC Comment that is part of a discussion and Same as above. . . is related to the thread Comment that provides feedback about Good summary of what you found the content Comment that provides feedback to a comment in a thread I am glad you folks are addressing these topics. These will be very helpful.</td></tr><tr><td/><td>8 183</td></tr></table>", |
|
"text": "Scenarios", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF11": { |
|
"content": "<table><tr><td/><td colspan=\"2\">Words length (mean) Chars length (mean) Words overlap in comment Words overlap in selection Words overlap in paragraph Words overlap in instructions Unique words in the command</td><td>Modifi. 15 88 22% 10% 3% 11% 62%</td><td>Inf. Exch. 13 84 23% 4% 3% 12% 65%</td><td>Soc. Com. 12 67 16% 6% 2% 9% 72%</td></tr><tr><td/><td>Modification delete the comment no action needed the comment please the highlighted text task completed Delete the selected text end of the completed delete the comment no action HTTP colon forward</td><td colspan=\"3\">Information Exchange delete the comment no action needed thank you for you for your to user one the comment no comment no action comment thank you reply to user end of the</td><td>Social Comm. delete the comment no action needed the comment no comment no action the highlighted text action needed delete I have not needed delete the have not argued Thank you for</td></tr><tr><td>542</td><td colspan=\"2\">7 Discussion</td><td/></tr><tr><td>543 544 545 546</td><td colspan=\"4\">The understanding of how users interact with voice interfaces for comment management can enable the development of smart assistants in the workplace. In this section, we discuss the results we observed in our field study and their</td></tr></table>", |
|
"text": "Insights from text commands.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF12": { |
|
"content": "<table><tr><td>Audio in seconds (mean)</td><td>Modif. 6</td><td>Inf. Exch. 5</td><td>Soc. Com. 7</td></tr></table>", |
|
"text": "Insights from audio commands.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF13": { |
|
"content": "<table><tr><td>Time performing changes (mean) Number of selected words (mean) Number of typed words (mean) Interactions with the comment (%)</td><td>Modif. 7 22 4 22%</td><td>Inf. Exch. 20 16 8 28%</td><td>Soc. Com. 9 13 10 26%</td></tr></table>", |
|
"text": "Insights from audio commands.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF14": { |
|
"content": "<table><tr><td/><td>tomate document comment management. We</td></tr><tr><td>625</td><td/></tr><tr><td>626 627 628 629 630 631 632 633</td><td>alyzing public documents. We identified com-ments related to Modification, Information Ex-change, and Social Communication. A sample of each category is presented to participants in a field study. We developed a platform that mim-ics a regular editor but with audio and activity tracking enabled. The participants were asked to provide voice commands and execute them</td></tr><tr><td/><td>Future work can conduct experiments in com-mon text editors and with real teams to identify differences in the results.</td></tr><tr><td/><td>Automatically handling comments can help people with visual impairment; however, the sample did not include that population, and it might not extrapolate. Future work can explore how people with visual impairments commonly interact with text editors and how they expect to manage document comments.</td></tr><tr><td/><td>Our work focuses on the analysis of patterns in voice commands but does no further in the predictive analysis of the data. Future work can explore machine learning approaches that can automate tasks such as auto-completion, pre-dicting when a comment is going to be resolved and other approaches that can push towards comment automation.</td></tr><tr><td/><td>9 184</td></tr></table>", |
|
"text": "explore how people interact with documents 623 with comments. We first understand the dif-624 ferent uses of comments in documents by an-", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |