bibtex_url (string, 41..50) | proceedings (string, 38..47) | bibtext (string, 709..3.56k) | abstract (string, 17..2.11k) | authors (list, 1..72) | title (string, 12..207) | id (string, 7..16) | type (string, 2 classes) | arxiv_id (string, 0..10) | GitHub (list, 1..1) | paper_page (string, 276 classes) | n_linked_authors (int64, -1..13) | upvotes (int64, -1..14) | num_comments (int64, -1..11) | n_authors (int64, -1..44) | paper_page_exists_pre_conf (int64, 0..1) | Models (list, 0..100) | Datasets (list, 0..14) | Spaces (list, 0..100)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.bionlp-1.63.bib
|
https://aclanthology.org/2023.bionlp-1.63/
|
@inproceedings{al-hussaini-etal-2023-pathology,
title = "Pathology Dynamics at {B}io{L}ay{S}umm: the trade-off between Readability, Relevance, and Factuality in Lay Summarization",
author = "Al-Hussaini, Irfan and
Wu, Austin and
Mitchell, Cassie",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.63",
doi = "10.18653/v1/2023.bionlp-1.63",
pages = "592--601",
    abstract = "Lay summarization aims to simplify complex scientific information for non-expert audiences. This paper investigates the trade-off between readability and relevance in the lay summarization of long biomedical documents. We introduce a two-stage framework that attains the best readability metrics in the first subtask of BioLaySumm 2023, with a Flesch-Kincaid Grade Level of 8.924 and a Dale-Chall Readability Score of 9.188. However, this comes at the cost of reduced relevance and factuality, emphasizing the inherent challenges of balancing readability and content preservation in lay summarization. The first stage generates summaries using a large language model, such as BART with LSG attention. The second stage uses a zero-shot sentence simplification method to improve the readability of the summaries. In the second subtask, a hybrid dataset is employed to train a model capable of generating both lay summaries and abstracts. This approach achieves the best readability score and shares the top overall rank with other leading methods. Our study underscores the importance of developing effective methods for creating accessible lay summaries while maintaining information integrity. Future work will integrate simplification and summary generation within a joint optimization framework that generates high-quality lay summaries that effectively communicate scientific content to a broader audience.",
}
|
Lay summarization aims to simplify complex scientific information for non-expert audiences. This paper investigates the trade-off between readability and relevance in the lay summarization of long biomedical documents. We introduce a two-stage framework that attains the best readability metrics in the first subtask of BioLaySumm 2023, with a Flesch-Kincaid Grade Level of 8.924 and a Dale-Chall Readability Score of 9.188. However, this comes at the cost of reduced relevance and factuality, emphasizing the inherent challenges of balancing readability and content preservation in lay summarization. The first stage generates summaries using a large language model, such as BART with LSG attention. The second stage uses a zero-shot sentence simplification method to improve the readability of the summaries. In the second subtask, a hybrid dataset is employed to train a model capable of generating both lay summaries and abstracts. This approach achieves the best readability score and shares the top overall rank with other leading methods. Our study underscores the importance of developing effective methods for creating accessible lay summaries while maintaining information integrity. Future work will integrate simplification and summary generation within a joint optimization framework that generates high-quality lay summaries that effectively communicate scientific content to a broader audience.
|
[
"Al-Hussaini, Irfan",
"Wu, Austin",
"Mitchell, Cassie"
] |
Pathology Dynamics at BioLaySumm: the trade-off between Readability, Relevance, and Factuality in Lay Summarization
|
bionlp-1.63
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
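The Pathology Dynamics abstract above reports a Flesch-Kincaid Grade Level of 8.924. Flesch-Kincaid is a simple surface formula over word, sentence, and syllable counts; a minimal sketch follows (the vowel-group syllable counter is a rough heuristic of our own, not the evaluation toolkit used in the shared task):

```python
import re

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    # Naive syllable estimate: count contiguous vowel groups per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Lower scores indicate more readable text; a score of 8.924 corresponds to roughly a ninth-grade reading level.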
https://aclanthology.org/2023.bionlp-1.64.bib
|
https://aclanthology.org/2023.bionlp-1.64/
|
@inproceedings{wu-etal-2023-ikm,
title = "{IKM}{\_}{L}ab at {B}io{L}ay{S}umm Task 1: Longformer-based Prompt Tuning for Biomedical Lay Summary Generation",
author = "Wu, Yu-Hsuan and
Lin, Ying-Jia and
Kao, Hung-Yu",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.64",
doi = "10.18653/v1/2023.bionlp-1.64",
pages = "602--610",
    abstract = "This paper describes the entry by the Intelligent Knowledge Management (IKM) Laboratory in BioLaySumm 2023 Task 1. We aim to transform lengthy biomedical articles into concise, reader-friendly summaries that can be easily comprehended by the general public. We utilized a Longformer model for long-text abstractive summarization and experimented with several prompting methods for this task. Our entry placed 10th overall, but we were particularly proud to achieve a 3rd place score in the readability evaluation metric.",
}
|
This paper describes the entry by the Intelligent Knowledge Management (IKM) Laboratory in BioLaySumm 2023 Task 1. We aim to transform lengthy biomedical articles into concise, reader-friendly summaries that can be easily comprehended by the general public. We utilized a Longformer model for long-text abstractive summarization and experimented with several prompting methods for this task. Our entry placed 10th overall, but we were particularly proud to achieve a 3rd place score in the readability evaluation metric.
|
[
"Wu, Yu-Hsuan",
"Lin, Ying-Jia",
"Kao, Hung-Yu"
] |
IKM_Lab at BioLaySumm Task 1: Longformer-based Prompt Tuning for Biomedical Lay Summary Generation
|
bionlp-1.64
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.bionlp-1.65.bib
|
https://aclanthology.org/2023.bionlp-1.65/
|
@inproceedings{turbitt-etal-2023-mdc,
title = "{MDC} at {B}io{L}ay{S}umm Task 1: Evaluating {GPT} Models for Biomedical Lay Summarization",
author = "Turbitt, Ois{\'\i}n and
Bevan, Robert and
Aboshokor, Mouhamad",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.65",
doi = "10.18653/v1/2023.bionlp-1.65",
pages = "611--619",
    abstract = "This paper presents our approach to the BioLaySumm Task 1 shared task, held at the BioNLP 2023 Workshop. The effective communication of scientific knowledge to the general public is often limited by the technical language used in research, making it difficult for non-experts to comprehend. To address this issue, lay summaries can be used to explain research findings to non-experts in an accessible form. We conduct an evaluation of autoregressive language models, both general and specialized for the biomedical domain, to generate lay summaries from biomedical research article abstracts. Our findings demonstrate that a GPT-3.5 model combined with a straightforward few-shot prompt produces lay summaries that achieve significantly better relevance and factuality than those generated by a fine-tuned BioGPT model. However, the summaries generated by the BioGPT model exhibit better readability. Notably, our submission for the shared task achieved 1st place in the competition.",
}
|
This paper presents our approach to the BioLaySumm Task 1 shared task, held at the BioNLP 2023 Workshop. The effective communication of scientific knowledge to the general public is often limited by the technical language used in research, making it difficult for non-experts to comprehend. To address this issue, lay summaries can be used to explain research findings to non-experts in an accessible form. We conduct an evaluation of autoregressive language models, both general and specialized for the biomedical domain, to generate lay summaries from biomedical research article abstracts. Our findings demonstrate that a GPT-3.5 model combined with a straightforward few-shot prompt produces lay summaries that achieve significantly better relevance and factuality than those generated by a fine-tuned BioGPT model. However, the summaries generated by the BioGPT model exhibit better readability. Notably, our submission for the shared task achieved 1st place in the competition.
|
[
    "Turbitt, Oisín",
"Bevan, Robert",
"Aboshokor, Mouhamad"
] |
MDC at BioLaySumm Task 1: Evaluating GPT Models for Biomedical Lay Summarization
|
bionlp-1.65
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.bionlp-1.66.bib
|
https://aclanthology.org/2023.bionlp-1.66/
|
@inproceedings{liu-etal-2023-lhs712ee,
title = "{LHS}712{EE} at {B}io{L}ay{S}umm 2023: Using {BART} and {LED} to summarize biomedical research articles",
author = "Liu, Quancheng and
Ren, Xiheng and
Vydiswaran, V.G.Vinod",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.66",
doi = "10.18653/v1/2023.bionlp-1.66",
pages = "620--624",
    abstract = "As part of our participation in BioLaySumm 2023, we explored the use of large language models (LLMs) to automatically generate concise and readable summaries of biomedical research articles. We fine-tuned pre-trained LLMs as summarization models on the two provided datasets and adapted them to the shared task within the constraints of training time and computational power. Our final models achieved very high relevance and factuality scores on the test set, and ranked among the top five models in overall performance.",
}
|
As part of our participation in BioLaySumm 2023, we explored the use of large language models (LLMs) to automatically generate concise and readable summaries of biomedical research articles. We fine-tuned pre-trained LLMs as summarization models on the two provided datasets and adapted them to the shared task within the constraints of training time and computational power. Our final models achieved very high relevance and factuality scores on the test set, and ranked among the top five models in overall performance.
|
[
"Liu, Quancheng",
"Ren, Xiheng",
"Vydiswaran, V.G.Vinod"
] |
LHS712EE at BioLaySumm 2023: Using BART and LED to summarize biomedical research articles
|
bionlp-1.66
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.bionlp-1.67.bib
|
https://aclanthology.org/2023.bionlp-1.67/
|
@inproceedings{reddy-etal-2023-iitr,
    title = "{IITR} at {B}io{L}ay{S}umm Task 1: Lay Summarization of {B}io{M}edical articles using Transformers",
author = "Reddy, Venkat praneeth and
Reddy, Pinnapu Reddy Harshavardhan and
Sumedh, Karanam Sai and
Sharma, Raksha",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.67",
doi = "10.18653/v1/2023.bionlp-1.67",
pages = "625--628",
    abstract = "Initially, we analyzed the datasets statistically to learn how the various sections contribute to the final summary in both the PLOS and eLife datasets. We found that in both datasets the Introduction and Abstract, along with some initial parts of the Results, contribute to the summary. We considered only these sections in the next stage of analysis, in which we determined the optimal number of sentences from each of the Introduction, Abstract, and Results that contributes best to the summary. After this statistical analysis, we took the pre-trained model facebook/bart-base and fine-tuned it on both the PLOS and eLife datasets. During fine-tuning and testing we used chunking, since the texts are long and would otherwise lose information to the model{'}s token-count constraints. Finally, since the eLife model gave more readable results than the PLOS model, probably because PLOS summaries are closer to their abstracts, we selected the eLife model as our final model and tuned its hyperparameters. We ranked 7th overall and 1st in readability.",
}
|
Initially, we analyzed the datasets statistically to learn how the various sections contribute to the final summary in both the PLOS and eLife datasets. We found that in both datasets the Introduction and Abstract, along with some initial parts of the Results, contribute to the summary. We considered only these sections in the next stage of analysis, in which we determined the optimal number of sentences from each of the Introduction, Abstract, and Results that contributes best to the summary. After this statistical analysis, we took the pre-trained model facebook/bart-base and fine-tuned it on both the PLOS and eLife datasets. During fine-tuning and testing we used chunking, since the texts are long and would otherwise lose information to the model's token-count constraints. Finally, since the eLife model gave more readable results than the PLOS model, probably because PLOS summaries are closer to their abstracts, we selected the eLife model as our final model and tuned its hyperparameters. We ranked 7th overall and 1st in readability.
|
[
"Reddy, Venkat praneeth",
"Reddy, Pinnapu Reddy Harshavardhan",
"Sumedh, Karanam Sai",
"Sharma, Raksha"
] |
IITR at BioLaySumm Task 1: Lay Summarization of BioMedical articles using Transformers
|
bionlp-1.67
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
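The IITR abstract above mentions chunking long inputs so that nothing is lost to the model's token limit. The idea is simply to split the token sequence into windows, optionally overlapping so context is not cut mid-thought; the sketch below is a generic illustration with invented sizes, not the authors' exact setup:

```python
def chunk_tokens(tokens, max_len=1024, overlap=64):
    """Split a token list into overlapping chunks no longer than max_len."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # final chunk reached the end of the sequence
        start += max_len - overlap  # step forward, keeping `overlap` tokens of context
    return chunks
```

Each chunk is then summarized independently and the partial outputs are concatenated or re-summarized.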
https://aclanthology.org/2023.bionlp-1.68.bib
|
https://aclanthology.org/2023.bionlp-1.68/
|
@inproceedings{sim-etal-2023-csiro,
title = "{CSIRO} {D}ata61 Team at {B}io{L}ay{S}umm Task 1: Lay Summarisation of Biomedical Research Articles Using Generative Models",
author = "Sim, Mong Yuan and
Dai, Xiang and
Rybinski, Maciej and
Karimi, Sarvnaz",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.68",
doi = "10.18653/v1/2023.bionlp-1.68",
pages = "629--635",
    abstract = "Lay summarisation aims at generating a summary for a non-expert audience, allowing them to keep up to date with the latest research in a specific field. Despite the significant advancements made in the field of text summarisation, lay summarisation remains relatively under-explored. We present a comprehensive set of experiments and analyses to investigate the effectiveness of existing pre-trained language models in generating lay summaries. When evaluated on the BioNLP shared task BioLaySumm, our submission ranked second on the relevance criteria and third overall among 21 competing teams.",
}
|
Lay summarisation aims at generating a summary for a non-expert audience, allowing them to keep up to date with the latest research in a specific field. Despite the significant advancements made in the field of text summarisation, lay summarisation remains relatively under-explored. We present a comprehensive set of experiments and analyses to investigate the effectiveness of existing pre-trained language models in generating lay summaries. When evaluated on the BioNLP shared task BioLaySumm, our submission ranked second on the relevance criteria and third overall among 21 competing teams.
|
[
"Sim, Mong Yuan",
"Dai, Xiang",
"Rybinski, Maciej",
"Karimi, Sarvnaz"
] |
CSIRO Data61 Team at BioLaySumm Task 1: Lay Summarisation of Biomedical Research Articles Using Generative Models
|
bionlp-1.68
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.bionlp-1.69.bib
|
https://aclanthology.org/2023.bionlp-1.69/
|
@inproceedings{colak-karadeniz-2023-isiksumm,
title = "{ISIKS}umm at {B}io{L}ay{S}umm Task 1: {BART}-based Summarization System Enhanced with Bio-Entity Labels",
author = "Colak, Cagla and
      Karadeniz, {\.I}lknur",
editor = "Demner-fushman, Dina and
Ananiadou, Sophia and
Cohen, Kevin",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.69",
doi = "10.18653/v1/2023.bionlp-1.69",
pages = "636--640",
    abstract = "Communicating scientific research to the general public is an essential yet challenging task. Lay summaries, which provide a simplified version of research findings, can bridge the gap between scientific knowledge and public understanding. The BioLaySumm task (Goldsack et al., 2023) is a shared task that seeks to automate this process by generating lay summaries from biomedical articles. Two different datasets that have been created by curating two biomedical journals (PLOS and eLife) are provided by the task organizers. As a participant in this shared task, we developed a system to generate a lay summary from an article{'}s abstract and main text.",
}
|
Communicating scientific research to the general public is an essential yet challenging task. Lay summaries, which provide a simplified version of research findings, can bridge the gap between scientific knowledge and public understanding. The BioLaySumm task (Goldsack et al., 2023) is a shared task that seeks to automate this process by generating lay summaries from biomedical articles. Two different datasets that have been created by curating two biomedical journals (PLOS and eLife) are provided by the task organizers. As a participant in this shared task, we developed a system to generate a lay summary from an article's abstract and main text.
|
[
"Colak, Cagla",
    "Karadeniz, İlknur"
] |
ISIKSumm at BioLaySumm Task 1: BART-based Summarization System Enhanced with Bio-Entity Labels
|
bionlp-1.69
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.1.bib
|
https://aclanthology.org/2023.cawl-1.1/
|
@inproceedings{gorman-sproat-2023-myths,
title = "Myths about Writing Systems in Speech {\&} Language Technology",
author = "Gorman, Kyle and
Sproat, Richard",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.1",
doi = "10.18653/v1/2023.cawl-1.1",
pages = "1--5",
abstract = "Natural language processing is largely focused on written text processing. However, many computational linguists tacitly endorse myths about the nature of writing. We highlight two of these myths{---}the conflation of language and writing, and the notion that Chinese, Japanese, and Korean writing is ideographic{---}and suggest how the community can dispel them.",
}
|
Natural language processing is largely focused on written text processing. However, many computational linguists tacitly endorse myths about the nature of writing. We highlight two of these myths{---}the conflation of language and writing, and the notion that Chinese, Japanese, and Korean writing is ideographic{---}and suggest how the community can dispel them.
|
[
"Gorman, Kyle",
"Sproat, Richard"
] |
Myths about Writing Systems in Speech & Language Technology
|
cawl-1.1
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.2.bib
|
https://aclanthology.org/2023.cawl-1.2/
|
@inproceedings{agirrezabal-etal-2023-hidden,
title = "The Hidden Folk: Linguistic Properties Encoded in Multilingual Contextual Character Representations",
author = "Agirrezabal, Manex and
Boldsen, Sidsel and
Hollenstein, Nora",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.2",
doi = "10.18653/v1/2023.cawl-1.2",
pages = "6--13",
    abstract = "To gain a better understanding of the linguistic information encoded in character-based language models, we probe the multilingual contextual CANINE model. We design a range of phonetic probing tasks in six Nordic languages, including Faroese as an additional zero-shot instance. We observe that some phonetic information is indeed encoded in the character representations, as consonants and vowels can be well distinguished using a linear classifier. Furthermore, results for the Danish and Norwegian languages seem to be worse for the consonant/vowel distinction in comparison to other languages. The information encoded in these representations can also be learned in a zero-shot scenario, as Faroese shows a reasonably good performance in the same vowel/consonant distinction task.",
}
|
To gain a better understanding of the linguistic information encoded in character-based language models, we probe the multilingual contextual CANINE model. We design a range of phonetic probing tasks in six Nordic languages, including Faroese as an additional zero-shot instance. We observe that some phonetic information is indeed encoded in the character representations, as consonants and vowels can be well distinguished using a linear classifier. Furthermore, results for the Danish and Norwegian languages seem to be worse for the consonant/vowel distinction in comparison to other languages. The information encoded in these representations can also be learned in a zero-shot scenario, as Faroese shows a reasonably good performance in the same vowel/consonant distinction task.
|
[
"Agirrezabal, Manex",
"Boldsen, Sidsel",
"Hollenstein, Nora"
] |
The Hidden Folk: Linguistic Properties Encoded in Multilingual Contextual Character Representations
|
cawl-1.2
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.3.bib
|
https://aclanthology.org/2023.cawl-1.3/
|
@inproceedings{gold-etal-2023-preserving,
title = "Preserving the Authenticity of Handwritten Learner Language: Annotation Guidelines for Creating Transcripts Retaining Orthographic Features",
author = "Gold, Christian and
Laarmann-quante, Ronja and
Zesch, Torsten",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.3",
doi = "10.18653/v1/2023.cawl-1.3",
pages = "14--21",
abstract = "Handwritten texts produced by young learners often contain orthographic features like spelling errors, capitalization errors, punctuation errors, and impurities such as strikethroughs, inserts, and smudges. All of those are typically normalized or ignored in existing transcriptions. For applications like handwriting recognition with the goal of automatically analyzing a learner{'}s language performance, however, retaining such features would be necessary. To address this, we present transcription guidelines that retain the features addressed above. Our guidelines were developed iteratively and include numerous example images to illustrate the various issues. On a subset of about 90 double-transcribed texts, we compute inter-annotator agreement and show that our guidelines can be applied with high levels of percentage agreement of about .98. Overall, we transcribed 1,350 learner texts, which is about the same size as the widely adopted handwriting recognition datasets IAM (1,500 pages) and CVL (1,600 pages). Our final corpus can be used to train a handwriting recognition system that transcribes closely to the real productions by young learners. Such a system is a prerequisite for applying automatic orthography feedback systems to handwritten texts in the future.",
}
|
Handwritten texts produced by young learners often contain orthographic features like spelling errors, capitalization errors, punctuation errors, and impurities such as strikethroughs, inserts, and smudges. All of those are typically normalized or ignored in existing transcriptions. For applications like handwriting recognition with the goal of automatically analyzing a learner{'}s language performance, however, retaining such features would be necessary. To address this, we present transcription guidelines that retain the features addressed above. Our guidelines were developed iteratively and include numerous example images to illustrate the various issues. On a subset of about 90 double-transcribed texts, we compute inter-annotator agreement and show that our guidelines can be applied with high levels of percentage agreement of about .98. Overall, we transcribed 1,350 learner texts, which is about the same size as the widely adopted handwriting recognition datasets IAM (1,500 pages) and CVL (1,600 pages). Our final corpus can be used to train a handwriting recognition system that transcribes closely to the real productions by young learners. Such a system is a prerequisite for applying automatic orthography feedback systems to handwritten texts in the future.
|
[
"Gold, Christian",
"Laarmann-quante, Ronja",
"Zesch, Torsten"
] |
Preserving the Authenticity of Handwritten Learner Language: Annotation Guidelines for Creating Transcripts Retaining Orthographic Features
|
cawl-1.3
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.4.bib
|
https://aclanthology.org/2023.cawl-1.4/
|
@inproceedings{micallef-etal-2023-exploring,
title = "Exploring the Impact of Transliteration on {NLP} Performance: Treating {M}altese as an {A}rabic Dialect",
author = "Micallef, Kurt and
Eryani, Fadhl and
Habash, Nizar and
Bouamor, Houda and
Borg, Claudia",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.4",
doi = "10.18653/v1/2023.cawl-1.4",
pages = "22--32",
    abstract = "Multilingual models such as mBERT have been demonstrated to exhibit impressive cross-lingual transfer for a number of languages. Despite this, performance drops for lower-resourced languages, especially when they are not part of the pre-training setup and when there are script differences. In this work we consider Maltese, a low-resource language of Arabic and Romance origins written in Latin script. Specifically, we investigate the impact of transliterating Maltese into Arabic script on a number of downstream tasks: Part-of-Speech Tagging, Dependency Parsing, and Sentiment Analysis. We compare multiple transliteration pipelines ranging from deterministic character maps to more sophisticated alternatives, including manually annotated word mappings and non-deterministic character mappings. For the latter, we show that selection techniques using n-gram language models of Tunisian Arabic, the dialect with the highest degree of mutual intelligibility to Maltese, yield better results on downstream tasks. Moreover, our experiments highlight that the use of an Arabic pre-trained model paired with transliteration outperforms mBERT. Overall, our results show that transliterating Maltese can be considered an option to improve its cross-lingual transfer capabilities.",
}
|
Multilingual models such as mBERT have been demonstrated to exhibit impressive cross-lingual transfer for a number of languages. Despite this, performance drops for lower-resourced languages, especially when they are not part of the pre-training setup and when there are script differences. In this work we consider Maltese, a low-resource language of Arabic and Romance origins written in Latin script. Specifically, we investigate the impact of transliterating Maltese into Arabic script on a number of downstream tasks: Part-of-Speech Tagging, Dependency Parsing, and Sentiment Analysis. We compare multiple transliteration pipelines ranging from deterministic character maps to more sophisticated alternatives, including manually annotated word mappings and non-deterministic character mappings. For the latter, we show that selection techniques using n-gram language models of Tunisian Arabic, the dialect with the highest degree of mutual intelligibility to Maltese, yield better results on downstream tasks. Moreover, our experiments highlight that the use of an Arabic pre-trained model paired with transliteration outperforms mBERT. Overall, our results show that transliterating Maltese can be considered an option to improve its cross-lingual transfer capabilities.
|
[
"Micallef, Kurt",
"Eryani, Fadhl",
"Habash, Nizar",
"Bouamor, Houda",
"Borg, Claudia"
] |
Exploring the Impact of Transliteration on NLP Performance: Treating Maltese as an Arabic Dialect
|
cawl-1.4
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
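The Maltese abstract above contrasts deterministic character maps with non-deterministic alternatives. A deterministic map is just a per-character table lookup; the sketch below uses a tiny, invented Latin-to-Arabic-script table purely for illustration (it is not the paper's actual Maltese mapping):

```python
# Hypothetical toy mapping for illustration only; NOT the paper's Maltese rules.
CHAR_MAP = {"b": "ب", "t": "ت", "d": "د", "r": "ر", "s": "س",
            "f": "ف", "k": "ك", "l": "ل", "m": "م", "n": "ن"}

def transliterate(text: str) -> str:
    """Deterministic character-level transliteration; unmapped characters pass through."""
    return "".join(CHAR_MAP.get(ch, ch) for ch in text.lower())
```

A non-deterministic pipeline would instead emit several candidate strings per word and pick among them, e.g. with a target-side n-gram language model as the paper does.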
https://aclanthology.org/2023.cawl-1.5.bib
|
https://aclanthology.org/2023.cawl-1.5/
|
@inproceedings{nielsen-etal-2023-distinguishing,
title = "Distinguishing {R}omanized {H}indi from {R}omanized {U}rdu",
author = "Nielsen, Elizabeth and
Kirov, Christo and
Roark, Brian",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.5",
doi = "10.18653/v1/2023.cawl-1.5",
pages = "33--42",
    abstract = "We examine the task of distinguishing between Hindi and Urdu when those languages are romanized, i.e., written in the Latin script. Both languages are widely informally romanized, and to the extent that they are identified in the Latin script by language identification systems, they are typically conflated. In the absence of large labeled collections of such text, we consider methods for generating training data. Beginning with a small set of seed words, each of which is strongly indicative of one language versus the other, we prompt a pretrained large language model (LLM) to generate romanized text. Treating text generated from an Urdu prompt as one class and text generated from a Hindi prompt as the other class, we build a binary language identification (LangID) classifier. We demonstrate that the resulting classifier distinguishes manually romanized Urdu Wikipedia text from manually romanized Hindi Wikipedia text far better than chance. We use this classifier to estimate the prevalence of Urdu in a large collection of text labeled as romanized Hindi that has been used to train large language models. These techniques can be applied to bootstrap classifiers in other cases where a dataset is known to contain multiple distinct but related classes, such as different dialects of the same language, but for which labels cannot easily be obtained.",
}
|
We examine the task of distinguishing between Hindi and Urdu when those languages are romanized, i.e., written in the Latin script. Both languages are widely informally romanized, and to the extent that they are identified in the Latin script by language identification systems, they are typically conflated. In the absence of large labeled collections of such text, we consider methods for generating training data. Beginning with a small set of seed words, each of which is strongly indicative of one of the languages versus the other, we prompt a pretrained large language model (LLM) to generate romanized text. Treating text generated from an Urdu prompt as one class and text generated from a Hindi prompt as the other class, we build a binary language identification (LangID) classifier. We demonstrate that the resulting classifier distinguishes manually romanized Urdu Wikipedia text from manually romanized Hindi Wikipedia text far better than chance. We use this classifier to estimate the prevalence of Urdu in a large collection of text labeled as romanized Hindi that has been used to train large language models. These techniques can be applied to bootstrap classifiers in other cases where a dataset is known to contain multiple distinct but related classes, such as different dialects of the same language, but for which labels cannot easily be obtained.
|
[
"Nielsen, Elizabeth",
"Kirov, Christo",
"Roark, Brian"
] |
Distinguishing Romanized Hindi from Romanized Urdu
|
cawl-1.5
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.6.bib
|
https://aclanthology.org/2023.cawl-1.6/
|
@inproceedings{ren-2023-back,
title = "Back-Transliteration of {E}nglish Loanwords in {J}apanese",
author = "Ren, Yuying",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.6",
doi = "10.18653/v1/2023.cawl-1.6",
pages = "43--49",
abstract = "We propose methods for transliterating English loanwords in Japanese from their Japanese written form (katakana/romaji) to their original English written form. Our data is a Japanese-English loanwords dictionary that we have created ourselves. We employ two approaches: direct transliteration, which directly converts words from katakana to English, and indirect transliteration, which utilizes the English pronunciation as a means to convert katakana words into their corresponding English sound representations, which are subsequently converted into English words. Additionally, we compare the effectiveness of using katakana versus romaji as input characters. We develop 6 models of 2 types for our experiments: one with an English lexicon-filter, and the other without. For each type, we built 3 models, including a pair n-gram based on WFSTs and two sequence-to-sequence models leveraging LSTM and transformer. Our best performing model was the pair n-gram model with a lexicon-filter, directly transliterating from katakana to English.",
}
|
We propose methods for transliterating English loanwords in Japanese from their Japanese written form (katakana/romaji) to their original English written form. Our data is a Japanese-English loanwords dictionary that we have created ourselves. We employ two approaches: direct transliteration, which directly converts words from katakana to English, and indirect transliteration, which utilizes the English pronunciation as a means to convert katakana words into their corresponding English sound representations, which are subsequently converted into English words. Additionally, we compare the effectiveness of using katakana versus romaji as input characters. We develop 6 models of 2 types for our experiments: one with an English lexicon-filter, and the other without. For each type, we built 3 models, including a pair n-gram based on WFSTs and two sequence-to-sequence models leveraging LSTM and transformer. Our best performing model was the pair n-gram model with a lexicon-filter, directly transliterating from katakana to English.
|
[
"Ren, Yuying"
] |
Back-Transliteration of English Loanwords in Japanese
|
cawl-1.6
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.7.bib
|
https://aclanthology.org/2023.cawl-1.7/
|
@inproceedings{zhang-2023-pronunciation,
title = "Pronunciation Ambiguities in {J}apanese Kanji",
author = "Zhang, Wen",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.7",
doi = "10.18653/v1/2023.cawl-1.7",
pages = "50--60",
abstract = "Japanese writing is a complex system, and a large part of the complexity resides in the use of kanji. A single kanji character in modern Japanese may have multiple pronunciations, either as native vocabulary or as words borrowed from Chinese. This causes a problem for text-to-speech synthesis (TTS) because the system has to predict which pronunciation of each kanji character is appropriate in the context. The problem is called homograph disambiguation. To solve the problem, this research provides a new annotated Japanese single kanji character pronunciation data set and describes an experiment using the logistic regression (LR) classifier. A baseline is computed to compare with the LR classifier accuracy. This experiment provides the first experimental research in Japanese single kanji homograph disambiguation. The annotated Japanese data is freely released to the public to support further work.",
}
|
Japanese writing is a complex system, and a large part of the complexity resides in the use of kanji. A single kanji character in modern Japanese may have multiple pronunciations, either as native vocabulary or as words borrowed from Chinese. This causes a problem for text-to-speech synthesis (TTS) because the system has to predict which pronunciation of each kanji character is appropriate in the context. The problem is called homograph disambiguation. To solve the problem, this research provides a new annotated Japanese single kanji character pronunciation data set and describes an experiment using the logistic regression (LR) classifier. A baseline is computed to compare with the LR classifier accuracy. This experiment provides the first experimental research in Japanese single kanji homograph disambiguation. The annotated Japanese data is freely released to the public to support further work.
|
[
"Zhang, Wen"
] |
Pronunciation Ambiguities in Japanese Kanji
|
cawl-1.7
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.8.bib
|
https://aclanthology.org/2023.cawl-1.8/
|
@inproceedings{karita-etal-2023-lenient,
title = "Lenient Evaluation of {J}apanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency",
author = "Karita, Shigeki and
Sproat, Richard and
Ishikawa, Haruko",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.8",
doi = "10.18653/v1/2023.cawl-1.8",
pages = "61--70",
    abstract = "Word error rate (WER) and character error rate (CER) are standard metrics in Automatic Speech Recognition (ASR), but one problem has always been alternative spellings: If one{'}s system transcribes adviser whereas the ground truth has advisor, this will count as an error even though the two spellings really represent the same word. Japanese is notorious for {``}lacking orthography{''}: most words can be spelled in multiple ways, presenting a problem for accurate ASR evaluation. In this paper we propose a new lenient evaluation metric as a more defensible CER measure for Japanese ASR. We create a lattice of plausible respellings of the reference transcription, using a combination of lexical resources, a Japanese text-processing system, and a neural machine translation model for reconstructing kanji from hiragana or katakana. In a manual evaluation, raters rated 95.4{\%} of the proposed spelling variants as plausible. ASR results show that our method, which does not penalize the system for choosing a valid alternate spelling of a word, affords a 2.4{\%}{--}3.1{\%} absolute reduction in CER depending on the task.",
}
|
Word error rate (WER) and character error rate (CER) are standard metrics in Automatic Speech Recognition (ASR), but one problem has always been alternative spellings: If one{'}s system transcribes adviser whereas the ground truth has advisor, this will count as an error even though the two spellings really represent the same word. Japanese is notorious for {``}lacking orthography{''}: most words can be spelled in multiple ways, presenting a problem for accurate ASR evaluation. In this paper we propose a new lenient evaluation metric as a more defensible CER measure for Japanese ASR. We create a lattice of plausible respellings of the reference transcription, using a combination of lexical resources, a Japanese text-processing system, and a neural machine translation model for reconstructing kanji from hiragana or katakana. In a manual evaluation, raters rated 95.4{\%} of the proposed spelling variants as plausible. ASR results show that our method, which does not penalize the system for choosing a valid alternate spelling of a word, affords a 2.4{\%}{--}3.1{\%} absolute reduction in CER depending on the task.
|
[
"Karita, Shigeki",
"Sproat, Richard",
"Ishikawa, Haruko"
] |
Lenient Evaluation of Japanese Speech Recognition: Modeling Naturally Occurring Spelling Inconsistency
|
cawl-1.8
|
Poster
|
2306.04530
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.cawl-1.9.bib
|
https://aclanthology.org/2023.cawl-1.9/
|
@inproceedings{born-etal-2023-disambiguating,
title = "Disambiguating Numeral Sequences to Decipher Ancient Accounting Corpora",
author = "Born, Logan and
Monroe, M. Willis and
Kelley, Kathryn and
Sarkar, Anoop",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.9",
doi = "10.18653/v1/2023.cawl-1.9",
pages = "71--81",
abstract = "A numeration system encodes abstract numeric quantities as concrete strings of written characters. The numeration systems used by modern scripts tend to be precise and unambiguous, but this was not so for the ancient and partially-deciphered proto-Elamite (PE) script, where written numerals can have up to four distinct readings depending on the system that is used to read them. We consider the task of disambiguating between these readings in order to determine the values of the numeric quantities recorded in this corpus. We algorithmically extract a list of possible readings for each PE numeral notation, and contribute two disambiguation techniques based on structural properties of the original documents and classifiers learned with the bootstrapping algorithm. We also contribute a test set for evaluating disambiguation techniques, as well as a novel approach to cautious rule selection for bootstrapped classifiers. Our analysis confirms existing intuitions about this script and reveals previously-unknown correlations between tablet content and numeral magnitude. This work is crucial to understanding and deciphering PE, as the corpus is heavily accounting-focused and contains many more numeric tokens than tokens of text.",
}
|
A numeration system encodes abstract numeric quantities as concrete strings of written characters. The numeration systems used by modern scripts tend to be precise and unambiguous, but this was not so for the ancient and partially-deciphered proto-Elamite (PE) script, where written numerals can have up to four distinct readings depending on the system that is used to read them. We consider the task of disambiguating between these readings in order to determine the values of the numeric quantities recorded in this corpus. We algorithmically extract a list of possible readings for each PE numeral notation, and contribute two disambiguation techniques based on structural properties of the original documents and classifiers learned with the bootstrapping algorithm. We also contribute a test set for evaluating disambiguation techniques, as well as a novel approach to cautious rule selection for bootstrapped classifiers. Our analysis confirms existing intuitions about this script and reveals previously-unknown correlations between tablet content and numeral magnitude. This work is crucial to understanding and deciphering PE, as the corpus is heavily accounting-focused and contains many more numeric tokens than tokens of text.
|
[
"Born, Logan",
"Monroe, M. Willis",
"Kelley, Kathryn",
"Sarkar, Anoop"
] |
Disambiguating Numeral Sequences to Decipher Ancient Accounting Corpora
|
cawl-1.9
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.10.bib
|
https://aclanthology.org/2023.cawl-1.10/
|
@inproceedings{tamburini-2023-decipherment,
title = "Decipherment of Lost Ancient Scripts as Combinatorial Optimisation Using Coupled Simulated Annealing",
author = "Tamburini, Fabio",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.10",
doi = "10.18653/v1/2023.cawl-1.10",
pages = "82--91",
    abstract = "This paper presents a new approach to the ancient scripts decipherment problem based on combinatorial optimisation and coupled simulated annealing, an advanced non-convex optimisation procedure. Solutions are encoded by using k-permutations allowing for null, one-to-many, and many-to-one mappings between signs. The proposed system is able to produce enhanced results in cognate identification when compared to the state-of-the-art systems on standard evaluation benchmarks used in the literature.",
}
|
This paper presents a new approach to the ancient scripts decipherment problem based on combinatorial optimisation and coupled simulated annealing, an advanced non-convex optimisation procedure. Solutions are encoded by using k-permutations allowing for null, one-to-many, and many-to-one mappings between signs. The proposed system is able to produce enhanced results in cognate identification when compared to the state-of-the-art systems on standard evaluation benchmarks used in the literature.
|
[
"Tamburini, Fabio"
] |
Decipherment of Lost Ancient Scripts as Combinatorial Optimisation Using Coupled Simulated Annealing
|
cawl-1.10
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.11.bib
|
https://aclanthology.org/2023.cawl-1.11/
|
@inproceedings{born-etal-2023-learning,
title = "Learning the Character Inventories of Undeciphered Scripts Using Unsupervised Deep Clustering",
author = "Born, Logan and
Monroe, M. Willis and
Kelley, Kathryn and
Sarkar, Anoop",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.11",
doi = "10.18653/v1/2023.cawl-1.11",
pages = "92--104",
abstract = "A crucial step in deciphering a text is to identify what set of characters were used to write it. This requires grouping character tokens according to visual and contextual features, which can be challenging for human analysts when the number of tokens or underlying types is large. Prior work has shown that this process can be automated by clustering dense representations of character images, in a task which we call {``}script clustering{''}. In this work, we present novel architectures which exploit varying degrees of contextual and visual information to learn representations for use in script clustering. We evaluate on a range of modern and ancient scripts, and find that our models produce representations which are more effective for script recovery than the current state-of-the-art, despite using just {\textasciitilde}2{\%} as many parameters. Our analysis fruitfully applies these models to assess hypotheses about the character inventory of the partially-deciphered proto-Elamite script.",
}
|
A crucial step in deciphering a text is to identify what set of characters were used to write it. This requires grouping character tokens according to visual and contextual features, which can be challenging for human analysts when the number of tokens or underlying types is large. Prior work has shown that this process can be automated by clustering dense representations of character images, in a task which we call {``}script clustering{''}. In this work, we present novel architectures which exploit varying degrees of contextual and visual information to learn representations for use in script clustering. We evaluate on a range of modern and ancient scripts, and find that our models produce representations which are more effective for script recovery than the current state-of-the-art, despite using just {\textasciitilde}2{\%} as many parameters. Our analysis fruitfully applies these models to assess hypotheses about the character inventory of the partially-deciphered proto-Elamite script.
|
[
"Born, Logan",
"Monroe, M. Willis",
"Kelley, Kathryn",
"Sarkar, Anoop"
] |
Learning the Character Inventories of Undeciphered Scripts Using Unsupervised Deep Clustering
|
cawl-1.11
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.cawl-1.12.bib
|
https://aclanthology.org/2023.cawl-1.12/
|
@inproceedings{hermalin-2023-mutual,
title = "A Mutual Information-based Approach to Quantifying Logography in {J}apanese and {S}umerian",
author = "Hermalin, Noah",
editor = "Gorman, Kyle and
Sproat, Richard and
Roark, Brian",
booktitle = "Proceedings of the Workshop on Computation and Written Language (CAWL 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.cawl-1.12",
doi = "10.18653/v1/2023.cawl-1.12",
pages = "105--110",
abstract = "Writing systems have traditionally been classified by whether they prioritize encoding phonological information (phonographic) versus morphological or semantic information (logographic). Recent work has broached the question of how membership in these categories can be quantified. We aim to contribute to this line of research by treating a definition of logography which directly incorporates morphological identity. Our methods compare mutual information between graphic forms and phonological forms and between graphic forms and morphological identity. We report on preliminary results here for two case studies, written Sumerian and written Japanese. The results suggest that our methods present a promising means of classifying the degree to which a writing system is logographic or phonographic.",
}
|
Writing systems have traditionally been classified by whether they prioritize encoding phonological information (phonographic) versus morphological or semantic information (logographic). Recent work has broached the question of how membership in these categories can be quantified. We aim to contribute to this line of research by treating a definition of logography which directly incorporates morphological identity. Our methods compare mutual information between graphic forms and phonological forms and between graphic forms and morphological identity. We report on preliminary results here for two case studies, written Sumerian and written Japanese. The results suggest that our methods present a promising means of classifying the degree to which a writing system is logographic or phonographic.
|
[
"Hermalin, Noah"
] |
A Mutual Information-based Approach to Quantifying Logography in Japanese and Sumerian
|
cawl-1.12
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.1.bib
|
https://aclanthology.org/2023.clinicalnlp-1.1/
|
@inproceedings{shor-etal-2023-clinical,
title = "Clinical {BERTS}core: An Improved Measure of Automatic Speech Recognition Performance in Clinical Settings",
author = "Shor, Joel and
Bi, Ruyue Agnes and
Venugopalan, Subhashini and
Ibara, Steven and
Goldenberg, Roman and
Rivlin, Ehud",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.1",
doi = "10.18653/v1/2023.clinicalnlp-1.1",
pages = "1--7",
    abstract = "Automatic Speech Recognition (ASR) in medical contexts has the potential to save time, cut costs, increase report accuracy, and reduce physician burnout. However, the healthcare industry has been slower to adopt this technology, in part due to the importance of avoiding medically-relevant transcription mistakes. In this work, we present the Clinical BERTScore (CBERTScore), an ASR metric that penalizes clinically-relevant mistakes more than others. We collect a benchmark of 18 clinician preferences on 149 realistic medical sentences called the Clinician Transcript Preference benchmark (CTP) and make it publicly available for the community to further develop clinically-aware ASR metrics. To our knowledge, this is the first public dataset of its kind. We demonstrate that our metric more closely aligns with clinician preferences on medical sentences as compared to other metrics (WER, BLEU, METEOR, etc.), sometimes by wide margins.",
}
|
Automatic Speech Recognition (ASR) in medical contexts has the potential to save time, cut costs, increase report accuracy, and reduce physician burnout. However, the healthcare industry has been slower to adopt this technology, in part due to the importance of avoiding medically-relevant transcription mistakes. In this work, we present the Clinical BERTScore (CBERTScore), an ASR metric that penalizes clinically-relevant mistakes more than others. We collect a benchmark of 18 clinician preferences on 149 realistic medical sentences called the Clinician Transcript Preference benchmark (CTP) and make it publicly available for the community to further develop clinically-aware ASR metrics. To our knowledge, this is the first public dataset of its kind. We demonstrate that our metric more closely aligns with clinician preferences on medical sentences as compared to other metrics (WER, BLEU, METEOR, etc.), sometimes by wide margins.
|
[
"Shor, Joel",
"Bi, Ruyue Agnes",
"Venugopalan, Subhashini",
"Ibara, Steven",
"Goldenberg, Roman",
"Rivlin, Ehud"
] |
Clinical BERTScore: An Improved Measure of Automatic Speech Recognition Performance in Clinical Settings
|
clinicalnlp-1.1
|
Poster
|
2303.05737
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.2.bib
|
https://aclanthology.org/2023.clinicalnlp-1.2/
|
@inproceedings{yanaka-etal-2023-medical,
title = "Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models",
author = "Yanaka, Hitomi and
Nakamura, Yuta and
Chida, Yuki and
Kurosawa, Tomoya",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.2",
doi = "10.18653/v1/2023.clinicalnlp-1.2",
pages = "8--18",
abstract = "Assessing the capacity of numerical understanding of vision-and-language models over images and texts is crucial for real vision-and-language applications, such as systems for automated medical image analysis. We provide a visual reasoning dataset focusing on numerical understanding in the medical domain. The experiments using our dataset show that current vision-and-language models fail to perform numerical inference in the medical domain. However, the data augmentation with only a small amount of our dataset improves the model performance, while maintaining the performance in the general domain.",
}
|
Assessing the capacity of numerical understanding of vision-and-language models over images and texts is crucial for real vision-and-language applications, such as systems for automated medical image analysis. We provide a visual reasoning dataset focusing on numerical understanding in the medical domain. The experiments using our dataset show that current vision-and-language models fail to perform numerical inference in the medical domain. However, the data augmentation with only a small amount of our dataset improves the model performance, while maintaining the performance in the general domain.
|
[
"Yanaka, Hitomi",
"Nakamura, Yuta",
"Chida, Yuki",
"Kurosawa, Tomoya"
] |
Medical Visual Textual Entailment for Numerical Understanding of Vision-and-Language Models
|
clinicalnlp-1.2
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.3.bib
|
https://aclanthology.org/2023.clinicalnlp-1.3/
|
@inproceedings{youssef-etal-2023-privacy,
title = "Privacy-Preserving Knowledge Transfer through Partial Parameter Sharing",
author = {Youssef, Paul and
Schl{\"o}tterer, J{\"o}rg and
Seifert, Christin},
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.3",
doi = "10.18653/v1/2023.clinicalnlp-1.3",
pages = "19--23",
    abstract = "Valuable datasets that contain sensitive information are not shared due to privacy and copyright concerns. This hinders progress in many areas and prevents the use of machine learning solutions to solve relevant tasks. One possible solution is sharing models that are trained on such datasets. However, this is also associated with potential privacy risks due to data extraction attacks. In this work, we propose a solution based on sharing parts of the model{'}s parameters, and using a proxy dataset for complementary knowledge transfer. Our experiments show encouraging results, and reduced risk to potential training data identification attacks. We present a viable solution to sharing knowledge with data-disadvantaged parties that do not have the resources to produce high-quality data, with reduced privacy risks to the sharing parties. We make our code publicly available.",
}
|
Valuable datasets that contain sensitive information are not shared due to privacy and copyright concerns. This hinders progress in many areas and prevents the use of machine learning solutions to solve relevant tasks. One possible solution is sharing models that are trained on such datasets. However, this is also associated with potential privacy risks due to data extraction attacks. In this work, we propose a solution based on sharing parts of the model{'}s parameters, and using a proxy dataset for complementary knowledge transfer. Our experiments show encouraging results, and reduced risk to potential training data identification attacks. We present a viable solution to sharing knowledge with data-disadvantaged parties that do not have the resources to produce high-quality data, with reduced privacy risks to the sharing parties. We make our code publicly available.
|
[
"Youssef, Paul",
"Schl{\\\"o}tterer, J{\\\"o}rg",
"Seifert, Christin"
] |
Privacy-Preserving Knowledge Transfer through Partial Parameter Sharing
|
clinicalnlp-1.3
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.4.bib
|
https://aclanthology.org/2023.clinicalnlp-1.4/
|
@inproceedings{rauniyar-etal-2023-breaking,
title = "Breaking Barriers: Exploring the Diagnostic Potential of Speech Narratives in {H}indi for {A}lzheimer{'}s Disease",
author = "Rauniyar, Kritesh and
Shiwakoti, Shuvam and
Poudel, Sweta and
Thapa, Surendrabikram and
Naseem, Usman and
Nasim, Mehwish",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.4",
doi = "10.18653/v1/2023.clinicalnlp-1.4",
pages = "24--30",
abstract = "Alzheimer{'}s Disease (AD) is a neurodegenerative disorder that affects cognitive abilities and memory, especially in older adults. One of the challenges of AD is that it can be difficult to diagnose in its early stages. However, recent research has shown that changes in language, including speech decline and difficulty in processing information, can be important indicators of AD and may help with early detection. Hence, the speech narratives of the patients can be useful in diagnosing the early stages of Alzheimer{'}s disease. While the previous works have presented the potential of using speech narratives to diagnose AD in high-resource languages, this work explores the possibility of using a low-resourced language, i.e., Hindi language, to diagnose AD. In this paper, we present a dataset specifically for analyzing AD in the Hindi language, along with experimental results using various state-of-the-art algorithms to assess the diagnostic potential of speech narratives in Hindi. Our analysis suggests that speech narratives in the Hindi language have the potential to aid in the diagnosis of AD. Our dataset and code are made publicly available at \url{https://github.com/rkritesh210/DementiaBankHindi}.",
}
|
Alzheimer{'}s Disease (AD) is a neurodegenerative disorder that affects cognitive abilities and memory, especially in older adults. One of the challenges of AD is that it can be difficult to diagnose in its early stages. However, recent research has shown that changes in language, including speech decline and difficulty in processing information, can be important indicators of AD and may help with early detection. Hence, the speech narratives of the patients can be useful in diagnosing the early stages of Alzheimer{'}s disease. While the previous works have presented the potential of using speech narratives to diagnose AD in high-resource languages, this work explores the possibility of using a low-resourced language, i.e., Hindi language, to diagnose AD. In this paper, we present a dataset specifically for analyzing AD in the Hindi language, along with experimental results using various state-of-the-art algorithms to assess the diagnostic potential of speech narratives in Hindi. Our analysis suggests that speech narratives in the Hindi language have the potential to aid in the diagnosis of AD. Our dataset and code are made publicly available at \url{https://github.com/rkritesh210/DementiaBankHindi}.
|
[
"Rauniyar, Kritesh",
"Shiwakoti, Shuvam",
"Poudel, Sweta",
"Thapa, Surendrabikram",
"Naseem, Usman",
"Nasim, Mehwish"
] |
Breaking Barriers: Exploring the Diagnostic Potential of Speech Narratives in Hindi for Alzheimer's Disease
|
clinicalnlp-1.4
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.5.bib
|
https://aclanthology.org/2023.clinicalnlp-1.5/
|
@inproceedings{han-etal-2023-investigating,
title = "Investigating Massive Multilingual Pre-Trained Machine Translation Models for Clinical Domain via Transfer Learning",
author = "Han, Lifeng and
Erofeev, Gleb and
Sorokina, Irina and
Gladkoff, Serge and
Nenadic, Goran",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.5",
doi = "10.18653/v1/2023.clinicalnlp-1.5",
pages = "31--40",
    abstract = "Massively multilingual pre-trained language models (MMPLMs) are developed in recent years demonstrating superpowers and the pre-knowledge they acquire for downstream tasks. This work investigates whether MMPLMs can be applied to clinical domain machine translation (MT) towards entirely unseen languages via transfer learning. We carry out an experimental investigation using Meta-AI{'}s MMPLMs {``}wmt21-dense-24-wide-en-X and X-en (WMT21fb){''} which were pre-trained on 7 language pairs and 14 translation directions including English to Czech, German, Hausa, Icelandic, Japanese, Russian, and Chinese, and the opposite direction. We fine-tune these MMPLMs towards English-\textit{Spanish} language pair which \textit{did not exist at all} in their original pre-trained corpora both implicitly and explicitly. We prepare carefully aligned \textit{clinical} domain data for this fine-tuning, which is different from their original mixed domain knowledge. Our experimental result shows that the fine-tuning is very successful using just 250k well-aligned in-domain EN-ES segments for three sub-task translation testings: clinical cases, clinical terms, and ontology concepts. It achieves very close evaluation scores to another MMPLM NLLB from Meta-AI, which included Spanish as a high-resource setting in the pre-training. To the best of our knowledge, this is the first work on using MMPLMs towards \textit{clinical domain transfer-learning NMT} successfully for totally unseen languages during pre-training.",
}
|
Massively multilingual pre-trained language models (MMPLMs) are developed in recent years demonstrating superpowers and the pre-knowledge they acquire for downstream tasks. This work investigates whether MMPLMs can be applied to clinical domain machine translation (MT) towards entirely unseen languages via transfer learning. We carry out an experimental investigation using Meta-AI{'}s MMPLMs {``}wmt21-dense-24-wide-en-X and X-en (WMT21fb){''} which were pre-trained on 7 language pairs and 14 translation directions including English to Czech, German, Hausa, Icelandic, Japanese, Russian, and Chinese, and the opposite direction. We fine-tune these MMPLMs towards English-\textit{Spanish} language pair which \textit{did not exist at all} in their original pre-trained corpora both implicitly and explicitly. We prepare carefully aligned \textit{clinical} domain data for this fine-tuning, which is different from their original mixed domain knowledge. Our experimental result shows that the fine-tuning is very successful using just 250k well-aligned in-domain EN-ES segments for three sub-task translation testings: clinical cases, clinical terms, and ontology concepts. It achieves very close evaluation scores to another MMPLM NLLB from Meta-AI, which included Spanish as a high-resource setting in the pre-training. To the best of our knowledge, this is the first work on using MMPLMs towards \textit{clinical domain transfer-learning NMT} successfully for totally unseen languages during pre-training.
|
[
"Han, Lifeng",
"Erofeev, Gleb",
"Sorokina, Irina",
"Gladkoff, Serge",
"Nenadic, Goran"
] |
Investigating Massive Multilingual Pre-Trained Machine Translation Models for Clinical Domain via Transfer Learning
|
clinicalnlp-1.5
|
Poster
|
2210.06068
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.6.bib
|
https://aclanthology.org/2023.clinicalnlp-1.6/
|
@inproceedings{coelho-da-silva-etal-2023-tracking,
title = "Tracking the Evolution of Covid-19 Symptoms through Clinical Conversations",
author = "Coelho Da Silva, Ticiana and
Fernandes De Mac{\^e}do, Jos{\'e} and
Magalh{\~a}es, R{\'e}gis",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.6",
doi = "10.18653/v1/2023.clinicalnlp-1.6",
pages = "41--47",
abstract = "The Coronavirus pandemic has heightened the demand for technological solutions capable of gathering and monitoring data automatically, quickly, and securely. To achieve this need, the Plant{\~a}o Coronavirus chatbot has been made available to the population of Cear{\'a} State in Brazil. This chatbot employs automated symptom detection technology through Natural Language Processing (NLP). The proposal of this work is a symptom tracker, which is a neural network that processes texts and captures symptoms in messages exchanged between citizens of the state and the Plant{\~a}o Coronavirus nurse/doctor, i.e., clinical conversations. The model has the ability to recognize new patterns and has identified a high incidence of altered psychological behaviors, including anguish, anxiety, and sadness, among users who tested positive or negative for Covid-19. As a result, the tool has emphasized the importance of expanding coverage through community mental health services in the state.",
}
|
The Coronavirus pandemic has heightened the demand for technological solutions capable of gathering and monitoring data automatically, quickly, and securely. To achieve this need, the Plant{\~a}o Coronavirus chatbot has been made available to the population of Cear{\'a} State in Brazil. This chatbot employs automated symptom detection technology through Natural Language Processing (NLP). The proposal of this work is a symptom tracker, which is a neural network that processes texts and captures symptoms in messages exchanged between citizens of the state and the Plant{\~a}o Coronavirus nurse/doctor, i.e., clinical conversations. The model has the ability to recognize new patterns and has identified a high incidence of altered psychological behaviors, including anguish, anxiety, and sadness, among users who tested positive or negative for Covid-19. As a result, the tool has emphasized the importance of expanding coverage through community mental health services in the state.
|
[
"Coelho Da Silva, Ticiana",
    "Fernandes De Mac{\\^e}do, Jos{\\'e}",
"Magalh{\\~a}es, R{\\'e}gis"
] |
Tracking the Evolution of Covid-19 Symptoms through Clinical Conversations
|
clinicalnlp-1.6
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.7.bib
|
https://aclanthology.org/2023.clinicalnlp-1.7/
|
@inproceedings{tang-etal-2023-aligning,
title = "Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning",
author = "Tang, Xiangru and
Cohan, Arman and
Gerstein, Mark",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.7",
doi = "10.18653/v1/2023.clinicalnlp-1.7",
pages = "48--58",
abstract = "In the rapidly evolving landscape of medical research, accurate and concise summarization of clinical studies is crucial to support evidence-based practice. This paper presents a novel approach to clinical studies summarization, leveraging reinforcement learning to enhance factual consistency and align with human annotator preferences. Our work focuses on two tasks: Conclusion Generation and Review Generation. We train a CONFIT summarization model that outperforms GPT-3 and previous state-of-the-art models on the same datasets and collects expert and crowd-worker annotations to evaluate the quality and factual consistency of the generated summaries. These annotations enable us to measure the correlation of various automatic metrics, including modern factual evaluation metrics like QAFactEval, with human-assessed factual consistency. By employing top-correlated metrics as objectives for a reinforcement learning model, we demonstrate improved factuality in generated summaries that are preferred by human annotators.",
}
|
In the rapidly evolving landscape of medical research, accurate and concise summarization of clinical studies is crucial to support evidence-based practice. This paper presents a novel approach to clinical studies summarization, leveraging reinforcement learning to enhance factual consistency and align with human annotator preferences. Our work focuses on two tasks: Conclusion Generation and Review Generation. We train a CONFIT summarization model that outperforms GPT-3 and previous state-of-the-art models on the same datasets and collects expert and crowd-worker annotations to evaluate the quality and factual consistency of the generated summaries. These annotations enable us to measure the correlation of various automatic metrics, including modern factual evaluation metrics like QAFactEval, with human-assessed factual consistency. By employing top-correlated metrics as objectives for a reinforcement learning model, we demonstrate improved factuality in generated summaries that are preferred by human annotators.
|
[
"Tang, Xiangru",
"Cohan, Arman",
"Gerstein, Mark"
] |
Aligning Factual Consistency for Clinical Studies Summarization through Reinforcement Learning
|
clinicalnlp-1.7
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.8.bib
|
https://aclanthology.org/2023.clinicalnlp-1.8/
|
@inproceedings{min-etal-2023-navigating,
title = "Navigating Data Scarcity: Pretraining for Medical Utterance Classification",
author = "Min, Do June and
Perez-Rosas, Veronica and
Mihalcea, Rada",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.8",
doi = "10.18653/v1/2023.clinicalnlp-1.8",
pages = "59--68",
abstract = "Pretrained language models leverage self-supervised learning to use large amounts of unlabeled text for learning contextual representations of sequences. However, in the domain of medical conversations, the availability of large, public datasets is limited due to issues of privacy and data management. In this paper, we study the effectiveness of dialog-aware pretraining objectives and multiphase training in using unlabeled data to improve LMs training for medical utterance classification. The objectives of pretraining for dialog awareness involve tasks that take into account the structure of conversations, including features such as turn-taking and the roles of speakers. The multiphase training process uses unannotated data in a sequence that prioritizes similarities and connections between different domains. We empirically evaluate these methods on conversational dialog classification tasks in the medical and counseling domains, and find that multiphase training can help achieve higher performance than standard pretraining or finetuning.",
}
|
Pretrained language models leverage self-supervised learning to use large amounts of unlabeled text for learning contextual representations of sequences. However, in the domain of medical conversations, the availability of large, public datasets is limited due to issues of privacy and data management. In this paper, we study the effectiveness of dialog-aware pretraining objectives and multiphase training in using unlabeled data to improve LMs training for medical utterance classification. The objectives of pretraining for dialog awareness involve tasks that take into account the structure of conversations, including features such as turn-taking and the roles of speakers. The multiphase training process uses unannotated data in a sequence that prioritizes similarities and connections between different domains. We empirically evaluate these methods on conversational dialog classification tasks in the medical and counseling domains, and find that multiphase training can help achieve higher performance than standard pretraining or finetuning.
|
[
"Min, Do June",
"Perez-Rosas, Veronica",
"Mihalcea, Rada"
] |
Navigating Data Scarcity: Pretraining for Medical Utterance Classification
|
clinicalnlp-1.8
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.9.bib
|
https://aclanthology.org/2023.clinicalnlp-1.9/
|
@inproceedings{mishra-etal-2023-hindi,
title = "{H}indi Chatbot for Supporting Maternal and Child Health Related Queries in Rural {I}ndia",
author = "Mishra, Ritwik and
Singh, Simranjeet and
Kaur, Jasmeet and
Singh, Pushpendra and
Shah, Rajiv",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.9",
doi = "10.18653/v1/2023.clinicalnlp-1.9",
pages = "69--77",
abstract = "In developing countries like India, doctors and healthcare professionals working in public health spend significant time answering health queries that are fact-based and repetitive. Therefore, we propose an automated way to answer maternal and child health-related queries. A database of Frequently Asked Questions (FAQs) and their corresponding answers generated by experts is curated from rural health workers and young mothers. We develop a Hindi chatbot that identifies k relevant Question and Answer (QnA) pairs from the database in response to a healthcare query (q) written in Devnagri script or Hindi-English (Hinglish) code-mixed script. The curated database covers 80{\%} of all the queries that a user of our study is likely to ask. We experimented with (i) rule-based methods, (ii) sentence embeddings, and (iii) a paraphrasing classifier, to calculate the q-Q similarity. We observed that paraphrasing classifier gives the best result when trained first on an open-domain text and then on the healthcare domain. Our chatbot uses an ensemble of all three approaches. We observed that if a given q can be answered using the database, then our chatbot can provide at least one relevant QnA pair among its top three suggestions for up to 70{\%} of the queries.",
}
|
In developing countries like India, doctors and healthcare professionals working in public health spend significant time answering health queries that are fact-based and repetitive. Therefore, we propose an automated way to answer maternal and child health-related queries. A database of Frequently Asked Questions (FAQs) and their corresponding answers generated by experts is curated from rural health workers and young mothers. We develop a Hindi chatbot that identifies k relevant Question and Answer (QnA) pairs from the database in response to a healthcare query (q) written in Devnagri script or Hindi-English (Hinglish) code-mixed script. The curated database covers 80{\%} of all the queries that a user of our study is likely to ask. We experimented with (i) rule-based methods, (ii) sentence embeddings, and (iii) a paraphrasing classifier, to calculate the q-Q similarity. We observed that paraphrasing classifier gives the best result when trained first on an open-domain text and then on the healthcare domain. Our chatbot uses an ensemble of all three approaches. We observed that if a given q can be answered using the database, then our chatbot can provide at least one relevant QnA pair among its top three suggestions for up to 70{\%} of the queries.
|
[
"Mishra, Ritwik",
"Singh, Simranjeet",
"Kaur, Jasmeet",
"Singh, Pushpendra",
"Shah, Rajiv"
] |
Hindi Chatbot for Supporting Maternal and Child Health Related Queries in Rural India
|
clinicalnlp-1.9
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.10.bib
|
https://aclanthology.org/2023.clinicalnlp-1.10/
|
@inproceedings{sharma-etal-2023-multi,
title = "Multi-Task Training with In-Domain Language Models for Diagnostic Reasoning",
author = "Sharma, Brihat and
Gao, Yanjun and
Miller, Timothy and
Churpek, Matthew and
Afshar, Majid and
Dligach, Dmitriy",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.10",
doi = "10.18653/v1/2023.clinicalnlp-1.10",
pages = "78--85",
abstract = "Generative artificial intelligence (AI) is a promising direction for augmenting clinical diagnostic decision support and reducing diagnostic errors, a leading contributor to medical errors. To further the development of clinical AI systems, the Diagnostic Reasoning Benchmark (DR.BENCH) was introduced as a comprehensive generative AI framework, comprised of six tasks representing key components in clinical reasoning. We present a comparative analysis of in-domain versus out-of-domain language models as well as multi-task versus single task training with a focus on the problem summarization task in DR.BENCH. We demonstrate that a multi-task, clinically-trained language model outperforms its general domain counterpart by a large margin, establishing a new state-of-the-art performance, with a ROUGE-L score of 28.55. This research underscores the value of domain-specific training for optimizing clinical diagnostic reasoning tasks.",
}
|
Generative artificial intelligence (AI) is a promising direction for augmenting clinical diagnostic decision support and reducing diagnostic errors, a leading contributor to medical errors. To further the development of clinical AI systems, the Diagnostic Reasoning Benchmark (DR.BENCH) was introduced as a comprehensive generative AI framework, comprised of six tasks representing key components in clinical reasoning. We present a comparative analysis of in-domain versus out-of-domain language models as well as multi-task versus single task training with a focus on the problem summarization task in DR.BENCH. We demonstrate that a multi-task, clinically-trained language model outperforms its general domain counterpart by a large margin, establishing a new state-of-the-art performance, with a ROUGE-L score of 28.55. This research underscores the value of domain-specific training for optimizing clinical diagnostic reasoning tasks.
|
[
"Sharma, Brihat",
"Gao, Yanjun",
"Miller, Timothy",
"Churpek, Matthew",
"Afshar, Majid",
"Dligach, Dmitriy"
] |
Multi-Task Training with In-Domain Language Models for Diagnostic Reasoning
|
clinicalnlp-1.10
|
Poster
|
2306.04551
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.11.bib
|
https://aclanthology.org/2023.clinicalnlp-1.11/
|
@inproceedings{salek-faramarzi-etal-2023-context,
title = "Context-aware Medication Event Extraction from Unstructured Text",
author = "Salek Faramarzi, Noushin and
Patel, Meet and
Bandarupally, Sai Harika and
Banerjee, Ritwik",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.11",
doi = "10.18653/v1/2023.clinicalnlp-1.11",
pages = "86--95",
abstract = "Accurately capturing medication history is crucial in delivering high-quality medical care. The extraction of medication events from unstructured clinical notes, however, is challenging because the information is presented in complex narratives. We address this challenge by leveraging the newly released Contextualized Medication Event Dataset (CMED) as part of our participation in the 2022 National NLP Clinical Challenges (n2c2) shared task. Our study evaluates the performance of various pretrained language models in this task. Further, we find that data augmentation coupled with domain-specific training provides notable improvements. With experiments, we also underscore the importance of careful data preprocessing in medical event detection.",
}
|
Accurately capturing medication history is crucial in delivering high-quality medical care. The extraction of medication events from unstructured clinical notes, however, is challenging because the information is presented in complex narratives. We address this challenge by leveraging the newly released Contextualized Medication Event Dataset (CMED) as part of our participation in the 2022 National NLP Clinical Challenges (n2c2) shared task. Our study evaluates the performance of various pretrained language models in this task. Further, we find that data augmentation coupled with domain-specific training provides notable improvements. With experiments, we also underscore the importance of careful data preprocessing in medical event detection.
|
[
"Salek Faramarzi, Noushin",
"Patel, Meet",
    "Bandarupally, Sai Harika",
"Banerjee, Ritwik"
] |
Context-aware Medication Event Extraction from Unstructured Text
|
clinicalnlp-1.11
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.12.bib
|
https://aclanthology.org/2023.clinicalnlp-1.12/
|
@inproceedings{jeong-etal-2023-improving,
title = "Improving Automatic {KCD} Coding: Introducing the {K}o{DAK} and an Optimized Tokenization Method for {K}orean Clinical Documents",
author = "Jeong, Geunyeong and
Sun, Juoh and
Jeong, Seokwon and
Shin, Hyunjin and
Kim, Harksoo",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.12",
doi = "10.18653/v1/2023.clinicalnlp-1.12",
pages = "96--101",
abstract = "International Classification of Diseases (ICD) coding is the task of assigning a patient{'}s electronic health records into standardized codes, which is crucial for enhancing medical services and reducing healthcare costs. In Korea, automatic Korean Standard Classification of Diseases (KCD) coding has been hindered by limited resources, differences in ICD systems, and language-specific characteristics. Therefore, we construct the Korean Dataset for Automatic KCD coding (KoDAK) by collecting and preprocessing Korean clinical documents. In addition, we propose a tokenization method optimized for Korean clinical documents. Our experiments show that our proposed method outperforms Korean Medical BERT (KM-BERT) in Macro-F1 performance by 0.14{\%}p while using fewer model parameters, demonstrating its effectiveness in Korean clinical documents.",
}
|
International Classification of Diseases (ICD) coding is the task of assigning a patient{'}s electronic health records into standardized codes, which is crucial for enhancing medical services and reducing healthcare costs. In Korea, automatic Korean Standard Classification of Diseases (KCD) coding has been hindered by limited resources, differences in ICD systems, and language-specific characteristics. Therefore, we construct the Korean Dataset for Automatic KCD coding (KoDAK) by collecting and preprocessing Korean clinical documents. In addition, we propose a tokenization method optimized for Korean clinical documents. Our experiments show that our proposed method outperforms Korean Medical BERT (KM-BERT) in Macro-F1 performance by 0.14{\%}p while using fewer model parameters, demonstrating its effectiveness in Korean clinical documents.
|
[
"Jeong, Geunyeong",
"Sun, Juoh",
"Jeong, Seokwon",
"Shin, Hyunjin",
"Kim, Harksoo"
] |
Improving Automatic KCD Coding: Introducing the KoDAK and an Optimized Tokenization Method for Korean Clinical Documents
|
clinicalnlp-1.12
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.13.bib
|
https://aclanthology.org/2023.clinicalnlp-1.13/
|
@inproceedings{taghibeyglou-rudzicz-2023-needs,
title = "Who needs context? Classical techniques for {A}lzheimer{'}s disease detection",
author = "Taghibeyglou, Behrad and
Rudzicz, Frank",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.13",
doi = "10.18653/v1/2023.clinicalnlp-1.13",
pages = "102--107",
abstract = "Natural language processing (NLP) has shown great potential for Alzheimer{'}s disease (AD) detection, particularly due to the adverse effect of AD on spontaneous speech. The current body of literature has directed attention toward context-based models, especially Bidirectional Encoder Representations from Transformers (BERTs), owing to their exceptional abilities to integrate contextual information in a wide range of NLP tasks. This comes at the cost of added model opacity and computational requirements. Taking this into consideration, we propose a Word2Vec-based model for AD detection in 108 age- and sex-matched participants who were asked to describe the Cookie Theft picture. We also investigate the effectiveness of our model by fine-tuning BERT-based sequence classification models, as well as incorporating linguistic features. Our results demonstrate that our lightweight and easy-to-implement model outperforms some of the state-of-the-art models available in the literature, as well as BERT models.",
}
|
Natural language processing (NLP) has shown great potential for Alzheimer{'}s disease (AD) detection, particularly due to the adverse effect of AD on spontaneous speech. The current body of literature has directed attention toward context-based models, especially Bidirectional Encoder Representations from Transformers (BERTs), owing to their exceptional abilities to integrate contextual information in a wide range of NLP tasks. This comes at the cost of added model opacity and computational requirements. Taking this into consideration, we propose a Word2Vec-based model for AD detection in 108 age- and sex-matched participants who were asked to describe the Cookie Theft picture. We also investigate the effectiveness of our model by fine-tuning BERT-based sequence classification models, as well as incorporating linguistic features. Our results demonstrate that our lightweight and easy-to-implement model outperforms some of the state-of-the-art models available in the literature, as well as BERT models.
|
[
"Taghibeyglou, Behrad",
"Rudzicz, Frank"
] |
Who needs context? Classical techniques for Alzheimer's disease detection
|
clinicalnlp-1.13
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.14.bib
|
https://aclanthology.org/2023.clinicalnlp-1.14/
|
@inproceedings{murakami-etal-2023-knowledge,
title = "Knowledge Injection for Disease Names in Logical Inference between {J}apanese Clinical Texts",
author = "Murakami, Natsuki and
Ishida, Mana and
Takahashi, Yuta and
Yanaka, Hitomi and
Bekki, Daisuke",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.14",
doi = "10.18653/v1/2023.clinicalnlp-1.14",
pages = "108--117",
    abstract = "In the medical field, there are many clinical texts such as electronic medical records, and research on Japanese natural language processing using these texts has been conducted. One such research involves Recognizing Textual Entailment (RTE) in clinical texts using a semantic analysis and logical inference system, ccg2lambda. However, it is difficult for existing inference systems to correctly determine the entailment relations, if the input sentence contains medical domain specific paraphrases such as disease names. In this study, we propose a method to supplement the equivalence relations of disease names as axioms by identifying candidates for paraphrases that lack in theorem proving. Candidates of paraphrases are identified by using a model for the NER task for disease names and a disease name dictionary. We also construct an inference test set that requires knowledge injection of disease names and evaluate our inference system. Experiments showed that our inference system was able to correctly infer for 106 out of 149 inference test sets.",
}
|
In the medical field, there are many clinical texts such as electronic medical records, and research on Japanese natural language processing using these texts has been conducted. One such research involves Recognizing Textual Entailment (RTE) in clinical texts using a semantic analysis and logical inference system, ccg2lambda. However, it is difficult for existing inference systems to correctly determine the entailment relations, if the input sentence contains medical domain specific paraphrases such as disease names. In this study, we propose a method to supplement the equivalence relations of disease names as axioms by identifying candidates for paraphrases that lack in theorem proving. Candidates of paraphrases are identified by using a model for the NER task for disease names and a disease name dictionary. We also construct an inference test set that requires knowledge injection of disease names and evaluate our inference system. Experiments showed that our inference system was able to correctly infer for 106 out of 149 inference test sets.
|
[
"Murakami, Natsuki",
"Ishida, Mana",
"Takahashi, Yuta",
"Yanaka, Hitomi",
"Bekki, Daisuke"
] |
Knowledge Injection for Disease Names in Logical Inference between Japanese Clinical Texts
|
clinicalnlp-1.14
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.15.bib
|
https://aclanthology.org/2023.clinicalnlp-1.15/
|
@inproceedings{abdelhalim-etal-2023-training,
title = "Training Models on Oversampled Data and a Novel Multi-class Annotation Scheme for Dementia Detection",
author = "Abdelhalim, Nadine and
Abdelhalim, Ingy and
Batista-Navarro, Riza",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.15",
doi = "10.18653/v1/2023.clinicalnlp-1.15",
pages = "118--124",
abstract = "This work introduces a novel three-class annotation scheme for text-based dementia classification in patients, based on their recorded visit interactions. Multiple models were developed utilising BERT, RoBERTa and DistilBERT. Two approaches were employed to improve the representation of dementia samples: oversampling the underrepresented data points in the original Pitt dataset and combining the Pitt with the Holland and Kempler datasets. The DistilBERT models trained on either an oversampled Pitt dataset or the combined dataset performed best in classifying the dementia class. Specifically, the model trained on the oversampled Pitt dataset and the one trained on the combined dataset obtained state-of-the-art performance with 98.8{\%} overall accuracy and 98.6{\%} macro-averaged F1-score, respectively. The models{'} outputs were manually inspected through saliency highlighting, using Local Interpretable Model-agnostic Explanations (LIME), to provide a better understanding of its predictions.",
}
|
This work introduces a novel three-class annotation scheme for text-based dementia classification in patients, based on their recorded visit interactions. Multiple models were developed utilising BERT, RoBERTa and DistilBERT. Two approaches were employed to improve the representation of dementia samples: oversampling the underrepresented data points in the original Pitt dataset and combining the Pitt with the Holland and Kempler datasets. The DistilBERT models trained on either an oversampled Pitt dataset or the combined dataset performed best in classifying the dementia class. Specifically, the model trained on the oversampled Pitt dataset and the one trained on the combined dataset obtained state-of-the-art performance with 98.8{\%} overall accuracy and 98.6{\%} macro-averaged F1-score, respectively. The models{'} outputs were manually inspected through saliency highlighting, using Local Interpretable Model-agnostic Explanations (LIME), to provide a better understanding of its predictions.
|
[
"Abdelhalim, Nadine",
"Abdelhalim, Ingy",
"Batista-Navarro, Riza"
] |
Training Models on Oversampled Data and a Novel Multi-class Annotation Scheme for Dementia Detection
|
clinicalnlp-1.15
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.16.bib
|
https://aclanthology.org/2023.clinicalnlp-1.16/
|
@inproceedings{zhou-etal-2023-improving-transferability,
title = "Improving the Transferability of Clinical Note Section Classification Models with {BERT} and Large Language Model Ensembles",
author = "Zhou, Weipeng and
Afshar, Majid and
Dligach, Dmitriy and
Gao, Yanjun and
Miller, Timothy",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.16",
doi = "10.18653/v1/2023.clinicalnlp-1.16",
pages = "125--130",
abstract = "Text in electronic health records is organized into sections, and classifying those sections into section categories is useful for downstream tasks. In this work, we attempt to improve the transferability of section classification models by combining the dataset-specific knowledge in supervised learning models with the world knowledge inside large language models (LLMs). Surprisingly, we find that zero-shot LLMs out-perform supervised BERT-based models applied to out-of-domain data. We also find that their strengths are synergistic, so that a simple ensemble technique leads to additional performance gains.",
}
|
Text in electronic health records is organized into sections, and classifying those sections into section categories is useful for downstream tasks. In this work, we attempt to improve the transferability of section classification models by combining the dataset-specific knowledge in supervised learning models with the world knowledge inside large language models (LLMs). Surprisingly, we find that zero-shot LLMs out-perform supervised BERT-based models applied to out-of-domain data. We also find that their strengths are synergistic, so that a simple ensemble technique leads to additional performance gains.
|
[
"Zhou, Weipeng",
"Afshar, Majid",
"Dligach, Dmitriy",
"Gao, Yanjun",
"Miller, Timothy"
] |
Improving the Transferability of Clinical Note Section Classification Models with BERT and Large Language Model Ensembles
|
clinicalnlp-1.16
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.17.bib
|
https://aclanthology.org/2023.clinicalnlp-1.17/
|
@inproceedings{chowdhury-etal-2023-large,
title = "Can Large Language Models Safely Address Patient Questions Following Cataract Surgery?",
author = "Chowdhury, Mohita and
Lim, Ernest and
Higham, Aisling and
McKinnon, Rory and
Ventoura, Nikoletta and
He, Yajie and
De Pennington, Nick",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.17",
doi = "10.18653/v1/2023.clinicalnlp-1.17",
pages = "131--137",
abstract = "Recent advances in large language models (LLMs) have generated significant interest in their application across various domains including healthcare. However, there is limited data on their safety and performance in real-world scenarios. This study uses data collected using an autonomous telemedicine clinical assistant. The assistant asks symptom-based questions to elicit patient concerns and allows patients to ask questions about their post-operative recovery. We utilise real-world postoperative questions posed to the assistant by a cohort of 120 patients to examine the safety and appropriateness of responses generated by a recent popular LLM by OpenAI, ChatGPT. We demonstrate that LLMs have the potential to helpfully address routine patient queries following routine surgery. However, important limitations around the safety of today{'}s models exist which must be considered.",
}
|
Recent advances in large language models (LLMs) have generated significant interest in their application across various domains including healthcare. However, there is limited data on their safety and performance in real-world scenarios. This study uses data collected using an autonomous telemedicine clinical assistant. The assistant asks symptom-based questions to elicit patient concerns and allows patients to ask questions about their post-operative recovery. We utilise real-world postoperative questions posed to the assistant by a cohort of 120 patients to examine the safety and appropriateness of responses generated by a recent popular LLM by OpenAI, ChatGPT. We demonstrate that LLMs have the potential to helpfully address routine patient queries following routine surgery. However, important limitations around the safety of today{'}s models exist which must be considered.
|
[
"Chowdhury, Mohita",
"Lim, Ernest",
"Higham, Aisling",
"McKinnon, Rory",
"Ventoura, Nikoletta",
"He, Yajie",
"De Pennington, Nick"
] |
Can Large Language Models Safely Address Patient Questions Following Cataract Surgery?
|
clinicalnlp-1.17
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.18.bib
|
https://aclanthology.org/2023.clinicalnlp-1.18/
|
@inproceedings{singh-etal-2023-large,
title = "Large Scale Sequence-to-Sequence Models for Clinical Note Generation from Patient-Doctor Conversations",
author = "Singh, Gagandeep and
Pan, Yue and
Andres-Ferrer, Jesus and
Del-Agua, Miguel and
Diehl, Frank and
Pinto, Joel and
Vozila, Paul",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.18",
doi = "10.18653/v1/2023.clinicalnlp-1.18",
pages = "138--143",
abstract = "We present our work on building large scale sequence-to-sequence models for generating clinical note from patient-doctor conversation. This is formulated as an abstractive summarization task for which we use encoder-decoder transformer model with pointer-generator. We discuss various modeling enhancements to this baseline model which include using subword and multiword tokenization scheme, prefixing the targets with a chain-of-clinical-facts, and training with contrastive loss that is defined over various candidate summaries. We also use flash attention during training and query chunked attention during inference to be able to process long input and output sequences and to improve computational efficiency. Experiments are conducted on a dataset containing about 900K encounters from around 1800 healthcare providers covering 27 specialties. The results are broken down into primary care and non-primary care specialties. Consistent accuracy improvements are observed across both of these categories.",
}
|
We present our work on building large scale sequence-to-sequence models for generating clinical note from patient-doctor conversation. This is formulated as an abstractive summarization task for which we use encoder-decoder transformer model with pointer-generator. We discuss various modeling enhancements to this baseline model which include using subword and multiword tokenization scheme, prefixing the targets with a chain-of-clinical-facts, and training with contrastive loss that is defined over various candidate summaries. We also use flash attention during training and query chunked attention during inference to be able to process long input and output sequences and to improve computational efficiency. Experiments are conducted on a dataset containing about 900K encounters from around 1800 healthcare providers covering 27 specialties. The results are broken down into primary care and non-primary care specialties. Consistent accuracy improvements are observed across both of these categories.
|
[
"Singh, Gagandeep",
"Pan, Yue",
"Andres-Ferrer, Jesus",
"Del-Agua, Miguel",
"Diehl, Frank",
"Pinto, Joel",
"Vozila, Paul"
] |
Large Scale Sequence-to-Sequence Models for Clinical Note Generation from Patient-Doctor Conversations
|
clinicalnlp-1.18
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.19.bib
|
https://aclanthology.org/2023.clinicalnlp-1.19/
|
@inproceedings{ozler-bethard-2023-clulab,
title = "clulab at {MEDIQA}-Chat 2023: Summarization and classification of medical dialogues",
author = "Ozler, Kadir Bulut and
Bethard, Steven",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.19",
doi = "10.18653/v1/2023.clinicalnlp-1.19",
pages = "144--149",
abstract = "Clinical Natural Language Processing has been an increasingly popular research area in the NLP community. With the rise of large language models (LLMs) and their impressive abilities in NLP tasks, it is crucial to pay attention to their clinical applications. Sequence to sequence generative approaches with LLMs have been widely used in recent years. To be a part of the research in clinical NLP with recent advances in the field, we participated in task A of MEDIQA-Chat at ACL-ClinicalNLP Workshop 2023. In this paper, we explain our methods and findings as well as our comments on our results and limitations.",
}
|
Clinical Natural Language Processing has been an increasingly popular research area in the NLP community. With the rise of large language models (LLMs) and their impressive abilities in NLP tasks, it is crucial to pay attention to their clinical applications. Sequence to sequence generative approaches with LLMs have been widely used in recent years. To be a part of the research in clinical NLP with recent advances in the field, we participated in task A of MEDIQA-Chat at ACL-ClinicalNLP Workshop 2023. In this paper, we explain our methods and findings as well as our comments on our results and limitations.
|
[
"Ozler, Kadir Bulut",
"Bethard, Steven"
] |
clulab at MEDIQA-Chat 2023: Summarization and classification of medical dialogues
|
clinicalnlp-1.19
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.20.bib
|
https://aclanthology.org/2023.clinicalnlp-1.20/
|
@inproceedings{liu-etal-2023-leveraging,
title = "Leveraging Natural Language Processing and Clinical Notes for Dementia Detection",
author = "Liu, Ming and
Beare, Richard and
Collyer, Taya and
Andrew, Nadine and
Srikanth, Velandai",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.20",
doi = "10.18653/v1/2023.clinicalnlp-1.20",
pages = "150--155",
abstract = "Early detection and automated classification of dementia has recently gained considerable attention using neuroimaging data and spontaneous speech. In this paper, we validate the possibility of dementia detection with in-hospital clinical notes. We collected 954 patients{'} clinical notes from a local hospital and assign dementia/non-dementia labels to those patients based on clinical assessment and telephone interview. Given the labeled dementia data sets, we fine tune a ClinicalBioBERT based on some filtered clinical notes and conducted experiments on both binary and three class dementia classification. Our experiment results show that the fine tuned ClinicalBioBERT achieved satisfied performance on binary classification but failed on three class dementia classification. Further analysis suggests that more human prior knowledge should be considered.",
}
|
Early detection and automated classification of dementia has recently gained considerable attention using neuroimaging data and spontaneous speech. In this paper, we validate the possibility of dementia detection with in-hospital clinical notes. We collected 954 patients{'} clinical notes from a local hospital and assign dementia/non-dementia labels to those patients based on clinical assessment and telephone interview. Given the labeled dementia data sets, we fine tune a ClinicalBioBERT based on some filtered clinical notes and conducted experiments on both binary and three class dementia classification. Our experiment results show that the fine tuned ClinicalBioBERT achieved satisfied performance on binary classification but failed on three class dementia classification. Further analysis suggests that more human prior knowledge should be considered.
|
[
"Liu, Ming",
"Beare, Richard",
"Collyer, Taya",
"Andrew, Nadine",
"Srikanth, Velandai"
] |
Leveraging Natural Language Processing and Clinical Notes for Dementia Detection
|
clinicalnlp-1.20
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.21.bib
|
https://aclanthology.org/2023.clinicalnlp-1.21/
|
@inproceedings{ohtsuka-etal-2023-automated,
title = "Automated Orthodontic Diagnosis from a Summary of Medical Findings",
author = "Ohtsuka, Takumi and
Kajiwara, Tomoyuki and
Tanikawa, Chihiro and
Shimizu, Yuujin and
Nagahara, Hajime and
Ninomiya, Takashi",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.21",
doi = "10.18653/v1/2023.clinicalnlp-1.21",
pages = "156--160",
abstract = "We propose a method to automate orthodontic diagnosis with natural language processing. It is worthwhile to assist dentists with such technology to prevent errors by inexperienced dentists and to reduce the workload of experienced ones. However, text length and style inconsistencies in medical findings make an automated orthodontic diagnosis with deep-learning models difficult. In this study, we improve the performance of automatic diagnosis utilizing short summaries of medical findings written in a consistent style by experienced dentists. Experimental results on 970 Japanese medical findings show that summarization consistently improves the performance of various machine learning models for automated orthodontic diagnosis. Although BERT is the model that gains the most performance with the proposed method, the convolutional neural network achieved the best performance.",
}
|
We propose a method to automate orthodontic diagnosis with natural language processing. It is worthwhile to assist dentists with such technology to prevent errors by inexperienced dentists and to reduce the workload of experienced ones. However, text length and style inconsistencies in medical findings make an automated orthodontic diagnosis with deep-learning models difficult. In this study, we improve the performance of automatic diagnosis utilizing short summaries of medical findings written in a consistent style by experienced dentists. Experimental results on 970 Japanese medical findings show that summarization consistently improves the performance of various machine learning models for automated orthodontic diagnosis. Although BERT is the model that gains the most performance with the proposed method, the convolutional neural network achieved the best performance.
|
[
"Ohtsuka, Takumi",
"Kajiwara, Tomoyuki",
"Tanikawa, Chihiro",
"Shimizu, Yuujin",
"Nagahara, Hajime",
"Ninomiya, Takashi"
] |
Automated Orthodontic Diagnosis from a Summary of Medical Findings
|
clinicalnlp-1.21
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.22.bib
|
https://aclanthology.org/2023.clinicalnlp-1.22/
|
@inproceedings{turkmen-etal-2023-harnessing,
title = "Harnessing the Power of {BERT} in the {T}urkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios",
author = {T{\"u}rkmen, Hazal and
Dikenelli, Oguz and
Eraslan, Cenk and
Calli, Mehmet and
Ozbek, Suha},
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.22",
doi = "10.18653/v1/2023.clinicalnlp-1.22",
pages = "161--170",
abstract = "Recent advancements in natural language processing (NLP) have been driven by large language models (LLMs), thereby revolutionizing the field. Our study investigates the impact of diverse pre-training strategies on the performance of Turkish clinical language models in a multi-label classification task involving radiology reports, with a focus on overcoming language resource limitations. Additionally, for the first time, we evaluated the simultaneous pre-training approach by utilizing limited clinical task data. We developed four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our results revealed superior performance from BERTurk and TurkRadBERT-task v1, both of which leverage a broad general-domain corpus. Although task-adaptive pre-training is capable of identifying domain-specific patterns, it may be prone to overfitting because of the constraints of the task-specific corpus. Our findings highlight the importance of domain-specific vocabulary during pre-training to improve performance. They also affirmed that a combination of general domain knowledge and task-specific fine-tuning is crucial for optimal performance across various categories. This study offers key insights for future research on pre-training techniques in the clinical domain, particularly for low-resource languages.",
}
|
Recent advancements in natural language processing (NLP) have been driven by large language models (LLMs), thereby revolutionizing the field. Our study investigates the impact of diverse pre-training strategies on the performance of Turkish clinical language models in a multi-label classification task involving radiology reports, with a focus on overcoming language resource limitations. Additionally, for the first time, we evaluated the simultaneous pre-training approach by utilizing limited clinical task data. We developed four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our results revealed superior performance from BERTurk and TurkRadBERT-task v1, both of which leverage a broad general-domain corpus. Although task-adaptive pre-training is capable of identifying domain-specific patterns, it may be prone to overfitting because of the constraints of the task-specific corpus. Our findings highlight the importance of domain-specific vocabulary during pre-training to improve performance. They also affirmed that a combination of general domain knowledge and task-specific fine-tuning is crucial for optimal performance across various categories. This study offers key insights for future research on pre-training techniques in the clinical domain, particularly for low-resource languages.
|
[
"T{\\\"u}rkmen, Hazal",
"Dikenelli, Oguz",
"Eraslan, Cenk",
"Calli, Mehmet",
"Ozbek, Suha"
] |
Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios
|
clinicalnlp-1.22
|
Poster
|
2305.03788
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.23.bib
|
https://aclanthology.org/2023.clinicalnlp-1.23/
|
@inproceedings{llorca-etal-2023-meta,
title = "A Meta-dataset of {G}erman Medical Corpora: Harmonization of Annotations and Cross-corpus {NER} Evaluation",
author = "Llorca, Ignacio and
Borchert, Florian and
Schapranow, Matthieu-P.",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.23",
doi = "10.18653/v1/2023.clinicalnlp-1.23",
pages = "171--181",
abstract = "Over the last years, an increasing number of publicly available, semantically annotated medical corpora have been released for the German language. While their annotations cover comparable semantic classes, the synergies of such efforts have not been explored, yet. This is due to substantial differences in the data schemas (syntax) and annotated entities (semantics), which hinder the creation of common meta-datasets. For instance, it is unclear whether named entity recognition (NER) taggers trained on one or more of such datasets are useful to detect entities in any of the other datasets. In this work, we create harmonized versions of German medical corpora using the BigBIO framework, and make them available to the community. Using these as a meta-dataset, we perform a series of cross-corpus evaluation experiments on two settings of aligned labels. These consist in fine-tuning various pre-trained Transformers on different combinations of training sets, and testing them against each dataset separately. We find that a) trained NER models generalize poorly, with F1 scores dropping approx. 20 pp. on unseen test data, and b) current pre-trained Transformer models for the German language do not systematically alleviate this issue. However, our results suggest that models benefit from additional training corpora in most cases, even if these belong to different medical fields or text genres.",
}
|
Over the last years, an increasing number of publicly available, semantically annotated medical corpora have been released for the German language. While their annotations cover comparable semantic classes, the synergies of such efforts have not been explored, yet. This is due to substantial differences in the data schemas (syntax) and annotated entities (semantics), which hinder the creation of common meta-datasets. For instance, it is unclear whether named entity recognition (NER) taggers trained on one or more of such datasets are useful to detect entities in any of the other datasets. In this work, we create harmonized versions of German medical corpora using the BigBIO framework, and make them available to the community. Using these as a meta-dataset, we perform a series of cross-corpus evaluation experiments on two settings of aligned labels. These consist in fine-tuning various pre-trained Transformers on different combinations of training sets, and testing them against each dataset separately. We find that a) trained NER models generalize poorly, with F1 scores dropping approx. 20 pp. on unseen test data, and b) current pre-trained Transformer models for the German language do not systematically alleviate this issue. However, our results suggest that models benefit from additional training corpora in most cases, even if these belong to different medical fields or text genres.
|
[
"Llorca, Ignacio",
"Borchert, Florian",
"Schapranow, Matthieu-P."
] |
A Meta-dataset of German Medical Corpora: Harmonization of Annotations and Cross-corpus NER Evaluation
|
clinicalnlp-1.23
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.24.bib
|
https://aclanthology.org/2023.clinicalnlp-1.24/
|
@inproceedings{sanguedolce-etal-2023-uncovering,
title = "Uncovering the Potential for a Weakly Supervised End-to-End Model in Recognising Speech from Patient with Post-Stroke Aphasia",
author = "Sanguedolce, Giulia and
Naylor, Patrick A. and
Geranmayeh, Fatemeh",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.24",
doi = "10.18653/v1/2023.clinicalnlp-1.24",
pages = "182--190",
abstract = "Post-stroke speech and language deficits (aphasia) significantly impact patients{'} quality of life. Many with mild symptoms remain undiagnosed, and the majority do not receive the intensive doses of therapy recommended, due to healthcare costs and/or inadequate services. Automatic Speech Recognition (ASR) may help overcome these difficulties by improving diagnostic rates and providing feedback during tailored therapy. However, its performance is often unsatisfactory due to the high variability in speech errors and scarcity of training datasets. This study assessed the performance of Whisper, a recently released end-to-end model, in patients with post-stroke aphasia (PWA). We tuned its hyperparameters to achieve the lowest word error rate (WER) on aphasic speech. WER was significantly higher in PWA compared to age-matched controls (10.3{\%} vs 38.5{\%}, $p<0.001$). We demonstrated that worse WER was related to the more severe aphasia as measured by expressive (overt naming, and spontaneous speech production) and receptive (written and spoken comprehension) language assessments. Stroke lesion size did not affect the performance of Whisper. Linear mixed models accounting for demographic factors, therapy duration, and time since stroke, confirmed worse Whisper performance with left hemispheric frontal lesions. We discuss the implications of these findings for how future ASR can be improved in PWA.",
}
|
Post-stroke speech and language deficits (aphasia) significantly impact patients{'} quality of life. Many with mild symptoms remain undiagnosed, and the majority do not receive the intensive doses of therapy recommended, due to healthcare costs and/or inadequate services. Automatic Speech Recognition (ASR) may help overcome these difficulties by improving diagnostic rates and providing feedback during tailored therapy. However, its performance is often unsatisfactory due to the high variability in speech errors and scarcity of training datasets. This study assessed the performance of Whisper, a recently released end-to-end model, in patients with post-stroke aphasia (PWA). We tuned its hyperparameters to achieve the lowest word error rate (WER) on aphasic speech. WER was significantly higher in PWA compared to age-matched controls (10.3{\%} vs 38.5{\%}, $p<0.001$). We demonstrated that worse WER was related to the more severe aphasia as measured by expressive (overt naming, and spontaneous speech production) and receptive (written and spoken comprehension) language assessments. Stroke lesion size did not affect the performance of Whisper. Linear mixed models accounting for demographic factors, therapy duration, and time since stroke, confirmed worse Whisper performance with left hemispheric frontal lesions. We discuss the implications of these findings for how future ASR can be improved in PWA.
|
[
"Sanguedolce, Giulia",
"Naylor, Patrick A.",
"Geranmayeh, Fatemeh"
] |
Uncovering the Potential for a Weakly Supervised End-to-End Model in Recognising Speech from Patient with Post-Stroke Aphasia
|
clinicalnlp-1.24
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.25.bib
|
https://aclanthology.org/2023.clinicalnlp-1.25/
|
@inproceedings{yao-etal-2023-textual,
title = "Textual Entailment for Temporal Dependency Graph Parsing",
author = "Yao, Jiarui and
Bethard, Steven and
Wright-Bettner, Kristin and
Goldner, Eli and
Harris, David and
Savova, Guergana",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.25",
doi = "10.18653/v1/2023.clinicalnlp-1.25",
pages = "191--199",
abstract = "We explore temporal dependency graph (TDG) parsing in the clinical domain. We leverage existing annotations on the THYME dataset to semi-automatically construct a TDG corpus. Then we propose a new natural language inference (NLI) approach to TDG parsing, and evaluate it both on general domain TDGs from wikinews and the newly constructed clinical TDG corpus. We achieve competitive performance on general domain TDGs with a much simpler model than prior work. On the clinical TDGs, our method establishes the first result of TDG parsing on clinical data with 0.79/0.88 micro/macro F1.",
}
|
We explore temporal dependency graph (TDG) parsing in the clinical domain. We leverage existing annotations on the THYME dataset to semi-automatically construct a TDG corpus. Then we propose a new natural language inference (NLI) approach to TDG parsing, and evaluate it both on general domain TDGs from wikinews and the newly constructed clinical TDG corpus. We achieve competitive performance on general domain TDGs with a much simpler model than prior work. On the clinical TDGs, our method establishes the first result of TDG parsing on clinical data with 0.79/0.88 micro/macro F1.
|
[
"Yao, Jiarui",
"Bethard, Steven",
"Wright-Bettner, Kristin",
"Goldner, Eli",
"Harris, David",
"Savova, Guergana"
] |
Textual Entailment for Temporal Dependency Graph Parsing
|
clinicalnlp-1.25
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.26.bib
|
https://aclanthology.org/2023.clinicalnlp-1.26/
|
@inproceedings{nair-etal-2023-generating,
title = "Generating medically-accurate summaries of patient-provider dialogue: A multi-stage approach using large language models",
author = "Nair, Varun and
Schumacher, Elliot and
Kannan, Anitha",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.26",
doi = "10.18653/v1/2023.clinicalnlp-1.26",
pages = "200--217",
abstract = "A medical provider{'}s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. Even minor inaccuracies in visit summaries (for example, summarizing {``}patient does not have a fever{''} when a fever is present) can be detrimental to the outcome of care for the patient. This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.",
}
|
A medical provider{'}s summary of a patient visit serves several critical purposes, including clinical decision-making, facilitating hand-offs between providers, and as a reference for the patient. An effective summary is required to be coherent and accurately capture all the medically relevant information in the dialogue, despite the complexity of patient-generated language. Even minor inaccuracies in visit summaries (for example, summarizing {``}patient does not have a fever{''} when a fever is present) can be detrimental to the outcome of care for the patient. This paper tackles the problem of medical conversation summarization by discretizing the task into several smaller dialogue-understanding tasks that are sequentially built upon. First, we identify medical entities and their affirmations within the conversation to serve as building blocks. We study dynamically constructing few-shot prompts for tasks by conditioning on relevant patient information and use GPT-3 as the backbone for our experiments. We also develop GPT-derived summarization metrics to measure performance against reference summaries quantitatively. Both our human evaluation study and metrics for medical correctness show that summaries generated using this approach are clinically accurate and outperform the baseline approach of summarizing the dialog in a zero-shot, single-prompt setting.
|
[
"Nair, Varun",
"Schumacher, Elliot",
"Kannan, Anitha"
] |
Generating medically-accurate summaries of patient-provider dialogue: A multi-stage approach using large language models
|
clinicalnlp-1.26
|
Poster
|
2305.05982
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.27.bib
|
https://aclanthology.org/2023.clinicalnlp-1.27/
|
@inproceedings{ehghaghi-etal-2023-factors,
title = "Factors Affecting the Performance of Automated Speaker Verification in {A}lzheimer{'}s Disease Clinical Trials",
author = "Ehghaghi, Malikeh and
Stanojevic, Marija and
Akram, Ali and
Novikova, Jekaterina",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.27",
doi = "10.18653/v1/2023.clinicalnlp-1.27",
pages = "218--227",
abstract = "Detecting duplicate patient participation in clinical trials is a major challenge because repeated patients can undermine the credibility and accuracy of the trial{'}s findings and result in significant health and financial risks. Developing accurate automated speaker verification (ASV) models is crucial to verify the identity of enrolled individuals and remove duplicates, but the size and quality of data influence ASV performance. However, there has been limited investigation into the factors that can affect ASV capabilities in clinical environments. In this paper, we bridge the gap by conducting analysis of how participant demographic characteristics, audio quality criteria, and severity level of Alzheimer{'}s disease (AD) impact the performance of ASV utilizing a dataset of speech recordings from 659 participants with varying levels of AD, obtained through multiple speech tasks. Our results indicate that ASV performance: 1) is slightly better on male speakers than on female speakers; 2) degrades for individuals who are above 70 years old; 3) is comparatively better for non-native English speakers than for native English speakers; 4) is negatively affected by clinician interference, noisy background, and unclear participant speech; 5) tends to decrease with an increase in the severity level of AD. Our study finds that voice biometrics raise fairness concerns as certain subgroups exhibit different ASV performances owing to their inherent voice characteristics. Moreover, the performance of ASV is influenced by the quality of speech recordings, which underscores the importance of improving the data collection settings in clinical trials.",
}
|
Detecting duplicate patient participation in clinical trials is a major challenge because repeated patients can undermine the credibility and accuracy of the trial{'}s findings and result in significant health and financial risks. Developing accurate automated speaker verification (ASV) models is crucial to verify the identity of enrolled individuals and remove duplicates, but the size and quality of data influence ASV performance. However, there has been limited investigation into the factors that can affect ASV capabilities in clinical environments. In this paper, we bridge the gap by conducting analysis of how participant demographic characteristics, audio quality criteria, and severity level of Alzheimer{'}s disease (AD) impact the performance of ASV utilizing a dataset of speech recordings from 659 participants with varying levels of AD, obtained through multiple speech tasks. Our results indicate that ASV performance: 1) is slightly better on male speakers than on female speakers; 2) degrades for individuals who are above 70 years old; 3) is comparatively better for non-native English speakers than for native English speakers; 4) is negatively affected by clinician interference, noisy background, and unclear participant speech; 5) tends to decrease with an increase in the severity level of AD. Our study finds that voice biometrics raise fairness concerns as certain subgroups exhibit different ASV performances owing to their inherent voice characteristics. Moreover, the performance of ASV is influenced by the quality of speech recordings, which underscores the importance of improving the data collection settings in clinical trials.
|
[
"Ehghaghi, Malikeh",
"Stanojevic, Marija",
"Akram, Ali",
"Novikova, Jekaterina"
] |
Factors Affecting the Performance of Automated Speaker Verification in Alzheimer's Disease Clinical Trials
|
clinicalnlp-1.27
|
Poster
|
2306.12444
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.28.bib
|
https://aclanthology.org/2023.clinicalnlp-1.28/
|
@inproceedings{sharma-etal-2023-team,
title = "Team Cadence at {MEDIQA}-Chat 2023: Generating, augmenting and summarizing clinical dialogue with large language models",
author = "Sharma, Ashwyn and
Feldman, David and
Jain, Aneesh",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.28",
doi = "10.18653/v1/2023.clinicalnlp-1.28",
pages = "228--235",
abstract = "This paper describes Team Cadence{'}s winning submission to Task C of the MEDIQA-Chat 2023 shared tasks. We also present the set of methods, including a novel N-pass strategy to summarize a mix of clinical dialogue and an incomplete summarized note, used to complete Task A and Task B, ranking highly on the leaderboard amongst stable and reproducible code submissions. The shared tasks invited participants to summarize, classify and generate patient-doctor conversations. Considering the small volume of training data available, we took a data-augmentation-first approach to the three tasks by focusing on the dialogue generation task, i.e., Task C. It proved effective in improving our models{'} performance on Task A and Task B. We also found the BART architecture to be highly versatile, as it formed the base for all our submissions. Finally, based on the results shared by the organizers, we note that Team Cadence was the only team to submit stable and reproducible runs to all three tasks.",
}
|
This paper describes Team Cadence{'}s winning submission to Task C of the MEDIQA-Chat 2023 shared tasks. We also present the set of methods, including a novel N-pass strategy to summarize a mix of clinical dialogue and an incomplete summarized note, used to complete Task A and Task B, ranking highly on the leaderboard amongst stable and reproducible code submissions. The shared tasks invited participants to summarize, classify and generate patient-doctor conversations. Considering the small volume of training data available, we took a data-augmentation-first approach to the three tasks by focusing on the dialogue generation task, i.e., Task C. It proved effective in improving our models{'} performance on Task A and Task B. We also found the BART architecture to be highly versatile, as it formed the base for all our submissions. Finally, based on the results shared by the organizers, we note that Team Cadence was the only team to submit stable and reproducible runs to all three tasks.
|
[
"Sharma, Ashwyn",
"Feldman, David",
"Jain, Aneesh"
] |
Team Cadence at MEDIQA-Chat 2023: Generating, augmenting and summarizing clinical dialogue with large language models
|
clinicalnlp-1.28
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.29.bib
|
https://aclanthology.org/2023.clinicalnlp-1.29/
|
@inproceedings{yan-etal-2023-method,
title = "Method for Designing Semantic Annotation of Sepsis Signs in Clinical Text",
author = "Yan, Melissa and
Gustad, Lise and
H{\o}vik, Lise and
Nytr{\o}, {\O}ystein",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.29",
doi = "10.18653/v1/2023.clinicalnlp-1.29",
pages = "236--246",
abstract = "Annotated clinical text corpora are essential for machine learning studies that model and predict care processes and disease progression. However, few studies describe the necessary experimental design of the annotation guideline and annotation phases. This makes replication, reuse, and adoption challenging. Using clinical questions about sepsis, we designed a semantic annotation guideline to capture sepsis signs from clinical text. The clinical questions aid guideline design, application, and evaluation. Our method incrementally evaluates each change in the guideline by testing the resulting annotated corpus using clinical questions. Additionally, our method uses inter-annotator agreement to judge the annotator compliance and quality of the guideline. We show that the method, combined with controlled design increments, is simple and allows the development and measurable improvement of a purpose-built semantic annotation guideline. We believe that our approach is useful for incremental design of semantic annotation guidelines in general.",
}
|
Annotated clinical text corpora are essential for machine learning studies that model and predict care processes and disease progression. However, few studies describe the necessary experimental design of the annotation guideline and annotation phases. This makes replication, reuse, and adoption challenging. Using clinical questions about sepsis, we designed a semantic annotation guideline to capture sepsis signs from clinical text. The clinical questions aid guideline design, application, and evaluation. Our method incrementally evaluates each change in the guideline by testing the resulting annotated corpus using clinical questions. Additionally, our method uses inter-annotator agreement to judge the annotator compliance and quality of the guideline. We show that the method, combined with controlled design increments, is simple and allows the development and measurable improvement of a purpose-built semantic annotation guideline. We believe that our approach is useful for incremental design of semantic annotation guidelines in general.
|
[
"Yan, Melissa",
"Gustad, Lise",
"H{\\o}vik, Lise",
"Nytr{\\o}, {\\O}ystein"
] |
Method for Designing Semantic Annotation of Sepsis Signs in Clinical Text
|
clinicalnlp-1.29
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.30.bib
|
https://aclanthology.org/2023.clinicalnlp-1.30/
|
@inproceedings{lu-etal-2023-prompt,
title = "Prompt Discriminative Language Models for Domain Adaptation",
author = "Lu, Keming and
Potash, Peter and
Lin, Xihui and
Sun, Yuwen and
Qian, Zihan and
Yuan, Zheng and
Naumann, Tristan and
Cai, Tianxi and
Lu, Junwei",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.30",
doi = "10.18653/v1/2023.clinicalnlp-1.30",
pages = "247--258",
    abstract = "Prompt tuning offers an efficient approach to domain adaptation for pretrained language models, which predominantly focus on masked language modeling or generative objectives. However, the potential of discriminative language models in biomedical tasks remains underexplored. To bridge this gap, we develop BioDLM, a method tailored for biomedical domain adaptation of discriminative language models that incorporates prompt-based continual pretraining and prompt tuning for downstream tasks. BioDLM aims to maximize the potential of discriminative language models in low-resource scenarios by reformulating these tasks as span-level corruption detection, thereby enhancing performance on domain-specific tasks and improving the efficiency of continual pretraining. In this way, BioDLM provides a data-efficient domain adaptation method for discriminative language models, effectively enhancing performance on discriminative tasks within the biomedical domain.",
}
|
Prompt tuning offers an efficient approach to domain adaptation for pretrained language models, which predominantly focus on masked language modeling or generative objectives. However, the potential of discriminative language models in biomedical tasks remains underexplored. To bridge this gap, we develop BioDLM, a method tailored for biomedical domain adaptation of discriminative language models that incorporates prompt-based continual pretraining and prompt tuning for downstream tasks. BioDLM aims to maximize the potential of discriminative language models in low-resource scenarios by reformulating these tasks as span-level corruption detection, thereby enhancing performance on domain-specific tasks and improving the efficiency of continual pretraining. In this way, BioDLM provides a data-efficient domain adaptation method for discriminative language models, effectively enhancing performance on discriminative tasks within the biomedical domain.
|
[
"Lu, Keming",
"Potash, Peter",
"Lin, Xihui",
"Sun, Yuwen",
"Qian, Zihan",
"Yuan, Zheng",
"Naumann, Tristan",
"Cai, Tianxi",
"Lu, Junwei"
] |
Prompt Discriminative Language Models for Domain Adaptation
|
clinicalnlp-1.30
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.31.bib
|
https://aclanthology.org/2023.clinicalnlp-1.31/
|
@inproceedings{liang-etal-2023-cross,
title = "Cross-domain {G}erman Medical Named Entity Recognition using a Pre-Trained Language Model and Unified Medical Semantic Types",
author = "Liang, Siting and
Hartmann, Mareike and
Sonntag, Daniel",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.31",
doi = "10.18653/v1/2023.clinicalnlp-1.31",
pages = "259--271",
    abstract = "Information extraction from clinical text has the potential to facilitate clinical research and personalized clinical care, but annotating large amounts of data for each set of target tasks is prohibitive. We present a German medical Named Entity Recognition (NER) system capable of cross-domain knowledge transfer. The system builds on a pre-trained German language model and a token-level binary classifier, employing semantic types sourced from the Unified Medical Language System (UMLS) as entity labels to identify corresponding entity spans within the input text. To enhance the system{'}s performance and robustness, we pre-train it using a medical literature corpus that incorporates UMLS semantic term annotations. We evaluate the system{'}s effectiveness on two German annotated datasets obtained from different clinics in zero- and few-shot settings. The results show that our approach outperforms task-specific Conditional Random Fields (CRF) classifiers in terms of accuracy. Our work contributes to developing robust and transparent German medical NER models that can support the extraction of information from various clinical texts.",
}
|
Information extraction from clinical text has the potential to facilitate clinical research and personalized clinical care, but annotating large amounts of data for each set of target tasks is prohibitive. We present a German medical Named Entity Recognition (NER) system capable of cross-domain knowledge transfer. The system builds on a pre-trained German language model and a token-level binary classifier, employing semantic types sourced from the Unified Medical Language System (UMLS) as entity labels to identify corresponding entity spans within the input text. To enhance the system{'}s performance and robustness, we pre-train it using a medical literature corpus that incorporates UMLS semantic term annotations. We evaluate the system{'}s effectiveness on two German annotated datasets obtained from different clinics in zero- and few-shot settings. The results show that our approach outperforms task-specific Conditional Random Fields (CRF) classifiers in terms of accuracy. Our work contributes to developing robust and transparent German medical NER models that can support the extraction of information from various clinical texts.
|
[
"Liang, Siting",
"Hartmann, Mareike",
"Sonntag, Daniel"
] |
Cross-domain German Medical Named Entity Recognition using a Pre-Trained Language Model and Unified Medical Semantic Types
|
clinicalnlp-1.31
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.32.bib
|
https://aclanthology.org/2023.clinicalnlp-1.32/
|
@inproceedings{naseem-etal-2023-reducing,
title = "Reducing Knowledge Noise for Improved Semantic Analysis in Biomedical Natural Language Processing Applications",
author = "Naseem, Usman and
Thapa, Surendrabikram and
Zhang, Qi and
Hu, Liang and
Masood, Anum and
Nasim, Mehwish",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.32",
doi = "10.18653/v1/2023.clinicalnlp-1.32",
pages = "272--277",
    abstract = "Graph-based techniques have gained traction for representing and analyzing data in various natural language processing (NLP) tasks. Knowledge graph-based language representation models have shown promising results in leveraging domain-specific knowledge for NLP tasks, particularly in the biomedical NLP field. However, such models have limitations, including knowledge noise and neglect of contextual relationships, leading to potential semantic errors and reduced accuracy. To address these issues, this paper proposes two novel methods. The first method combines a knowledge graph-based language model with nearest-neighbor models to incorporate semantic and category information from neighboring instances. The second method involves integrating a knowledge graph-based language model with graph neural networks (GNNs) to leverage feature information from neighboring nodes in the graph. Experiments on relation extraction (RE) and classification tasks in English and Chinese language datasets demonstrate significant performance improvements with both methods, highlighting their potential for enhancing the performance of language models and improving NLP applications in the biomedical domain.",
}
|
Graph-based techniques have gained traction for representing and analyzing data in various natural language processing (NLP) tasks. Knowledge graph-based language representation models have shown promising results in leveraging domain-specific knowledge for NLP tasks, particularly in the biomedical NLP field. However, such models have limitations, including knowledge noise and neglect of contextual relationships, leading to potential semantic errors and reduced accuracy. To address these issues, this paper proposes two novel methods. The first method combines a knowledge graph-based language model with nearest-neighbor models to incorporate semantic and category information from neighboring instances. The second method involves integrating a knowledge graph-based language model with graph neural networks (GNNs) to leverage feature information from neighboring nodes in the graph. Experiments on relation extraction (RE) and classification tasks in English and Chinese language datasets demonstrate significant performance improvements with both methods, highlighting their potential for enhancing the performance of language models and improving NLP applications in the biomedical domain.
|
[
"Naseem, Usman",
"Thapa, Surendrabikram",
"Zhang, Qi",
"Hu, Liang",
"Masood, Anum",
"Nasim, Mehwish"
] |
Reducing Knowledge Noise for Improved Semantic Analysis in Biomedical Natural Language Processing Applications
|
clinicalnlp-1.32
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.33.bib
|
https://aclanthology.org/2023.clinicalnlp-1.33/
|
@inproceedings{lu-etal-2023-medical,
title = "Medical knowledge-enhanced prompt learning for diagnosis classification from clinical text",
author = "Lu, Yuxing and
Zhao, Xukai and
Wang, Jinzhuo",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.33",
doi = "10.18653/v1/2023.clinicalnlp-1.33",
pages = "278--288",
    abstract = "Artificial intelligence-based diagnosis systems have emerged as powerful tools to reform traditional medical care. Each clinician now wants to have an intelligent diagnostic partner to expand the range of services they can provide. When reading a clinical note, experts make inferences with relevant knowledge. However, medical knowledge appears to be heterogeneous, including structured and unstructured knowledge. Existing approaches are incapable of unifying them well. Besides, the descriptions of clinical findings in clinical notes, which are reasoned over to reach a diagnosis, vary a lot for different diseases or patients. To address these problems, we propose a Medical Knowledge-enhanced Prompt Learning (MedKPL) model for diagnosis classification. First, to overcome the heterogeneity of knowledge, given the knowledge relevant to diagnosis, MedKPL extracts and normalizes the relevant knowledge into a prompt sequence. Then, MedKPL integrates the knowledge prompt with the clinical note into a designed prompt for representation. Therefore, MedKPL can integrate medical knowledge into the models to enhance diagnosis and effectively transfer learned diagnosis capacity to unseen diseases using alternating relevant disease knowledge. The experimental results on two medical datasets show that our method can obtain better medical text classification results and can perform better in transfer and few-shot settings among datasets of different diseases.",
}
|
Artificial intelligence-based diagnosis systems have emerged as powerful tools to reform traditional medical care. Each clinician now wants to have an intelligent diagnostic partner to expand the range of services they can provide. When reading a clinical note, experts make inferences with relevant knowledge. However, medical knowledge appears to be heterogeneous, including structured and unstructured knowledge. Existing approaches are incapable of unifying them well. Besides, the descriptions of clinical findings in clinical notes, which are reasoned over to reach a diagnosis, vary a lot for different diseases or patients. To address these problems, we propose a Medical Knowledge-enhanced Prompt Learning (MedKPL) model for diagnosis classification. First, to overcome the heterogeneity of knowledge, given the knowledge relevant to diagnosis, MedKPL extracts and normalizes the relevant knowledge into a prompt sequence. Then, MedKPL integrates the knowledge prompt with the clinical note into a designed prompt for representation. Therefore, MedKPL can integrate medical knowledge into the models to enhance diagnosis and effectively transfer learned diagnosis capacity to unseen diseases using alternating relevant disease knowledge. The experimental results on two medical datasets show that our method can obtain better medical text classification results and can perform better in transfer and few-shot settings among datasets of different diseases.
|
[
"Lu, Yuxing",
"Zhao, Xukai",
"Wang, Jinzhuo"
] |
Medical knowledge-enhanced prompt learning for diagnosis classification from clinical text
|
clinicalnlp-1.33
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.34.bib
|
https://aclanthology.org/2023.clinicalnlp-1.34/
|
@inproceedings{gaschi-etal-2023-multilingual,
title = "Multilingual Clinical {NER}: Translation or Cross-lingual Transfer?",
author = "Gaschi, F{\'e}lix and
Fontaine, Xavier and
Rastin, Parisa and
Toussaint, Yannick",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.34",
doi = "10.18653/v1/2023.clinicalnlp-1.34",
pages = "289--311",
    abstract = "Natural language tasks like Named Entity Recognition (NER) in the clinical domain on non-English texts can be very time-consuming and expensive due to the lack of annotated data. Cross-lingual transfer (CLT) is a way to circumvent this issue thanks to the ability of multilingual large language models to be fine-tuned on a specific task in one language and to provide high accuracy for the same task in another language. However, other methods leveraging translation models can be used to perform NER without annotated data in the target language, by either translating the training set or test set. This paper compares cross-lingual transfer with these two alternative methods, to perform clinical NER in French and in German without any training data in those languages. To this end, we release MedNERF, a medical NER test set extracted from French drug prescriptions and annotated with the same guidelines as an English dataset. Through extensive experiments on this dataset and on a German medical dataset (Frei and Kramer, 2021), we show that translation-based methods can achieve similar performance to CLT but require more care in their design. And while they can take advantage of monolingual clinical language models, those do not guarantee better results than large general-purpose multilingual models, whether with cross-lingual transfer or translation.",
}
|
Natural language tasks like Named Entity Recognition (NER) in the clinical domain on non-English texts can be very time-consuming and expensive due to the lack of annotated data. Cross-lingual transfer (CLT) is a way to circumvent this issue thanks to the ability of multilingual large language models to be fine-tuned on a specific task in one language and to provide high accuracy for the same task in another language. However, other methods leveraging translation models can be used to perform NER without annotated data in the target language, by either translating the training set or test set. This paper compares cross-lingual transfer with these two alternative methods, to perform clinical NER in French and in German without any training data in those languages. To this end, we release MedNERF, a medical NER test set extracted from French drug prescriptions and annotated with the same guidelines as an English dataset. Through extensive experiments on this dataset and on a German medical dataset (Frei and Kramer, 2021), we show that translation-based methods can achieve similar performance to CLT but require more care in their design. And while they can take advantage of monolingual clinical language models, those do not guarantee better results than large general-purpose multilingual models, whether with cross-lingual transfer or translation.
|
[
"Gaschi, F{\\'e}lix",
"Fontaine, Xavier",
"Rastin, Parisa",
"Toussaint, Yannick"
] |
Multilingual Clinical NER: Translation or Cross-lingual Transfer?
|
clinicalnlp-1.34
|
Poster
|
2306.04384
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.35.bib
|
https://aclanthology.org/2023.clinicalnlp-1.35/
|
@inproceedings{mannion-etal-2023-umls,
title = "{UMLS}-{KGI}-{BERT}: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition",
author = "Mannion, Aidan and
Schwab, Didier and
Goeuriot, Lorraine",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.35",
doi = "10.18653/v1/2023.clinicalnlp-1.35",
pages = "312--322",
abstract = "Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.",
}
|
Pre-trained transformer language models (LMs) have in recent years become the dominant paradigm in applied NLP. These models have achieved state-of-the-art performance on tasks such as information extraction, question answering, sentiment analysis, document classification and many others. In the biomedical domain, significant progress has been made in adapting this paradigm to NLP tasks that require the integration of domain-specific knowledge as well as statistical modelling of language. In particular, research in this area has focused on the question of how best to construct LMs that take into account not only the patterns of token distribution in medical text, but also the wealth of structured information contained in terminology resources such as the UMLS. This work contributes a data-centric paradigm for enriching the language representations of biomedical transformer-encoder LMs by extracting text sequences from the UMLS. This allows for graph-based learning objectives to be combined with masked-language pre-training. Preliminary results from experiments in the extension of pre-trained LMs as well as training from scratch show that this framework improves downstream performance on multiple biomedical and clinical Named Entity Recognition (NER) tasks. All pre-trained models, data processing pipelines and evaluation scripts will be made publicly available.
|
[
"Mannion, Aidan",
"Schwab, Didier",
"Goeuriot, Lorraine"
] |
UMLS-KGI-BERT: Data-Centric Knowledge Integration in Transformers for Biomedical Entity Recognition
|
clinicalnlp-1.35
|
Poster
|
2307.11170
|
[
""
] |
https://huggingface.co/papers/2307.11170
| 0 | 0 | 0 | 4 | 1 |
[] |
[] |
[] |
https://aclanthology.org/2023.clinicalnlp-1.36.bib
|
https://aclanthology.org/2023.clinicalnlp-1.36/
|
@inproceedings{giorgi-etal-2023-wanglab,
title = "{W}ang{L}ab at {MEDIQA}-Chat 2023: Clinical Note Generation from Doctor-Patient Conversations using Large Language Models",
author = "Giorgi, John and
Toma, Augustin and
Xie, Ronald and
Chen, Sondra and
An, Kevin and
Zheng, Grace and
Wang, Bo",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.36",
doi = "10.18653/v1/2023.clinicalnlp-1.36",
pages = "323--334",
abstract = "This paper describes our submission to the MEDIQA-Chat 2023 shared task for automatic clinical note generation from doctor-patient conversations. We report results for two approaches: the first fine-tunes a pre-trained language model (PLM) on the shared task data, and the second uses few-shot in-context learning (ICL) with a large language model (LLM). Both achieve high performance as measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and first, respectively, of all submissions to the shared task. Expert human scrutiny indicates that notes generated via the ICL-based approach with GPT-4 are preferred about as often as human-written notes, making it a promising path toward automated note generation from doctor-patient conversations.",
}
|
This paper describes our submission to the MEDIQA-Chat 2023 shared task for automatic clinical note generation from doctor-patient conversations. We report results for two approaches: the first fine-tunes a pre-trained language model (PLM) on the shared task data, and the second uses few-shot in-context learning (ICL) with a large language model (LLM). Both achieve high performance as measured by automatic metrics (e.g. ROUGE, BERTScore) and ranked second and first, respectively, of all submissions to the shared task. Expert human scrutiny indicates that notes generated via the ICL-based approach with GPT-4 are preferred about as often as human-written notes, making it a promising path toward automated note generation from doctor-patient conversations.
|
[
"Giorgi, John",
"Toma, Augustin",
"Xie, Ronald",
"Chen, Sondra",
"An, Kevin",
"Zheng, Grace",
"Wang, Bo"
] |
WangLab at MEDIQA-Chat 2023: Clinical Note Generation from Doctor-Patient Conversations using Large Language Models
|
clinicalnlp-1.36
|
Poster
|
2305.02220
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.37.bib
|
https://aclanthology.org/2023.clinicalnlp-1.37/
|
@inproceedings{villena-etal-2023-automatic,
title = "Automatic Coding at Scale: Design and Deployment of a Nationwide System for Normalizing Referrals in the {C}hilean Public Healthcare System",
author = "Villena, Fabi{\'a}n and
Rojas, Mat{\'\i}as and
Arias, Felipe and
Pacheco, Jorge and
Vera, Paulina and
Dunstan, Jocelyn",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.37",
doi = "10.18653/v1/2023.clinicalnlp-1.37",
pages = "335--343",
abstract = "The disease coding task involves assigning a unique identifier from a controlled vocabulary to each disease mentioned in a clinical document. This task is relevant since it allows information extraction from unstructured data to perform, for example, epidemiological studies about the incidence and prevalence of diseases in a determined context. However, the manual coding process is subject to errors as it requires medical personnel to be competent in coding rules and terminology. In addition, this process consumes a lot of time and energy, which could be allocated to more clinically relevant tasks. These difficulties can be addressed by developing computational systems that automatically assign codes to diseases. In this way, we propose a two-step system for automatically coding diseases in referrals from the Chilean public healthcare system. Specifically, our model uses a state-of-the-art NER model for recognizing disease mentions and a search engine system based on Elasticsearch for assigning the most relevant codes associated with these disease mentions. The system{'}s performance was evaluated on referrals manually coded by clinical experts. Our system obtained a MAP score of 0.63 for the subcategory level and 0.83 for the category level, close to the best-performing models in the literature. This system could be a support tool for health professionals, optimizing the coding and management process. Finally, to guarantee reproducibility, we publicly release the code of our models and experiments.",
}
|
The disease coding task involves assigning a unique identifier from a controlled vocabulary to each disease mentioned in a clinical document. This task is relevant since it allows information extraction from unstructured data to perform, for example, epidemiological studies about the incidence and prevalence of diseases in a determined context. However, the manual coding process is subject to errors as it requires medical personnel to be competent in coding rules and terminology. In addition, this process consumes a lot of time and energy, which could be allocated to more clinically relevant tasks. These difficulties can be addressed by developing computational systems that automatically assign codes to diseases. In this way, we propose a two-step system for automatically coding diseases in referrals from the Chilean public healthcare system. Specifically, our model uses a state-of-the-art NER model for recognizing disease mentions and a search engine system based on Elasticsearch for assigning the most relevant codes associated with these disease mentions. The system{'}s performance was evaluated on referrals manually coded by clinical experts. Our system obtained a MAP score of 0.63 for the subcategory level and 0.83 for the category level, close to the best-performing models in the literature. This system could be a support tool for health professionals, optimizing the coding and management process. Finally, to guarantee reproducibility, we publicly release the code of our models and experiments.
|
[
"Villena, Fabi{\\'a}n",
"Rojas, Mat{\\'\\i}as",
"Arias, Felipe",
"Pacheco, Jorge",
"Vera, Paulina",
"Dunstan, Jocelyn"
] |
Automatic Coding at Scale: Design and Deployment of a Nationwide System for Normalizing Referrals in the Chilean Public Healthcare System
|
clinicalnlp-1.37
|
Poster
|
2307.05560
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.38.bib
|
https://aclanthology.org/2023.clinicalnlp-1.38/
|
@inproceedings{zhou-etal-2023-building,
title = "Building blocks for complex tasks: Robust generative event extraction for radiology reports under domain shifts",
author = "Zhou, Sitong and
Yetisgen, Meliha and
Ostendorf, Mari",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.38",
doi = "10.18653/v1/2023.clinicalnlp-1.38",
pages = "344--357",
abstract = "This paper explores methods for extracting information from radiology reports that generalize across exam modalities to reduce requirements for annotated data. We demonstrate that multi-pass T5-based text-to-text generative models exhibit better generalization across exam modalities compared to approaches that employ BERT-based task-specific classification layers. We then develop methods that reduce the inference cost of the model, making large-scale corpus processing more feasible for clinical applications. Specifically, we introduce a generative technique that decomposes complex tasks into smaller subtask blocks, which improves a single-pass model when combined with multitask training. In addition, we leverage target-domain contexts during inference to enhance domain adaptation, enabling use of smaller models. Analyses offer insights into the benefits of different cost reduction strategies.",
}
|
This paper explores methods for extracting information from radiology reports that generalize across exam modalities to reduce requirements for annotated data. We demonstrate that multi-pass T5-based text-to-text generative models exhibit better generalization across exam modalities compared to approaches that employ BERT-based task-specific classification layers. We then develop methods that reduce the inference cost of the model, making large-scale corpus processing more feasible for clinical applications. Specifically, we introduce a generative technique that decomposes complex tasks into smaller subtask blocks, which improves a single-pass model when combined with multitask training. In addition, we leverage target-domain contexts during inference to enhance domain adaptation, enabling use of smaller models. Analyses offer insights into the benefits of different cost reduction strategies.
|
[
"Zhou, Sitong",
"Yetisgen, Meliha",
"Ostendorf, Mari"
] |
Building blocks for complex tasks: Robust generative event extraction for radiology reports under domain shifts
|
clinicalnlp-1.38
|
Poster
|
2306.09544
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.39.bib
|
https://aclanthology.org/2023.clinicalnlp-1.39/
|
@inproceedings{andrews-etal-2023-intersectionality,
title = "Intersectionality and Testimonial Injustice in Medical Records",
author = "Andrews, Kenya and
Shah, Bhuvni and
Cheng, Lu",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.39",
doi = "10.18653/v1/2023.clinicalnlp-1.39",
pages = "358--372",
abstract = "Detecting testimonial injustice is an essential element of addressing inequities and promoting inclusive healthcare practices, many of which are life-critical. However, using a single demographic factor to detect testimonial injustice does not fully encompass the nuanced identities that contribute to a patient{'}s experience. Further, some injustices may only be evident when examining the nuances that arise through the lens of intersectionality. Ignoring such injustices can result in poor quality of care or life-endangering events. Thus, considering intersectionality could result in more accurate classifications and just decisions. To illustrate this, we use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice, employ fairness metrics (e.g. demographic parity, differential intersectional fairness, and subgroup fairness) to assess the severity to which subgroups are experiencing testimonial injustice, and analyze how the intersectionality of demographic features (e.g. gender and race) make a difference in uncovering testimonial injustice. From our analysis we found that with intersectionality we can better see disparities in how subgroups are treated and there are differences in how someone is treated based on the intersection of their demographic attributes. This has not been previously studied in clinical records, nor has it been proven through empirical study.",
}
|
Detecting testimonial injustice is an essential element of addressing inequities and promoting inclusive healthcare practices, many of which are life-critical. However, using a single demographic factor to detect testimonial injustice does not fully encompass the nuanced identities that contribute to a patient{'}s experience. Further, some injustices may only be evident when examining the nuances that arise through the lens of intersectionality. Ignoring such injustices can result in poor quality of care or life-endangering events. Thus, considering intersectionality could result in more accurate classifications and just decisions. To illustrate this, we use real-world medical data to determine whether medical records exhibit words that could lead to testimonial injustice, employ fairness metrics (e.g. demographic parity, differential intersectional fairness, and subgroup fairness) to assess the severity to which subgroups are experiencing testimonial injustice, and analyze how the intersectionality of demographic features (e.g. gender and race) make a difference in uncovering testimonial injustice. From our analysis we found that with intersectionality we can better see disparities in how subgroups are treated and there are differences in how someone is treated based on the intersection of their demographic attributes. This has not been previously studied in clinical records, nor has it been proven through empirical study.
|
[
"Andrews, Kenya",
"Shah, Bhuvni",
"Cheng, Lu"
] |
Intersectionality and Testimonial Injustice in Medical Records
|
clinicalnlp-1.39
|
Poster
|
2306.13675
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.40.bib
|
https://aclanthology.org/2023.clinicalnlp-1.40/
|
@inproceedings{blankemeier-etal-2023-interactive,
title = "Interactive Span Recommendation for Biomedical Text",
author = "Blankemeier, Louis and
Zhao, Theodore and
Tinn, Robert and
Kiblawi, Sid and
Gu, Yu and
Chaudhari, Akshay and
Poon, Hoifung and
Zhang, Sheng and
Wei, Mu and
Preston, J.",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.40",
doi = "10.18653/v1/2023.clinicalnlp-1.40",
pages = "373--384",
abstract = "Motivated by the scarcity of high-quality labeled biomedical text, as well as the success of data programming, we introduce KRISS-Search. By leveraging the Unified Medical Language System (UMLS) ontology, KRISS-Search addresses an interactive few-shot span recommendation task that we propose. We first introduce unsupervised KRISS-Search and show that our method outperforms existing methods in identifying spans that are semantically similar to a given span of interest, with {\textgreater}50{\%} AUPRC improvement relative to PubMedBERT. We then introduce supervised KRISS-Search, which leverages human interaction to improve the notion of similarity used by unsupervised KRISS-Search. Through simulated human feedback, we demonstrate an enhanced F1 score of 0.68 in classifying spans as semantically similar or different in the low-label setting, outperforming PubMedBERT by 2 F1 points. Finally, supervised KRISS-Search demonstrates competitive or superior performance compared to PubMedBERT in few-shot biomedical named entity recognition (NER) across five benchmark datasets, with an average improvement of 5.6 F1 points. We envision KRISS-Search increasing the efficiency of programmatic data labeling and also providing broader utility as an interactive biomedical search engine.",
}
|
Motivated by the scarcity of high-quality labeled biomedical text, as well as the success of data programming, we introduce KRISS-Search. By leveraging the Unified Medical Language System (UMLS) ontology, KRISS-Search addresses an interactive few-shot span recommendation task that we propose. We first introduce unsupervised KRISS-Search and show that our method outperforms existing methods in identifying spans that are semantically similar to a given span of interest, with {\textgreater}50{\%} AUPRC improvement relative to PubMedBERT. We then introduce supervised KRISS-Search, which leverages human interaction to improve the notion of similarity used by unsupervised KRISS-Search. Through simulated human feedback, we demonstrate an enhanced F1 score of 0.68 in classifying spans as semantically similar or different in the low-label setting, outperforming PubMedBERT by 2 F1 points. Finally, supervised KRISS-Search demonstrates competitive or superior performance compared to PubMedBERT in few-shot biomedical named entity recognition (NER) across five benchmark datasets, with an average improvement of 5.6 F1 points. We envision KRISS-Search increasing the efficiency of programmatic data labeling and also providing broader utility as an interactive biomedical search engine.
|
[
"Blankemeier, Louis",
"Zhao, Theodore",
"Tinn, Robert",
"Kiblawi, Sid",
"Gu, Yu",
"Chaudhari, Akshay",
"Poon, Hoifung",
"Zhang, Sheng",
"Wei, Mu",
"Preston, J."
] |
Interactive Span Recommendation for Biomedical Text
|
clinicalnlp-1.40
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.41.bib
|
https://aclanthology.org/2023.clinicalnlp-1.41/
|
@inproceedings{ramachandran-etal-2023-prompt,
title = "Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning",
author = "Ramachandran, Giridhar Kaushik and
Fu, Yujuan and
Han, Bin and
Lybarger, Kevin and
Dobbins, Nic and
Uzuner, Ozlem and
Yetisgen, Meliha",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.41",
doi = "10.18653/v1/2023.clinicalnlp-1.41",
pages = "385--393",
abstract = "Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.",
}
|
Social determinants of health (SDOH) documented in the electronic health record through unstructured text are increasingly being studied to understand how SDOH impacts patient health outcomes. In this work, we utilize the Social History Annotation Corpus (SHAC), a multi-institutional corpus of de-identified social history sections annotated for SDOH, including substance use, employment, and living status information. We explore the automatic extraction of SDOH information with SHAC in both standoff and inline annotation formats using GPT-4 in a one-shot prompting setting. We compare GPT-4 extraction performance with a high-performing supervised approach and perform thorough error analyses. Our prompt-based GPT-4 method achieved an overall 0.652 F1 on the SHAC test set, similar to the 7th best-performing system among all teams in the n2c2 challenge with SHAC.
|
[
"Ramachandran, Giridhar Kaushik",
"Fu, Yujuan",
"Han, Bin",
"Lybarger, Kevin",
"Dobbins, Nic",
"Uzuner, Ozlem",
"Yetisgen, Meliha"
] |
Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning
|
clinicalnlp-1.41
|
Poster
|
2306.07170
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.42.bib
|
https://aclanthology.org/2023.clinicalnlp-1.42/
|
@inproceedings{jeong-etal-2023-teddysum,
title = "Teddysum at {MEDIQA}-Chat 2023: an analysis of fine-tuning strategy for long dialog summarization",
author = "Jeong, Yongbin and
Han, Ju-Hyuck and
Chae, Kyung Min and
Cho, Yousang and
Seo, Hyunbin and
Lim, KyungTae and
Choi, Key-Sun and
Hahm, Younggyun",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.42",
doi = "10.18653/v1/2023.clinicalnlp-1.42",
pages = "394--402",
abstract = "In this paper, we introduce the design and various attempts for TaskB of MEDIQA-Chat 2023. The goal of TaskB in MEDIQA-Chat 2023 is to generate a full clinical note from doctor-patient consultation dialogues. This task has several challenging issues, such as lack of training data, handling long dialogue inputs, and generating semi-structured clinical notes which have section heads. To address these issues, we conducted various experiments and analyzed their results. We utilized the DialogLED model pre-trained on long dialogue data to handle long inputs, and we pre-trained on other dialogue datasets to address the lack of training data. We also attempted methods such as using prompts and contrastive learning for handling sections. This paper provides insights into clinical note generation through analyzing experimental methods and results, and it suggests future research directions.",
}
|
In this paper, we introduce the design and various attempts for TaskB of MEDIQA-Chat 2023. The goal of TaskB in MEDIQA-Chat 2023 is to generate a full clinical note from doctor-patient consultation dialogues. This task has several challenging issues, such as lack of training data, handling long dialogue inputs, and generating semi-structured clinical notes which have section heads. To address these issues, we conducted various experiments and analyzed their results. We utilized the DialogLED model pre-trained on long dialogue data to handle long inputs, and we pre-trained on other dialogue datasets to address the lack of training data. We also attempted methods such as using prompts and contrastive learning for handling sections. This paper provides insights into clinical note generation through analyzing experimental methods and results, and it suggests future research directions.
|
[
"Jeong, Yongbin",
"Han, Ju-Hyuck",
"Chae, Kyung Min",
"Cho, Yousang",
"Seo, Hyunbin",
"Lim, KyungTae",
"Choi, Key-Sun",
"Hahm, Younggyun"
] |
Teddysum at MEDIQA-Chat 2023: an analysis of fine-tuning strategy for long dialog summarization
|
clinicalnlp-1.42
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.43.bib
|
https://aclanthology.org/2023.clinicalnlp-1.43/
|
@inproceedings{chen-etal-2023-rare,
title = "Rare Codes Count: Mining Inter-code Relations for Long-tail Clinical Text Classification",
author = "Chen, Jiamin and
Li, Xuhong and
Xi, Junting and
Yu, Lei and
Xiong, Haoyi",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.43",
doi = "10.18653/v1/2023.clinicalnlp-1.43",
pages = "403--413",
abstract = "Multi-label clinical text classification, such as automatic ICD coding, has always been a challenging subject in Natural Language Processing, due to its long, domain-specific documents and long-tail distribution over a large label set. Existing methods adopt different model architectures to encode the clinical notes. Whereas without digging out the useful connections between labels, the model presents a huge gap in predicting performances between rare and frequent codes. In this work, we propose a novel method for further mining the helpful relations between different codes via a relation-enhanced code encoder to improve the rare code performance. Starting from the simple code descriptions, the model reaches comparable, even better performances than models with heavy external knowledge. Our proposed method is evaluated on MIMIC-III, a common dataset in the medical domain. It outperforms the previous state-of-the-art models on both overall metrics and rare code performances. Moreover, the interpretation results further prove the effectiveness of our methods. Our code is publicly available at \url{https://github.com/jiaminchen-1031/Rare-ICD}.",
}
|
Multi-label clinical text classification, such as automatic ICD coding, has always been a challenging subject in Natural Language Processing, due to its long, domain-specific documents and long-tail distribution over a large label set. Existing methods adopt different model architectures to encode the clinical notes. Whereas without digging out the useful connections between labels, the model presents a huge gap in predicting performances between rare and frequent codes. In this work, we propose a novel method for further mining the helpful relations between different codes via a relation-enhanced code encoder to improve the rare code performance. Starting from the simple code descriptions, the model reaches comparable, even better performances than models with heavy external knowledge. Our proposed method is evaluated on MIMIC-III, a common dataset in the medical domain. It outperforms the previous state-of-the-art models on both overall metrics and rare code performances. Moreover, the interpretation results further prove the effectiveness of our methods. Our code is publicly available at \url{https://github.com/jiaminchen-1031/Rare-ICD}.
|
[
"Chen, Jiamin",
"Li, Xuhong",
"Xi, Junting",
"Yu, Lei",
"Xiong, Haoyi"
] |
Rare Codes Count: Mining Inter-code Relations for Long-tail Clinical Text Classification
|
clinicalnlp-1.43
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.44.bib
|
https://aclanthology.org/2023.clinicalnlp-1.44/
|
@inproceedings{mishra-desetty-2023-newagehealthwarriors,
title = "{N}ew{A}ge{H}ealth{W}arriors at {MEDIQA}-Chat 2023 Task A: Summarizing Short Medical Conversation with Transformers",
author = "Mishra, Prakhar and
Desetty, Ravi Theja",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.44",
doi = "10.18653/v1/2023.clinicalnlp-1.44",
pages = "414--421",
abstract = "This paper presents the MEDIQA-Chat 2023 shared task organized at the ACL-Clinical NLP workshop. The shared task is motivated by the need to develop methods to automatically generate clinical notes from doctor-patient conversations. In this paper, we present our submission for \textit{MEDIQA-Chat 2023 Task A: Short Dialogue2Note Summarization}. Manual creation of these clinical notes requires extensive human efforts, thus making it a time-consuming and expensive process. To address this, we propose an ensemble-based method over GPT-3, BART, BERT variants, and Rule-based systems to automatically generate clinical notes from these conversations. The proposed system achieves a score of 0.730 and 0.544 for both the sub-tasks on the test set (ranking 8th on the leaderboard for both tasks) and shows better performance compared to a baseline system using BART variants.",
}
|
This paper presents the MEDIQA-Chat 2023 shared task organized at the ACL-Clinical NLP workshop. The shared task is motivated by the need to develop methods to automatically generate clinical notes from doctor-patient conversations. In this paper, we present our submission for \textit{MEDIQA-Chat 2023 Task A: Short Dialogue2Note Summarization}. Manual creation of these clinical notes requires extensive human effort, making it a time-consuming and expensive process. To address this, we propose an ensemble-based method over GPT-3, BART, BERT variants, and rule-based systems to automatically generate clinical notes from these conversations. The proposed system achieves scores of 0.730 and 0.544 on the two sub-tasks on the test set (ranking 8th on the leaderboard for both tasks) and shows better performance compared to a baseline system using BART variants.
|
[
"Mishra, Prakhar",
"Desetty, Ravi Theja"
] |
NewAgeHealthWarriors at MEDIQA-Chat 2023 Task A: Summarizing Short Medical Conversation with Transformers
|
clinicalnlp-1.44
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.45.bib
|
https://aclanthology.org/2023.clinicalnlp-1.45/
|
@inproceedings{sui-etal-2023-storyline,
title = "Storyline-Centric Detection of Aphasia and Dysarthria in Stroke Patient Transcripts",
author = "Sui, Peiqi and
Wong, Kelvin and
Yu, Xiaohui and
Volpi, John and
Wong, Stephen",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.45",
doi = "10.18653/v1/2023.clinicalnlp-1.45",
pages = "422--432",
abstract = "Aphasia and dysarthria are both common symptoms of stroke, affecting around 30{\%} and 50{\%} of acute ischemic stroke patients. In this paper, we propose a storyline-centric approach to detect aphasia and dysarthria in acute stroke patients using transcribed picture descriptions alone. Our pipeline enriches the training set with healthy data to address the lack of acute stroke patient data and utilizes knowledge distillation to significantly improve upon a document classification baseline, achieving an AUC of 0.814 (aphasia) and 0.764 (dysarthria) on a patient-only validation set.",
}
|
Aphasia and dysarthria are both common symptoms of stroke, affecting around 30{\%} and 50{\%} of acute ischemic stroke patients. In this paper, we propose a storyline-centric approach to detect aphasia and dysarthria in acute stroke patients using transcribed picture descriptions alone. Our pipeline enriches the training set with healthy data to address the lack of acute stroke patient data and utilizes knowledge distillation to significantly improve upon a document classification baseline, achieving an AUC of 0.814 (aphasia) and 0.764 (dysarthria) on a patient-only validation set.
|
[
"Sui, Peiqi",
"Wong, Kelvin",
"Yu, Xiaohui",
"Volpi, John",
"Wong, Stephen"
] |
Storyline-Centric Detection of Aphasia and Dysarthria in Stroke Patient Transcripts
|
clinicalnlp-1.45
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.46.bib
|
https://aclanthology.org/2023.clinicalnlp-1.46/
|
@inproceedings{aracena-etal-2023-pre,
title = "Pre-trained language models in {S}panish for health insurance coverage",
author = "Aracena, Claudio and
Rodr{\'\i}guez, Nicol{\'a}s and
Rocco, Victor and
Dunstan, Jocelyn",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.46",
doi = "10.18653/v1/2023.clinicalnlp-1.46",
pages = "433--438",
    abstract = "The field of clinical natural language processing (NLP) can extract useful information from clinical text. Since 2017, the NLP field has shifted towards using pre-trained language models (PLMs), improving performance in several tasks. Most of the research in this field has focused on English text, but there are some available PLMs in Spanish. In this work, we use clinical PLMs to analyze text from admission and medical reports in Spanish for an insurance and health provider to give a probability of no coverage in a labor insurance process. Our results show that fine-tuning a PLM pre-trained with the provider{'}s data leads to better results, but this process is time-consuming and computationally expensive. At least for this task, fine-tuning a publicly available clinical PLM leads to results comparable to a custom PLM, but in less time and with fewer resources. Analyzing large volumes of insurance requests is burdensome for employers, and models can ease this task by pre-classifying reports that are likely not to have coverage. Our approach of entirely using clinical-related text improves the current models while reinforcing the idea of clinical support systems that simplify human labor but do not replace it. To our knowledge, the clinical corpus collected for this study is the largest one reported for the Spanish language.",
}
|
The field of clinical natural language processing (NLP) can extract useful information from clinical text. Since 2017, the NLP field has shifted towards using pre-trained language models (PLMs), improving performance in several tasks. Most of the research in this field has focused on English text, but there are some available PLMs in Spanish. In this work, we use clinical PLMs to analyze text from admission and medical reports in Spanish for an insurance and health provider to give a probability of no coverage in a labor insurance process. Our results show that fine-tuning a PLM pre-trained with the provider{'}s data leads to better results, but this process is time-consuming and computationally expensive. At least for this task, fine-tuning a publicly available clinical PLM leads to results comparable to a custom PLM, but in less time and with fewer resources. Analyzing large volumes of insurance requests is burdensome for employers, and models can ease this task by pre-classifying reports that are likely not to have coverage. Our approach of entirely using clinical-related text improves the current models while reinforcing the idea of clinical support systems that simplify human labor but do not replace it. To our knowledge, the clinical corpus collected for this study is the largest one reported for the Spanish language.
|
[
"Aracena, Claudio",
"Rodr{\\'\\i}guez, Nicol{\\'a}s",
"Rocco, Victor",
"Dunstan, Jocelyn"
] |
Pre-trained language models in Spanish for health insurance coverage
|
clinicalnlp-1.46
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.47.bib
|
https://aclanthology.org/2023.clinicalnlp-1.47/
|
@inproceedings{toleubay-etal-2023-utterance,
title = "Utterance Classification with Logical Neural Network: Explainable {AI} for Mental Disorder Diagnosis",
author = "Toleubay, Yeldar and
Agravante, Don Joven and
Kimura, Daiki and
Lin, Baihan and
Bouneffouf, Djallel and
Tatsubori, Michiaki",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.47",
doi = "10.18653/v1/2023.clinicalnlp-1.47",
pages = "439--446",
    abstract = "In response to the global challenge of mental health problems, we propose a Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis of mental disorders. Due to the lack of effective therapy coverage for mental disorders, there is a need for an AI solution that can assist therapists with the diagnosis. However, current Neural Network models lack explainability and may not be trusted by therapists. The LNN is a Recurrent Neural Network architecture that combines the learning capabilities of neural networks with the reasoning capabilities of classical logic-based AI. The proposed system uses input predicates from clinical interviews to output a mental disorder class, and different predicate pruning techniques are used to achieve scalability and higher scores. In addition, we provide an insight extraction method to aid therapists with their diagnosis. The proposed system addresses the lack of explainability of current Neural Network models and provides a more trustworthy solution for mental disorder diagnosis.",
}
|
In response to the global challenge of mental health problems, we propose a Logical Neural Network (LNN) based Neuro-Symbolic AI method for the diagnosis of mental disorders. Due to the lack of effective therapy coverage for mental disorders, there is a need for an AI solution that can assist therapists with the diagnosis. However, current Neural Network models lack explainability and may not be trusted by therapists. The LNN is a Recurrent Neural Network architecture that combines the learning capabilities of neural networks with the reasoning capabilities of classical logic-based AI. The proposed system uses input predicates from clinical interviews to output a mental disorder class, and different predicate pruning techniques are used to achieve scalability and higher scores. In addition, we provide an insight extraction method to aid therapists with their diagnosis. The proposed system addresses the lack of explainability of current Neural Network models and provides a more trustworthy solution for mental disorder diagnosis.
|
[
"Toleubay, Yeldar",
"Agravante, Don Joven",
"Kimura, Daiki",
"Lin, Baihan",
"Bouneffouf, Djallel",
"Tatsubori, Michiaki"
] |
Utterance Classification with Logical Neural Network: Explainable AI for Mental Disorder Diagnosis
|
clinicalnlp-1.47
|
Poster
|
2306.03902
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.clinicalnlp-1.48.bib
|
https://aclanthology.org/2023.clinicalnlp-1.48/
|
@inproceedings{zhou-etal-2023-survey,
title = "A Survey of Evaluation Methods of Generated Medical Textual Reports",
author = "Zhou, Yongxin and
Ringeval, Fabien and
Portet, Fran{\c{c}}ois",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.48",
doi = "10.18653/v1/2023.clinicalnlp-1.48",
pages = "447--459",
    abstract = "Medical Report Generation (MRG) is a sub-task of Natural Language Generation (NLG) and aims to present information from various sources in textual form and synthesize salient information, with the goal of reducing the time spent by domain experts in writing medical reports and providing support information for decision-making. Given the specificity of the medical domain, the evaluation of automatically generated medical reports is of paramount importance to the validity of these systems. Therefore, in this paper, we focus on the evaluation of automatically generated medical reports from the perspective of automatic and human evaluation. We present evaluation methods for general NLG evaluation and how they have been applied to domain-specific medical tasks. The study shows that MRG evaluation methods are very diverse, and that further work is needed to build shared evaluation methods. The state of the art also emphasizes that such an evaluation must be task-specific and include human assessments, requiring the participation of experts in the field.",
}
|
Medical Report Generation (MRG) is a sub-task of Natural Language Generation (NLG) and aims to present information from various sources in textual form and synthesize salient information, with the goal of reducing the time spent by domain experts in writing medical reports and providing support information for decision-making. Given the specificity of the medical domain, the evaluation of automatically generated medical reports is of paramount importance to the validity of these systems. Therefore, in this paper, we focus on the evaluation of automatically generated medical reports from the perspective of automatic and human evaluation. We present evaluation methods for general NLG evaluation and how they have been applied to domain-specific medical tasks. The study shows that MRG evaluation methods are very diverse, and that further work is needed to build shared evaluation methods. The state of the art also emphasizes that such an evaluation must be task-specific and include human assessments, requiring the participation of experts in the field.
|
[
"Zhou, Yongxin",
"Ringeval, Fabien",
"Portet, Fran{\\c{c}}ois"
] |
A Survey of Evaluation Methods of Generated Medical Textual Reports
|
clinicalnlp-1.48
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.49.bib
|
https://aclanthology.org/2023.clinicalnlp-1.49/
|
@inproceedings{wang-etal-2023-umass,
title = "{UMASS}{\_}{B}io{NLP} at {MEDIQA}-Chat 2023: Can {LLM}s generate high-quality synthetic note-oriented doctor-patient conversations?",
author = "Wang, Junda and
Yao, Zonghai and
Mitra, Avijit and
Osebe, Samuel and
Yang, Zhichao and
Yu, Hong",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.49",
doi = "10.18653/v1/2023.clinicalnlp-1.49",
pages = "460--471",
    abstract = "This paper presents the UMASS{\_}BioNLP team{'}s participation in the MEDIQA-Chat 2023 shared task for Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system, named doctor-patient loop, to generate high-quality conversation datasets. The experimental results demonstrate that our approaches yield reasonable performance as evaluated by automatic metrics such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we conducted a comparative analysis between our proposed method and ChatGPT and GPT-4. This analysis also investigates the potential of utilizing cooperating LLMs to generate high-quality datasets.",
}
|
This paper presents the UMASS{\_}BioNLP team{'}s participation in the MEDIQA-Chat 2023 shared task for Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system, named doctor-patient loop, to generate high-quality conversation datasets. The experimental results demonstrate that our approaches yield reasonable performance as evaluated by automatic metrics such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we conducted a comparative analysis between our proposed method and ChatGPT and GPT-4. This analysis also investigates the potential of utilizing cooperating LLMs to generate high-quality datasets.
|
[
"Wang, Junda",
"Yao, Zonghai",
"Mitra, Avijit",
"Osebe, Samuel",
"Yang, Zhichao",
"Yu, Hong"
] |
UMASS_BioNLP at MEDIQA-Chat 2023: Can LLMs generate high-quality synthetic note-oriented doctor-patient conversations?
|
clinicalnlp-1.49
|
Poster
|
2306.16931
|
[
"https://github.com/believewhat/dr.noteaid"
] |
https://huggingface.co/papers/2306.16931
| 3 | 0 | 0 | 6 | 1 |
[] |
[] |
[] |
https://aclanthology.org/2023.clinicalnlp-1.50.bib
|
https://aclanthology.org/2023.clinicalnlp-1.50/
|
@inproceedings{suri-etal-2023-healthmavericks,
title = "{H}ealth{M}avericks@{MEDIQA}-Chat 2023: Benchmarking different Transformer based models for Clinical Dialogue Summarization",
author = "Suri, Kunal and
Saha, Saumajit and
Singh, Atul",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.50",
doi = "10.18653/v1/2023.clinicalnlp-1.50",
pages = "472--489",
    abstract = "In recent years, we have seen many Transformer based models being created to address the Dialog Summarization problem. While there has been a lot of work on understanding how these models stack against each other in summarizing regular conversations such as the ones found in the DialogSum dataset, there haven{'}t been many analyses of these models on Clinical Dialog Summarization. In this article, we describe our solution to the MEDIQA-Chat 2023 Shared Tasks as part of the ACL-ClinicalNLP 2023 workshop, which benchmarks some of the popular Transformer architectures such as BioBart, Flan-T5, DialogLED, and OpenAI GPT3 on the problem of Clinical Dialog Summarization. We analyse their performance on two tasks - summarizing short conversations and long conversations. In addition to this, we also benchmark two popular summarization ensemble methods and report their performance.",
}
|
In recent years, we have seen many Transformer based models being created to address the Dialog Summarization problem. While there has been a lot of work on understanding how these models stack against each other in summarizing regular conversations such as the ones found in the DialogSum dataset, there haven{'}t been many analyses of these models on Clinical Dialog Summarization. In this article, we describe our solution to the MEDIQA-Chat 2023 Shared Tasks as part of the ACL-ClinicalNLP 2023 workshop, which benchmarks some of the popular Transformer architectures such as BioBart, Flan-T5, DialogLED, and OpenAI GPT3 on the problem of Clinical Dialog Summarization. We analyse their performance on two tasks - summarizing short conversations and long conversations. In addition to this, we also benchmark two popular summarization ensemble methods and report their performance.
|
[
"Suri, Kunal",
"Saha, Saumajit",
"Singh, Atul"
] |
HealthMavericks@MEDIQA-Chat 2023: Benchmarking different Transformer based models for Clinical Dialogue Summarization
|
clinicalnlp-1.50
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.51.bib
|
https://aclanthology.org/2023.clinicalnlp-1.51/
|
@inproceedings{mathur-etal-2023-summqa,
title = "{S}umm{QA} at {MEDIQA}-Chat 2023: In-Context Learning with {GPT}-4 for Medical Summarization",
author = "Mathur, Yash and
Rangreji, Sanketh and
Kapoor, Raghav and
Palavalli, Medha and
Bertsch, Amanda and
Gormley, Matthew",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.51",
doi = "10.18653/v1/2023.clinicalnlp-1.51",
pages = "490--502",
    abstract = "Medical dialogue summarization is challenging due to the unstructured nature of medical conversations, the use of medical terminology in gold summaries, and the need to identify key information across multiple symptom sets. We present a novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA 2023 Shared Task. Our approach for sectionwise summarization (Task A) is a two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4. For full-note summarization (Task B), we use a similar solution with k=1. We achieved 3rd place in Task A (2nd among all teams), 4th place in Task B Division Wise Summarization (2nd among all teams), 15th place in Task A Section Header Classification (9th among all teams), and 8th place among all teams in Task B. Our results highlight the effectiveness of few-shot prompting for this task, though we also identify several weaknesses of prompting-based approaches. We compare GPT-4 performance with several finetuned baselines. We find that GPT-4 summaries are more abstractive and shorter. We make our code publicly available.",
}
|
Medical dialogue summarization is challenging due to the unstructured nature of medical conversations, the use of medical terminology in gold summaries, and the need to identify key information across multiple symptom sets. We present a novel system for the Dialogue2Note Medical Summarization tasks in the MEDIQA 2023 Shared Task. Our approach for sectionwise summarization (Task A) is a two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4. For full-note summarization (Task B), we use a similar solution with k=1. We achieved 3rd place in Task A (2nd among all teams), 4th place in Task B Division Wise Summarization (2nd among all teams), 15th place in Task A Section Header Classification (9th among all teams), and 8th place among all teams in Task B. Our results highlight the effectiveness of few-shot prompting for this task, though we also identify several weaknesses of prompting-based approaches. We compare GPT-4 performance with several finetuned baselines. We find that GPT-4 summaries are more abstractive and shorter. We make our code publicly available.
|
[
"Mathur, Yash",
"Rangreji, Sanketh",
"Kapoor, Raghav",
"Palavalli, Medha",
"Bertsch, Amanda",
"Gormley, Matthew"
] |
SummQA at MEDIQA-Chat 2023: In-Context Learning with GPT-4 for Medical Summarization
|
clinicalnlp-1.51
|
Poster
|
[
"https://github.com/raghav1606/summqa"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.52.bib
|
https://aclanthology.org/2023.clinicalnlp-1.52/
|
@inproceedings{ben-abacha-etal-2023-overview,
title = "Overview of the {MEDIQA}-Chat 2023 Shared Tasks on the Summarization {\&} Generation of Doctor-Patient Conversations",
author = "Ben Abacha, Asma and
Yim, Wen-wai and
Adams, Griffin and
Snider, Neal and
Yetisgen, Meliha",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.52",
doi = "10.18653/v1/2023.clinicalnlp-1.52",
pages = "503--513",
abstract = "Automatic generation of clinical notes from doctor-patient conversations can play a key role in reducing daily doctors{'} workload and improving their interactions with the patients. MEDIQA-Chat 2023 aims to advance and promote research on effective solutions through shared tasks on the automatic summarization of doctor-patient conversations and on the generation of synthetic dialogues from clinical notes for data augmentation. Seventeen teams participated in the challenge and experimented with a broad range of approaches and models. In this paper, we describe the three MEDIQA-Chat 2023 tasks, the datasets, and the participants{'} results and methods. We hope that these shared tasks will lead to additional research efforts and insights on the automatic generation and evaluation of clinical notes.",
}
|
Automatic generation of clinical notes from doctor-patient conversations can play a key role in reducing daily doctors{'} workload and improving their interactions with the patients. MEDIQA-Chat 2023 aims to advance and promote research on effective solutions through shared tasks on the automatic summarization of doctor-patient conversations and on the generation of synthetic dialogues from clinical notes for data augmentation. Seventeen teams participated in the challenge and experimented with a broad range of approaches and models. In this paper, we describe the three MEDIQA-Chat 2023 tasks, the datasets, and the participants{'} results and methods. We hope that these shared tasks will lead to additional research efforts and insights on the automatic generation and evaluation of clinical notes.
|
[
"Ben Abacha, Asma",
"Yim, Wen-wai",
"Adams, Griffin",
"Snider, Neal",
"Yetisgen, Meliha"
] |
Overview of the MEDIQA-Chat 2023 Shared Tasks on the Summarization & Generation of Doctor-Patient Conversations
|
clinicalnlp-1.52
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.53.bib
|
https://aclanthology.org/2023.clinicalnlp-1.53/
|
@inproceedings{sasikumar-mantri-2023-transfer,
title = "Transfer Learning for Low-Resource Clinical Named Entity Recognition",
author = "Sasikumar, Nevasini and
Mantri, Krishna Sri Ipsit",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.53",
doi = "10.18653/v1/2023.clinicalnlp-1.53",
pages = "514--518",
abstract = "We propose a transfer learning method that adapts a high-resource English clinical NER model to low-resource languages and domains using only small amounts of in-domain annotated data. Our approach involves translating in-domain datasets to English, fine-tuning the English model on the translated data, and then transferring it to the target language/domain. Experiments on Spanish, French, and conversational clinical text datasets show accuracy gains over models trained on target data alone. Our method achieves state-of-the-art performance and can enable clinical NLP in more languages and modalities with limited resources.",
}
|
We propose a transfer learning method that adapts a high-resource English clinical NER model to low-resource languages and domains using only small amounts of in-domain annotated data. Our approach involves translating in-domain datasets to English, fine-tuning the English model on the translated data, and then transferring it to the target language/domain. Experiments on Spanish, French, and conversational clinical text datasets show accuracy gains over models trained on target data alone. Our method achieves state-of-the-art performance and can enable clinical NLP in more languages and modalities with limited resources.
|
[
"Sasikumar, Nevasini",
"Mantri, Krishna Sri Ipsit"
] |
Transfer Learning for Low-Resource Clinical Named Entity Recognition
|
clinicalnlp-1.53
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.54.bib
|
https://aclanthology.org/2023.clinicalnlp-1.54/
|
@inproceedings{srivastava-2023-iuteam1,
title = "{IUTEAM}1 at {MEDIQA}-Chat 2023: Is simple fine tuning effective for multi layer summarization of clinical conversations?",
author = "Srivastava, Dhananjay",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.54",
doi = "10.18653/v1/2023.clinicalnlp-1.54",
pages = "519--523",
    abstract = "Clinical conversation summarization has become an important application of Natural Language Processing. In this work, we intend to analyze summarization model ensembling approaches that can be utilized to improve the overall accuracy of the generated medical report, called a chart note. The work starts with a single summarization model that creates the baseline, then moves to an ensemble of summarization models, each trained on a separate section of the chart note. This leads to the final approach of passing the generated results to another summarization model in a multi-layer/stage fashion for better coherency of the generated text. Our results indicate that although an ensemble of models specialized in each section produces better results, the multi-layer/stage approach does not improve accuracy. The code for the above paper is available at \url{https://github.com/dhananjay-srivastava/MEDIQA-Chat-2023-iuteam1.git}",
}
|
Clinical conversation summarization has become an important application of Natural Language Processing. In this work, we intend to analyze summarization model ensembling approaches that can be utilized to improve the overall accuracy of the generated medical report, called a chart note. The work starts with a single summarization model that creates the baseline, then moves to an ensemble of summarization models, each trained on a separate section of the chart note. This leads to the final approach of passing the generated results to another summarization model in a multi-layer/stage fashion for better coherency of the generated text. Our results indicate that although an ensemble of models specialized in each section produces better results, the multi-layer/stage approach does not improve accuracy. The code for the above paper is available at \url{https://github.com/dhananjay-srivastava/MEDIQA-Chat-2023-iuteam1.git}
|
[
"Srivastava, Dhananjay"
] |
IUTEAM1 at MEDIQA-Chat 2023: Is simple fine tuning effective for multi layer summarization of clinical conversations?
|
clinicalnlp-1.54
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.55.bib
|
https://aclanthology.org/2023.clinicalnlp-1.55/
|
@inproceedings{alqahtani-etal-2023-care4lang,
title = "{C}are4{L}ang at {MEDIQA}-Chat 2023: Fine-tuning Language Models for Classifying and Summarizing Clinical Dialogues",
author = "Alqahtani, Amal and
Salama, Rana and
Diab, Mona and
Youssef, Abdou",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.55",
doi = "10.18653/v1/2023.clinicalnlp-1.55",
pages = "524--528",
    abstract = "Summarizing medical conversations is one of the tasks proposed by MEDIQA-Chat to promote research on automatic clinical note generation from doctor-patient conversations. In this paper, we present our submission to this task using fine-tuned language models, including T5, BART and BioGPT models. The fine-tuned models are evaluated using ensemble metrics including ROUGE, BERTScore and BLEURT. Among the fine-tuned models, Flan-T5 achieved the highest aggregated score for dialogue summarization.",
}
|
Summarizing medical conversations is one of the tasks proposed by MEDIQA-Chat to promote research on automatic clinical note generation from doctor-patient conversations. In this paper, we present our submission to this task using fine-tuned language models, including T5, BART and BioGPT models. The fine-tuned models are evaluated using ensemble metrics including ROUGE, BERTScore and BLEURT. Among the fine-tuned models, Flan-T5 achieved the highest aggregated score for dialogue summarization.
|
[
"Alqahtani, Amal",
"Salama, Rana",
"Diab, Mona",
"Youssef, Abdou"
] |
Care4Lang at MEDIQA-Chat 2023: Fine-tuning Language Models for Classifying and Summarizing Clinical Dialogues
|
clinicalnlp-1.55
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.56.bib
|
https://aclanthology.org/2023.clinicalnlp-1.56/
|
@inproceedings{milintsevich-agarwal-2023-calvados,
title = "{C}alvados at {MEDIQA}-Chat 2023: Improving Clinical Note Generation with Multi-Task Instruction Finetuning",
author = "Milintsevich, Kirill and
Agarwal, Navneet",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.56",
doi = "10.18653/v1/2023.clinicalnlp-1.56",
pages = "529--535",
abstract = "This paper presents our system for the MEDIQA-Chat 2023 shared task on medical conversation summarization. Our approach involves finetuning a LongT5 model on multiple tasks simultaneously, which we demonstrate improves the model{'}s overall performance while reducing the number of factual errors and hallucinations in the generated summary. Furthermore, we investigated the effect of augmenting the data with in-text annotations from a clinical named entity recognition model, finding that this approach decreased summarization quality. Lastly, we explore using different text generation strategies for medical note generation based on the length of the note. Our findings suggest that the application of our proposed approach can be beneficial for improving the accuracy and effectiveness of medical conversation summarization.",
}
|
This paper presents our system for the MEDIQA-Chat 2023 shared task on medical conversation summarization. Our approach involves finetuning a LongT5 model on multiple tasks simultaneously, which we demonstrate improves the model's overall performance while reducing the number of factual errors and hallucinations in the generated summary. Furthermore, we investigated the effect of augmenting the data with in-text annotations from a clinical named entity recognition model, finding that this approach decreased summarization quality. Lastly, we explore using different text generation strategies for medical note generation based on the length of the note. Our findings suggest that the application of our proposed approach can be beneficial for improving the accuracy and effectiveness of medical conversation summarization.
|
[
"Milintsevich, Kirill",
"Agarwal, Navneet"
] |
Calvados at MEDIQA-Chat 2023: Improving Clinical Note Generation with Multi-Task Instruction Finetuning
|
clinicalnlp-1.56
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.57.bib
|
https://aclanthology.org/2023.clinicalnlp-1.57/
|
@inproceedings{zhang-etal-2023-ds4dh,
title = "{DS}4{DH} at {MEDIQA}-Chat 2023: Leveraging {SVM} and {GPT}-3 Prompt Engineering for Medical Dialogue Classification and Summarization",
author = "Zhang, Boya and
Mishra, Rahul and
Teodoro, Douglas",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.57",
doi = "10.18653/v1/2023.clinicalnlp-1.57",
pages = "536--545",
abstract = "This paper presents the results of the Data Science for Digital Health (DS4DH) group in the MEDIQA-Chat Tasks at ACL-ClinicalNLP 2023. Our study combines the power of a classical machine learning method, Support Vector Machine, for classifying medical dialogues, along with the implementation of one-shot prompts using GPT-3.5. We employ dialogues and summaries from the same category as prompts to generate summaries for novel dialogues. Our findings exceed the average benchmark score, offering a robust reference for assessing performance in this field.",
}
|
This paper presents the results of the Data Science for Digital Health (DS4DH) group in the MEDIQA-Chat Tasks at ACL-ClinicalNLP 2023. Our study combines the power of a classical machine learning method, Support Vector Machine, for classifying medical dialogues, along with the implementation of one-shot prompts using GPT-3.5. We employ dialogues and summaries from the same category as prompts to generate summaries for novel dialogues. Our findings exceed the average benchmark score, offering a robust reference for assessing performance in this field.
|
[
"Zhang, Boya",
"Mishra, Rahul",
"Teodoro, Douglas"
] |
DS4DH at MEDIQA-Chat 2023: Leveraging SVM and GPT-3 Prompt Engineering for Medical Dialogue Classification and Summarization
|
clinicalnlp-1.57
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.clinicalnlp-1.58.bib
|
https://aclanthology.org/2023.clinicalnlp-1.58/
|
@inproceedings{tang-etal-2023-gersteinlab,
title = "{G}erstein{L}ab at {MEDIQA}-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning",
author = "Tang, Xiangru and
Tran, Andrew and
Tan, Jeffrey and
Gerstein, Mark",
editor = "Naumann, Tristan and
Ben Abacha, Asma and
Bethard, Steven and
Roberts, Kirk and
Rumshisky, Anna",
booktitle = "Proceedings of the 5th Clinical Natural Language Processing Workshop",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.clinicalnlp-1.58",
doi = "10.18653/v1/2023.clinicalnlp-1.58",
pages = "546--554",
abstract = "This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared task, encompassing both subtask A and subtask B. We approach the task as a dialogue summarization problem and implement two distinct pipelines: (a) a fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b) few-shot in-context learning (ICL) using a large language model, GPT-4. Both methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1 (deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421, respectively. Additionally, we predict the associated section headers using RoBERTa and SciBERT based classification models. Our team ranked fourth among all teams, while each team is allowed to submit three runs as part of their submission. We also utilize expert annotations to demonstrate that the notes generated through the ICL GPT-4 are better than all other baselines. The code for our submission is available.",
}
|
This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared task, encompassing both subtask A and subtask B. We approach the task as a dialogue summarization problem and implement two distinct pipelines: (a) a fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b) few-shot in-context learning (ICL) using a large language model, GPT-4. Both methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1 (deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421, respectively. Additionally, we predict the associated section headers using RoBERTa and SciBERT based classification models. Our team ranked fourth among all teams, while each team is allowed to submit three runs as part of their submission. We also utilize expert annotations to demonstrate that the notes generated through the ICL GPT-4 are better than all other baselines. The code for our submission is available.
|
[
"Tang, Xiangru",
"Tran, Andrew",
"Tan, Jeffrey",
"Gerstein, Mark"
] |
GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from Doctor-Patient Conversations through Fine-tuning and In-context Learning
|
clinicalnlp-1.58
|
Poster
|
2305.05001
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.1.bib
|
https://aclanthology.org/2023.codi-1.1/
|
@inproceedings{schrader-etal-2023-mulms,
title = "{M}u{LMS}-{AZ}: An Argumentative Zoning Dataset for the Materials Science Domain",
author = {Schrader, Timo and
B{\"u}rkle, Teresa and
Henning, Sophie and
Tan, Sherry and
Finco, Matteo and
Gr{\"u}newald, Stefan and
Indrikova, Maira and
Hildebrand, Felix and
Friedrich, Annemarie},
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.1",
doi = "10.18653/v1/2023.codi-1.1",
pages = "1--15",
abstract = "Scientific publications follow conventionalized rhetorical structures. Classifying the Argumentative Zone (AZ), e.g., identifying whether a sentence states a Motivation, a Result or Background information, has been proposed to improve processing of scholarly documents. In this work, we adapt and extend this idea to the domain of materials science research. We present and release a new dataset of 50 manually annotated research articles. The dataset spans seven sub-topics and is annotated with a materials-science focused multi-label annotation scheme for AZ. We detail corpus statistics and demonstrate high inter-annotator agreement. Our computational experiments show that using domain-specific pre-trained transformer-based text encoders is key to high classification performance. We also find that AZ categories from existing datasets in other domains are transferable to varying degrees.",
}
|
Scientific publications follow conventionalized rhetorical structures. Classifying the Argumentative Zone (AZ), e.g., identifying whether a sentence states a Motivation, a Result or Background information, has been proposed to improve processing of scholarly documents. In this work, we adapt and extend this idea to the domain of materials science research. We present and release a new dataset of 50 manually annotated research articles. The dataset spans seven sub-topics and is annotated with a materials-science focused multi-label annotation scheme for AZ. We detail corpus statistics and demonstrate high inter-annotator agreement. Our computational experiments show that using domain-specific pre-trained transformer-based text encoders is key to high classification performance. We also find that AZ categories from existing datasets in other domains are transferable to varying degrees.
|
[
"Schrader, Timo",
"B{\\\"u}rkle, Teresa",
"Henning, Sophie",
"Tan, Sherry",
"Finco, Matteo",
"Gr{\\\"u}newald, Stefan",
"Indrikova, Maira",
"Hildebrand, Felix",
"Friedrich, Annemarie"
] |
MuLMS-AZ: An Argumentative Zoning Dataset for the Materials Science Domain
|
codi-1.1
|
Poster
|
2307.02340
|
[
"https://github.com/boschresearch/mulms-az-codi2023"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.2.bib
|
https://aclanthology.org/2023.codi-1.2/
|
@inproceedings{lee-etal-2023-side,
title = "A Side-by-side Comparison of Transformers for Implicit Discourse Relation Classification",
author = "Lee, Bruce W. and
Yang, Bongseok and
Lee, Jason",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.2",
doi = "10.18653/v1/2023.codi-1.2",
pages = "16--23",
abstract = "Though discourse parsing can help multiple NLP fields, there has been no wide language model search done on implicit discourse relation classification. This hinders researchers from fully utilizing publicly available models in discourse analysis. This work is a straightforward, fine-tuned discourse performance comparison of 7 pre-trained language models. We use PDTB-3, a popular discourse relation annotated dataset. Through our model search, we raise SOTA to 0.671 ACC and obtain novel observations. Some are contrary to what has been reported before (Shi and Demberg, 2019b), that sentence-level pre-training objectives (NSP, SBO, SOP) generally fail to produce the best-performing model for implicit discourse relation classification. Counterintuitively, similar-sized PLMs with MLM and full attention led to better performance. Our code is publicly released.",
}
|
Though discourse parsing can help multiple NLP fields, there has been no wide language model search done on implicit discourse relation classification. This hinders researchers from fully utilizing publicly available models in discourse analysis. This work is a straightforward, fine-tuned discourse performance comparison of 7 pre-trained language models. We use PDTB-3, a popular discourse relation annotated dataset. Through our model search, we raise SOTA to 0.671 ACC and obtain novel observations. Some are contrary to what has been reported before (Shi and Demberg, 2019b), that sentence-level pre-training objectives (NSP, SBO, SOP) generally fail to produce the best-performing model for implicit discourse relation classification. Counterintuitively, similar-sized PLMs with MLM and full attention led to better performance. Our code is publicly released.
|
[
"Lee, Bruce W.",
"Yang, Bongseok",
"Lee, Jason"
] |
A Side-by-side Comparison of Transformers for Implicit Discourse Relation Classification
|
codi-1.2
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.3.bib
|
https://aclanthology.org/2023.codi-1.3/
|
@inproceedings{lai-ji-2023-ensemble,
title = "Ensemble Transfer Learning for Multilingual Coreference Resolution",
author = "Lai, Tuan and
Ji, Heng",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.3",
doi = "10.18653/v1/2023.codi-1.3",
pages = "24--36",
abstract = "Entity coreference resolution is an important research problem with many applications, including information extraction and question answering. Coreference resolution for English has been studied extensively. However, there is relatively little work for other languages. A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data. To overcome this challenge, we design a simple but effective ensemble-based framework that combines various transfer learning (TL) techniques. We first train several models using different TL methods. Then, during inference, we compute the unweighted average scores of the models{'} predictions to extract the final set of predicted clusters. Furthermore, we also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts. Leveraging the idea that the coreferential links naturally exist between anchor texts pointing to the same article, our method builds a sizeable distantly-supervised dataset for the target language that consists of tens of thousands of documents. We can pre-train a model on the pseudo-labeled dataset before finetuning it on the final target dataset. Experimental results on two benchmark datasets, OntoNotes and SemEval, confirm the effectiveness of our methods. Our best ensembles consistently outperform the baseline approach of simple training by up to 7.68{\%} in the F1 score. These ensembles also achieve new state-of-the-art results for three languages: Arabic, Dutch, and Spanish.",
}
|
Entity coreference resolution is an important research problem with many applications, including information extraction and question answering. Coreference resolution for English has been studied extensively. However, there is relatively little work for other languages. A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data. To overcome this challenge, we design a simple but effective ensemble-based framework that combines various transfer learning (TL) techniques. We first train several models using different TL methods. Then, during inference, we compute the unweighted average scores of the models' predictions to extract the final set of predicted clusters. Furthermore, we also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts. Leveraging the idea that the coreferential links naturally exist between anchor texts pointing to the same article, our method builds a sizeable distantly-supervised dataset for the target language that consists of tens of thousands of documents. We can pre-train a model on the pseudo-labeled dataset before finetuning it on the final target dataset. Experimental results on two benchmark datasets, OntoNotes and SemEval, confirm the effectiveness of our methods. Our best ensembles consistently outperform the baseline approach of simple training by up to 7.68% in the F1 score. These ensembles also achieve new state-of-the-art results for three languages: Arabic, Dutch, and Spanish.
|
[
"Lai, Tuan",
"Ji, Heng"
] |
Ensemble Transfer Learning for Multilingual Coreference Resolution
|
codi-1.3
|
Poster
|
2301.09175
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.4.bib
|
https://aclanthology.org/2023.codi-1.4/
|
@inproceedings{zhang-etal-2023-contrastive-hierarchical,
title = "Contrastive Hierarchical Discourse Graph for Scientific Document Summarization",
author = "Zhang, Haopeng and
Liu, Xiao and
Zhang, Jiawei",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.4",
doi = "10.18653/v1/2023.codi-1.4",
pages = "37--47",
abstract = "The extended structural context has made scientific paper summarization a challenging task. This paper proposes CHANGES, a contrastive hierarchical graph neural network for extractive scientific paper summarization. CHANGES represents a scientific paper with a hierarchical discourse graph and learns effective sentence representations with dedicatedly designed hierarchical graph information aggregation. We also propose a graph contrastive learning module to learn global theme-aware sentence representations. Extensive experiments on the PubMed and arXiv benchmark datasets prove the effectiveness of CHANGES and the importance of capturing hierarchical structure information in modeling scientific papers.",
}
|
The extended structural context has made scientific paper summarization a challenging task. This paper proposes CHANGES, a contrastive hierarchical graph neural network for extractive scientific paper summarization. CHANGES represents a scientific paper with a hierarchical discourse graph and learns effective sentence representations with dedicatedly designed hierarchical graph information aggregation. We also propose a graph contrastive learning module to learn global theme-aware sentence representations. Extensive experiments on the PubMed and arXiv benchmark datasets prove the effectiveness of CHANGES and the importance of capturing hierarchical structure information in modeling scientific papers.
|
[
"Zhang, Haopeng",
"Liu, Xiao",
"Zhang, Jiawei"
] |
Contrastive Hierarchical Discourse Graph for Scientific Document Summarization
|
codi-1.4
|
Poster
|
2306.00177
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.5.bib
|
https://aclanthology.org/2023.codi-1.5/
|
@inproceedings{de-langhe-etal-2023-leveraging,
title = "Leveraging Structural Discourse Information for Event Coreference Resolution in {D}utch",
author = "De Langhe, Loic and
De Clercq, Orphee and
Hoste, Veronique",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.5",
doi = "10.18653/v1/2023.codi-1.5",
pages = "48--53",
abstract = "We directly embed easily extractable discourse structure information (subsection, paragraph and text type) in a transformer-based Dutch event coreference resolution model in order to more explicitly provide it with structural information that is known to be important in coreferential relationships. Results show that integrating this type of knowledge leads to a significant improvement in CONLL F1 for within-document settings (+ 8.6{\%}) and a minor improvement for cross-document settings (+ 1.1{\%}).",
}
|
We directly embed easily extractable discourse structure information (subsection, paragraph and text type) in a transformer-based Dutch event coreference resolution model in order to more explicitly provide it with structural information that is known to be important in coreferential relationships. Results show that integrating this type of knowledge leads to a significant improvement in CONLL F1 for within-document settings (+ 8.6%) and a minor improvement for cross-document settings (+ 1.1%).
|
[
"De Langhe, Loic",
"De Clercq, Orphee",
"Hoste, Veronique"
] |
Leveraging Structural Discourse Information for Event Coreference Resolution in Dutch
|
codi-1.5
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.6.bib
|
https://aclanthology.org/2023.codi-1.6/
|
@inproceedings{liu-etal-2023-entity,
title = "Entity Coreference and Co-occurrence Aware Argument Mining from Biomedical Literature",
author = "Liu, Boyang and
Schlegel, Viktor and
Batista-navarro, Riza and
Ananiadou, Sophia",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.6",
doi = "10.18653/v1/2023.codi-1.6",
pages = "54--60",
abstract = "Biomedical argument mining (BAM) aims at automatically identifying the argumentative structure in biomedical texts. However, identifying and classifying argumentative relations (AR) between argumentative components (AC) is challenging since it not only needs to understand the semantics of ACs but also needs to capture the interactions between them. We argue that entities can serve as bridges that connect different ACs since entities and their mentions convey significant semantic information in biomedical argumentation. For example, it is common that related AC pairs share a common entity. Capturing such entity information can be beneficial for the Relation Identification (RI) task. In order to incorporate this entity information into BAM, we propose an Entity Coreference and Co-occurrence aware Argument Mining (ECCAM) framework based on an edge-oriented graph model for BAM. We evaluate our model on a benchmark dataset and from the experimental results we find that our method improves upon state-of-the-art methods.",
}
|
Biomedical argument mining (BAM) aims at automatically identifying the argumentative structure in biomedical texts. However, identifying and classifying argumentative relations (AR) between argumentative components (AC) is challenging since it not only needs to understand the semantics of ACs but also needs to capture the interactions between them. We argue that entities can serve as bridges that connect different ACs since entities and their mentions convey significant semantic information in biomedical argumentation. For example, it is common that related AC pairs share a common entity. Capturing such entity information can be beneficial for the Relation Identification (RI) task. In order to incorporate this entity information into BAM, we propose an Entity Coreference and Co-occurrence aware Argument Mining (ECCAM) framework based on an edge-oriented graph model for BAM. We evaluate our model on a benchmark dataset and from the experimental results we find that our method improves upon state-of-the-art methods.
|
[
"Liu, Boyang",
"Schlegel, Viktor",
"Batista-navarro, Riza",
"Ananiadou, Sophia"
] |
Entity Coreference and Co-occurrence Aware Argument Mining from Biomedical Literature
|
codi-1.6
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.8.bib
|
https://aclanthology.org/2023.codi-1.8/
|
@inproceedings{knaebel-2023-weakly,
title = "A Weakly-Supervised Learning Approach to the Identification of {``}Alternative Lexicalizations{''} in Shallow Discourse Parsing",
author = "Knaebel, Ren{\'e}",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.8",
doi = "10.18653/v1/2023.codi-1.8",
pages = "61--69",
abstract = "Recently, the identification of free connective phrases as signals for discourse relations has received new attention with the introduction of statistical models for their automatic extraction. The limited amount of annotations makes it still challenging to develop well-performing models. In our work, we want to overcome this limitation with semi-supervised learning from unlabeled news texts. We implement a self-supervised sequence labeling approach and filter its predictions by a second model trained to disambiguate signal candidates. With our novel model design, we report state-of-the-art results and in addition, achieve an average improvement of about 5{\%} for both exactly and partially matched alternatively lexicalized discourse signals due to weak supervision.",
}
|
Recently, the identification of free connective phrases as signals for discourse relations has received new attention with the introduction of statistical models for their automatic extraction. The limited amount of annotations makes it still challenging to develop well-performing models. In our work, we want to overcome this limitation with semi-supervised learning from unlabeled news texts. We implement a self-supervised sequence labeling approach and filter its predictions by a second model trained to disambiguate signal candidates. With our novel model design, we report state-of-the-art results and in addition, achieve an average improvement of about 5% for both exactly and partially matched alternatively lexicalized discourse signals due to weak supervision.
|
[
"Knaebel, Ren{\\'e}"
] |
A Weakly-Supervised Learning Approach to the Identification of “Alternative Lexicalizations” in Shallow Discourse Parsing
|
codi-1.8
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.9.bib
|
https://aclanthology.org/2023.codi-1.9/
|
@inproceedings{xiao-carenini-2023-entity,
title = "Entity-based {S}pan{C}opy for Abstractive Summarization to Improve the Factual Consistency",
author = "Xiao, Wen and
Carenini, Giuseppe",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.9",
doi = "10.18653/v1/2023.codi-1.9",
pages = "70--81",
abstract = "Discourse-aware techniques, including entity-aware approaches, play a crucial role in summarization. In this paper, we propose an entity-based SpanCopy mechanism to tackle the entity-level factual inconsistency problem in abstractive summarization, i.e. reducing the mismatched entities between the generated summaries and the source documents. Complemented by a Global Relevance component to identify summary-worthy entities, our approach demonstrates improved factual consistency while preserving saliency on four summarization datasets, contributing to the effective application of discourse-aware methods to summarization tasks.",
}
|
Discourse-aware techniques, including entity-aware approaches, play a crucial role in summarization. In this paper, we propose an entity-based SpanCopy mechanism to tackle the entity-level factual inconsistency problem in abstractive summarization, i.e. reducing the mismatched entities between the generated summaries and the source documents. Complemented by a Global Relevance component to identify summary-worthy entities, our approach demonstrates improved factual consistency while preserving saliency on four summarization datasets, contributing to the effective application of discourse-aware methods to summarization tasks.
|
[
"Xiao, Wen",
"Carenini, Giuseppe"
] |
Entity-based SpanCopy for Abstractive Summarization to Improve the Factual Consistency
|
codi-1.9
|
Poster
|
2209.03479
|
[
"https://github.com/wendy-xiao/entity-based-spancopy"
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.10.bib
|
https://aclanthology.org/2023.codi-1.10/
|
@inproceedings{niu-etal-2023-discourse,
title = "Discourse Information for Document-Level Temporal Dependency Parsing",
author = "Niu, Jingcheng and
Ng, Victoria and
Rees, Erin and
De Montigny, Simon and
Penn, Gerald",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.10",
doi = "10.18653/v1/2023.codi-1.10",
pages = "82--88",
abstract = "In this study, we examine the benefits of incorporating discourse information into document-level temporal dependency parsing. Specifically, we evaluate the effectiveness of integrating both high-level discourse profiling information, which describes the discourse function of sentences, and surface-level sentence position information into temporal dependency graph (TDG) parsing. Unexpectedly, our results suggest that simple sentence position information, particularly when encoded using our novel sentence-position embedding method, performs the best, perhaps because it does not rely on noisy model-generated feature inputs. Our proposed system surpasses the current state-of-the-art TDG parsing systems in performance. Furthermore, we aim to broaden the discussion on the relationship between temporal dependency parsing and discourse analysis, given the substantial similarities shared between the two tasks. We argue that discourse analysis results should not be merely regarded as an additional input feature for temporal dependency parsing. Instead, adopting advanced discourse analysis techniques and research insights can lead to more effective and comprehensive approaches to temporal information extraction tasks.",
}
|
In this study, we examine the benefits of incorporating discourse information into document-level temporal dependency parsing. Specifically, we evaluate the effectiveness of integrating both high-level discourse profiling information, which describes the discourse function of sentences, and surface-level sentence position information into temporal dependency graph (TDG) parsing. Unexpectedly, our results suggest that simple sentence position information, particularly when encoded using our novel sentence-position embedding method, performs the best, perhaps because it does not rely on noisy model-generated feature inputs. Our proposed system surpasses the current state-of-the-art TDG parsing systems in performance. Furthermore, we aim to broaden the discussion on the relationship between temporal dependency parsing and discourse analysis, given the substantial similarities shared between the two tasks. We argue that discourse analysis results should not be merely regarded as an additional input feature for temporal dependency parsing. Instead, adopting advanced discourse analysis techniques and research insights can lead to more effective and comprehensive approaches to temporal information extraction tasks.
|
[
"Niu, Jingcheng",
"Ng, Victoria",
"Rees, Erin",
"De Montigny, Simon",
"Penn, Gerald"
] |
Discourse Information for Document-Level Temporal Dependency Parsing
|
codi-1.10
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.11.bib
|
https://aclanthology.org/2023.codi-1.11/
|
@inproceedings{shahmohammadi-etal-2023-encoding,
title = "Encoding Discourse Structure: Comparison of {RST} and {QUD}",
author = "Shahmohammadi, Sara and
Seemann, Hannah and
Stede, Manfred and
Scheffler, Tatjana",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.11",
doi = "10.18653/v1/2023.codi-1.11",
pages = "89--98",
abstract = "We present a quantitative and qualitative comparison of the discourse trees defined by the Rhetorical Structure Theory and Questions under Discussion models. Based on an empirical analysis of parallel annotations for 28 texts (blog posts and podcast transcripts), we conclude that both discourse frameworks capture similar structural information. The qualitative analysis shows that while complex discourse units often match between analyses, QUD structures do not indicate the centrality of segments.",
}
|
We present a quantitative and qualitative comparison of the discourse trees defined by the Rhetorical Structure Theory and Questions under Discussion models. Based on an empirical analysis of parallel annotations for 28 texts (blog posts and podcast transcripts), we conclude that both discourse frameworks capture similar structural information. The qualitative analysis shows that while complex discourse units often match between analyses, QUD structures do not indicate the centrality of segments.
|
[
"Shahmohammadi, Sara",
"Seemann, Hannah",
"Stede, Manfred",
"Scheffler, Tatjana"
] |
Encoding Discourse Structure: Comparison of RST and QUD
|
codi-1.11
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.13.bib
|
https://aclanthology.org/2023.codi-1.13/
|
@inproceedings{varghese-etal-2023-exploiting,
title = "Exploiting Knowledge about Discourse Relations for Implicit Discourse Relation Classification",
author = "Varghese, Nobel and
Yung, Frances and
Anuranjana, Kaveri and
Demberg, Vera",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.13",
doi = "10.18653/v1/2023.codi-1.13",
pages = "99--105",
abstract = "In discourse relation recognition, the classification labels are typically represented as one-hot vectors. However, the categories are in fact not all independent of one another; on the contrary, there are several frameworks that describe the labels{'} similarities (by e.g. sorting them into a hierarchy or describing them in terms of features (Sanders et al., 2021)). Recently, several methods for representing the similarities between labels have been proposed (Zhang et al., 2018; Wang et al., 2018; Xiong et al., 2021). We here explore and extend the Label Confusion Model (Guo et al., 2021) for learning a representation for discourse relation labels. We explore alternative ways of informing the model about the similarities between relations, by representing relations in terms of their names (and parent category), their typical markers, or in terms of CCR features that describe the relations. Experimental results show that exploiting label similarity improves classification results.",
}
|
In discourse relation recognition, the classification labels are typically represented as one-hot vectors. However, the categories are in fact not all independent of one another; on the contrary, there are several frameworks that describe the labels{'} similarities (by e.g. sorting them into a hierarchy or describing them in terms of features (Sanders et al., 2021)). Recently, several methods for representing the similarities between labels have been proposed (Zhang et al., 2018; Wang et al., 2018; Xiong et al., 2021). We here explore and extend the Label Confusion Model (Guo et al., 2021) for learning a representation for discourse relation labels. We explore alternative ways of informing the model about the similarities between relations, by representing relations in terms of their names (and parent category), their typical markers, or in terms of CCR features that describe the relations. Experimental results show that exploiting label similarity improves classification results.
|
[
"Varghese, Nobel",
"Yung, Frances",
"Anuranjana, Kaveri",
"Demberg, Vera"
] |
Exploiting Knowledge about Discourse Relations for Implicit Discourse Relation Classification
|
codi-1.13
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.14.bib
|
https://aclanthology.org/2023.codi-1.14/
|
@inproceedings{liu-etal-2023-sae,
title = "{SAE}-{NTM}: Sentence-Aware Encoder for Neural Topic Modeling",
author = "Liu, Hao and
Gao, Jingsheng and
Xiang, Suncheng and
Liu, Ting and
Fu, Yuzhuo",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.14",
doi = "10.18653/v1/2023.codi-1.14",
pages = "106--111",
abstract = "Incorporating external knowledge, such as pre-trained language models (PLMs), into neural topic modeling has achieved great success in recent years. However, employing PLMs for topic modeling generally ignores the maximum sequence length of PLMs and the interaction between external knowledge and bag-of-words (BOW). To this end, we propose a sentence-aware encoder for neural topic modeling, which adopts fine-grained sentence embeddings as external knowledge to entirely utilize the semantic information of input documents. We introduce sentence-aware attention for document representation, where BOW enables the model to attend on topical sentences that convey topic-related cues. Experiments on three benchmark datasets show that our framework outperforms other state-of-the-art neural topic models in topic coherence. Further, we demonstrate that the proposed approach can yield better latent document-topic features through improvement on the document classification.",
}
|
Incorporating external knowledge, such as pre-trained language models (PLMs), into neural topic modeling has achieved great success in recent years. However, employing PLMs for topic modeling generally ignores the maximum sequence length of PLMs and the interaction between external knowledge and bag-of-words (BOW). To this end, we propose a sentence-aware encoder for neural topic modeling, which adopts fine-grained sentence embeddings as external knowledge to entirely utilize the semantic information of input documents. We introduce sentence-aware attention for document representation, where BOW enables the model to attend on topical sentences that convey topic-related cues. Experiments on three benchmark datasets show that our framework outperforms other state-of-the-art neural topic models in topic coherence. Further, we demonstrate that the proposed approach can yield better latent document-topic features through improvement on the document classification.
|
[
"Liu, Hao",
"Gao, Jingsheng",
"Xiang, Suncheng",
"Liu, Ting",
"Fu, Yuzhuo"
] |
SAE-NTM: Sentence-Aware Encoder for Neural Topic Modeling
|
codi-1.14
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.15.bib
|
https://aclanthology.org/2023.codi-1.15/
|
@inproceedings{herold-ney-2023-improving,
title = "Improving Long Context Document-Level Machine Translation",
author = "Herold, Christian and
Ney, Hermann",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.15",
doi = "10.18653/v1/2023.codi-1.15",
pages = "112--125",
abstract = "Document-level context for neural machine translation (NMT) is crucial to improve the translation consistency and cohesion, the translation of ambiguous inputs, as well as several other linguistic phenomena. Many works have been published on the topic of document-level NMT, but most restrict the system to only local context, typically including just the one or two preceding sentences as additional information. This might be enough to resolve some ambiguous inputs, but it is probably not sufficient to capture some document-level information like the topic or style of a conversation. When increasing the context size beyond just the local context, there are two challenges: (i) the memory usage increases exponentially, and (ii) the translation performance starts to degrade. We argue that the widely-used attention mechanism is responsible for both issues. Therefore, we propose a constrained attention variant that focuses the attention on the most relevant parts of the sequence, while simultaneously reducing the memory consumption. For evaluation, we utilize targeted test sets in combination with novel evaluation techniques to analyze the translations with regard to specific discourse-related phenomena. We find that our approach is a good compromise between sentence-level NMT and attending to the full context, especially in low-resource scenarios.",
}
|
Document-level context for neural machine translation (NMT) is crucial to improve the translation consistency and cohesion, the translation of ambiguous inputs, as well as several other linguistic phenomena. Many works have been published on the topic of document-level NMT, but most restrict the system to only local context, typically including just the one or two preceding sentences as additional information. This might be enough to resolve some ambiguous inputs, but it is probably not sufficient to capture some document-level information like the topic or style of a conversation. When increasing the context size beyond just the local context, there are two challenges: (i) the memory usage increases exponentially, and (ii) the translation performance starts to degrade. We argue that the widely-used attention mechanism is responsible for both issues. Therefore, we propose a constrained attention variant that focuses the attention on the most relevant parts of the sequence, while simultaneously reducing the memory consumption. For evaluation, we utilize targeted test sets in combination with novel evaluation techniques to analyze the translations with regard to specific discourse-related phenomena. We find that our approach is a good compromise between sentence-level NMT and attending to the full context, especially in low-resource scenarios.
|
[
"Herold, Christian",
"Ney, Hermann"
] |
Improving Long Context Document-Level Machine Translation
|
codi-1.15
|
Poster
|
2306.05183
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.16.bib
|
https://aclanthology.org/2023.codi-1.16/
|
@inproceedings{ruby-etal-2023-unpacking,
title = "Unpacking Ambiguous Structure: A Dataset for Ambiguous Implicit Discourse Relations for {E}nglish and {E}gyptian {A}rabic",
author = "Ruby, Ahmed and
Stymne, Sara and
Hardmeier, Christian",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.16",
doi = "10.18653/v1/2023.codi-1.16",
pages = "126--144",
abstract = "In this paper, we present principles of constructing and resolving ambiguity in implicit discourse relations. Following these principles, we created a dataset in both English and Egyptian Arabic that controls for semantic disambiguation, enabling the investigation of prosodic features in future work. In these datasets, examples are two-part sentences with an implicit discourse relation that can be ambiguously read as either causal or concessive, paired with two different preceding context sentences forcing either the causal or the concessive reading. We also validated both datasets by humans and language models (LMs) to study whether context can help humans or LMs resolve ambiguities of implicit relations and identify the intended relation. As a result, this task posed no difficulty for humans, but proved challenging for BERT/CamelBERT and ELECTRA/AraELECTRA models.",
}
|
In this paper, we present principles of constructing and resolving ambiguity in implicit discourse relations. Following these principles, we created a dataset in both English and Egyptian Arabic that controls for semantic disambiguation, enabling the investigation of prosodic features in future work. In these datasets, examples are two-part sentences with an implicit discourse relation that can be ambiguously read as either causal or concessive, paired with two different preceding context sentences forcing either the causal or the concessive reading. We also validated both datasets by humans and language models (LMs) to study whether context can help humans or LMs resolve ambiguities of implicit relations and identify the intended relation. As a result, this task posed no difficulty for humans, but proved challenging for BERT/CamelBERT and ELECTRA/AraELECTRA models.
|
[
"Ruby, Ahmed",
"Stymne, Sara",
"Hardmeier, Christian"
] |
Unpacking Ambiguous Structure: A Dataset for Ambiguous Implicit Discourse Relations for English and Egyptian Arabic
|
codi-1.16
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.20.bib
|
https://aclanthology.org/2023.codi-1.20/
|
@inproceedings{bleiweiss-2023-two,
title = "Two-step Text Summarization for Long-form Biographical Narrative Genre",
author = "Bleiweiss, Avi",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.20",
doi = "10.18653/v1/2023.codi-1.20",
pages = "145--155",
abstract = "Transforming narrative structure to implicit discourse relations in long-form text has recently seen a mindset shift toward assessing generation consistency. To this end, summarization of lengthy biographical discourse is of practical benefit to readers, as it helps them decide whether immersing themselves for days or weeks in a bulky book will be a rewarding experience. Machine-generated summaries can reduce the cognitive load and the time spent by authors to write the summary. Nevertheless, summarization faces significant challenges of factual inconsistencies with respect to the inputs. In this paper, we explored a two-step summary generation aimed at retaining source-summary faithfulness. Our method uses a graph representation to rank sentence saliency in each of the novel chapters, leading to distributing summary segments in distinct regions of the chapter. Based on the previously extracted sentences, we produced an abstractive summary in a manner more computationally tractable for detecting inconsistent information. We conducted a series of quantitative analyses on a test set of four long biographical novels and showed improved summarization quality in automatic evaluation over both single-tier settings and external baselines.",
}
|
Transforming narrative structure to implicit discourse relations in long-form text has recently seen a mindset shift toward assessing generation consistency. To this end, summarization of lengthy biographical discourse is of practical benefit to readers, as it helps them decide whether immersing themselves for days or weeks in a bulky book will be a rewarding experience. Machine-generated summaries can reduce the cognitive load and the time spent by authors to write the summary. Nevertheless, summarization faces significant challenges of factual inconsistencies with respect to the inputs. In this paper, we explored a two-step summary generation aimed at retaining source-summary faithfulness. Our method uses a graph representation to rank sentence saliency in each of the novel chapters, leading to distributing summary segments in distinct regions of the chapter. Based on the previously extracted sentences, we produced an abstractive summary in a manner more computationally tractable for detecting inconsistent information. We conducted a series of quantitative analyses on a test set of four long biographical novels and showed improved summarization quality in automatic evaluation over both single-tier settings and external baselines.
|
[
"Bleiweiss, Avi"
] |
Two-step Text Summarization for Long-form Biographical Narrative Genre
|
codi-1.20
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.codi-1.21.bib
|
https://aclanthology.org/2023.codi-1.21/
|
@inproceedings{lopez-cortez-jacobs-2023-distribution,
title = "The distribution of discourse relations within and across turns in spontaneous conversation",
author = "L{\'o}pez Cortez, S. Magal{\'\i} and
Jacobs, Cassandra L.",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.21",
doi = "10.18653/v1/2023.codi-1.21",
pages = "156--162",
abstract = "Time pressure and topic negotiation may impose constraints on how people leverage discourse relations (DRs) in spontaneous conversational contexts. In this work, we adapt a system of DRs for written language to spontaneous dialogue using crowdsourced annotations from novice annotators. We then test whether discourse relations are used differently across several types of multi-utterance contexts. We compare the patterns of DR annotation within and across speakers and within and across turns. Ultimately, we find that different discourse contexts produce distinct distributions of discourse relations, with single-turn annotations creating the most uncertainty for annotators. Additionally, we find that the discourse relation annotations are of sufficient quality to predict from embeddings of discourse units.",
}
|
Time pressure and topic negotiation may impose constraints on how people leverage discourse relations (DRs) in spontaneous conversational contexts. In this work, we adapt a system of DRs for written language to spontaneous dialogue using crowdsourced annotations from novice annotators. We then test whether discourse relations are used differently across several types of multi-utterance contexts. We compare the patterns of DR annotation within and across speakers and within and across turns. Ultimately, we find that different discourse contexts produce distinct distributions of discourse relations, with single-turn annotations creating the most uncertainty for annotators. Additionally, we find that the discourse relation annotations are of sufficient quality to predict from embeddings of discourse units.
|
[
"L{\\'o}pez Cortez, S. Magal{\\'\\i}",
"Jacobs, Cassandra L."
] |
The distribution of discourse relations within and across turns in spontaneous conversation
|
codi-1.21
|
Poster
|
2307.03645
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.22.bib
|
https://aclanthology.org/2023.codi-1.22/
|
@inproceedings{dang-etal-2023-embedding,
title = "Embedding Mental Health Discourse for Community Recommendation",
author = "Dang, Hy and
Nguyen, Bang and
Ziems, Noah and
Jiang, Meng",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.22",
doi = "10.18653/v1/2023.codi-1.22",
pages = "163--172",
abstract = "Our paper investigates the use of discourse embedding techniques to develop a community recommendation system that focuses on mental health support groups on social media. Social media platforms provide a means for users to anonymously connect with communities that cater to their specific interests. However, with the vast number of online communities available, users may face difficulties in identifying relevant groups to address their mental health concerns. To address this challenge, we explore the integration of discourse information from various subreddit communities using embedding techniques to develop an effective recommendation system. Our approach involves the use of content-based and collaborative filtering techniques to enhance the performance of the recommendation system. Our findings indicate that the proposed approach outperforms the use of each technique separately and provides interpretability in the recommendation process.",
}
|
Our paper investigates the use of discourse embedding techniques to develop a community recommendation system that focuses on mental health support groups on social media. Social media platforms provide a means for users to anonymously connect with communities that cater to their specific interests. However, with the vast number of online communities available, users may face difficulties in identifying relevant groups to address their mental health concerns. To address this challenge, we explore the integration of discourse information from various subreddit communities using embedding techniques to develop an effective recommendation system. Our approach involves the use of content-based and collaborative filtering techniques to enhance the performance of the recommendation system. Our findings indicate that the proposed approach outperforms the use of each technique separately and provides interpretability in the recommendation process.
|
[
"Dang, Hy",
"Nguyen, Bang",
"Ziems, Noah",
"Jiang, Meng"
] |
Embedding Mental Health Discourse for Community Recommendation
|
codi-1.22
|
Poster
|
2307.03892
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.codi-1.23.bib
|
https://aclanthology.org/2023.codi-1.23/
|
@inproceedings{hewett-2023-apa,
title = "{APA}-{RST}: A Text Simplification Corpus with {RST} Annotations",
author = "Hewett, Freya",
editor = "Strube, Michael and
Braud, Chloe and
Hardmeier, Christian and
Li, Junyi Jessy and
Loaiciga, Sharid and
Zeldes, Amir",
booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.codi-1.23",
doi = "10.18653/v1/2023.codi-1.23",
pages = "173--179",
abstract = "We present a corpus of parallel German-language simplified newspaper articles. The articles have been aligned at sentence level and annotated according to the Rhetorical Structure Theory (RST) framework. These RST annotated texts could shed light on structural aspects of text complexity and how simplifications work on a text-level.",
}
|
We present a corpus of parallel German-language simplified newspaper articles. The articles have been aligned at sentence level and annotated according to the Rhetorical Structure Theory (RST) framework. These RST annotated texts could shed light on structural aspects of text complexity and how simplifications work on a text-level.
|
[
"Hewett, Freya"
] |
APA-RST: A Text Simplification Corpus with RST Annotations
|
codi-1.23
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.dialdoc-1.1.bib
|
https://aclanthology.org/2023.dialdoc-1.1/
|
@inproceedings{gou-etal-2023-cross,
title = "Cross-lingual Data Augmentation for Document-grounded Dialog Systems in Low Resource Languages",
author = "Gou, Qi and
Xia, Zehua and
Du, Wenzhe",
editor = "Muresan, Smaranda and
Chen, Vivian and
Kennington, Casey and
David, Vandyke and
Dethlefs, Nina and
Inoue, Koji and
Ekstedt, Erik and
Ultes, Stefan",
booktitle = "Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.dialdoc-1.1",
doi = "10.18653/v1/2023.dialdoc-1.1",
pages = "1--7",
abstract = "This paper proposes a framework to address the issue of data scarcity in Document-Grounded Dialogue Systems (DGDS). Our model leverages high-resource languages to enhance the capability of dialogue generation in low-resource languages. Specifically, we present a novel pipeline, CLEM (Cross-Lingual Enhanced Model), including adversarial-training retrieval (Retriever and Re-ranker) and a FiD (fusion-in-decoder) generator. To further leverage high-resource languages, we also propose an innovative architecture to conduct alignment across different languages with translated training. Extensive experiment results demonstrate the effectiveness of our model, and we achieved 4th place in the DialDoc 2023 Competition. Therefore, CLEM can serve as a solution to resource scarcity in DGDS and provide useful guidance for multi-lingual alignment tasks.",
}
|
This paper proposes a framework to address the issue of data scarcity in Document-Grounded Dialogue Systems (DGDS). Our model leverages high-resource languages to enhance the capability of dialogue generation in low-resource languages. Specifically, we present a novel pipeline, CLEM (Cross-Lingual Enhanced Model), including adversarial-training retrieval (Retriever and Re-ranker) and a FiD (fusion-in-decoder) generator. To further leverage high-resource languages, we also propose an innovative architecture to conduct alignment across different languages with translated training. Extensive experiment results demonstrate the effectiveness of our model, and we achieved 4th place in the DialDoc 2023 Competition. Therefore, CLEM can serve as a solution to resource scarcity in DGDS and provide useful guidance for multi-lingual alignment tasks.
|
[
"Gou, Qi",
"Xia, Zehua",
"Du, Wenzhe"
] |
Cross-lingual Data Augmentation for Document-grounded Dialog Systems in Low Resource Languages
|
dialdoc-1.1
|
Poster
|
2305.14949
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.dialdoc-1.2.bib
|
https://aclanthology.org/2023.dialdoc-1.2/
|
@inproceedings{yen-etal-2023-moqa,
title = "{M}o{QA}: Benchmarking Multi-Type Open-Domain Question Answering",
author = "Yen, Howard and
Gao, Tianyu and
Lee, Jinhyuk and
Chen, Danqi",
editor = "Muresan, Smaranda and
Chen, Vivian and
Kennington, Casey and
David, Vandyke and
Dethlefs, Nina and
Inoue, Koji and
Ekstedt, Erik and
Ultes, Stefan",
booktitle = "Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.dialdoc-1.2",
doi = "10.18653/v1/2023.dialdoc-1.2",
pages = "8--29",
abstract = "Previous research on open-domain question answering (QA) mainly focuses on questions with short answers. However, information-seeking QA often requires various formats of answers depending on the nature of the questions, e.g., why/how questions typically require a long answer. In this paper, we present MoQA, a benchmark for open-domain QA that requires building one system that can provide short, medium, long, and yes/no answers to different questions accordingly. MoQA builds upon Natural Questions with multiple types of questions and additional crowdsourcing efforts to ensure high query quality. We adapt state-of-the-art models, and reveal unique findings in multi-type open-domain QA: (1) For retriever-reader models, training one retriever on all types achieves the overall best performance, but it is challenging to train one reader model to output answers of different formats, or to train a question classifier to distinguish between types; (2) An end-to-end closed-book QA model trained on multiple types struggles with the task across the board; (3) State-of-the-art large language models such as the largest GPT-3 models (Brown et al., 2020; Ouyang et al., 2022) also lag behind open-book QA models. Our benchmark and analysis call for more effort into building versatile open-domain QA models in the future.",
}
|
Previous research on open-domain question answering (QA) mainly focuses on questions with short answers. However, information-seeking QA often requires various formats of answers depending on the nature of the questions, e.g., why/how questions typically require a long answer. In this paper, we present MoQA, a benchmark for open-domain QA that requires building one system that can provide short, medium, long, and yes/no answers to different questions accordingly. MoQA builds upon Natural Questions with multiple types of questions and additional crowdsourcing efforts to ensure high query quality. We adapt state-of-the-art models, and reveal unique findings in multi-type open-domain QA: (1) For retriever-reader models, training one retriever on all types achieves the overall best performance, but it is challenging to train one reader model to output answers of different formats, or to train a question classifier to distinguish between types; (2) An end-to-end closed-book QA model trained on multiple types struggles with the task across the board; (3) State-of-the-art large language models such as the largest GPT-3 models (Brown et al., 2020; Ouyang et al., 2022) also lag behind open-book QA models. Our benchmark and analysis call for more effort into building versatile open-domain QA models in the future.
|
[
"Yen, Howard",
"Gao, Tianyu",
"Lee, Jinhyuk",
"Chen, Danqi"
] |
MoQA: Benchmarking Multi-Type Open-Domain Question Answering
|
dialdoc-1.2
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.dialdoc-1.3.bib
|
https://aclanthology.org/2023.dialdoc-1.3/
|
@inproceedings{zhang-etal-2023-exploration,
title = "Exploration of multilingual prompts in document-grounded dialogue",
author = "Zhang, Xiaocheng and
Qing, Huang and
Lin, Fu",
editor = "Muresan, Smaranda and
Chen, Vivian and
Kennington, Casey and
David, Vandyke and
Dethlefs, Nina and
Inoue, Koji and
Ekstedt, Erik and
Ultes, Stefan",
booktitle = "Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.dialdoc-1.3",
doi = "10.18653/v1/2023.dialdoc-1.3",
pages = "30--35",
    abstract = "Transferring DGD models from high-resource languages to low-resource languages is a meaningful but challenging task. Being able to provide multilingual responses to multilingual documents further complicates the task. This paper describes our method at the DialDoc23 Shared Task (Document-Grounded Dialogue and Conversational Question Answering) for generating responses based on the most relevant passage retrieved. We divide the task into three steps: retrieval, re-ranking and generation. Our methods include negative sample augmentation, prompt learning, pseudo-labeling and ensembling. On the submission page, we rank 2nd based on the sum of token-level F1, SacreBleu and Rouge-L scores used for the final evaluation, achieving a total score of 210.25.",
}
|
Transferring DGD models from high-resource languages to low-resource languages is a meaningful but challenging task. Being able to provide multilingual responses to multilingual documents further complicates the task. This paper describes our method at the DialDoc23 Shared Task (Document-Grounded Dialogue and Conversational Question Answering) for generating responses based on the most relevant passage retrieved. We divide the task into three steps: retrieval, re-ranking and generation. Our methods include negative sample augmentation, prompt learning, pseudo-labeling and ensembling. On the submission page, we rank 2nd based on the sum of token-level F1, SacreBleu and Rouge-L scores used for the final evaluation, achieving a total score of 210.25.
|
[
"Zhang, Xiaocheng",
"Qing, Huang",
"Lin, Fu"
] |
Exploration of multilingual prompts in document-grounded dialogue
|
dialdoc-1.3
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
||
https://aclanthology.org/2023.dialdoc-1.4.bib
|
https://aclanthology.org/2023.dialdoc-1.4/
|
@inproceedings{su-etal-2023-position,
title = "Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue",
author = "Su, Hsuan and
H. Kumar, Shachi and
Mazumder, Sahisnu and
Chen, Wenda and
Manuvinakurike, Ramesh and
Okur, Eda and
Sahay, Saurav and
Nachman, Lama and
Chen, Shang-Tse and
Lee, Hung-yi",
editor = "Muresan, Smaranda and
Chen, Vivian and
Casey, Kennington and
David, Vandyke and
Nina, Dethlefs and
Koji, Inoue and
Erik, Ekstedt and
Stefan, Ultes",
booktitle = "Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.dialdoc-1.4",
doi = "10.18653/v1/2023.dialdoc-1.4",
pages = "36--43",
abstract = "With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of dialogue history. However, such a mechanism forces knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems{'} responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses.",
}
|
With the power of large pretrained language models, various research works have integrated knowledge into dialogue systems. The traditional techniques treat knowledge as part of the input sequence for the dialogue system, prepending a set of knowledge statements in front of dialogue history. However, such a mechanism forces knowledge sets to be concatenated in an ordered manner, making models implicitly pay imbalanced attention to the sets during training. In this paper, we first investigate how the order of the knowledge set can influence autoregressive dialogue systems{'} responses. We conduct experiments on two commonly used dialogue datasets with two types of transformer-based models and find that models view the input knowledge unequally. To this end, we propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input in these models. With the proposed position embedding method, the experimental results show that each knowledge statement is uniformly considered to generate responses.
|
[
"Su, Hsuan",
"H. Kumar, Shachi",
"Mazumder, Sahisnu",
"Chen, Wenda",
"Manuvinakurike, Ramesh",
"Okur, Eda",
"Sahay, Saurav",
"Nachman, Lama",
"Chen, Shang-Tse",
"Lee, Hung-yi"
] |
Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue
|
dialdoc-1.4
|
Poster
|
2302.05888
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |
|
https://aclanthology.org/2023.dialdoc-1.5.bib
|
https://aclanthology.org/2023.dialdoc-1.5/
|
@inproceedings{liu-etal-2023-enhancing-multilingual,
title = "Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models",
author = "Liu, Jun and
Cheng, Shuang and
Zhou, Zineng and
Gu, Yang and
Ye, Jian and
Luo, Haiyong",
editor = "Muresan, Smaranda and
Chen, Vivian and
Casey, Kennington and
David, Vandyke and
Nina, Dethlefs and
Koji, Inoue and
Erik, Ekstedt and
Stefan, Ultes",
booktitle = "Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.dialdoc-1.5",
doi = "10.18653/v1/2023.dialdoc-1.5",
pages = "44--51",
    abstract = "The Dialdoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user{'}s queries, historical dialogue records and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking and Generation. We conduct post-training on high-resource languages such as English and Chinese to enhance performance on low-resource languages by exploiting similarities between languages. Additionally, we utilize the prompt method to activate the model{'}s ability on diverse languages within the dialogue domain and explore which prompts are effective. Our comprehensive experiments demonstrate the effectiveness of our proposed methods, which achieved first place on the leaderboard with a total score of 215.40 in token-level F1, SacreBleu, and Rouge-L metrics.",
}
|
The Dialdoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user{'}s queries, historical dialogue records and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking and Generation. We conduct post-training on high-resource languages such as English and Chinese to enhance performance on low-resource languages by exploiting similarities between languages. Additionally, we utilize the prompt method to activate the model{'}s ability on diverse languages within the dialogue domain and explore which prompts are effective. Our comprehensive experiments demonstrate the effectiveness of our proposed methods, which achieved first place on the leaderboard with a total score of 215.40 in token-level F1, SacreBleu, and Rouge-L metrics.
|
[
"Liu, Jun",
"Cheng, Shuang",
"Zhou, Zineng",
"Gu, Yang",
"Ye, Jian",
"Luo, Haiyong"
] |
Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models
|
dialdoc-1.5
|
Poster
|
[
""
] | -1 | -1 | -1 | -1 | 0 |
[] |
[] |
[] |