Feature Extraction
Transformers
Safetensors
vision-encoder-decoder
custom_code
anicolson committed
Commit 1ef055b (1 parent: cc77401)

Update README.md

Files changed (1):
  1. README.md +27 -2
README.md CHANGED
@@ -23,6 +23,9 @@ EAST was applied to a multimodal language model with RadGraph as the reward. Oth
  - Special tokens (`[NF]` and `[NI]`) to handle missing *findings* and *impression* sections.
  - Non-causal attention masking for the image embeddings and a causal attention masking for the report token embeddings.
 
+ ## Paper:
+ https://aclanthology.org/2024.bionlp-1.8/
+
  ## Example:
 
  ```python
@@ -93,7 +96,29 @@ _, impression = model.split_and_decode_sections(output_ids, tokenizer)
  ## Notebook example:
  https://huggingface.co/aehrc/cxrmate-rrg24/blob/main/demo.ipynb
 
- ## Citation:
-
- [More Information Needed]
+ ## Known issues:
+ There is no penalty in the reward for sampled reports that differ in length from the radiologist report. Hence, the model has learned to generate longer reports, often with repetitions. This was fixed in our recent work: https://arxiv.org/abs/2406.13181.
+
+ ## Citation:
+
+ @inproceedings{nicolson-etal-2024-e,
+     title = "e-Health {CSIRO} at {RRG}24: Entropy-Augmented Self-Critical Sequence Training for Radiology Report Generation",
+     author = "Nicolson, Aaron and
+       Liu, Jinghui and
+       Dowling, Jason and
+       Nguyen, Anthony and
+       Koopman, Bevan",
+     editor = "Demner-Fushman, Dina and
+       Ananiadou, Sophia and
+       Miwa, Makoto and
+       Roberts, Kirk and
+       Tsujii, Junichi",
+     booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing",
+     month = aug,
+     year = "2024",
+     address = "Bangkok, Thailand",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2024.bionlp-1.8",
+     pages = "99--104",
+     abstract = "The core novelty of our approach lies in the addition of entropy regularisation to self-critical sequence training. This helps maintain a higher entropy in the token distribution, preventing overfitting to common phrases and ensuring a broader exploration of the vocabulary during training, which is essential for handling the diversity of the radiology reports in the RRG24 datasets. We apply this to a multimodal language model with RadGraph as the reward. Additionally, our model incorporates several other aspects. We use token type embeddings to differentiate between findings and impression section tokens, as well as image embeddings. To handle missing sections, we employ special tokens. We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.",
+ }
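The masking scheme described in the README (non-causal over the image embeddings, causal over the report token embeddings) is a prefix-LM-style arrangement. The sketch below is purely illustrative and assumes PyTorch; the function name and dimensions are made up for the example and are not taken from the model's custom code.

```python
import torch

def build_prefix_lm_mask(num_image_embeds: int, num_report_tokens: int) -> torch.Tensor:
    """Illustrative boolean attention mask (True = position may attend).

    Rows are query positions and columns are key positions, with the
    sequence laid out as [image embeddings, report tokens].
    """
    total = num_image_embeds + num_report_tokens
    mask = torch.zeros(total, total, dtype=torch.bool)

    # Non-causal (full) attention among the image embeddings.
    mask[:num_image_embeds, :num_image_embeds] = True

    # Report tokens may attend to every image embedding...
    mask[num_image_embeds:, :num_image_embeds] = True

    # ...and causally (lower-triangular) to report tokens generated so far.
    mask[num_image_embeds:, num_image_embeds:] = torch.tril(
        torch.ones(num_report_tokens, num_report_tokens)
    ).bool()

    return mask

# Tiny example: 4 image embeddings followed by 3 report tokens.
print(build_prefix_lm_mask(num_image_embeds=4, num_report_tokens=3).int())
```

The known issue added in this commit concerns the self-critical sequence training reward: without a term tied to report length, overly long and repetitive samples are not discouraged. As a hedged sketch of the general idea only, a length penalty could be folded into the reward as below; `base_reward` is a hypothetical stand-in for a scorer such as RadGraph F1, and the penalty form and weight are assumptions rather than what https://arxiv.org/abs/2406.13181 implements.

```python
from typing import Callable

def length_penalised_reward(
    sampled: str,
    reference: str,
    base_reward: Callable[[str, str], float],
    weight: float = 0.1,
) -> float:
    """Sketch: base reward minus a penalty proportional to the word-count
    difference between the sampled report and the radiologist's report."""
    length_gap = abs(len(sampled.split()) - len(reference.split()))
    return base_reward(sampled, reference) - weight * length_gap
```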