Feature Extraction
Transformers
Safetensors
vision-encoder-decoder
custom_code
anicolson committed on
Commit e41f3b0
1 Parent(s): ca35de8

Update README.md

Files changed (1)
  1. README.md +29 -186
README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
6
  - StanfordAIMI/interpret-cxr-test-hidden
7
  ---
8
 
9
- # CXRMate-RRG4
10
 
11
This is an evolution of https://huggingface.co/aehrc/cxrmate, developed for the Radiology Report Generation task of BioNLP @ ACL 2024.
12
 
@@ -22,193 +22,36 @@ We use token type embeddings to differentiate between findings and impression se
22
  To handle missing sections, we employ special tokens.
23
  We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.
24
 
25
-
26
-
27
- ## Model Details
28
-
29
- ### Model Description
30
-
31
- <!-- Provide a longer summary of what this model is. -->
32
-
33
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
34
-
35
- - **Developed by:** [More Information Needed]
36
- - **Funded by [optional]:** [More Information Needed]
37
- - **Shared by [optional]:** [More Information Needed]
38
- - **Model type:** [More Information Needed]
39
- - **Language(s) (NLP):** [More Information Needed]
40
- - **License:** [More Information Needed]
41
- - **Finetuned from model [optional]:** [More Information Needed]
42
-
43
- ### Model Sources [optional]
44
-
45
- <!-- Provide the basic links for the model. -->
46
-
47
- - **Repository:** [More Information Needed]
48
- - **Paper [optional]:** [More Information Needed]
49
- - **Demo [optional]:** [More Information Needed]
50
-
51
- ## Uses
52
-
53
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
54
-
55
- ### Direct Use
56
-
57
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
58
-
59
- [More Information Needed]
60
-
61
- ### Downstream Use [optional]
62
-
63
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
64
-
65
- [More Information Needed]
66
-
67
- ### Out-of-Scope Use
68
-
69
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
70
-
71
- [More Information Needed]
72
-
73
- ## Bias, Risks, and Limitations
74
-
75
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
76
-
77
- [More Information Needed]
78
-
79
- ### Recommendations
80
-
81
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
82
-
83
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
84
-
85
- ## How to Get Started with the Model
86
-
87
- Use the code below to get started with the model.
88
-
89
- [More Information Needed]
90
-
91
- ## Training Details
92
-
93
- ### Training Data
94
-
95
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
96
-
97
- [More Information Needed]
98
-
99
- ### Training Procedure
100
-
101
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
102
-
103
- #### Preprocessing [optional]
104
-
105
- [More Information Needed]
106
-
107
-
108
- #### Training Hyperparameters
109
-
110
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
111
-
112
- #### Speeds, Sizes, Times [optional]
113
-
114
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
115
-
116
- [More Information Needed]
117
-
118
- ## Evaluation
119
-
120
- <!-- This section describes the evaluation protocols and provides the results. -->
121
-
122
- ### Testing Data, Factors & Metrics
123
-
124
- #### Testing Data
125
-
126
- <!-- This should link to a Dataset Card if possible. -->
127
-
128
- [More Information Needed]
129
-
130
- #### Factors
131
-
132
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
133
-
134
- [More Information Needed]
135
-
136
- #### Metrics
137
-
138
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
139
-
140
- [More Information Needed]
141
-
142
- ### Results
143
-
144
- [More Information Needed]
145
-
146
- #### Summary
147
-
148
-
149
-
150
- ## Model Examination [optional]
151
-
152
- <!-- Relevant interpretability work for the model goes here -->
153
-
154
- [More Information Needed]
155
-
156
- ## Environmental Impact
157
-
158
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
159
-
160
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
161
-
162
- - **Hardware Type:** [More Information Needed]
163
- - **Hours used:** [More Information Needed]
164
- - **Cloud Provider:** [More Information Needed]
165
- - **Compute Region:** [More Information Needed]
166
- - **Carbon Emitted:** [More Information Needed]
167
-
168
- ## Technical Specifications [optional]
169
-
170
- ### Model Architecture and Objective
171
-
172
- [More Information Needed]
173
-
174
- ### Compute Infrastructure
175
-
176
- [More Information Needed]
177
-
178
- #### Hardware
179
-
180
- [More Information Needed]
181
-
182
- #### Software
183
-
184
- [More Information Needed]
185
-
186
- ## Citation [optional]
187
-
188
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
189
 
190
  **BibTeX:**
191
 
192
  [More Information Needed]
193
 
194
- **APA:**
195
-
196
- [More Information Needed]
197
-
198
- ## Glossary [optional]
199
-
200
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
201
-
202
- [More Information Needed]
203
-
204
- ## More Information [optional]
205
-
206
- [More Information Needed]
207
-
208
- ## Model Card Authors [optional]
209
-
210
- [More Information Needed]
211
-
212
- ## Model Card Contact
213
-
214
- [More Information Needed]
 
6
  - StanfordAIMI/interpret-cxr-test-hidden
7
  ---
8
 
9
+ # CXRMate-RRG4: Entropy-Augmented Self-Critical Sequence Training for Radiology Report Generation
10
 
11
This is an evolution of https://huggingface.co/aehrc/cxrmate, developed for the Radiology Report Generation task of BioNLP @ ACL 2024.
12
 
 
22
  To handle missing sections, we employ special tokens.
23
  We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.
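
As an illustration of how these pieces fit together, the sketch below builds token type IDs for the findings and impression sections and a combined attention mask that is non-causal over the image embeddings and causal over the report token embeddings. It is a minimal sketch only, not the model's implementation: the token IDs, the number of image embeddings, and the exact masking convention are assumptions.

```python
import torch

# Assumed sizes and IDs, for illustration only.
num_image_tokens = 4              # image embeddings from the vision encoder
findings_ids = [101, 102, 103]    # hypothetical findings-section token IDs
impression_ids = [201, 202]       # hypothetical impression-section token IDs

# Token type IDs differentiate the findings (0) and impression (1) sections.
token_type_ids = torch.tensor([0] * len(findings_ids) + [1] * len(impression_ids))

# Combined mask over [image embeddings; report tokens]: full (non-causal) attention
# among the image embeddings, causal attention for the report tokens.
num_report_tokens = len(findings_ids) + len(impression_ids)
seq_len = num_image_tokens + num_report_tokens
mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
mask[:num_image_tokens, :num_image_tokens] = True   # image embeddings attend to each other
mask[num_image_tokens:, :num_image_tokens] = True   # report tokens attend to all image embeddings
mask[num_image_tokens:, num_image_tokens:] = torch.tril(
    torch.ones(num_report_tokens, num_report_tokens, dtype=torch.bool)
)                                                    # report tokens attend causally to earlier report tokens

# A missing section would instead be represented by its special token (e.g. [NF] or [NI]).
```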
24
 
25
+ ## How to use:
+ ```python
+ import torch
+ import transformers
+ from torchvision.transforms import v2
+
+ # Load the tokenizer and model (trust_remote_code=True is needed for the model's custom code).
+ tokenizer = transformers.AutoTokenizer.from_pretrained('aehrc/cxrmate-rrg24')
+ model = transformers.AutoModel.from_pretrained('aehrc/cxrmate-rrg24', trust_remote_code=True)
+
+ # Preprocessing: a three-channel tensor, resized and centre-cropped to the encoder's input size,
+ # normalised with the statistics stored in the encoder config.
+ transforms = v2.Compose(
+     [
+         v2.PILToTensor(),
+         v2.Grayscale(num_output_channels=3),
+         v2.Resize(size=model.config.encoder.image_size, antialias=True),
+         v2.CenterCrop(size=[model.config.encoder.image_size]*2),
+         v2.ToDtype(torch.float32, scale=True),
+         v2.Normalize(mean=model.config.encoder.image_mean, std=model.config.encoder.image_std),
+     ]
+ )
+
+ image = transforms(image)  # `image` is a PIL chest X-ray loaded beforehand, e.g. with PIL.Image.open().
+
+ # Generate the report; the two unsqueeze calls give pixel_values a batch and an
+ # images-per-study dimension. The special tokens [NF] and [NI] are excluded from generation.
+ output_ids = model.generate(
+     pixel_values=image.unsqueeze(0).unsqueeze(0),
+     max_length=512,
+     bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NF]')], [tokenizer.convert_tokens_to_ids('[NI]')]],
+     num_beams=4,
+     use_cache=True,
+ )
+
+ # Split the generated token IDs into the findings and impression sections and decode them.
+ findings, impression = model.split_and_decode_sections(output_ids, tokenizer)
+ ```
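
The two `unsqueeze` calls above leave room for more than one image per study. The sketch below, which reuses `transforms`, `model`, and `tokenizer` from the snippet above, stacks two images from the same study along that dimension; the file names are placeholders, and multi-image input is inferred from the tensor shape rather than stated elsewhere in this card.

```python
from PIL import Image
import torch

# Placeholder paths for two chest X-rays from the same study.
paths = ['frontal.png', 'lateral.png']
images = [transforms(Image.open(p).convert('RGB')) for p in paths]

# Stack along the images-per-study dimension:
# pixel_values has shape (batch_size=1, num_images=2, channels, height, width).
pixel_values = torch.stack(images).unsqueeze(0)

output_ids = model.generate(
    pixel_values=pixel_values,
    max_length=512,
    bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NF]')], [tokenizer.convert_tokens_to_ids('[NI]')]],
    num_beams=4,
    use_cache=True,
)
findings, impression = model.split_and_decode_sections(output_ids, tokenizer)
```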
+
50
+ ## Paper:
51
+
52
+ ## Citation:

53
 
54
  **BibTeX:**
55
 
56
  [More Information Needed]
57