Tags: Feature Extraction · Transformers · Safetensors · vision-encoder-decoder · custom_code
Commit 7c82fd0 by anicolson (1 parent: e41f3b0)

Update README.md

Files changed (1): README.md (+8 −2)
README.md CHANGED
````diff
@@ -23,7 +23,13 @@ To handle missing sections, we employ special tokens.
 We also utilise an attention mask with non-causal masking for the image embeddings and a causal mask for the report token embeddings.
 
 ## How to use:
-```
+
+```python
+import torch
+from torchvision.transforms import v2
+import transformers
+
+
 tokenizer = transformers.AutoTokenizer.from_pretrained('aehrc/cxrmate-rrg24')
 model = transformers.AutoModel.from_pretrained('aehrc/cxrmate-rrg24', trust_remote_code=True)
 transforms = v2.Compose(
@@ -38,7 +44,7 @@ transforms = v2.Compose(
 )
 image = transforms(image)
 output_ids = model.generate(
-    pixel_values=image.unsqueeze(0).unsqueeze(0),
+    pixel_values=images,
     max_length=512,
     bad_words_ids=[[tokenizer.convert_tokens_to_ids('[NF]')], [tokenizer.convert_tokens_to_ids('[NI]')]],
     num_beams=4,
````
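The README context mentions a non-causal mask over the image embeddings and a causal mask over the report token embeddings. The model's actual mask construction is not shown in this diff; the following is a minimal sketch, assuming image positions precede text positions in one joint sequence and that report tokens may attend to all image positions:

```python
import torch

# Sketch (an assumption, not the model's actual code): a joint boolean
# attention mask where True means "may attend".
n_img, n_txt = 4, 6                 # illustrative sequence lengths
n = n_img + n_txt

mask = torch.zeros(n, n, dtype=torch.bool)
mask[:n_img, :n_img] = True         # image <-> image: full (non-causal) attention
mask[n_img:, n_img:] = torch.tril(  # text -> text: causal (lower-triangular)
    torch.ones(n_txt, n_txt, dtype=torch.bool)
)
mask[n_img:, :n_img] = True         # text -> image: report tokens see all images
# image -> text stays False: image embeddings never attend to report tokens
```

The design choice here is standard for encoder-decoder-style fusion in a single sequence: bidirectional attention where the content is fully observed (the images) and autoregressive attention where it is generated (the report).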
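The second hunk replaces `pixel_values=image.unsqueeze(0).unsqueeze(0)` with `pixel_values=images`. Both appear to target a five-dimensional (batch, images-per-study, channels, height, width) layout; the new spelling just builds the tensor explicitly, which generalises to studies with multiple images. A sketch of the equivalence for a single image (the 384×384 spatial size is illustrative, not taken from the model config):

```python
import torch

image = torch.rand(3, 384, 384)            # one transformed image, (C, H, W)

# Old README style: promote the single image with two explicit unsqueezes.
single = image.unsqueeze(0).unsqueeze(0)   # (1, 1, 3, 384, 384)

# New README style: stack the study's images, then add the batch dimension.
images = torch.stack([image]).unsqueeze(0) # (1, 1, 3, 384, 384)

assert torch.equal(single, images)
```

With more than one image per study, `torch.stack([...]).unsqueeze(0)` produces (1, k, C, H, W), which the old two-unsqueeze idiom cannot express.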