---
license: mit
pipeline_tag: image-classification
tags:
- medical
---
## Model Details
This model classifies 224x224 grayscale CT-scan images that have been converted to JPEG.
It is a fine-tuned version of the
[Swin Transformer (tiny-sized model)](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224),
trained following the Hugging Face
[image classification notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb#scrollTo=UX6dwmT7GP91).
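As a rough illustration of that setup, the sketch below fine-tunes the same checkpoint on a folder of CT-scan JPEGs. The data directory, label names, and training hyperparameters are placeholders, not the exact configuration used for this model.
```python
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/swin-tiny-patch4-window7-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)

# Placeholder data directory: JPEGs arranged in one subfolder per class.
dataset = load_dataset("imagefolder", data_dir="path/to/ct_jpegs")
labels = dataset["train"].features["label"].names

def transform(batch):
    # Grayscale JPEGs are converted to RGB to match the 3-channel pretrained weights.
    images = [img.convert("RGB") for img in batch["image"]]
    batch["pixel_values"] = processor(images=images, return_tensors="pt")["pixel_values"]
    return batch

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the ImageNet head with a new classification head
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="swin-ct-finetune",
        per_device_train_batch_size=32,
        num_train_epochs=3,
        remove_unused_columns=False,  # keep the "image" column for the transform
    ),
    train_dataset=dataset["train"],
    data_collator=collate_fn,
)
trainer.train()
```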
## Uses
The model classifies JPEG images of CT scans as either cancer-positive or
cancer-negative.
The same fine-tuning recipe should also transfer reasonably well to other image classification tasks.
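For quick experimentation, the model can also be used through the `pipeline` API; the image path below is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="oohtmeel/swin-tiny-patch4-finetuned-lung-cancer-ct-scans",
)

# Placeholder path to a CT slice that has been exported as a JPEG.
print(classifier("ct_slice.jpg"))
# Returns a list of {"label": ..., "score": ...} dicts, one per class.
```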
## Training Data
The model was trained on data originally obtained from the National Cancer Institute
[Imaging Data Commons](https://portal.imaging.datacommons.cancer.gov/explore/).
The dataset consisted of roughly 11,000 CT-scan slices converted to JPEG,
some containing cancerous nodules and some not.
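The exact DICOM-to-JPEG conversion used to prepare these images is not documented here; a typical conversion with `pydicom` and Pillow might look like the following sketch (file paths are placeholders):
```python
import numpy as np
import pydicom
from PIL import Image

# Placeholder input path; the actual conversion pipeline for this model is not documented.
ds = pydicom.dcmread("ct_slice.dcm")
pixels = ds.pixel_array.astype(np.float32)

# Scale raw CT intensities into the 0-255 range and save as an 8-bit grayscale JPEG.
pixels = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8) * 255.0
Image.fromarray(pixels.astype(np.uint8)).save("ct_slice.jpg")
```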
## How to Use
```python
from huggingface_hub import hf_hub_download
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Download an example CT-scan JPEG from this repository.
image_path = hf_hub_download(
    repo_id="oohtmeel/swin-tiny-patch4-finetuned-lung-cancer-ct-scans",
    filename="_X000a109d-56da-4c3f-8680-55afa04d6ae0.dcm.jpg.jpg",
)
image = Image.open(image_path)

# Load the image processor and the fine-tuned classifier.
processor = AutoImageProcessor.from_pretrained("oohtmeel/swin-tiny-patch4-finetuned-lung-cancer-ct-scans")
model = AutoModelForImageClassification.from_pretrained("oohtmeel/swin-tiny-patch4-finetuned-lung-cancer-ct-scans")

# Preprocess the image, run a forward pass, and report the predicted label.
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
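Optionally, the logits can be converted to class probabilities with a softmax (this post-processing step is not part of the original example):
```python
import torch

probabilities = torch.softmax(logits, dim=-1)[0]
for idx, prob in enumerate(probabilities.tolist()):
    print(f"{model.config.id2label[idx]}: {prob:.3f}")
```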