VRAM needed for FP16: 15.9 GB
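As a back-of-envelope check of that figure (my own sketch, not from the repo): FP16 stores 2 bytes per parameter, so the raw weights of a 4.3B-parameter model come to roughly 8 GiB, and the rest of the quoted 15.9 GB is presumably activations, the vision encoder's buffers, and runtime overhead.

# Rough FP16 weight-memory estimate (illustrative only; the 15.9 GB
# total on the card presumably includes activations and overhead too).
params = 4.3e9                     # parameter count reported for this model
weight_gib = params * 2 / 1024**3  # 2 bytes per FP16 parameter
print(f"~{weight_gib:.1f} GiB for weights alone")  # prints ~8.0 GiB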

This is a pre-alpha proof of concept (POC). To run inference with this model, follow the instructions below.

Instructions:

Clone the repository:

git clone https://github.com/SicariusSicariiStuff/X-Ray_Vision.git

Set up a venv (tested with Python 3.11, probably works with 3.10 too):

python3.11 -m venv env
source env/bin/activate

Install dependencies:

pip install git+https://github.com/huggingface/[email protected]
pip install torch
pip install pillow
pip install accelerate
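Optionally, a quick sanity check (my own suggestion, not from the repo) that the pinned transformers build and a CUDA-capable torch actually landed in the venv:

# check_env.py (hypothetical helper): confirm the pinned install worked
import torch
import transformers

print(transformers.__version__)   # expect 4.42.4, matching the pin above
print(torch.cuda.is_available())  # True if a CUDA GPU is visible to torch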

Run inference:

python xRay-Vision.py /path/to/model/ /dir/with/images/

The output prints to the console, and the results are also exported to a directory named after your image directory, with the suffix "_TXT" (a rough sketch of such a script follows the example below).

So if you run:

python xRay-Vision.py /some_path/x-Ray_model/ /home/images/weird_cats/

Then results will be exported to:

/home/images/weird_cats_TXT/
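For orientation, here is a minimal sketch of what a script like this typically does. This is an illustration only, not the actual xRay-Vision.py; the AutoProcessor/AutoModelForVision2Seq classes, the prompt text, and the generation settings are all assumptions on my part.

# Illustrative sketch only -- NOT the repo's xRay-Vision.py.
# Assumes the model loads through transformers' Auto classes and that
# the image directory contains only image files.
import sys
from pathlib import Path

import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_dir, image_dir = sys.argv[1], sys.argv[2]

processor = AutoProcessor.from_pretrained(model_dir)
model = AutoModelForVision2Seq.from_pretrained(
    model_dir, torch_dtype=torch.float16, device_map="auto"
)

# Mirror the documented output convention: <image_dir>_TXT next to the input dir
out_dir = Path(str(Path(image_dir)) + "_TXT")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(Path(image_dir).iterdir()):
    image = Image.open(img_path).convert("RGB")
    inputs = processor(
        images=image, text="Describe this image.", return_tensors="pt"
    ).to(model.device)
    ids = model.generate(**inputs, max_new_tokens=256)
    caption = processor.batch_decode(ids, skip_special_tokens=True)[0]
    print(f"{img_path.name}: {caption}")             # print to console
    (out_dir / f"{img_path.stem}.txt").write_text(caption)  # export to _TXT dir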
Model size: 4.3B parameters (BF16, Safetensors)