Issue with image processing

#1 opened by amadeus1981

Hello,
I'm trying to run the example code you give on GitHub:

git clone [email protected]:AIMedLab/PULSE.git

cd PULSE/LLaVA

conda create -n pulse-llava python=3.10 -y

conda activate pulse-llava

pip install -e ".[train]"

pip install flash-attn --no-build-isolation

After following the steps and downloading the safetensors for this model, I run:
python llava/eval/run_llava.py --model-path "PULSE-ECG/PULSE-7B" --image-file "images/ecg_example.png" --query "What are the main features in this ECG image?" --conv-mode "llava_v1"

However, the error persists at the call image = process_anyres_image(image, image_processor, model_cfg.image_grid_pinpoints); I checked, and image_processor is None.
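
For reference, here is a minimal sketch of how I checked this outside run_llava.py (it uses the repo's get_model_name_from_path and load_pretrained_model helpers; the exact call shown here is my assumption):

from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

model_path = "PULSE-ECG/PULSE-7B"
model_name = get_model_name_from_path(model_path)
print(model_name)  # "PULSE-7B" -- does not contain "llava"

# builder.load_pretrained_model returns (tokenizer, model, image_processor, context_len)
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, model_name
)
print(image_processor)  # None -- this is what later breaks process_anyres_image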

Can you please advise?

PULSE-ECG org

Could you provide the complete error message?

Here is the complete error message:

Some weights of the model checkpoint at PULSE-ECG/PULSE-7B were not used when initializing LlavaLlamaForCausalLM: ['model.image_newline']
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
  File "/home//PULSE/LLaVA/llava/eval/run_llava.py", line 148, in <module>
    eval_model(args)
  File "/home//PULSE/LLaVA/llava/eval/run_llava.py", line 105, in eval_model
    images_tensor = process_images(
  File "/home/****/PULSE/LLaVA/llava/mm_utils.py", line 178, in process_images
    image = process_anyres_image(image, image_processor, model_cfg.image_grid_pinpoints)
  File "/home/***/PULSE/LLaVA/llava/mm_utils.py", line 140, in process_anyres_image
    patches = divide_to_patches(image_padded, processor.crop_size['height'])
AttributeError: 'NoneType' object has no attribute 'crop_size'

I think the issue comes from the load_pretrained_model function in builder.py:

image_processor = None


if 'llava' in model_name.lower():
    mm_use_im_start_end = getattr(model.config, "mm_use_im_start_end", False)
    mm_use_im_patch_token = getattr(model.config, "mm_use_im_patch_token", True)
    if mm_use_im_patch_token:
        tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True)
    if mm_use_im_start_end:
        tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True)
    model.resize_token_embeddings(len(tokenizer))

    vision_tower = model.get_vision_tower()
    if not vision_tower.is_loaded:
        vision_tower.load_model(device_map=device_map)
    if device_map != 'auto':
        vision_tower.to(device=device_map, dtype=torch.float16)
    image_processor = vision_tower.image_processor

Running the command suggested on GitHub:

python llava/eval/run_llava.py --model-path "PULSE-ECG/PULSE-7B" --image-file "images/ecg_example.png" --query "What are the main features in this ECG image?" --conv-mode "llava_v1"

means model_name.lower() is "pulse-7b", so the branch above is skipped and the returned image_processor stays None.
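
Until the repo is updated, a minimal workaround sketch is to load the vision tower manually and take its image_processor, mirroring the skipped 'llava' branch of load_pretrained_model (the device and dtype choices below are my assumptions):

import torch

from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

model_path = "PULSE-ECG/PULSE-7B"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, get_model_name_from_path(model_path)
)

if image_processor is None:
    # replicate the branch that is skipped because "llava" is not in the model name
    vision_tower = model.get_vision_tower()
    if not vision_tower.is_loaded:
        vision_tower.load_model()
    vision_tower.to(device="cuda", dtype=torch.float16)
    image_processor = vision_tower.image_processor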

PULSE-ECG org

Yes, you're right. Changing if 'llava' in model_name.lower(): to if 'llava' in model_name.lower() or 'pulse' in model_name.lower(): will work. This hasn't been updated in the GitHub codebase yet; I'll check and make sure to update it there.
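
For anyone patching locally before the GitHub repo is updated, the changed check in llava/model/builder.py (inside load_pretrained_model) would simply be:

if 'llava' in model_name.lower() or 'pulse' in model_name.lower():
    mm_use_im_start_end = getattr(model.config, "mm_use_im_start_end", False)
    # ... rest of the branch stays unchanged ...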

PULSE-ECG org

Thank you for pointing this out. I have updated the code: https://github.com/AIMedLab/PULSE/commit/836f864e306012558c7d2611cc85f431be6785b5

No problem!!

Just wanted to ask: it seems that the demo page (https://huggingface.co/spaces/paralym/PULSE-7B) isn't working.

PULSE-ECG org

Thanks for the reminder. It seems that the issue is due to the expiration of my Plus subscription. We’ll try applying for free ZeroGPU support.
