It does not run locally with the latest vLLM and transformers:

#6
by surak - opened
2024-10-17 16:40:41 | ERROR | stderr |     return get_tokenizer_group(parallel_config.tokenizer_pool_config,
2024-10-17 16:40:41 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/__init__.py", line 49, in get_tokenizer_group
2024-10-17 16:40:41 | ERROR | stderr |     return tokenizer_cls.from_config(tokenizer_pool_config, **init_kwargs)
2024-10-17 16:40:41 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 30, in from_config
2024-10-17 16:40:41 | ERROR | stderr |     return cls(**init_kwargs)
2024-10-17 16:40:41 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in __init__
2024-10-17 16:40:41 | ERROR | stderr |     self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
2024-10-17 16:40:41 | ERROR | stderr |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/vllm/transformers_utils/tokenizer.py", line 160, in get_tokenizer
2024-10-17 16:40:41 | ERROR | stderr |     raise e
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/vllm/transformers_utils/tokenizer.py", line 139, in get_tokenizer
2024-10-17 16:40:41 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(
2024-10-17 16:40:41 | ERROR | stderr |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 865, in from_pretrained
2024-10-17 16:40:41 | ERROR | stderr |     config = AutoConfig.from_pretrained(
2024-10-17 16:40:41 | ERROR | stderr |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-10-17 16:40:41 | ERROR | stderr |   File "/p/haicluster/llama/FastChat/sc_venv_sglang/venv/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1040, in from_pretrained
2024-10-17 16:40:41 | ERROR | stderr |     raise ValueError(

 ValueError: Unrecognized model in models/Ministral-8B-Instruct-2410. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, idefics3, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, pix2struct, pixtral, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zoedepth
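
For context: this happens because the checkpoint is Mistral's native distribution, which ships params.json, consolidated safetensors, and tekken.json instead of an HF-style config.json, so transformers' AutoConfig has nothing to detect the model type from. A quick check, assuming that layout:

import pathlib

ckpt = pathlib.Path("models/Ministral-8B-Instruct-2410")
# A Mistral-native repo lists params.json / consolidated.safetensors / tekken.json;
# AutoConfig.from_pretrained() only understands a config.json with a model_type key.
print(sorted(f.name for f in ckpt.iterdir()))
print("has config.json:", (ckpt / "config.json").exists())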

I have the same problem when running on vLLM:

ValueError: Unrecognized model in /data/Ministral-8B-Instruct-2410. Should have a model_type key in its config.json, or contain one of the following strings in its name:

Maybe you haven't passed all the required options at vLLM startup:
--config-format mistral --load-format mistral --tokenizer-mode mistral
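
For example, a complete serve command would look like this (a sketch, assuming a local download under models/):

vllm serve models/Ministral-8B-Instruct-2410 \
    --config-format mistral \
    --load-format mistral \
    --tokenizer-mode mistral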

I agree with @thies, and I solved this problem by adding these parameters when loading the LLM in my script:

from vllm import LLM

model = LLM(
    model=model_name,                          # local path or HF id of Ministral-8B-Instruct-2410
    tensor_parallel_size=num_gpus,
    trust_remote_code=True,
    download_dir=hf_cache_path,
    max_num_batched_tokens=max_prompt_length,
    max_model_len=8192,
    tokenizer_mode="mistral",                  # use Mistral's native tokenizer (tekken.json)
    config_format="mistral",                   # read params.json instead of config.json
    load_format="mistral",                     # load the consolidated safetensors weights
)
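
With those parameters the engine loads, and generation works as usual; a minimal usage sketch (the prompt is made up):

from vllm import SamplingParams

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = model.generate(["Explain tensor parallelism in one paragraph."], sampling)
print(outputs[0].outputs[0].text)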

OK, I have the vllm_worker from FastChat working on an NVIDIA RTX 3090 with 24 GB. I had to add the following parameters:

--config-format mistral 
--load-format mistral 
--tokenizer-mode mistral
--max-model-len 19312
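
Put together, the launch looks roughly like this (a sketch; --model-path is FastChat's usual flag for the vllm_worker, which forwards the remaining engine args to vLLM):

python3 -m fastchat.serve.vllm_worker \
    --model-path models/Ministral-8B-Instruct-2410 \
    --config-format mistral \
    --load-format mistral \
    --tokenizer-mode mistral \
    --max-model-len 19312
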
surak changed discussion status to closed
