This is broken with Unsloth stable (and the unstable build is broken anyway)

#5
by Ransom - opened

Run the CLI:

python3 unsloth-cli-v2.py \
    --model_name "unsloth/Llama-3.3-70B-Instruct-GGUF" \
    --load_in_4bit \
    --dataset "./qa_Wodehouse_unsloth_conversion.jsonl" \
    --output_dir "./wodehouse_finetune_output" \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --learning_rate 2e-4 \
    --max_steps 400 \
    --save_model \
    --save_gguf \
    --quantization "q4_k_m"

You get:
root@84e46c931c26:/workspace# python3 unsloth-cli-v2.py \
    --model_name "unsloth/Llama-3.3-70B-Instruct-GGUF" \
    --load_in_4bit \
    --dataset "./qa_Wodehouse_unsloth_conversion.jsonl" \
    --output_dir "./wodehouse_finetune_output" \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --learning_rate 2e-4 \
    --max_steps 400 \
    --save_model \
    --save_gguf \
    --quantization "q4_k_m"
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Traceback (most recent call last):
  File "/workspace/unsloth-cli-v2.py", line 235, in <module>
    run(args)
  File "/workspace/unsloth-cli-v2.py", line 48, in run
    model, tokenizer = FastLanguageModel.from_pretrained(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/unsloth/models/loader.py", line 211, in from_pretrained
    raise RuntimeError(autoconfig_error or peft_error)
RuntimeError: unsloth/Llama-3.3-70B-Instruct-GGUF does not appear to have a file named config.json. Checkout 'https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/tree/main' for available files.
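The error itself points at the cause: the `-GGUF` repo contains only quantized `.gguf` files and no `config.json`, and `FastLanguageModel.from_pretrained` needs a transformers-format checkpoint to fine-tune. A likely workaround (the exact repo ID is an assumption here; check the Unsloth organization page on Hugging Face for the correct name) is to point `--model_name` at the corresponding 4-bit bitsandbytes repo and keep `--save_gguf` so the GGUF export still happens after training:

```shell
# Assumed fix: load a transformers-format 4-bit checkpoint instead of the
# GGUF repo (repo name "unsloth/Llama-3.3-70B-Instruct-bnb-4bit" is an
# assumption; verify it exists before running).
python3 unsloth-cli-v2.py \
    --model_name "unsloth/Llama-3.3-70B-Instruct-bnb-4bit" \
    --load_in_4bit \
    --dataset "./qa_Wodehouse_unsloth_conversion.jsonl" \
    --output_dir "./wodehouse_finetune_output" \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --learning_rate 2e-4 \
    --max_steps 400 \
    --save_model \
    --save_gguf \
    --quantization "q4_k_m"
```

GGUF repos are export targets for llama.cpp-style inference, not training checkpoints, which is why the loader looks for `config.json` and fails.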
