This model is a fine-tuned version of the ChatGLM3 base model, trained on the Stanford Alpaca Dataset. The fine-tuning uses the scripts and files located in the ChatGLM3/finetune_basemodel_demo directory.
Steps to reproduce fine-tuning:
- Download the alpaca_data.json file from the Stanford Alpaca Dataset repository (https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json).
- Convert alpaca_data.json to alpaca_data.jsonl format using the format_alpaca2jsonl.py script in the ChatGLM3/finetune_basemodel_demo/scripts directory. Ensure the input and output paths are correctly specified.
- Execute the finetune_lora.sh script within the ChatGLM3/finetune_basemodel_demo/scripts directory. Make sure to set the DATASET_PATH variable to the location of your formatted dataset.
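The conversion in the second step can be sketched roughly as follows. This is a minimal illustration, not the actual format_alpaca2jsonl.py script; the target field names ("prompt"/"response") and the way instruction and input are joined are assumptions, so check the real script for the exact schema ChatGLM3 expects.

```python
import json


def alpaca_to_jsonl(input_path: str, output_path: str) -> None:
    """Convert the Alpaca JSON array into one JSON object per line (JSONL).

    NOTE: the output field names "prompt" and "response" are an assumption
    for illustration; the real format_alpaca2jsonl.py defines the schema.
    """
    with open(input_path, encoding="utf-8") as f:
        # Alpaca data is a list of {"instruction", "input", "output"} dicts.
        records = json.load(f)

    with open(output_path, "w", encoding="utf-8") as f:
        for rec in records:
            prompt = rec["instruction"]
            if rec.get("input"):
                # Append the optional input field below the instruction.
                prompt += "\n" + rec["input"]
            line = {"prompt": prompt, "response": rec["output"]}
            f.write(json.dumps(line, ensure_ascii=False) + "\n")


# Usage (paths are placeholders):
# alpaca_to_jsonl("alpaca_data.json", "alpaca_data.jsonl")
```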
Please adhere to the licensing agreements of the Stanford Alpaca Dataset when using this model.