---
library_name: peft
base_model: unsloth/gemma-1.1-7b-it-bnb-4bit
datasets:
- naklecha/minecraft-question-answer-700k
---
|
# Gemma 1.1 7B Instruct Minecraft Adapter Model |
|
## Updated Version |
|
This model is fine-tuned from [Unsloth's Gemma 1.1 7B Instruct quantized model](https://huggingface.co/unsloth/gemma-1.1-7b-it-bnb-4bit) with [naklecha's Minecraft Question-Answer dataset](https://huggingface.co/datasets/naklecha/minecraft-question-answer-700k). |
|
Fine-tuning used the first 100k rows of the dataset for 1 epoch and took around 2 hours 20 minutes on an NVIDIA RTX 4090.
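The data selection described above (first 100k rows of the dataset) can be reproduced with the `datasets` library. This is a sketch of the loading step only; the exact preprocessing and prompt formatting used for fine-tuning are not documented in this card:

```python
from datasets import load_dataset

# Load only the first 100k rows of the training split,
# matching the subset used for fine-tuning.
train_data = load_dataset(
    "naklecha/minecraft-question-answer-700k",
    split="train[:100000]",
)

print(train_data)
```

The `train[:100000]` split slicing syntax lets you download and use just the subset without materializing all 700k rows in memory.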
|
|
|
The model can now generate some good answers, but it sometimes produces inappropriate ones. I believe this issue stems from a lack of training data.
|
|
|
## Important Notes |
|
- The model sometimes generates meaningless answers. I am currently investigating this; the process may take a while since I am a beginner in this field. If you have any suggestions, feel free to share them on the model's Community page.
|
- The model uses bitsandbytes quantization, so run it on a CUDA-capable GPU.
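A minimal inference sketch with `transformers` and `peft` follows. The adapter repository ID below is a placeholder (this card does not state the repo name); replace it with the actual path of this adapter. A CUDA GPU is required because the base model is quantized with bitsandbytes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/gemma-1.1-7b-it-bnb-4bit"
ADAPTER_ID = "your-username/your-adapter-repo"  # placeholder: replace with this adapter's repo ID

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# The base checkpoint is already 4-bit (bitsandbytes), so no extra
# quantization config is needed here; device_map places it on the GPU.
model = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, ADAPTER_ID)

prompt = "How do I craft a torch in Minecraft?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For chat-style prompting you may get better results by wrapping the question with `tokenizer.apply_chat_template`, since the base model is instruction-tuned.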