---
language:
- en
library_name: peft
base_model: unsloth/gemma-1.1-7b-it-bnb-4bit
datasets:
- naklecha/minecraft-question-answer-700k
pipeline_tag: text-generation
---
# Gemma 1.1 7B Minecraft QA Model
This is a fine-tuned version of Unsloth's 4-bit Gemma 1.1 7B Instruct model, trained on [naklecha/minecraft-question-answer-700k](https://huggingface.co/datasets/naklecha/minecraft-question-answer-700k) to improve its understanding and generation of Minecraft-related content.
The resulting model is better equipped to provide detailed information and support for Minecraft players and developers.
## Model Info
The model was fine-tuned on an NVIDIA A100 for 100 steps (about 5 minutes) using TRL with QLoRA on top of the 4-bit quantized base model; a sketch of the setup is shown below.
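The following is a minimal sketch of that setup using Unsloth and TRL. The LoRA rank, learning rate, batch sizes, prompt format, and dataset column names (`question`/`answer`) are illustrative assumptions rather than the exact values used, and argument names may vary slightly across TRL versions.

```python
# Illustrative QLoRA fine-tuning sketch (Unsloth + TRL); hyperparameters and
# the prompt format are assumptions, not the exact training configuration.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-1.1-7b-it-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Turn each QA pair into a single training string (column names assumed).
dataset = load_dataset("naklecha/minecraft-question-answer-700k", split="train")
dataset = dataset.map(
    lambda ex: {"text": f"Question: {ex['question']}\nAnswer: {ex['answer']}"}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,          # matches the 100 training steps noted above
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```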
Note that because the base model is 4-bit quantized with bitsandbytes, a CUDA-capable GPU is required to run it.
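Below is a minimal inference sketch that loads the 4-bit base model and applies this adapter on top of it with PEFT. The adapter repo id is a placeholder; replace it with this model's actual Hugging Face repo id.

```python
# Inference sketch: load the 4-bit base model, then apply the LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-1.1-7b-it-bnb-4bit"
adapter_id = "<this-repo-id>"  # placeholder: the Hugging Face id of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs a CUDA GPU
model = PeftModel.from_pretrained(model, adapter_id)

# Build a Gemma chat prompt and generate an answer.
messages = [{"role": "user", "content": "How do I craft a piston in Minecraft?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```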