---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- theprint/VanRossum-Alpaca
---
# Homage to Python
This model has been trained for **1 epoch** on the VanRossum dataset.
The VanRossum dataset is all Python! I used [DataMix](https://github.com/theprint/DataMix) to combine a handful of highly rated Python-centric datasets, sampling from each to create something new.
This dataset has **80,000 entries** and is named after [**Guido van Rossum**](https://en.wikipedia.org/wiki/Guido_van_Rossum), who created Python and first released it in 1991.
See the [VanRossum Collection](https://huggingface.co/collections/theprint/vanrossum-67363abb2d3459644d7fd102) on HF for all things related to this dataset.
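To try the model, here is a minimal inference sketch using Hugging Face Transformers. The repo id shown is a placeholder, not necessarily this model's actual id on the Hub:

```python
# Minimal inference sketch with Hugging Face Transformers.
# "theprint/Homage-to-Python" is a hypothetical repo id; replace it with
# this model's actual id on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theprint/Homage-to-Python"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```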
## Alpaca / GPT
There are two versions of this dataset available on Hugging Face, differing only in record layout (a sketch of each layout follows the list):
- [VanRossum-GPT](https://huggingface.co/datasets/theprint/VanRossum-GPT)
- [VanRossum-Alpaca](https://huggingface.co/datasets/theprint/VanRossum-Alpaca)
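Roughly, the two layouts look like this. The field names below follow the common community conventions for Alpaca and ShareGPT-style data; they are assumptions here, not verified against these exact files:

```python
# Conventional record layouts for the two dataset variants. Field names
# follow common community conventions and are assumptions, not verified
# against these exact datasets.
alpaca_entry = {
    "instruction": "Write a function that checks whether a number is prime.",
    "input": "",  # optional extra context; often empty
    "output": "def is_prime(n):\n    ...",
}

gpt_entry = {  # ShareGPT-style multi-turn layout
    "conversations": [
        {"from": "human", "value": "Write a function that checks whether a number is prime."},
        {"from": "gpt", "value": "def is_prime(n):\n    ..."},
    ],
}
```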
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
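For reference, a fine-tune along these lines would look roughly like the sketch below. This is a minimal Unsloth + TRL outline, not the exact script used for this model: the hyperparameters are illustrative, and the `text` column is assumed to be pre-formatted from the Alpaca fields.

```python
# Illustrative SFT sketch with Unsloth + TRL (not the exact script or
# hyperparameters used for this model).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters; r and alpha are typical notebook defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("theprint/VanRossum-Alpaca", split="train")
# Assumes a "text" column; Alpaca-style records usually need a
# prompt-formatting step to produce one before training.

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the card states one epoch
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```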