---
base_model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit
language: ['en', 'es']
license: apache-2.0
tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
library_name: adapter-transformers
---
# Uploaded model
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
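For a quick start, here is a minimal inference sketch using the standard `transformers` API. The repo id below is taken from the `base_model` field in the metadata above and is an assumption; replace it with this model's actual Hub id (the GGUF files are intended for llama.cpp-compatible runtimes instead).

```python
# Minimal inference sketch (assumption: the repo id matches this model's Hub id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```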
## Benchmark Results
This model has been fine-tuned for Python coding assistance. Key statistics:
- **Model size:** 3,821,079,552 parameters
- **Required memory:** 14.23 GB
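The required-memory figure appears to correspond to the parameter count at 4 bytes per parameter (fp32 weights); this is an inference on my part, but the arithmetic checks out:

```python
# Sanity check: 14.23 GB matches parameter_count * 4 bytes (fp32), expressed in GiB.
params = 3_821_079_552
bytes_per_param = 4          # assumption: fp32 weights; use 2 for fp16/bf16
gib = params * bytes_per_param / 1024**3
print(f"{gib:.2f} GiB")      # -> 14.23 GiB
```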
For more details, visit my [GitHub](https://github.com/Agnuxo1).
Thanks for your interest in this model!