---
base_model: Meta/tiny-llama
language: ['en', 'es']
license: apache-2.0
tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
library_name: adapter-transformers
metrics: 
- accuracy
- bertscore
- glue
- perplexity
---

# Uploaded model

[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
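
A minimal loading sketch using the `transformers` API (the repo id below is a placeholder, not this model's actual Hugging Face id; substitute the real one):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Agnuxo/your-model-name"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```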

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
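
For reference, a rough sketch of the usual Unsloth + TRL fine-tuning recipe. The hyperparameters and the dataset column name are illustrative assumptions, not the values actually used here, and exact `SFTTrainer` arguments vary across TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit and attach LoRA adapters via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# One of the training datasets listed in the metadata above.
dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="prompt",  # assumed column name; adjust to the dataset's schema
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
    ),
)
trainer.train()
```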


## Benchmark Results

This model has been fine-tuned for code-generation tasks and evaluated on the following benchmarks (scores are not yet available):

### Accuracy
**Accuracy:** Not Available

![Accuracy](./accuracy_accuracy.png)

### BERTScore
**BERTScore:** Not Available

![BERTScore](./bertscore_bertscore.png)

### GLUE
**GLUE:** Not Available

![GLUE](./glue_glue.png)

### Perplexity
**Perplexity:** Not Available

![Perplexity](./perplexity_perplexity.png)
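
While the scores above are pending, perplexity can be estimated as the exponential of the mean token-level cross-entropy. A minimal sketch (the repo id is again a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Agnuxo/your-model-name"  # placeholder: replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "def reverse(s):\n    return s[::-1]"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # With labels supplied, the model returns the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity = {torch.exp(loss).item():.2f}")
```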


- **Model size:** 4,124,864 parameters
- **Required memory:** 0.02 GB
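
The memory figure is consistent with a back-of-the-envelope estimate of 4 bytes per parameter (fp32 weights only, ignoring activations and KV cache):

```python
# 4,124,864 parameters * 4 bytes (fp32) ~= 0.02 GB of weights.
params = 4_124_864
bytes_per_param = 4
print(f"{params * bytes_per_param / 1e9:.2f} GB")  # -> 0.02 GB
```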

For more details, visit my [GitHub](https://github.com/Agnuxo1).

Thanks for your interest in this model!