---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- safetensors
- pytorch
- coding
- finetuned
datasets:
- Replete-AI/code_bagel
---
![a-captivating-and-surreal-image-of-a-goat-with-wil-XIKQzKvDRjihmI3sKK7IoA-0fxZxe6tRAKlXRRc4S9EPA.jpeg](https://cdn-uploads.huggingface.co/production/uploads/650b3f38b5b6029d37298ae8/Mlfr8UeQmKMt8gtgDh_aP.jpeg)


# INTRO

We are happy to announce our first coding model.

# Model Card

This is a fine-tuned version of [llama3.1](https://llama.meta.com) built to perform well on coding tasks.



- **Developed by:** [kshabana4ai](https://huggingface.co/kshabana-ai)
- **Funded by:** no one
- **Shared by:** kshabana
- **Model type:** safetensors and GGUF
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [llama3.1-instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)


#### Installation

Download [Ollama](https://ollama.com/download), then pull and run the GGUF build of the model:

```bash
ollama run hf.co/kshabana/GOAT-coder-llama3.1-8b:Q4_K_M
```
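Once the model is pulled, you can also query it through Ollama's local HTTP API instead of the interactive CLI. The snippet below is a small Python smoke test, assuming Ollama is running on its default port (11434) and that the `requests` package is installed (`pip install requests`).

```python
# Smoke test against the local Ollama server (default port 11434).
# Assumes the model was already pulled with the `ollama run` command above.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/kshabana/GOAT-coder-llama3.1-8b:Q4_K_M",
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```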

### Model Sources

- **Base model repository:** https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- **Dataset repository:** https://huggingface.co/datasets/Replete-AI/code_bagel

## Uses

This model is fine-tuned specifically for coding and has a 131,072-token context length.
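For the safetensors/PyTorch weights, a standard `transformers` setup should work. The sketch below is illustrative only: the repository id is inferred from the Ollama command above and may differ for the safetensors upload, and the dtype, device, and generation settings are example choices, not recommendations from the author.

```python
# Minimal sketch: loading the safetensors weights with transformers for code generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kshabana/GOAT-coder-llama3.1-8b"  # assumed repo id, taken from the Ollama path above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```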



## Bias, Risks, and Limitations

The model can sometimes produce incorrect answers, so review generated code before using it.



### Recommendations

NOTE: You need [LM Studio](https://lmstudio.ai/) or [Ollama](https://ollama.com) installed to run this model locally.

IN LM STUDIO: It is recommended to use this model with the default LM Studio configuration.

IN OLLAMA: We will be pushing the model to the Ollama library soon; until then, pull it from Hugging Face with the command in the Installation section.



## Training Details

The model was trained with [Unsloth](https://github.com/unslothai/unsloth).

### Training Data

The training dataset used: https://huggingface.co/datasets/Replete-AI/code_bagel. A sketch of a typical training setup is shown below.
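The exact training configuration has not been published. The snippet below is a minimal sketch of a typical Unsloth + TRL SFT run on code_bagel, using the 4-bit base model listed in the card metadata; the LoRA settings, hyperparameters, prompt format, and dataset column names are all illustrative assumptions, not the values actually used for this model.

```python
# Illustrative Unsloth + TRL SFT sketch (not the published training recipe).
import torch
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 4096  # example value; the model itself supports up to 131,072 tokens

# Base model from the card metadata, loaded in 4-bit for QLoRA-style finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Replete-AI/code_bagel", split="train")

def to_text(example):
    # NOTE: the column names below are assumptions; check the dataset card for the real schema.
    return {"text": f"### Instruction:\n{example['input']}\n\n### Response:\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # older trl API; newer versions move this into SFTConfig
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()
```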

## IMPORTANT LINKS

- Ollama: https://ollama.com
- LM Studio: https://lmstudio.ai
- llama3.1 Instruct: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
- Dataset (code_bagel): https://huggingface.co/datasets/Replete-AI/code_bagel
 



This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

kshabana4ai

[<img src="https://cdn-uploads.huggingface.co/production/uploads/650b3f38b5b6029d37298ae8/1kvHhWQGdBqgUuwkUw0L1.png" width="200"/>](https://huggingface.co/kshabana-ai)