---
base_model: mwitiderrick/open_llama_3b_code_instruct_0.1
datasets:
- mwitiderrick/AlpacaCode
inference: true
model_type: llama
prompt_template: |
  <s>[INST]
  {prompt}
  [/INST]
created_by: mwitiderrick
tags:
- transformers
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: mwitiderrick/open_llama_3b_instruct_v_0.2
  results:
  - task:
      type: text-generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: hellaswag(0-Shot)
      type: hellaswag (0-Shot)
      value: 0.6600
  - task:
      type: text-generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: winogrande(0-Shot)
      type: winogrande (0-Shot)
      value: 0.6322
  - task:
      type: text-generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: arc_challenge(0-Shot)
      type: arc_challenge (0-Shot)
      value: 0.
    source:
      name: open_llama_3b_instruct_v_0.2 model card
      url: https://huggingface.co/mwitiderrick/open_llama_3b_instruct_v_0.2
---
# OpenLLaMA Glaive: An Open Reproduction of LLaMA
This is an [OpenLLaMA Code Instruct model](https://huggingface.co/mwitiderrick/open_llama_3b_code_instruct_0.1) that has been fine-tuned for one epoch on the
[Glaive Code Assistant](https://huggingface.co/datasets/mwitiderrick/glaive-code-assistant) dataset.
## Prompt Template
```
<s>[INST] {{ user_msg }} [/INST]
```
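Filling this template is plain string formatting. As a quick illustration (the `build_prompt` helper below is hypothetical, not part of the model or library):

```python
# Hypothetical helper that wraps a user message in the [INST] template above
def build_prompt(user_msg: str) -> str:
    return f"<s>[INST] {user_msg} [/INST]"

print(build_prompt("Write a quick sort algorithm in Python"))
# <s>[INST] Write a quick sort algorithm in Python [/INST]
```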
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_glaive_assistant_v0.1")
model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_glaive_assistant_v0.1")

# Wrap the query in the prompt template and generate
query = "Write a quick sort algorithm in Python"
text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
output = text_gen(f"<s>[INST]{query}[/INST]")
print(output[0]['generated_text'])
"""
<s>[INST]Write a quick sort algorithm in Python[/INST]
Quick sort is a divide and conquer algorithm that sorts an array in-place.
It works by repeatedly dividing the array into two sub-arrays, sorting
them, and then merging them back together.
Here's a Python implementation of the quick sort algorithm:
def quick_sort(arr):
if len(arr) <= 1:
return arr
else:
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + [pivot] + quick_sort
"""
```
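Note that the sample output is cut short because `max_length=200` caps the combined length of the prompt and completion. For longer answers, one option (sketched below with illustrative, untuned settings) is to call `model.generate` directly with `max_new_tokens`:

```python
import torch

# Reuse the tokenizer and model loaded above; budget 512 fresh tokens for the answer
inputs = tokenizer(f"<s>[INST]{query}[/INST]", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```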
## Metrics
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 0|acc |0.4974|± |0.0050|
| | |none | 0|acc_norm|0.6600|± |0.0047|
| Groups |Version|Filter|n-shot| Metric | Value | |Stderr|
|----------|-------|------|-----:|-----------|-------:|---|-----:|
|truthfulqa|N/A |none | 0|bleu_max | 23.5771|± |0.5407|
| | |none | 0|bleu_acc | 0.2754|± |0.0002|
| | |none | 0|bleu_diff | -8.1019|± |0.5137|
| | |none | 0|rouge1_max | 49.5707|± |0.6501|
| | |none | 0|rouge1_acc | 0.2607|± |0.0002|
| | |none | 0|rouge1_diff| -9.8962|± |0.5492|
| | |none | 0|rouge2_max | 33.0399|± |0.8237|
| | |none | 0|rouge2_acc | 0.2313|± |0.0002|
| | |none | 0|rouge2_diff|-11.9054|± |0.7963|
| | |none | 0|rougeL_max | 46.3168|± |0.6705|
| | |none | 0|rougeL_acc | 0.2521|± |0.0002|
| | |none | 0|rougeL_diff|-10.1301|± |0.5669|
| | |none | 0|acc | 0.3191|± |0.0405|
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|----------|-------|------|-----:|------|-----:|---|-----:|
|winogrande|Yaml |none | 0|acc |0.6322|± |0.0136|
```
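These tables follow the output format of EleutherAI's lm-evaluation-harness. The card does not state the exact harness version or settings used, but assuming a recent `lm-eval` release with its `simple_evaluate` API, a zero-shot run along these lines should produce comparable numbers:

```python
# Sketch under assumptions: pip install lm-eval (EleutherAI lm-evaluation-harness);
# the exact version and settings behind the scores above are not documented in this card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mwitiderrick/open_llama_3b_glaive_assistant_v0.1",
    tasks=["hellaswag", "winogrande", "truthfulqa"],
    num_fewshot=0,
)
print(results["results"])
```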