---
library_name: transformers
tags:
- code
datasets:
- jtatman/python-code-dataset-500k
language:
- en
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
license: apache-2.0
---
- **Developed by:** [More Information Needed]
- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
#### Training Hyperparameters
The model was fine-tuned with the following command (TRL's example `sft.py` script):
```bash
python examples/scripts/sft.py \
    --model_name TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --dataset_name jtatman/python-code-dataset-500k \
    --load_in_4bit \
    --dataset_text_field text \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 1 \
    --learning_rate 2e-4 \
    --optim adamw_torch \
    --save_steps 2000 \
    --logging_steps 500 \
    --warmup_ratio 0 \
    --use_peft \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --report_to wandb \
    --num_train_epochs 1 \
    --output_dir TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1
```
Note that only 250K of the dataset's 500K examples were used for fine-tuning.
Of those, 70% were used for training and 30% for evaluation.
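The exact split code is not published; it can be reproduced roughly as follows, where the selection of the first 250K rows and the seed are assumptions for illustration:
```python
from datasets import load_dataset

# Load the source dataset (~500K examples).
dataset = load_dataset("jtatman/python-code-dataset-500k", split="train")

# Keep 250K examples (the exact selection method is an assumption).
subset = dataset.select(range(250_000))

# 70% train / 30% eval, as described above; the seed is a placeholder.
split = subset.train_test_split(test_size=0.3, seed=42)
train_ds, eval_ds = split["train"], split["test"]
```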
# Usage
```python
import torch
from transformers import pipeline

# Load the fine-tuned model; bfloat16 reduces memory use on supported hardware.
pipe = pipeline(
    "text-generation",
    model="SSK-DNB/TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
text = '''Create a program that determines whether a given year is a leap year or not.
The input is an integer Y (1000 ≤ Y ≤ 2999) representing a year, provided in a single line.
Output "YES" if the given year is a leap year, otherwise output "NO" in a single line.
A leap year is determined according to the following rules:
Rule 1: A year divisible by 4 is a leap year.
Rule 2: A year divisible by 100 is not a leap year.
Rule 3: A year divisible by 400 is a leap year.
Rule 4: If none of the above rules (Rule 1-3) apply, the year is not a leap year.
If a year satisfies multiple rules, the rule with the higher number takes precedence.
'''
texts = f"Translate the following problem statement into Python code. :\n{text}"
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": texts},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Low temperature keeps code generations nearly deterministic.
outputs = pipe(
prompt,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
repetition_penalty=1.0,
top_k=50,
top_p=1.0,
min_p=0
)
print(outputs[0]["generated_text"])
```
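For comparison, here is a straightforward reference solution to the example problem, useful for sanity-checking the model's output:
```python
# Reference solution for the leap-year example above.
Y = int(input())

# Rules 1-3 with the highest-numbered applicable rule taking precedence:
# check divisibility by 400 first, then 100, then 4.
if Y % 400 == 0:
    print("YES")
elif Y % 100 == 0:
    print("NO")
elif Y % 4 == 0:
    print("YES")
else:
    print("NO")
```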
This repository also contains GGUF-format model files; only the Q4_K_M quantization is provided.
Download the GGUF file from the repository and place it in the same directory as your script, then run the code in the next section.
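Alternatively, the download can be scripted with `huggingface_hub`; a minimal sketch, assuming the GGUF file sits at the root of this repository:
```python
from huggingface_hub import hf_hub_download

# Fetch the Q4_K_M GGUF file into the current directory.
gguf_path = hf_hub_download(
    repo_id="SSK-DNB/TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1",
    filename="TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1_Q4_K_M.gguf",
    local_dir=".",
)
print(gguf_path)
```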
# llama-cpp-python Usage
```python
from llama_cpp import Llama

# Load the Q4_K_M GGUF model; n_gpu_layers=-1 offloads all layers to the GPU.
llm = Llama(
    model_path="TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1_Q4_K_M.gguf",
    verbose=False,
    n_ctx=2000,
    n_gpu_layers=-1,
)
system_message = "You are a chatbot who can help code!"
text = '''Create a program that determines whether a given year is a leap year or not.
The input is an integer Y (1000 ≤ Y ≤ 2999) representing a year, provided in a single line.
Output "YES" if the given year is a leap year, otherwise output "NO" in a single line.
A leap year is determined according to the following rules:
Rule 1: A year divisible by 4 is a leap year.
Rule 2: A year divisible by 100 is not a leap year.
Rule 3: A year divisible by 400 is a leap year.
Rule 4: If none of the above rules (Rule 1-3) apply, the year is not a leap year.
If a year satisfies multiple rules, the rule with the higher number takes precedence.
'''
texts = f"Translate the following problem statement into Python code. :\n{text}"
prompt = f"<|system|>\n{system_message}</s>\n<|user|>\n{texts}</s>\n<|assistant|>\n"
# Sampling parameters mirror the transformers example above.
output = llm(
prompt,
stop=["</s>"],
max_tokens=512,
echo=True,
top_k=50,
top_p=1.0,
temperature=0.1,
min_p=0,
repeat_penalty=1.0,
typical_p=1.0
)
print(output["choices"][0]["text"])
```
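Note that the hand-built prompt string above reproduces TinyLlama's chat template (`<|system|>`, `<|user|>`, and `<|assistant|>` markers separated by `</s>`), so the GGUF model receives the same format that `apply_chat_template` produces in the transformers example.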