---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
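
For reference, this list corresponds to the following `BitsAndBytesConfig` from `transformers` (a sketch reconstructed from the values above, not taken verbatim from the training code):

```python
import torch
from transformers import BitsAndBytesConfig

# Training-time quantization config, reconstructed from the list above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```
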
### Framework versions

- PEFT 0.4.0
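
A matching environment can be installed with `pip`. The adapter also needs `transformers`, `accelerate`, and `bitsandbytes`; their exact versions were not recorded in this card, so pin them as appropriate:

```bash
pip install peft==0.4.0 transformers accelerate bitsandbytes
```
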
### How to Get Started with the Model

The snippet below loads the 4-bit quantized base model, attaches the adapter, and runs a simple interactive loop. `system_message` is a placeholder; replace it with your own system prompt.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from peft import PeftModel

# 4-bit NF4 quantization config, matching the settings used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# Load the quantized base model onto GPU 0
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map={"": 0},
)
model.config.use_cache = False
model.config.pretraining_tp = 1

# Attach the fine-tuned adapter on top of the base model
model = PeftModel.from_pretrained(model, "TuningAI/Llama2_13B_startup_Assistant")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# System prompt for the Llama-2 chat format (placeholder; adapt to your use case)
system_message = "You are a helpful assistant for startup founders."

# Build the generation pipeline once, capping each response at 60 new tokens
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_new_tokens=60)

while True:
    input_text = input(">>>")
    prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n {input_text}. [/INST]"
    result = pipe(prompt)
    print(result[0]["generated_text"].replace(prompt, ""))
```
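
For non-interactive use you can bypass the pipeline and call `model.generate` directly. A minimal single-prompt sketch, reusing `model` and `tokenizer` from above (the question and system message are illustrative placeholders):

```python
# Placeholder prompt pieces; substitute your own
system_message = "You are a helpful assistant for startup founders."
question = "How do I validate a startup idea?"
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n {question}. [/INST]"

# Tokenize and move the inputs to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=60)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
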