---
tags:
- text-generation
- causal-lm
- transformers
library_name: transformers
model-index:
- name: Llama-3-8B Fine-tuned
  results: []
---
|
|
|
# Fine-Tuned Llama-3-8B Model
|
|
|
This model is a fine-tuned version of `NousResearch/Meta-Llama-3-8B`, trained with LoRA adapters and 8-bit quantization.
|
|
|
## Usage

To load the model and tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ubiodee/Test_Plutus"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
```
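Since the model was fine-tuned with 8-bit quantization, you may want to load it in 8-bit as well. A minimal sketch, assuming the `bitsandbytes` package and a CUDA GPU are available (the prompt text is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ubiodee/Test_Plutus"

# Load weights in 8-bit to match the quantization used during fine-tuning
# (requires the bitsandbytes package and a CUDA-capable GPU).
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)

# Simple generation example
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`device_map="auto"` lets `accelerate` place the quantized weights on the available GPU(s) automatically.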