|
--- |
|
library_name: transformers |
|
license: mit |
|
datasets: |
|
- MattCoddity/dockerNLcommands |
|
language: |
|
- en |
|
old_version: Bruce1489/Llama3.2-docker-command |
|
--- |
|
|
|
# Model Card for Bruce1489/Llama3.2-docker-command-v2
|
|
|
|
<h3>I fine-tuned the meta-llama/Llama-3.2-1B-Instruct model using the QLoRA technique.</h3>

<h3>The resulting model generates Docker commands from natural-language descriptions.</h3>
|
|
|
- **Developed by:** Bruce1489 (Bruce Jung)

- **License:** MIT

- **Fine-tuned from model:** meta-llama/Llama-3.2-1B-Instruct
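The exact training script is not published in this card, but a QLoRA setup of the kind described above typically combines a 4-bit quantized base model (via `bitsandbytes`) with LoRA adapters (via `peft`). The sketch below is an assumption-laden illustration, not the actual recipe: the LoRA rank, alpha, dropout, and target modules are hypothetical values chosen for demonstration.

```python
# Hypothetical QLoRA setup sketch (NOT the exact configuration used to
# train this model): 4-bit NF4 quantization plus LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base model to 4-bit to fit on a small GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA matrices to the attention projections.
# All hyperparameters below are assumed, not taken from the card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Only the adapter weights are updated during training, which is what makes QLoRA feasible for a 1B-parameter model on commodity hardware.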
|
|
|
### Direct Use |
|
```python
import torch
from transformers import pipeline

# Load the fine-tuned model as a chat-style text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="Bruce1489/Llama3.2-docker-command-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "translate this sentence in docker command"},
    {"role": "user", "content": "Please show me the Docker containers that have exited and are related to the mongo image."},
]

# Sampling settings: moderate temperature with top-p/top-k filtering and a
# repetition penalty to keep the generated command short and focused.
outputs = pipe(
    messages,
    max_new_tokens=256,
    temperature=0.5,
    top_p=0.9,
    top_k=10,
    do_sample=True,
    repetition_penalty=1.2,
)

# The pipeline returns the full chat history; the last message holds the
# generated Docker command.
print(outputs[0]["generated_text"][-1]["content"])
```
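When given chat-formatted input, the pipeline returns the conversation as a list of role/content dicts, with the model's reply appended last. A small helper can pull that reply out and sanity-check it; the function and the mocked output below are illustrative assumptions, not part of the model card.

```python
# Hypothetical post-processing helper: extract the assistant's reply from
# the pipeline's chat-style output and check it looks like a docker command.
def extract_docker_command(chat_history):
    """Return the last message's content, stripped of surrounding whitespace."""
    return chat_history[-1]["content"].strip()

# Mocked pipeline output in the shape transformers returns for chat input;
# the assistant content here is an example, not a guaranteed model output.
mock_history = [
    {"role": "system", "content": "translate this sentence in docker command"},
    {"role": "user", "content": "Show exited mongo containers."},
    {"role": "assistant", "content": "docker ps -a --filter 'status=exited' --filter 'ancestor=mongo'\n"},
]

command = extract_docker_command(mock_history)
assert command.startswith("docker")
print(command)
```

In a real run you would pass `outputs[0]["generated_text"]` to the helper instead of the mocked history.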
|
|
|
|