---
base_model:
- IntervitensInc/Llama-3.2-3B-chatml
- alpindale/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- merge
- finetuned
- llama
- llama-3
license: llama3.2
inference:
parameters:
temperature: 0.2
widget:
- messages:
- role: user
content: Any plans for a weekend?
---
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101.png">
<a href="https://youtu.be/As3LGNTlPQ0?si=C8aQVt6XxF6qxU4-" title="Mr.Kitty After Dark // Jennifer Connelly Career Opportunities" target="_blank">intro music...</a>
This is...
## Llama Lo1 01
My first RP-directed finetune and merge! Not as expressive as Llama Instruct can be, and it writes simpler responses in chat. Somewhat broken... I need to learn more about chat templates! ;P
Trained on a few datasets from [jeiku](https://huggingface.co/jeiku) - thank you!
Have fun!
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101-chat1.png">
<br>
<img src="https://huggingface.co/altomek/Lo101-3B-AnD/resolve/main/Lo101-chat2.png">
### Settings
- ChatML, Alpaca, or Llama 3 Instruct chat templates should work; however, **the model has difficulty with longer responses, so set the response token limit to something like 256!**
- Set temperature below 1.
- You can easily overwhelm this AI with overly complicated or long character cards. Keep things simple! ;P
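For the ChatML option above, the prompt shape can be sketched by hand. A minimal example, assuming the conventional `<|im_start|>`/`<|im_end|>` ChatML tokens (the tokenizer config in this repo is authoritative) and sampling settings that mirror the recommendations above:

```python
def chatml_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)

# Sampling settings following the list above (parameter names are the
# common ones used by transformers-style frontends, not taken from this card):
settings = {"temperature": 0.8, "max_new_tokens": 256}

prompt = chatml_prompt([{"role": "user", "content": "Any plans for a weekend?"}])
```

In practice, `tokenizer.apply_chat_template` from `transformers` does this for you using whatever template the repo ships; the sketch just shows what a ChatML turn looks like.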