---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: augmxnt/shisa-base-7b-v1
library_name: peft
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ParallelFiction-Ja_En-100k-alpaca
---
Experimental LoRA, so it may not be very good. The dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).
The next version should be better (I'll use a GPU with more memory, since the dataset uses pretty long samples).
### Prompt format: Alpaca
```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
```
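As a rough illustration (not part of the original card), here is a minimal Python sketch of how the adapter might be loaded with 🤗 Transformers and PEFT and prompted with the Alpaca template above. The adapter id is a placeholder, and the instruction/input text is made up for the example; generation settings are illustrative, not tuned.

```python
# Hedged usage sketch: base model id comes from this card; the adapter id
# below is a PLACEHOLDER for this repo's id, and the example text is invented.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The Alpaca prompt template shown above, with three slots to fill.
alpaca_template = (
    "Below is a translation task, paired with an input that provides further "
    "context. Write a response that appropriately completes the request.\n"
    "### Instruction:\n{}\n"
    "### Input:\n{}\n"
    "### Response:\n{}"
)

base_id = "augmxnt/shisa-base-7b-v1"      # base model named in this card
adapter_id = "path/to/this-lora-adapter"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Leave the Response slot empty so the model generates the translation.
prompt = alpaca_template.format(
    "Translate this from Japanese to English.",  # illustrative instruction
    "吾輩は猫である。",                           # illustrative Japanese input
    "",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```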
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** augmxnt/shisa-base-7b-v1

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)