---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: mpasila/JP-EN-Translator-2K-steps-7B
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ParallelFiction-Ja_En-100k-alpaca
---
This is an ExLlamaV2 quantized model in 4bpw of [mpasila/JP-EN-Translator-2K-steps-7B](https://huggingface.co/mpasila/JP-EN-Translator-2K-steps-7B) using the default calibration dataset.
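
Below is a minimal loading sketch using the `exllamav2` Python API. The local model directory, sampling settings, and prompt are placeholders, and the exact API surface may differ between `exllamav2` versions.

```python
# Hedged sketch: loading this 4bpw ExLlamaV2 quant with the exllamav2 Python API.
# The model_dir path and generation settings below are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "JP-EN-Translator-2K-steps-7B-exl2-4bpw"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

prompt = (
    "### Instruction:\nTranslate this text from Japanese to English.\n\n"
    "### Input:\n吾輩は猫である。\n\n### Response:\n"
)
print(generator.generate_simple(prompt, settings, 200))
```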
# Original Model card
Experimental model, may not perform that well. Dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).
Training with an 8k context length didn't appear to improve performance much at all. I'm not sure whether I should keep training it (which is costly), fix some issues with the dataset (like entries starting with "Ch" or "Chapter"), or go back to finetuning Finnish models.
### Prompt format: Alpaca
```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
```
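
A minimal sketch of filling in the Alpaca template above is shown below. The instruction wording is an assumption (the card doesn't state the exact instruction used during training), so adjust it to your use case.

```python
# Hedged sketch: building the Alpaca-style prompt shown above.
alpaca_template = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

prompt = alpaca_template.format(
    instruction="Translate this text from Japanese to English.",  # assumed wording
    input="吾輩は猫である。名前はまだ無い。",
)
```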
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** augmxnt/shisa-base-7b-v1
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
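
For reference, a comparable Unsloth + TRL SFT setup might look like the sketch below. The hyperparameters, sequence length, LoRA settings, and dataset text field are assumptions, not the exact configuration used for this model.

```python
# Hedged sketch of an Unsloth + TRL SFT run comparable to this finetune.
# Hyperparameters and the dataset text field are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="augmxnt/shisa-base-7b-v1",
    max_seq_length=8192,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("mpasila/ParallelFiction-Ja_En-100k-alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumed field name
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        max_steps=2000,          # matches the "2K steps" in the model name
        output_dir="outputs",
    ),
)
trainer.train()
```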