---
license: llama2
datasets:
- lmg-anon/VNTL-v2-2k-small
language:
- ja
- en
pipeline_tag: translation
---

This is an experimental llama2 7B qlora made using the [VNTL-v2-2k-small](https://huggingface.co/datasets/lmg-anon/VNTL-v2-2k-small) dataset. Unlike version 0.1, the input was masked out of the loss calculation during training (see the masking sketch at the end of this card).

The objectives of this fine-tune are:
1. Teaching the model how to translate while respecting the context.
2. Teaching the model how to translate while following the character's metadata.
3. Teaching the model how to translate while respecting the translation fidelity.

I can say with certainty that objectives 1 and 2 were completed successfully. However, objective 3 wasn't, most likely because this association is hard for the model to learn. Instead, I used the fidelity classification just to exclude the low/medium-fidelity translations from affecting the loss calculation, since these translations are, most of the time, either creative translations or straight-up mismatched ones.

This is a prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female | Aliases: None
<<JAPANESE>>
【桜乃】:『……ごめん』
<<ENGLISH>> (fidelity = absolute)
【Sakuno】:『... Sorry.』
<<JAPANESE>>
【新吾】:「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```

The generated translation for that prompt, with temperature 0, is:
```
【Shingo】:「Don't worry about it. I was just glad you were lost. You're cute, so I was worried about you.」
```
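
The greedy output above (temperature 0 is equivalent to greedy decoding) can be reproduced with a sketch like the following. The base model and adapter repo ids here are assumptions, as are the generation settings; treat this as a minimal example of loading a qlora adapter with peft, not an official inference script.

```python
# Minimal inference sketch (assumed setup, not an official script):
# load the llama2 7B base, apply the qlora adapter with peft, and
# decode greedily (do_sample=False), which matches temperature 0.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"        # base model (assumption)
adapter = "lmg-anon/vntl-7b-v0.2-qlora"  # adapter repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)

prompt = """<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female | Aliases: None
<<JAPANESE>>
【桜乃】:『……ごめん』
<<ENGLISH>> (fidelity = absolute)
【Sakuno】:『... Sorry.』
<<JAPANESE>>
【新吾】:「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```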
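
Finally, regarding the input masking mentioned at the top of this card: with Hugging Face-style causal-LM training, masking tokens out of the loss is typically done by setting their labels to -100, which the cross-entropy loss ignores. The sketch below illustrates that idea under this assumption; `build_example` is a hypothetical helper, not the actual training code. The same mechanism covers the fidelity filter, since low/medium-fidelity translations can stay in the context while contributing nothing to the loss.

```python
# Illustrative sketch only (assumption: Hugging Face-style causal-LM
# training, where labels of -100 are ignored by the cross-entropy loss).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # base model (assumption)

def build_example(prompt: str, translation: str, fidelity: str):
    """Tokenize one prompt/translation pair with the prompt masked out of the loss."""
    prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    target_ids = tokenizer(translation, add_special_tokens=False).input_ids

    input_ids = prompt_ids + target_ids
    if fidelity in ("low", "medium"):
        # Low/medium-fidelity translations remain visible as context but
        # are excluded from the loss entirely, just like the input.
        labels = [-100] * len(input_ids)
    else:
        # Mask only the prompt, so the loss covers the translation alone.
        labels = [-100] * len(prompt_ids) + target_ids
    return {"input_ids": input_ids, "labels": labels}
```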