---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- Aratako/Synthetic-JP-EN-Coding-Dataset-801k
- kanhatakeyama/ramdom-to-fixed-multiturn-Calm3
- kanhatakeyama/AutoMultiTurnByCalm3-22
- ichikara-instruction-003-001-1
- ichikara-instruction-003-003-1
---
# Uploaded model
- Developed by: keiju12uh
- License: apache-2.0
- Finetuned from model: llm-jp/llm-jp-3-13b

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
# Used datasets
- Aratako/Synthetic-JP-EN-Coding-Dataset-801k: 50,000 samples (see the loading sketch after this list)
- kanhatakeyama/ramdom-to-fixed-multiturn-Calm3: 10,000 samples
- kanhatakeyama/AutoMultiTurnByCalm3-22: 50,000 samples
- ichikara-instruction-003-001-1
- ichikara-instruction-003-003-1
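
Subsets of the sizes listed above can be drawn with the `datasets` library roughly as follows. This is a minimal sketch assuming a `train` split and a fixed shuffle seed; the exact sampling, filtering, and formatting steps used for this model are not documented here.

```python
from datasets import load_dataset

# Minimal subsampling sketch. The split name and seed are assumptions,
# not the exact procedure used to build this model's training mix.
coding = load_dataset("Aratako/Synthetic-JP-EN-Coding-Dataset-801k", split="train")
coding_50k = coding.shuffle(seed=42).select(range(50_000))

multiturn = load_dataset("kanhatakeyama/ramdom-to-fixed-multiturn-Calm3", split="train")
multiturn_10k = multiturn.shuffle(seed=42).select(range(10_000))

auto_mt = load_dataset("kanhatakeyama/AutoMultiTurnByCalm3-22", split="train")
auto_mt_50k = auto_mt.shuffle(seed=42).select(range(50_000))
```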
# Sample Use
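Below is a minimal inference sketch using `transformers`. The repository id is a placeholder, and the `### 指示` / `### 回答` prompt template is an assumption based on common llm-jp-3 instruction-tuning recipes; adjust both to match this model's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with this model's actual repo name.
model_id = "keiju12uh/llm-jp-3-13b-finetune"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumed instruction/response template; match it to the training format.
instruction = "自然言語処理とは何か、簡潔に説明してください。"
prompt = f"### 指示\n{instruction}\n### 回答\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,
        repetition_penalty=1.2,
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If the released weights are a LoRA adapter rather than merged weights, load them on top of the base model with `peft.PeftModel.from_pretrained` instead.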