We fine-tuned an LLM based on meta-llama/Meta-Llama-3-8B.
We used QDoRA(256, 64) with a learning rate of 1e-5 and NEFTune noise alpha 3.
We trained for 5 epochs on our modified private dataset.
We are building an LLM for Kolon!
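The setup above can be sketched with HuggingFace Transformers and PEFT. This is a minimal sketch, not our exact training script: it assumes QDoRA(256, 64) means a DoRA adapter with rank 256 and alpha 64 on a 4-bit-quantized base, and the output path is a placeholder. The private dataset is not shown.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization of the base model (the "Q" in QDoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# DoRA adapter; assumption: QDoRA(256, 64) = rank 256, alpha 64
peft_config = LoraConfig(
    r=256,
    lora_alpha=64,
    use_dora=True,          # DoRA instead of plain LoRA
    task_type="CAUSAL_LM",
)

# Hyperparameters from the note above
training_args = TrainingArguments(
    output_dir="llama3-8b-qdora",  # placeholder path
    learning_rate=1e-5,
    num_train_epochs=5,
    neftune_noise_alpha=3,  # NEFTune adds noise to embeddings during training
)
```

From here, the base model would be loaded with `AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", quantization_config=bnb_config)` and trained with a trainer such as TRL's `SFTTrainer`, passing `peft_config` and `training_args`.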