---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
base_model: beomi/gemma-ko-7b
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6332f1a52b866de639ee0279/XXemQnrO181w0-v59NADb.jpeg)
# Gemma Ko 7B Instruct v0.50
- Eval Loss: `1.08372`
- Train Loss: `1.09816`
- lr: `1.5e-5`
- optimizer: adamw
- lr_scheduler_type: cosine
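
The sketch below shows how these hyperparameters might map onto a `transformers` `TrainingArguments` configuration; it is an illustration only, and values not listed on this card (batch size, epochs, precision) are placeholders rather than the actual training setup.

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# Values not stated on this card (batch size, epochs, precision) are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-ko-7b-instruct-v0.50",
    learning_rate=1.5e-5,           # lr from this card
    optim="adamw_torch",            # adamw optimizer
    lr_scheduler_type="cosine",     # cosine schedule
    per_device_train_batch_size=1,  # placeholder, not stated on this card
    num_train_epochs=1,             # placeholder, not stated on this card
    bf16=True,                      # placeholder precision setting
)
```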
## Model Details
### Model Description
The Gemma Ko 7B Instruct v0.50 model is designed for generating human-like text in the Korean language.
It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation.
This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** Korean, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [beomi/gemma-ko-7b](https://huggingface.co/beomi/gemma-ko-7b)
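
A minimal usage sketch with the `transformers` text-generation pipeline is shown below. The repository id is assumed from this card's title; verify the actual Hub id before running.

```python
# Minimal sketch: Korean text generation with the transformers pipeline.
# The model id below is assumed from this card's title; verify it on the Hub.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lemon-mint/gemma-ko-7b-instruct-v0.50",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "한국어로 자기소개를 해 주세요."  # "Please introduce yourself in Korean."
outputs = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```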
## Limitations and Ethical Considerations
Because Gemma Ko 7B was trained on extensive web data, biases present in the training data may be reflected in the model's outputs. It may also generate sentences containing errors or incorrect information. Therefore, do not blindly trust the model's output; treat it as a reference and review it with caution.