Model Card for kkk662/histrin-gpt-model
This is a fine-tuned GPT model specialized for technical text in Korean, focusing on railway systems and signaling mechanisms.
Model Details
Model Description
This model is fine-tuned on technical documentation related to railway systems, using a dataset that includes both text and tables extracted from PDFs. It is designed to understand and generate precise Korean technical text.
- Developed by: kkk662
- Funded by [optional]: Internal Research
- Shared by [optional]: kkk662
- Model type: GPT-Neo 1.3B
- Language(s) (NLP): Korean, English
- License: Apache 2.0
- Finetuned from model [optional]: EleutherAI/gpt-neo-1.3B
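The exact preprocessing pipeline is not published, but tables extracted from PDFs must be flattened into plain text before they can be mixed with prose in a causal-LM training corpus. A minimal sketch of one common approach, linearizing each table row into "header: value" pairs (the function name, headers, and figures below are illustrative assumptions, not the actual training data):

```python
def linearize_table(headers, rows):
    """Flatten an extracted PDF table into "header: value" text lines
    suitable for inclusion in a causal-LM training corpus."""
    lines = []
    for row in rows:
        # Pair each cell with its column header, one table row per line.
        pairs = [f"{h}: {v}" for h, v in zip(headers, row)]
        lines.append(", ".join(pairs))
    return "\n".join(lines)

# Hypothetical example: Korean column headers ("System", "Max speed").
headers = ["시스템", "최고 속도"]
rows = [["KTX", "305 km/h"], ["SRT", "300 km/h"]]
print(linearize_table(headers, rows))
# 시스템: KTX, 최고 속도: 305 km/h
# 시스템: SRT, 최고 속도: 300 km/h
```

Keeping headers attached to every cell preserves the table's meaning even after it is interleaved with surrounding prose.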
Model Sources [optional]
- Repository: https://huggingface.co/kkk662/histrin-gpt-model
- Paper [optional]: N/A
- Demo [optional]: N/A
Uses
Direct Use
This model can be directly used for generating technical descriptions, explanations, and summaries related to railway systems in Korean.
Downstream Use [optional]
The model can be fine-tuned further for domain-specific tasks, such as creating operation manuals or translating technical documents into Korean.
Out-of-Scope Use
This model is not suitable for general-purpose NLP tasks; outside the railway and signaling domain its outputs are unreliable.
Bias, Risks, and Limitations
Recommendations
Users should be aware that the model's performance is highly domain-specific. Use in non-technical or unrelated domains may produce inaccurate or nonsensical results.
How to Get Started with the Model
Use the code below to get started with the model:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "kkk662/histrin-gpt-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Korean prompt: "Explain the Korean high-speed rail system."
input_text = "한국 고속철도 시스템에 대해 설명해줘."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))