title: "Organization Profile" | |
emoji: "π" | |
colorFrom: "blue" | |
colorTo: "indigo" | |
sdk: "static" | |
app_file: "app.py" | |
pinned: false | |
# Organization Profile

## About Us

We develop and optimize Japanese language models using parameter-efficient fine-tuning techniques, with a focus on building robust and efficient LLM solutions for specific NLP tasks.
## Our Focus

- Japanese Language Model Development
- Parameter-Efficient Fine-tuning
- Specialized NLP Tasks
- Model Optimization
## Projects

- LLM-jp Model Fine-tuning
- ELYZA Tasks Implementation
- Instruction-tuning with Japanese Datasets
## Technologies

- Base Model: LLM-jp-3-13b
- Fine-tuning: LoRA/QLoRA
- Training Framework: transformers
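
As an illustration of this stack, here is a minimal sketch of how LoRA/QLoRA adapters might be attached to the base model with `transformers` and `peft`. The model id, target modules, and hyperparameters are illustrative assumptions, not our exact training configuration.

```python
# Minimal QLoRA setup sketch (assumed model id and hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "llm-jp/llm-jp-3-13b"  # assumed Hub id for LLM-jp-3-13b

# Load the base model in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters to the attention projections (illustrative settings).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The resulting `model` can then be trained with a standard `transformers` `Trainer` on a Japanese instruction-tuning dataset.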
## Contact

- Website: [machigaesagashi.com](https://machigaesagashi.com)
- GitHub: [ike3don3](https://github.com/ike3don3)