ike3don3
company
AI & ML interests
"Our focus is on developing and fine-tuning large language models for Japanese natural language processing tasks. We are particularly interested in optimizing model performance for specific use cases while maintaining efficiency through techniques like QLoRA and parameter-efficient fine-tuning. Our current work involves adapting pre-trained models for specialized tasks, with an emphasis on improving inference quality and maintaining model reliability."
Organization Profile
About Us
We develop and optimize Japanese language models using parameter-efficient fine-tuning techniques, focusing on robust and efficient LLM solutions for specific NLP tasks.
Our Focus
- Japanese Language Model Development
- Parameter-Efficient Fine-tuning
- Specialized NLP Tasks
- Model Optimization
Projects
- LLM-jp Model Fine-tuning
- ELYZA Tasks Implementation
- Instruction-tuning with Japanese Datasets
Technologies
- Base Model: LLM-jp-3-13b
- Fine-tuning: LoRA/QLoRA
- Training Framework: Hugging Face transformers (see the setup sketch below)
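
As a rough sketch of how these pieces fit together, the example below loads the base model in 4-bit precision and attaches LoRA adapters via the peft and bitsandbytes libraries. The Hub model id, target module names, and hyperparameters shown are illustrative assumptions, not our exact training configuration.

```python
# Minimal QLoRA setup sketch (assumed model id and hyperparameters, for illustration only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "llm-jp/llm-jp-3-13b"  # assumed Hub id; adjust to the actual checkpoint

# 4-bit quantization (the "Q" in QLoRA): weights are stored in NF4,
# while compute runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Freeze the quantized base weights and attach small trainable LoRA adapters
# to the attention projections; only the adapters are updated during training.
# Target module names depend on the model architecture and are assumptions here.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

With this setup only the adapter weights are trained, which in many cases keeps memory usage low enough to fine-tune a 13B-parameter model on a single GPU.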
Contact
- Website: machigaesagashi.com
- GitHub: ike3don3