---
title: README
emoji: 🐢
colorFrom: purple
colorTo: pink
sdk: static
pinned: false
---
## Panda Villa Tech Limited
<p align="center">
<img src="https://raw.githubusercontent.com/PandaVT/DataTager/main/assert/PandaVilla_logo.jpg" width="650" style="margin-bottom: 0.2;"/>
</p>
<h5 align="center"> Grow Together ⭐ </h5>
<h4 align="center"> [<a href="https://github.com/PandaVT/DataTager">GitHub</a> | <a href="https://datatager.com/">DataTager</a>]</h4>
**Long-term Focus:**

- We are committed to long-term specialization in **synthetic data**, **metaphysics**, and **psychology-focused LLMs**, exploring how these fields intersect with AI.
**Product:** DataTager

**Website:** [DataTager.com](https://DataTager.com/)

- **Description:** DataTager is a tool for evaluating and generating the training data needed for large language models. We believe it is more important for individuals and enterprises to fine-tune large models easily, creating models tailored to their specific business needs, than to simply pick the models with the highest benchmark scores.
**Philosophy:**

- Our paper "AnyTaskTune" argues that **Task Fine-Tuning** grounded in real-world scenarios is crucial, and matters more than relying on universally high-scoring models.
**Resources:**

- We have open-sourced subtask datasets across multiple domains to support the community. These resources are available on our website for anyone interested in task-specific fine-tuning.
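As a rough illustration of what a task fine-tuning dataset can look like (the field names below are hypothetical, not DataTager's actual schema), instruction-style datasets are commonly stored as JSON Lines, one record per line:

```python
import json

# Hypothetical record in an instruction/input/output task dataset.
# Field names are illustrative only.
record = {
    "instruction": "Extract the company names mentioned in the text.",
    "input": "Panda Villa Tech Limited builds tools for synthetic data.",
    "output": "Panda Villa Tech Limited",
}

# Serialize one record to a JSON Lines entry and read it back.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
print(parsed["output"])
```

A file of such lines can then be loaded record by record for fine-tuning on a single, well-defined subtask.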
Learn more about fine-tuning models for your own tasks at [DataTager.com](https://DataTager.com/).