|
--- |
|
language: |
|
- en |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
license: llama3 |
|
model_type: llama |
|
--- |
|
|
|
# 🐟 Llama-3-CycleQD
|
|
|
🤗 [Models](https://huggingface.co/SakanaAI) | 📚 [Paper](https://arxiv.org/abs/2410.14735) | 🐦 [Twitter](https://twitter.com/SakanaAILabs)
|
|
|
This collection of agentic large language models (LLMs) is based on Llama-3-8B-Instruct.

**Llama-3-8B-Instruct-CycleQD-CS** is created using the CycleQD method, which combines the following expert models:
|
|
|
- [SakanaAI/Llama-3-8B-Instruct-DB-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-DB-Expert) |
|
- [SakanaAI/Llama-3-8B-Instruct-OS-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-OS-Expert) |
|
- [SakanaAI/Llama-3-8B-Instruct-Coding-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-Coding-Expert) |
|
|
|
Please refer to our [report](https://arxiv.org/abs/2410.14735) for more details. |
|
We are grateful to the developers of the following source model and training datasets:
|
|
|
- [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) |
|
- [Agent-FLAN](https://huggingface.co/datasets/internlm/Agent-FLAN) |
|
- [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K) |
|
- [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder-OSS-Instruct-75K) |
|
|
|
<!-- ## Model Index |
|
|
|
| Model | Link | |
|
|---|---| |
|
| DB expert | [SakanaAI/Llama-3-8B-Instruct-DB-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-DB-Expert) | |
|
| OS expert | [SakanaAI/Llama-3-8B-Instruct-OS-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-OS-Expert) | |
|
| Coding Expert | [SakanaAI/Llama-3-8B-Instruct-Coding-Expert](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-Coding-Expert) | |
|
| CycleQD | [SakanaAI/Llama-3-8B-Instruct-CycleQD-CS](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-CycleQD-CS) | |
|
--> |
|
## Model Details |
|
|
|
|
|
|
- **Developed by:** [Sakana AI](https://sakana.ai/) |
|
- **Model type:** Autoregressive Language Model |
|
- **License:** [META LLAMA 3 COMMUNITY LICENSE](https://www.llama.com/llama3/license/) |
|
- **Repository:** [SakanaAI/CycleQD](https://github.com/SakanaAI/CycleQD) |
|
- **Paper:** https://arxiv.org/abs/2410.14735 |
|
|
|
|
|
## Uses |
|
This model is provided for research and development purposes only and should be considered an experimental prototype.
|
It is not intended for commercial use or deployment in mission-critical environments. |
|
Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. |
|
Sakana AI shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. |
|
Users must fully understand the risks associated with the use of this model and use it at their own discretion. |
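
Since this card declares `library_name: transformers`, the model is expected to load with the standard `transformers` API. The sketch below builds a prompt following the standard Llama-3 Instruct chat template (the special tokens are an assumption of this sketch, not specified by this card) and shows, in comments, how generation would typically be invoked; the commented part requires downloading the full checkpoint.

```python
# Minimal sketch: build a Llama-3-style chat prompt for this model.
# The special tokens below follow the standard Llama-3 Instruct chat
# template; they are an assumption of this sketch, not taken from this card.

def build_llama3_prompt(messages):
    """Format a list of {"role": ..., "content": ...} dicts as a Llama-3 Instruct prompt."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama3_prompt(
    [{"role": "user", "content": "List the files in the current directory."}]
)
print(prompt)

# To actually generate (downloads the ~8B checkpoint; requires transformers and torch):
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model_id = "SakanaAI/Llama-3-8B-Instruct-CycleQD-CS"
#   tokenizer = AutoTokenizer.from_pretrained(model_id)
#   model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
#   inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
#   out = model.generate(**inputs, max_new_tokens=128)
#   print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` is the more robust way to obtain the same prompt, since it uses the template shipped with the tokenizer.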
|
|
|
## Acknowledgement |
|
|
|
We would like to thank the developers of the source models and training datasets for their contributions and for making their work available. These models are based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO), and built with Meta Llama 3. |
|
|
|
## Citation |
|
|
|
```bibtex |
|
@misc{sakana2024cycleQD,
|
title={Agent Skill Acquisition for Large Language Models via CycleQD}, |
|
author={So Kuroki and Taishi Nakamura and Takuya Akiba and Yujin Tang}, |
|
year={2024}, |
|
eprint={2410.14735}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={https://arxiv.org/abs/2410.14735}, |
|
} |
|
``` |
|
|