This repository contains a low-rank (LoRA) adapter for LLaMA-7B, fine-tuned on a Catalan translation of the cleaned Stanford Alpaca dataset.
It does not contain the foundation model itself, so the adapter weights can be MIT-licensed.
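Since the adapter ships without the base weights, using it means loading LLaMA-7B separately and applying the adapter on top. A minimal sketch with the `peft` library is below; the base-model and adapter repository identifiers are placeholders (this card does not state them), so substitute the ones you actually use.

```python
def load_catalan_alpaca_adapter(
    base_model_name: str = "path/to/llama-7b-hf",   # hypothetical: your local or hub copy of LLaMA-7B
    adapter_repo: str = "path/to/this-adapter",     # hypothetical: this repository's id
):
    """Load base LLaMA-7B weights and apply this LoRA adapter on top."""
    # Imports are kept local so the sketch can be read without the
    # (large) dependencies installed.
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    tokenizer = LlamaTokenizer.from_pretrained(base_model_name)
    model = LlamaForCausalLM.from_pretrained(base_model_name)
    # Only the small low-rank matrices come from this repo; the frozen
    # foundation weights stay exactly as distributed by their own license.
    model = PeftModel.from_pretrained(model, adapter_repo)
    return model, tokenizer
```

This separation is what makes the licensing split possible: the MIT license covers only the adapter deltas, not the LLaMA weights they modify.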