---
license: mit
---

This repo contains a low-rank adapter (LoRA) for LLaMA-7b,
fine-tuned on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

It doesn't contain the foundation model weights themselves, only the adapter, so it can be MIT-licensed!

The adapter was trained on a Catalan translation of the cleaned Alpaca dataset.
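A minimal sketch of how an adapter like this is typically loaded with the `peft` library. The repo id `REPO_ID` and base-model path `BASE_MODEL` below are placeholders, not names confirmed by this card; substitute the actual Hub ids before running. Loading LLaMA-7b also requires access to the base weights, which this repo does not provide.

```python
# Hypothetical usage sketch: apply a LoRA adapter to LLaMA-7b with peft.
# REPO_ID and BASE_MODEL are assumed placeholders, not values from this card.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-7b"   # base weights, obtained separately
REPO_ID = "user/alpaca-lora-ca"   # this adapter repo (placeholder id)

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(BASE_MODEL)

# PeftModel wraps the base model and merges in the low-rank adapter weights.
model = PeftModel.from_pretrained(model, REPO_ID)

prompt = "Explica breument què és un adaptador de baix rang."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because only the small adapter matrices are stored here, the download is a few megabytes rather than the many gigabytes of the full model.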