mariagrandury committed
Commit dd95505 • 1 Parent(s): f6c197b
Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ library_name: transformers
 inference: false
 ---
 
-**LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a
+**LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a Spanish instruction-tuned LLM 🔥
 
 Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned on a proprietary dataset of 80k examples inspired by well-known instruction datasets such as Alpaca and Dolly.
 
@@ -58,7 +58,7 @@ Be one of the first to discover the possibilities of LINCE!
 
 ## Model Description
 
-LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a
+LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned on a proprietary dataset of 80k examples.
 
 - **Developed by:** [Clibrain](https://www.clibrain.com/)
 - **Model type:** Language model, instruction model, causal decoder-only
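
Since the card sets `library_name: transformers` and describes a causal decoder-only checkpoint, a minimal loading sketch may help readers of the updated description. This is not part of the commit: the repo id `clibrain/lince-zero`, the generation settings, and the plain-text prompt are assumptions for illustration, and the model card itself should be consulted for the intended prompt template.

```python
# Minimal sketch of loading and prompting the model described above with transformers.
# The repo id "clibrain/lince-zero" is an assumption for illustration; it is not part of this diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "clibrain/lince-zero"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 7B Falcon-class weights fit on a single ~16 GB GPU in bf16
    device_map="auto",
    trust_remote_code=True,       # Falcon-based checkpoints may ship custom modeling code
)

# A Spanish instruction, matching the instruction tuning described in the card
# ("Write a short shopping list for making a paella.")
prompt = "Escribe una breve lista de la compra para preparar una paella."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```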