mariagrandury committed
Commit 2139af6
Parent(s): 9f81101
Update README.md

README.md CHANGED
@@ -16,7 +16,9 @@ inference: false
 
 # Model Card for LINCE-ZERO
 
-**LINCE…
+**LINCE-ZERO** (Llm for Instructions from Natural Corpus en Español) is a SOTA Spanish instruction-tuned LLM 🔥
+
+Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using a combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish and augmented to 80k examples.
 
 The model is released under the Apache 2.0 license.
 
@@ -24,6 +26,7 @@ The model is released under the Apache 2.0 license.
 <img src="https://huggingface.co/clibrain/lince-zero/resolve/main/LINCE-CLIBRAIN-HD.jpg" alt="lince logo">
 </div>
 
+<br />
 
 # Table of Contents
 
@@ -46,7 +49,6 @@ The model is released under the Apache 2.0 license.
 - [Factors](#factors)
 - [Metrics](#metrics)
 - [Results](#results)
-- [Model Examination](#model-examination)
 - [Environmental Impact](#environmental-impact)
 - [Technical Specifications](#technical-specifications)
 - [Model Architecture and Objective](#model-architecture-and-objective)
@@ -57,24 +59,24 @@ The model is released under the Apache 2.0 license.
 - [Contact](#contact)
 - [How to Get Started with the Model](#how-to-get-started-with-the-model)
 
-# Model Details
+# 🐯 Model Details
 
 ## Model Description
 
-LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets, both translated into Spanish.
+LINCE-ZERO (Llm for Instructions from Natural Corpus en Español) is a state-of-the-art Spanish instruction-tuned large language model. Developed by [Clibrain](https://www.clibrain.com/), it is a causal decoder-only model with 7B parameters. LINCE-ZERO is based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and has been fine-tuned using an augmented combination of the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets (80k examples in total), both translated into Spanish.
 
 - **Developed by:** [Clibrain](https://www.clibrain.com/)
 - **Model type:** Language model, instruction model, causal decoder-only
 - **Language(s) (NLP):** es
 - **License:** apache-2.0
-- **Parent Model:**
+- **Parent Model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b)
 
 ## Model Sources
 
-- **Paper**: Coming soon!
-- **Demo**: Coming soon!
+- **Paper**: Coming soon! ✨
+- **Demo**: Coming soon! ✨
 
-# Uses
+# 💡 Uses
 
 ## Direct Use
 
@@ -90,7 +92,7 @@ LINCE-ZERO is an instruct model, it’s primarily intended for direct use and ma…
 
 LINCE-ZERO should not be used for production purposes without conducting a thorough assessment of risks and mitigation strategies.
 
-# Bias, Risks, and Limitations
+# ⚠️ Bias, Risks, and Limitations
 
 LINCE-ZERO has limitations associated with both the underlying language model and the instruction-tuning data. It is crucial to acknowledge that predictions generated by the model may inadvertently exhibit common deficiencies of language models, including hallucination and toxicity, and may perpetuate harmful stereotypes across protected classes, identity characteristics, and sensitive, social, and occupational groups.
 
@@ -105,7 +107,7 @@ Please, when utilizing LINCE-ZERO, exercise caution and critically assess the ou…
 
 If considering LINCE-ZERO for production use, it is crucial to thoroughly evaluate the associated risks and adopt suitable precautions. Conduct a comprehensive assessment to address any potential biases and ensure compliance with legal and ethical standards.
 
-# Training Details
+# 📚 Training Details
 
 ## Training Data
 
@@ -115,6 +117,8 @@ Alpaca is a 24.2 MB dataset of 52,002 instructions and demonstrations in English…
 
 Dolly is a 13.1 MB dataset of 15,011 instruction-following records in American English. It was generated by thousands of Databricks employees, who were asked to provide reference texts copied from Wikipedia for specific categories. To learn more, consult [Dolly’s Data Card](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
 
+After combining both translations, the dataset was augmented to a total of 80k examples.
+
 ## Training Procedure
 
 For detailed information about the model architecture and compute infrastructure, please refer to the Technical Specifications section.
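The hunk above adds the data-combination detail: translated Alpaca plus translated Dolly, augmented to 80k examples. Neither the Spanish translations nor the augmentation code are published in the card, so the sketch below is a rough illustration only. It loads the two English source datasets (both ids appear in the card) and concatenates them under a shared schema with the `datasets` library; the translation and augmentation steps are deliberately left out.

```py
# Rough illustration of the combination step described in Training Data.
# The Spanish translations and the augmentation to 80k examples are not
# public, so this only loads and concatenates the English source datasets.
from datasets import concatenate_datasets, load_dataset

alpaca = load_dataset("tatsu-lab/alpaca", split="train")                # 52,002 rows
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")  # 15,011 rows

# Map Dolly's column names onto Alpaca's (instruction, input, output) schema.
dolly = dolly.rename_columns({"context": "input", "response": "output"})

# Keep only the shared columns so both feature sets match exactly.
columns = ["instruction", "input", "output"]
combined = concatenate_datasets(
    [alpaca.select_columns(columns), dolly.select_columns(columns)]
)
print(combined)  # 67,013 rows before any augmentation
```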
@@ -133,7 +137,7 @@ More information needed
 
 More information needed (throughput, start/end time, checkpoint size if relevant, etc.)
 
-# Evaluation
+# ✅ Evaluation
 
 ## Testing Data, Factors & Metrics
 
@@ -149,9 +153,9 @@ Since LINCE-ZERO is an instruction model, the metrics used to evaluate it are:
 
 ### Results
 
-Paper coming soon
+Paper coming soon! Meanwhile, check the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 
-# Technical Specifications
+# ⚙️ Technical Specifications
 
 ## Model Architecture and Objective
 
@@ -173,7 +177,7 @@ LINCE-ZERO was trained on AWS SageMaker, on ... GPUs in ... instances.
 
 More information needed
 
-# Environmental Impact
+# 🌳 Environmental Impact
 
 Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
@@ -183,25 +187,25 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]…
 - **Compute Region:** More information needed
 - **Carbon Emitted:** More information needed
 
-# Citation
+# 📝 Citation
 
 There is a paper coming soon! Meanwhile, when using LINCE-ZERO please use the following information to cite:
 
 ```bibtex
 @article{lince-zero,
-  title={{LINCE}: Llm for Instructions from Natural Corpus en Español},
+  title={{LINCE-ZERO}: Llm for Instructions from Natural Corpus en Español},
   author={},
   year={2023}
 }
 ```
 
-# Contact
+# 📧 Contact
 
 [[email protected]](mailto:[email protected])
 
-# How to Get Started with LINCE-ZERO
+# 🔥 How to Get Started with LINCE-ZERO
 
-Use the code below to get started with LINCE-ZERO
+Use the code below to get started with LINCE-ZERO!
 
 ```py
 import torch
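The diff view cuts off here, immediately after `import torch`, so the rest of the quick-start snippet is not visible. As a minimal sketch of what such a snippet could look like (assuming the standard `transformers` causal-LM API, the `clibrain/lince-zero` repo id taken from the logo URL above, and an Alpaca-style Spanish prompt, since the tuning data is translated Alpaca/Dolly):

```py
# A sketch, not the README's actual snippet: the diff is truncated after
# `import torch`. Assumed: the standard transformers causal-LM API, the
# `clibrain/lince-zero` repo id (from the logo URL in the card), and a
# hypothetical Alpaca-style prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "clibrain/lince-zero"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit a 7B model
    device_map="auto",           # requires the `accelerate` package
    trust_remote_code=True,      # Falcon checkpoints shipped custom modeling code
)

# Hypothetical Alpaca-style instruction prompt in Spanish.
prompt = (
    "A continuación hay una instrucción que describe una tarea. "
    "Escribe una respuesta que la complete adecuadamente.\n\n"
    "### Instrucción:\nDame una lista de lugares que visitar en España.\n\n"
    "### Respuesta:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.3
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling with a low temperature keeps instruction-following outputs close to greedy decoding while avoiding verbatim repetition; adjust `max_new_tokens` to taste.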