iperbole, nielsr (HF Staff) committed on
Commit 7f52a09 · verified · 1 Parent(s): 82d8aae

Add pipeline tag, library name and Github link (#1)


- Add pipeline tag, library name and Github link (30e8a45a3c07fed245f5689cec62ecc389cccd12)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +9 -5
README.md CHANGED
@@ -1,8 +1,10 @@
  ---
- license: apache-2.0
  language:
  - it
  - en
+ license: apache-2.0
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

  # Mistral-7B-v0.1-Italian-FVT
@@ -14,9 +16,9 @@ language:

  The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a set of 7B adapted generative models (text in/text out), derived from **Mistral-7B-Base-v0.1**.

- *Mistral-v0.1-Italian-FVT* is a continual trained mistral model, after tokenizer substitution.
+ *Mistral-v0.1-Italian-FVT* is a Mistral model continually trained after tokenizer substitution.

- The tokenizer of this models after adaptation is the same of [Minverva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).
+ The tokenizer of this model after adaptation is the same as that of [Minerva-3B](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0).

  **Model developer:** SapienzaNLP, ISTI-CNR, ILC-CNR

@@ -24,7 +26,7 @@ The tokenizer of this models after adaptation is the same of [Minverva-3B](https

  ## Data used for the adaptation

- The **Mistral-7B-v0.1-Adapted** model are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
+ The **Mistral-7B-v0.1-Adapted** model is trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
  The data are extracted to be skewed toward Italian, with roughly a three-to-one Italian-to-English ratio: the first 9B tokens are taken from the Italian part of CulturaX and the first 3B tokens from the English part.


@@ -32,7 +34,7 @@ The data are extracted to be skewed toward Italian language with a ration of one

  You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

- Make sure to update your transformers installation via pip install --upgrade transformers.
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.

  ```python
  import transformers
@@ -47,6 +49,8 @@ pipeline = transformers.pipeline(
  pipeline("Cosa si può fare in una bella giornata di sole?")
  ```

+ Code: https://github.com/Andrew-Wyn/Italian-LLM-Adaptation
+
  ## Citation

  If you use any part of this work, please consider citing the paper as follows:
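
The README's inference section above mentions the Auto classes with generate() as an alternative to the pipeline abstraction but only shows the pipeline path. A minimal sketch of the Auto-classes route is below; the repository id `sapienzanlp/Mistral-7B-v0.1-Italian-FVT`, the dtype, and the generation settings are assumptions for illustration, not taken from this commit.

```python
# Hedged sketch of the Auto-classes + generate() route mentioned in the README.
# NOTE: the repo id and generation parameters below are assumptions, adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sapienzanlp/Mistral-7B-v0.1-Italian-FVT"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",
)

prompt = "Cosa si può fare in una bella giornata di sole?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```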