Update README.md
README.md (CHANGED)
@@ -5639,9 +5639,9 @@ configs:
5639    path: zulu/test-*
5640    ---
5641    
5642  -  using the NLLB 3.3B parameter machine translation model.
5642  + 
5643    
5644  + ****This is a re-upload of the [aya_collection](https://huggingface.co/datasets/CohereLabs/aya_collection), and only differs in the structure of upload. While the original [aya_collection](https://huggingface.co/datasets/CohereLabs/aya_collection) is structured by folders split according to dataset name, this dataset is split by language. We recommend you use this version of the dataset if you are only interested in downloading all of the Aya collection for a single or smaller set of languages.****
5645    
5646    # Dataset Summary
5647    The Aya Collection is a massive multilingual collection consisting of 513 million instances of prompts and completions covering a wide range of tasks.

5654    - **Aya Datasets Family:**
5655    | Name | Explanation |
5656    |------|--------------|
5657  + | [aya_dataset](https://huggingface.co/datasets/CohereLabs/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
5658  + | [aya_collection](https://huggingface.co/datasets/CohereLabs/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages. This collection is structured by dataset-level subsets; an alternative version structured by language subsets is also available. |
5659  + | [aya_collection_language_split](https://huggingface.co/datasets/CohereLabs/aya_collection_language_split) | Aya Collection structured by language-level subsets. |
5660  + | [aya_evaluation_suite](https://huggingface.co/datasets/CohereLabs/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages. |
5661  + | [aya_redteaming](https://huggingface.co/datasets/CohereLabs/aya_redteaming) | A red-teaming dataset consisting of harmful prompts in 8 languages across 9 different categories of harm, with explicit labels for "global" and "local" harm. |
5662    
5663    
5664    # Dataset

5668    
5669    1. Templated data: We collaborated with fluent speakers to create templates that allowed for the automatic expansion of existing datasets into various languages.
5670    2. Translated data: We translated a hand-selected subset of 19 datasets into 101 languages (114 dialects) using the NLLB 3.3B parameter machine translation model.
5671  - 3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co/datasets/
5671  + 3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co/datasets/CohereLabs/aya_dataset) as a subset of the overall collection. This is the only dataset in the collection that is human-annotated in its entirety.
5672    
5673    ## Load with Datasets
5674    To load this dataset with Datasets, you'll need to install Datasets as `pip install datasets --upgrade` and then use the following code:

@@ -5676,7 +5676,7 @@ To load this dataset with Datasets, you'll need to install Datasets as `pip inst
5676    ```python
5677    from datasets import load_dataset
5678    
5679  - dataset = load_dataset("
5679  + dataset = load_dataset("CohereLabs/aya_collection_language_split", "english")
5680    ```
5681    In the above code snippet, "english" refers to a subset of the aya_collection. You can load other subsets by specifying the subset name when loading the dataset.
5682    

@@ -5878,7 +5878,7 @@ PS: Templated data also includes Mozambican Portuguese, which doesn't have its o
5878    
5879    
5880    ## Authorship
5881  - - **Publishing Organization:** [Cohere
5881  + - **Publishing Organization:** [Cohere Labs](https://cohere.com/research)
5882    - **Industry Type:** Not-for-profit - Tech
5883    - **Contact Details:** https://cohere.com/research/aya
5884    
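A usage note on the load snippet in the diff above: because this re-upload is split by language, you can download only the languages you need and combine them locally. The sketch below is a minimal illustration, assuming the `datasets` library is installed; the subset names `"english"` (from the snippet) and `"zulu"` (suggested by the `zulu/test-*` path in the configs) and the presence of a `test` split are assumptions to verify against the dataset page, and `concatenate_datasets` requires the subsets to share the same column schema.

```python
from datasets import concatenate_datasets, get_dataset_config_names, load_dataset

# List the available subsets; in this re-upload each subset is a language.
print(get_dataset_config_names("CohereLabs/aya_collection_language_split"))

# Assumed subset names: "english" comes from the card's snippet, "zulu" from the
# `zulu/test-*` path in the configs. Adjust to the languages you actually need.
languages = ["english", "zulu"]

# Load the test split of each language subset (a `test` split is assumed to exist,
# as the `zulu/test-*` data-file pattern suggests) and combine them.
per_language = [
    load_dataset("CohereLabs/aya_collection_language_split", lang, split="test")
    for lang in languages
]
combined = concatenate_datasets(per_language)  # requires matching columns across subsets

print(combined)     # number of rows and column names of the combined dataset
print(combined[0])  # first example
```

If a subset exposes different splits, dropping `split="test"` returns a `DatasetDict` keyed by split instead of a single `Dataset`.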