## Working with the dataset locally
A Hugging Face datasets repository is a Git repository like any other. You can simply clone it like so:
```bash
git clone https://huggingface.co/datasets/danish-foundation-models/danish-dynaword
cd danish-dynaword
```
You can then work with the dataset locally like so:
```py
from datasets import load_dataset
name = "../."  # local path to the repository, instead of "danish-foundation-models/danish-dynaword"
dataset = load_dataset(name, split="train")
# make transformations here
```
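As a hypothetical illustration of such a transformation (the `text` column name here is an assumption for the example), you could add a derived column with `map`:
```py
# hypothetical transformation: add a character-count column
# (assumes the dataset exposes a "text" column)
dataset = dataset.map(lambda example: {"n_characters": len(example["text"])})
```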
> Note: Even when the dataset is loaded locally, Hugging Face `datasets` still uses a cache, so after making changes you may need to clear it to see that everything works correctly. You can do this by deleting the cached files, which you can locate using `dataset.cache_files`.
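For instance, a minimal sketch of clearing the cache for the loaded split, using only `dataset.cache_files` as mentioned in the note above:
```py
import os

# each entry is a dict whose "filename" points to a cached Arrow file
for cache_file in dataset.cache_files:
    print(cache_file["filename"])
    os.remove(cache_file["filename"])  # delete so the next load re-processes the local files
```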
## Installing dependencies
This repo comes with a few dependencies you need to install to run it. It uses a [makefile](https://opensource.com/article/18/8/what-how-makefile) to run commands and [uv](https://docs.astral.sh/uv/) for package management. Once you have uv installed, you can install the dependencies using:
```bash
make install
```
## Running dataset tests
This dataset is special in that it comes with a test suite, e.g. testing that the IDs are unique and that the format is consistent. You can run the suite using:
```bash
make test
```
## Submitting a PR
Creating a PR on Hugging Face is a bit different from creating one on GitHub.
1) Go to the community tab on Hugging Face, press *new pull request*, and choose *on your machine*. Specify the title of your PR. Then you can simply:
```bash
git fetch origin refs/pr/{PR NUMBER}:pr/{PR NUMBER}
git checkout pr/{PR NUMBER}
# make your changes here
# push to hub
git push origin pr/{PR NUMBER}:refs/pr/{PR NUMBER}
```
Before you submit the PR, be sure that you have completed the following checklist; a combined sketch of the relevant commands follows the list.
### Checklist
- [ ] I have run the test suite using `make test` and all tests pass
- [ ] I have added/changed a dataset and have:
  - [ ] updated the descriptive statistics using `make update-descriptive-statistics`
  - [ ] bumped the version using `make bump-version`
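Put together, a sketch of a typical pre-PR run using the make targets listed above might look like this:
```bash
make test                            # run the test suite
make update-descriptive-statistics   # refresh the descriptive statistics
make bump-version                    # bump the dataset version
```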
### Examples of Previous PRs
For examples of previous PRs, see the following:
- [Restructuring columns in the dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/11)
- [Adding a new dataset](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/15)
- Updating the [dataset description and metadata](https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/20)
## Frequently asked questions
### Do you accept synthetic datasets?
Yes, we generally accept synthetic datasets, since they are likely to be a promising research direction for low- to mid-resource languages.
However, you should be aware that a synthetic dataset will probably require a more detailed examination and description.
For instance, we will examine the quality of the synthetic subset and whether the model used for its creation permits resharing of the synthetic data under permissible licenses.
### Do you accept non-Danish data?
Generally, this repository is intended for Danish text, though defined quite broadly. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.