librarian-bot committed · commit 43b4d36 · verified · 1 parent: ec8d7e2

Librarian Bot: Add language metadata for dataset


This pull request aims to enrich the metadata of your dataset by adding language metadata to the `YAML` block of your dataset card `README.md`.

How did we find this information?

- The librarian-bot downloaded a sample of rows from your dataset using the [datasets-server](https://huggingface.co/docs/datasets-server/) API
- The librarian-bot used a language detection model to predict the likely language of your dataset. This was done on columns likely to contain text data.
- Predictions for rows are aggregated by language and a filter is applied to remove languages which are very infrequently predicted
- A confidence threshold is applied to remove languages which are not confidently predicted (a rough sketch of this pipeline is shown below)
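
For the curious, here is a rough, unofficial sketch of that pipeline in Python. It is illustrative only, not librarian-bot's actual code: it assumes the public `/rows` endpoint of the datasets-server and uses the `langdetect` package as a stand-in for whatever detection model the bot really runs; the column names, sample size, and thresholds are placeholders.

```python
# Illustrative sketch only: not librarian-bot's actual implementation.
from collections import defaultdict

import requests
from langdetect import detect_langs  # stand-in for the real detection model

ROWS_API = "https://datasets-server.huggingface.co/rows"

def detect_dataset_languages(dataset, config, split, text_columns,
                             n_rows=100, min_share=0.05, min_conf=0.8):
    # 1. Download a sample of rows via the datasets-server /rows endpoint.
    resp = requests.get(ROWS_API, params={
        "dataset": dataset, "config": config, "split": split,
        "offset": 0, "length": n_rows,
    })
    resp.raise_for_status()
    rows = [item["row"] for item in resp.json()["rows"]]

    # 2. Predict the likely language of each text cell.
    probs = defaultdict(list)
    for row in rows:
        for col in text_columns:
            text = row.get(col)
            if not isinstance(text, str) or not text.strip():
                continue
            try:
                best = detect_langs(text)[0]  # most probable language first
            except Exception:
                continue  # e.g. cells with no detectable text features
            probs[best.lang].append(best.prob)

    # 3. Aggregate per language, drop rarely predicted languages, and
    # 4. keep only languages whose mean probability clears the threshold.
    total = sum(len(v) for v in probs.values()) or 1
    return {
        lang: sum(v) / len(v)
        for lang, v in probs.items()
        if len(v) / total >= min_share and sum(v) / len(v) >= min_conf
    }
```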

The following languages were detected, with these mean probabilities:

- English (en): 99.98%


If this PR is merged, the language metadata will be added to your dataset card. This will allow users to filter datasets by language on the [Hub](https://huggingface.co/datasets).
If the language metadata is incorrect, please feel free to close this PR.
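
As an illustration of that filtering, here is a hedged sketch using `huggingface_hub`, assuming a recent release in which `list_datasets` accepts `language` and `search` filters (the search term is only an example):

```python
from huggingface_hub import HfApi

api = HfApi()
# List a few English-tagged datasets matching a search term on the Hub.
for ds in api.list_datasets(language="en", search="phantom-wiki", limit=5):
    print(ds.id)
```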

To merge this PR, you can use the merge button below the PR:
![Screenshot 2024-02-06 at 15.27.46.png](https://cdn-uploads.huggingface.co/production/uploads/63d3e0e8ff1384ce6c5dd17d/1PRE3CoDpg_wfThC6U1w0.png)

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bots). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien.

Files changed (1)
  1. README.md +54 -52
README.md CHANGED
```diff
@@ -1,59 +1,61 @@
 ---
+language:
+- en
 license: bsd-3-clause
 dataset_name: phantom-wiki
 configs:
-- config_name: text-corpus
-  data_files:
-  - split: depth_6_size_26_seed_1
-    path: depth_6_size_26_seed_1/articles.json
-  - split: depth_6_size_50_seed_1
-    path: depth_6_size_50_seed_1/articles.json
-  - split: depth_6_size_100_seed_1
-    path: depth_6_size_100_seed_1/articles.json
-  - split: depth_6_size_500_seed1
-    path: depth_6_size_500_seed_1/articles.json
-  - split: depth_8_size_26_seed_1
-    path: depth_8_size_26_seed_1/articles.json
-  - split: depth_8_size_50_seed_1
-    path: depth_8_size_50_seed_1/articles.json
-  - split: depth_8_size_100_seed_1
-    path: depth_8_size_100_seed_1/articles.json
-  - split: depth_8_size_500_seed_1
-    path: depth_8_size_500_seed_1/articles.json
-  - split: depth_10_size_26_seed_1
-    path: depth_10_size_26_seed_1/articles.json
-  - split: depth_10_size_50_seed_1
-    path: depth_10_size_50_seed_1/articles.json
-  - split: depth_10_size_100_seed_1
-    path: depth_10_size_100_seed_1/articles.json
-  - split: depth_10_size_500_seed_1
-    path: depth_10_size_500_seed_1/articles.json
-- config_name: question-answer
-  data_files:
-  - split: depth_6_size_26_seed_1
-    path: depth_6_size_26_seed_1/questions.json
-  - split: depth_6_size_50_seed_1
-    path: depth_6_size_50_seed_1/questions.json
-  - split: depth_6_size_100_seed_1
-    path: depth_6_size_100_seed_1/questions.json
-  - split: depth_6_size_500_seed1
-    path: depth_6_size_500_seed_1/questions.json
-  - split: depth_8_size_26_seed_1
-    path: depth_8_size_26_seed_1/questions.json
-  - split: depth_8_size_50_seed_1
-    path: depth_8_size_50_seed_1/questions.json
-  - split: depth_8_size_100_seed_1
-    path: depth_8_size_100_seed_1/questions.json
-  - split: depth_8_size_500_seed_1
-    path: depth_8_size_500_seed_1/questions.json
-  - split: depth_10_size_26_seed_1
-    path: depth_10_size_26_seed_1/questions.json
-  - split: depth_10_size_50_seed_1
-    path: depth_10_size_50_seed_1/questions.json
-  - split: depth_10_size_100_seed_1
-    path: depth_10_size_100_seed_1/questions.json
-  - split: depth_10_size_500_seed_1
-    path: depth_10_size_500_seed_1/questions.json
+- config_name: text-corpus
+  data_files:
+  - split: depth_6_size_26_seed_1
+    path: depth_6_size_26_seed_1/articles.json
+  - split: depth_6_size_50_seed_1
+    path: depth_6_size_50_seed_1/articles.json
+  - split: depth_6_size_100_seed_1
+    path: depth_6_size_100_seed_1/articles.json
+  - split: depth_6_size_500_seed1
+    path: depth_6_size_500_seed_1/articles.json
+  - split: depth_8_size_26_seed_1
+    path: depth_8_size_26_seed_1/articles.json
+  - split: depth_8_size_50_seed_1
+    path: depth_8_size_50_seed_1/articles.json
+  - split: depth_8_size_100_seed_1
+    path: depth_8_size_100_seed_1/articles.json
+  - split: depth_8_size_500_seed_1
+    path: depth_8_size_500_seed_1/articles.json
+  - split: depth_10_size_26_seed_1
+    path: depth_10_size_26_seed_1/articles.json
+  - split: depth_10_size_50_seed_1
+    path: depth_10_size_50_seed_1/articles.json
+  - split: depth_10_size_100_seed_1
+    path: depth_10_size_100_seed_1/articles.json
+  - split: depth_10_size_500_seed_1
+    path: depth_10_size_500_seed_1/articles.json
+- config_name: question-answer
+  data_files:
+  - split: depth_6_size_26_seed_1
+    path: depth_6_size_26_seed_1/questions.json
+  - split: depth_6_size_50_seed_1
+    path: depth_6_size_50_seed_1/questions.json
+  - split: depth_6_size_100_seed_1
+    path: depth_6_size_100_seed_1/questions.json
+  - split: depth_6_size_500_seed1
+    path: depth_6_size_500_seed_1/questions.json
+  - split: depth_8_size_26_seed_1
+    path: depth_8_size_26_seed_1/questions.json
+  - split: depth_8_size_50_seed_1
+    path: depth_8_size_50_seed_1/questions.json
+  - split: depth_8_size_100_seed_1
+    path: depth_8_size_100_seed_1/questions.json
+  - split: depth_8_size_500_seed_1
+    path: depth_8_size_500_seed_1/questions.json
+  - split: depth_10_size_26_seed_1
+    path: depth_10_size_26_seed_1/questions.json
+  - split: depth_10_size_50_seed_1
+    path: depth_10_size_50_seed_1/questions.json
+  - split: depth_10_size_100_seed_1
+    path: depth_10_size_100_seed_1/questions.json
+  - split: depth_10_size_500_seed_1
+    path: depth_10_size_500_seed_1/questions.json
 ---
 
 # Dataset Card for Dataset Name
```
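
Once merged, the configs and splits declared in this YAML can be loaded as usual with the `datasets` library. A minimal usage sketch, assuming a placeholder repository id (replace it with the dataset's actual Hub id):

```python
from datasets import load_dataset

repo_id = "<namespace>/phantom-wiki"  # hypothetical placeholder for the real Hub id

# Article corpus for one depth/size/seed combination.
corpus = load_dataset(repo_id, "text-corpus", split="depth_6_size_26_seed_1")

# Matching question-answer pairs.
qa = load_dataset(repo_id, "question-answer", split="depth_6_size_26_seed_1")

print(corpus[0])
print(qa[0])
```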