librarian-bot committed
Commit a6b7910 · verified · 1 parent: a94979d

Librarian Bot: Add language metadata for dataset


This pull request enriches your dataset's metadata by adding language information to the YAML block of your dataset card's `README.md`.

How did we find this information?

- The librarian-bot downloaded a sample of rows from your dataset using the [datasets-server](https://huggingface.co/docs/datasets-server/) library
- The librarian-bot used a language detection model to predict the likely language of your dataset. This was done on columns likely to contain text data.
- Per-row predictions are aggregated by language, and languages that are predicted only infrequently are filtered out
- A confidence threshold is then applied to drop languages that are not predicted with high confidence
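The aggregation and filtering steps above can be sketched roughly as follows. This is a minimal illustration with made-up per-row predictions and assumed threshold values; the actual model and thresholds librarian-bot uses may differ:

```python
from collections import Counter

# Hypothetical per-row predictions: (language code, confidence) pairs,
# as a language-detection model might return for sampled text rows.
predictions = [
    ("en", 0.99), ("en", 0.98), ("en", 0.99), ("en", 0.97),
    ("en", 0.99), ("en", 0.98), ("fr", 0.40), ("en", 0.99),
]

MIN_ROW_FRACTION = 0.2     # drop languages predicted for too few rows (assumed value)
MIN_MEAN_CONFIDENCE = 0.8  # drop languages with low mean confidence (assumed value)

counts = Counter(lang for lang, _ in predictions)
detected = {}
for lang, n in counts.items():
    if n / len(predictions) < MIN_ROW_FRACTION:
        continue  # infrequently predicted -> filtered out
    mean_conf = sum(c for l, c in predictions if l == lang) / n
    if mean_conf >= MIN_MEAN_CONFIDENCE:
        detected[lang] = mean_conf

# "fr" is filtered by the frequency check; only "en" survives,
# reported with its mean probability.
print(detected)
```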

The following languages were detected with the following mean probabilities:

- English (en): 98.52%


If this PR is merged, the language metadata will be added to your dataset card. This will allow users to filter datasets by language on the [Hub](https://huggingface.co/datasets).
If the language metadata is incorrect, please feel free to close this PR.
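For reference, the kind of change this PR makes can be reproduced with a few lines of string handling on the card's front matter. This is a hedged sketch under the assumption that the card starts with a `---`-delimited YAML block, not the code librarian-bot actually runs:

```python
def add_language_metadata(readme_text: str, langs: list[str]) -> str:
    """Insert a `language:` list at the top of the YAML front matter.

    Illustrative only: assumes the card begins with a ----delimited
    YAML block, as dataset cards on the Hub do.
    """
    lines = readme_text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("README has no YAML front matter")
    block = ["language:"] + [f"- {code}" for code in langs]
    # Keep the opening ---, splice in the language block, keep the rest.
    return "\n".join(lines[:1] + block + lines[1:])

card = "---\nconfigs:\n- config_name: default\n---\n# My dataset\n"
print(add_language_metadata(card, ["en"]))
```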

To merge this PR, you can use the merge button below the PR:
![Screenshot 2024-02-06 at 15.27.46.png](https://cdn-uploads.huggingface.co/production/uploads/63d3e0e8ff1384ce6c5dd17d/1PRE3CoDpg_wfThC6U1w0.png)

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bots). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to @davanstrien.

Files changed (1)
  1. README.md +137 -135
README.md CHANGED
@@ -1,138 +1,140 @@
 ---
+language:
+- en
 configs:
-- config_name: default
-  data_files:
-  - split: train
-    path:
-    - "*/*.gz"
-- config_name: arxiv_abstracts
-  data_files:
-  - split: train
-    path:
-    - "arxiv_abstracts/*.gz"
-- config_name: arxiv_papers
-  data_files:
-  - split: train
-    path:
-    - "arxiv_papers/*.gz"
-- config_name: biodiversity_heritage_library
-  data_files:
-  - split: train
-    path:
-    - "biodiversity_heritage_library/*.gz"
-- config_name: caselaw_access_project
-  data_files:
-  - split: train
-    path:
-    - "caselaw_access_project/*.gz"
-- config_name: cccc
-  data_files:
-  - split: train
-    path:
-    - "cccc/*.gz"
-- config_name: data_provenance_initiative
-  data_files:
-  - split: train
-    path:
-    - "data_provenance_initiative/*.gz"
-- config_name: foodista
-  data_files:
-  - split: train
-    path:
-    - "foodista/*.gz"
-- config_name: library_of_congress
-  data_files:
-  - split: train
-    path:
-    - "library_of_congress/*.gz"
-- config_name: news
-  data_files:
-  - split: train
-    path:
-    - "news/*.gz"
-- config_name: openalex
-  data_files:
-  - split: train
-    path:
-    - "openalex/*.gz"
-- config_name: peS2o
-  data_files:
-  - split: train
-    path:
-    - "peS2o/*.gz"
-- config_name: pre_1929_books
-  data_files:
-  - split: train
-    path:
-    - "pre_1929_books/*.gz"
-- config_name: project_gutenberg
-  data_files:
-  - split: train
-    path:
-    - "project_gutenberg/*.gz"
-- config_name: public_domain_review
-  data_files:
-  - split: train
-    path:
-    - "public_domain_review/*.gz"
-- config_name: pubmed
-  data_files:
-  - split: train
-    path:
-    - "pubmed/*.gz"
-- config_name: python_enhancement_proposals
-  data_files:
-  - split: train
-    path:
-    - "python_enhancement_proposals/*.gz"
-- config_name: regulations
-  data_files:
-  - split: train
-    path:
-    - "regulations/*.gz"
-- config_name: stackexchange
-  data_files:
-  - split: train
-    path:
-    - "stackexchange/*.gz"
-- config_name: stackv2
-  data_files:
-  - split: train
-    path:
-    - "stackv2/*.gz"
-- config_name: ubuntu_irc
-  data_files:
-  - split: train
-    path:
-    - "ubuntu_irc/*.gz"
-- config_name: uk_hansard
-  data_files:
-  - split: train
-    path:
-    - "uk_hansard/*.gz"
-- config_name: usgpo
-  data_files:
-  - split: train
-    path:
-    - "usgpo/*.gz"
-- config_name: uspto
-  data_files:
-  - split: train
-    path:
-    - "uspto/*.gz"
-- config_name: wikimedia
-  data_files:
-  - split: train
-    path:
-    - "wikimedia/*.gz"
-- config_name: wikiteam
-  data_files:
-  - split: train
-    path:
-    - "wikiteam/*.gz"
-- config_name: youtube
-  data_files:
-  - split: train
-    path:
-    - "youtube/*.gz"
+- config_name: default
+  data_files:
+  - split: train
+    path:
+    - '*/*.gz'
+- config_name: arxiv_abstracts
+  data_files:
+  - split: train
+    path:
+    - arxiv_abstracts/*.gz
+- config_name: arxiv_papers
+  data_files:
+  - split: train
+    path:
+    - arxiv_papers/*.gz
+- config_name: biodiversity_heritage_library
+  data_files:
+  - split: train
+    path:
+    - biodiversity_heritage_library/*.gz
+- config_name: caselaw_access_project
+  data_files:
+  - split: train
+    path:
+    - caselaw_access_project/*.gz
+- config_name: cccc
+  data_files:
+  - split: train
+    path:
+    - cccc/*.gz
+- config_name: data_provenance_initiative
+  data_files:
+  - split: train
+    path:
+    - data_provenance_initiative/*.gz
+- config_name: foodista
+  data_files:
+  - split: train
+    path:
+    - foodista/*.gz
+- config_name: library_of_congress
+  data_files:
+  - split: train
+    path:
+    - library_of_congress/*.gz
+- config_name: news
+  data_files:
+  - split: train
+    path:
+    - news/*.gz
+- config_name: openalex
+  data_files:
+  - split: train
+    path:
+    - openalex/*.gz
+- config_name: peS2o
+  data_files:
+  - split: train
+    path:
+    - peS2o/*.gz
+- config_name: pre_1929_books
+  data_files:
+  - split: train
+    path:
+    - pre_1929_books/*.gz
+- config_name: project_gutenberg
+  data_files:
+  - split: train
+    path:
+    - project_gutenberg/*.gz
+- config_name: public_domain_review
+  data_files:
+  - split: train
+    path:
+    - public_domain_review/*.gz
+- config_name: pubmed
+  data_files:
+  - split: train
+    path:
+    - pubmed/*.gz
+- config_name: python_enhancement_proposals
+  data_files:
+  - split: train
+    path:
+    - python_enhancement_proposals/*.gz
+- config_name: regulations
+  data_files:
+  - split: train
+    path:
+    - regulations/*.gz
+- config_name: stackexchange
+  data_files:
+  - split: train
+    path:
+    - stackexchange/*.gz
+- config_name: stackv2
+  data_files:
+  - split: train
+    path:
+    - stackv2/*.gz
+- config_name: ubuntu_irc
+  data_files:
+  - split: train
+    path:
+    - ubuntu_irc/*.gz
+- config_name: uk_hansard
+  data_files:
+  - split: train
+    path:
+    - uk_hansard/*.gz
+- config_name: usgpo
+  data_files:
+  - split: train
+    path:
+    - usgpo/*.gz
+- config_name: uspto
+  data_files:
+  - split: train
+    path:
+    - uspto/*.gz
+- config_name: wikimedia
+  data_files:
+  - split: train
+    path:
+    - wikimedia/*.gz
+- config_name: wikiteam
+  data_files:
+  - split: train
+    path:
+    - wikiteam/*.gz
+- config_name: youtube
+  data_files:
+  - split: train
+    path:
+    - youtube/*.gz
 ---