KaraKaraWitch committed
Commit 365cd4e · 0 Parent(s)

Initial Commit

.gitattributes ADDED
@@ -0,0 +1,57 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,212 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - crowdsourced
+ license:
+ - cc-by-sa-3.0
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ source_datasets:
+ - original
+ language:
+ - ar
+ - de
+ - en
+ - es
+ - fa
+ - fr
+ - he
+ - hi
+ - id
+ - it
+ - ja
+ - ko
+ - nl
+ - pl
+ - pt
+ - ru
+ - sv
+ - tr
+ - uk
+ - vi
+ - zh
+ pretty_name: SuperWIKI v1.5
+ ---
+
+
+ ## Dataset Details
+
+ ![](SuperWikipedia-15.png "Based on Wikipe-tan (Maid, cyan hair, short hair) and Wikipedia's globe logo.")
+
+ *Waifu to catch your attention.*
+
+ ### Dataset Description
+
+ *SuperWIKI-1.5* is a multilingual dataset of Wikipedia articles curated from Wikipedia HTML dumps, totaling approximately **18.23B** tokens (llama-2-7b-chat tokenizer) / **15.17B** tokens (RWKV tokenizer).
+ It serves as a training resource for large language models and other NLP tasks.
+ This card details the dataset's origin, content, and limitations.
+
+ - **Curated by:** KaraKaraWitch
+ - **Funded by:** Recursal.ai (I work there lol)
+ - **Shared by:** KaraKaraWitch
+ - **Language(s) (NLP):** Multiple (see Supported Languages)
+ - **License:** cc-by-sa-3.0
+
+ SuperWIKI-1.5 was created under time constraints for the release of [EagleX v1](https://huggingface.co/recursal/EagleX_1-7T_HF) and may contain selection biases.
+
+ **We highly recommend using the improved version, *SuperWikipedia-NEXT*, instead.**
+
+ ### Dataset Summary
+
+ A Wikipedia dataset containing cleaned articles for the selected languages listed below.
+ The dataset is built manually from Wikipedia HTML dumps, with one file per language.
+ Each example contains the content of one full Wikipedia article.
+
+ ### Supported Tasks and Leaderboards
+
+ Primarily used for language modeling.
+
+ ### Languages
+
+ The dataset includes Wikipedia articles from the following languages:
+
+ ```
+ ar.wikipedia.org
+ de.wikipedia.org
+ en.wikipedia.org
+ es.wikipedia.org
+ fa.wikipedia.org
+ fr.wikipedia.org
+ he.wikipedia.org
+ hi.wikipedia.org
+ id.wikipedia.org
+ it.wikipedia.org
+ ja.wikipedia.org
+ ko.wikipedia.org
+ nl.wikipedia.org
+ pl.wikipedia.org
+ pt.wikipedia.org
+ ru.wikipedia.org
+ simple.wikipedia.org
+ sv.wikipedia.org
+ tr.wikipedia.org
+ uk.wikipedia.org
+ vi.wikipedia.org
+ zh.wikipedia.org
+ ```
+
+ *`.wikipedia.org`* extensions have been added for your convenience.
+
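+ A minimal sketch for loading a single language with the Hugging Face `datasets` library (the repository id is taken from the citation below; the per-language filenames follow this repository's `data/` layout, so adjust as needed):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed layout: one JSONL file per language under data/ (English is sharded as enwiki-x*.jsonl).
+ jawiki = load_dataset(
+     "recursal/SuperWiki-1.5",
+     data_files="data/jawiki.jsonl",
+     split="train",
+ )
+ print(jawiki[0]["title"], jawiki[0]["lang"])
+ ```
+
+ For English, a glob such as `data_files="data/enwiki-*.jsonl"` should pick up all shards.
+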
+ ### Selection Bias
+
+ Unlike SuperWikipedia-NEXT, the language selection in SuperWIKI-1.5 was manual and may favor certain languages (e.g., CJK and European languages).
+
+ ### Filtering
+
+ The filtering process is documented in code but not well-organized. We recommend reviewing the code directly for details; a simplified sketch follows.
+
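+ For orientation, the core cleanup step in `Scripts/RojaStringRemixer.py` records the classes of maintenance boxes (`.ambox`/`.ombox` tables) and strips them before the HTML is converted to Markdown. A stripped-down sketch of just that step (not the full pipeline):
+
+ ```python
+ from bs4 import BeautifulSoup
+
+ def strip_maintenance_boxes(html: str):
+     # Collect the classes of ambox/ombox maintenance tables, then drop the tables themselves.
+     soup = BeautifulSoup(html, "lxml")
+     box_classes = []
+     for box in soup.select('table[class~="ambox"], table[class~="ombox"]'):
+         box_classes.append(box.get("class"))
+         box.decompose()
+     return soup, box_classes
+ ```
+
+ The recorded classes end up in the `boxes_filters` field described below.
+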
+ ### Data Instances
+
+ Refer to this sample to see all the fields:
+
+ ```json
+ {
+   "id": 4024053,
+   "title": "Tharman Shanmugaratnam",
+   "url": "https://en.wikipedia.org/wiki/Tharman_Shanmugaratnam",
+   "stub": false,
+   "template": [
+     "Efn",
+     "C-SPAN",
+     "S-aft",
+     "S-new",
+     "Reflist",
+     "Cite news",
+     "S-par",
+     "Cite journal",
+     "Short description",
+     "EngvarB"
+   ],
+   "category": [
+     "Finance ministers of Singapore",
+     "Singaporean Hindus",
+     "Alumni of Wolfson College, Cambridge",
+     "Deputy Prime Ministers of Singapore",
+     "Ministers for Manpower of Singapore",
+     "Presidents of Singapore",
+     "Singaporean people of Sri Lankan descent",
+     "Singaporean people of Tamil descent",
+     "Articles with WorldCat Entities identifiers",
+     "Articles with GND identifiers",
+     "Articles with VIAF identifiers"
+   ],
+   "license": [
+     "Creative Commons Attribution Share Alike 3.0 Unported"
+   ],
+   "wikitext": "<...TRUNCATED SAMPLE...> '''Tharman Shanmugaratnam'''{{efn|{{lang-ta|தர்மன் சண்முகரத்தினம்}}}} (born 25 February 1957), also known [[mononymously]] as '''Tharman''', is a Singaporean politician and economist who has been serving as the ninth [[president of Singapore]] since 2023 after winning the [[2023 Singaporean presidential election|2023 presidential election]].\n\nPrior to his presidency, Tharman served as [[Senior Minister of Singapore]] between 2019 and 2023, [[Coordinating Minister for Social Policies (Singapore)|Coordinating Minister for Social Policies]] between 2015 and 2023, and Chairman of the [[Monetary Authority of Singapore]] between 2011 and 2023.<ref name=\"Parliament Profile\"/> \n\nTharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. Tharman chairs the Board of Trustees of the [[Group of Thirty]], a global council of economic and financial leaders from the public and private sectors and academia. He also co-chairs the Global Commission on the Economics of Water with [[Ngozi Okonjo-Iweala|Ngozi Owonjo-Iweala]], [[Mariana Mazzucato]] and [[Johan Rockström]]. Its initial recommendations helped shape the outcomes of the UN Water Conference in March 2023. Tharman has also been co-chair of the [[G20]] High Level Independent Panel on Global Financing for Pandemic Preparedness and Response since 2021. In 2017, Tharman was appointed to chair the G20 Eminent Persons Group on Global Financial Governance.\n\nA former member of the governing [[People's Action Party]] (PAP), he was the... <...TRUNCATED SAMPLE...>",
+   "lang": "en",
+   "abstract": "Tharman Shanmugaratnam, also known mononymously as Tharman, is a Singaporean politician and economist who has been serving as the ninth president of Singapore since 2023. Prior to his presidency, Tharman served as Senior Minister of Singapore between 2019 and 2023, Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023. Tharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. Tharman chairs the Board of Trustees of the Group of Thirty, a global council of economic and financial leaders from the public and private sectors and academia. He also co-chairs the Global Commission on the Economics of Water with Ngozi Owonjo-Iweala, Mariana Mazzucato and Johan Rockström. Its initial recommendations helped shape the outcomes of the UN Water Conference in March 2023. Tharman has also been co-chair of the G20 High Level Independent Panel on Global Financing for Pandemic Preparedness and Response since 2021. In 2017, Tharman was appointed to chair the G20 Eminent Persons Group on Global Financial Governance. <...TRUNCATED SAMPLE...>",
+   "boxes_filters": [],
+   "infobox_html": [
+     "<...TRUNCATED SAMPLE...>"
+   ],
+   "figures_dict": [
+     {
+       "file_url": "./File:Mr_Tharman_at_Bloomberg_New_Economy_Forum.jpg",
+       "caption": ""
+     }
+   ],
+   "text": "9th President of Singapore\n\n\n**Tharman Shanmugaratnam** (born 25 February 1957), also known mononymously as **Tharman**, is a Singaporean politician and economist who has been serving as the ninth president of Singapore since 2023. Prior to his presidency, Tharman served as Senior Minister of Singapore between 2019 and 2023, Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023.\n\nTharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. <...TRUNCATED SAMPLE...>"
+ }
+ ```
+
+ ### Data Fields
+
+ - `id`: The article ID.
+ - `title`: The title of the Wikipedia article.
+ - `url`: The URL of the article.
+ - `stub`: Whether the article is flagged as a stub (detected via the CSS selector `.metadata.plainlinks`).
+ - `template`: A list of templates used in the article.
+ - `category`: A list of categories the article belongs to.
+ - `license`: The license(s) for the article.
+ - `wikitext`: The raw wikitext. Not used for training, but kept as a reference.
+ - `lang`: The language code. Matches the source wiki (for `simplewiki` it is `en`).
+ - `abstract`: The article abstract, as provided by the dump.
+ - `boxes_filters`: Known as `rituals` in the original SuperWIKI; the classes of maintenance boxes extracted via the CSS selectors `.ombox` and `.ambox`.
+ - `infobox_html`: A list of side infoboxes extracted out of the text.
+ - `figures_dict`: A list of figures used in the article, also extracted out of the text.
+ - `text`: The post-processed HTML converted to Markdown, with links removed and formatting (bold, italics) kept. This is the field you most likely want for LLM training.
+
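+ A minimal sketch of reading the JSONL files directly and keeping only the `text` field for LM training (uses `orjson`, as the scripts in this repository do; skipping stubs is optional):
+
+ ```python
+ import orjson
+
+ def iter_training_text(path: str, skip_stubs: bool = True):
+     # One article per line; `text` is the Markdown rendering described above.
+     with open(path, "rb") as fin:
+         for line in fin:
+             record = orjson.loads(line)
+             if skip_stubs and record.get("stub"):
+                 continue
+             if record.get("text"):
+                 yield record["text"]
+
+ # Example: peek at the first Simple English article.
+ print(next(iter_training_text("data/simplewiki.jsonl"))[:200])
+ ```
+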
+ ### Dataset Curators
+
+ KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI. If something is wrong, ping `@karakarawitch` on Discord.)
+
+ I'd be happy if you could spread the word and recommend this dataset over wikitext for your use cases `:)`
+
+ ### Licensing Information
+
+ Most of Wikipedia's text and many of its images are co-licensed under the
+ [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
+ (CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
+ (GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
+
+ Some text has been imported only under CC BY-SA and CC BY-SA-compatible licenses and cannot be reused under the GFDL; such
+ text is identified on the page footer, in the page history, or on the discussion page of the article that uses
+ the text.
+
+ ### Citation Information
+
+ ```
+ @misc{superwiki,
+   title        = {SuperWIKI-1.5},
+   author       = {KaraKaraWitch and recursal.ai},
+   year         = {2023},
+   howpublished = {\url{https://huggingface.co/datasets/recursal/SuperWiki-1.5}},
+ }
+ ```
Scripts/JSONLMerge.py ADDED
@@ -0,0 +1,67 @@
+ # Fixes text encoding issues (via ftfy) in scraped JSONL files and renames
+ # `htmltext_filtered` to `text`, writing one cleaned JSONL per input file.
+ import multiprocessing
+ import pathlib
+ import warnings
+
+ import ftfy
+ import orjson
+ import tqdm
+ import typer
+
+ app = typer.Typer()
+
+
+ @app.command()
+ def json_lfix(in_file: pathlib.Path, output_file: pathlib.Path):
+     with open(in_file, "rb") as fin, open(output_file, "wb") as fout:
+         for line in tqdm.tqdm(fin, desc=f"{in_file.name}"):
+             try:
+                 z = orjson.loads(line.rstrip())
+             except orjson.JSONDecodeError:
+                 print(line)
+                 continue
+             if z.get("htmltext_filtered") and not z.get("text"):
+                 try:
+                     fxed, explained = ftfy.fix_and_explain(z["htmltext_filtered"])
+                     with warnings.catch_warnings(record=True) as w:
+                         z["text"] = fxed.encode("utf-8", errors="replace").decode(
+                             "unicode_escape", errors="ignore"
+                         )
+                         if len(w) > 0 and issubclass(
+                             w[-1].category, DeprecationWarning
+                         ):
+                             # TBH, I have no idea why this would be fucky.
+                             # - Shinon (This happens for Persian Wikipedia for example.)
+                             pass
+                         # print(explained, "Escape Fail?")
+                     del z["htmltext_filtered"]
+                 except Exception as e:
+                     # Eventually I would like to properly clean it. So here is a bandaid solution for this.
+                     print("Closing eyes", e)
+                     continue
+             fout.write(orjson.dumps(z))
+             fout.write(b"\n")
+
+
+ @app.command()
+ def main(in_folder: pathlib.Path, output_folder: pathlib.Path, processes: int = 64):
+     print(in_folder, output_folder, processes)
+     with multiprocessing.Pool(processes) as pool:
+         tasks = []
+         for file in pathlib.Path(in_folder).glob("*.jsonl"):
+             tasks.append(
+                 pool.apply_async(json_lfix, args=(file, output_folder / file.name))
+             )
+         for task in tasks:
+             task.wait()
+         pool.close()
+         pool.join()
+
+
+ @app.command()
+ def fd_processor(in_folder: pathlib.Path, out_folder: pathlib.Path, processes: int = 64):
+     for sub_folder in in_folder.iterdir():
+         (out_folder / f"{sub_folder.stem}").mkdir(exist_ok=True)
+         main(sub_folder, out_folder / f"{sub_folder.stem}", processes=processes)
+
+
+ if __name__ == "__main__":
+     app()
Scripts/RojaStringRemixer.py ADDED
@@ -0,0 +1,280 @@
+ # Converts Wikipedia HTML dump records (ndjson) into Markdown JSONL articles.
+ import multiprocessing
+ import pathlib
+ import re
+ import time
+ import traceback
+
+ import orjson
+ import tqdm
+ import typer
+ from bs4 import BeautifulSoup, Tag
+ from markdownify import MarkdownConverter, chomp
+
+ CONCURRENT = 64
+
+
+ class WikiConverter(MarkdownConverter):
+     def convert_a(self, el, text, convert_as_inline):
+         # Keep the link text, drop the link target.
+         prefix, suffix, text = chomp(text)
+         if not text:
+             return ""
+         return "%s%s%s" % (prefix, text, suffix)
+
+     integer_rgx = re.compile("^[0-9]*$")
+
+     @staticmethod
+     def is_intable(string: str):
+         if not string or not string.isdigit():
+             return False
+         if WikiConverter.integer_rgx.match(string):
+             return True
+
+     def convert_img(self, el, text, convert_as_inline):
+         # Images are reduced to their alt text.
+         convert_as_inline = True
+         alt = el.attrs.get("alt", None) or ""
+         if (
+             convert_as_inline
+             and el.parent.name not in self.options["keep_inline_images_in"]
+         ):
+             return alt
+         return alt
+
+     def convert_li(self, el, text, convert_as_inline):
+         parent = el.parent
+         if parent is not None and parent.name == "ol":
+             start = parent.get("start")
+             if start and WikiConverter.is_intable(start.strip()):
+                 start = int(start.strip())
+             else:
+                 start = 1
+             bullet = "%s." % (start + parent.index(el))
+         else:
+             depth = -1
+             while el:
+                 if el.name == "ul":
+                     depth += 1
+                 el = el.parent
+             bullets = self.options["bullets"]
+             bullet = bullets[depth % len(bullets)]
+         return "%s %s\n" % (bullet, (text or "").strip())
+
+
+ class MultilangWikipediaProcessor:
+     def __init__(self) -> None:
+         self.md = WikiConverter()
+
+     def is_stub(self, soup: BeautifulSoup):
+         for plainlinks in soup.select(".metadata.plainlinks"):
+             if "stub" in plainlinks.get("id", "") or "stub" in plainlinks.get(
+                 "class", []
+             ):
+                 return True
+         return False
+
+     def rital_ambox(self, input_soup: BeautifulSoup):
+         ambox_classes = []
+         selects = input_soup.select('table[class~="ambox"]')
+         for ambox in selects:
+             if ambox is not None:
+                 ambox_classes.append(ambox.get("class"))
+         for ambox in selects:
+             ambox.decompose()
+         return input_soup, ambox_classes
+
+     def rital_ombox(self, input_soup: BeautifulSoup):
+         ombox_classes = []
+         selects = input_soup.select('table[class~="ombox"]')
+         for ombox in selects:
+             if ombox is not None:
+                 ombox_classes.append(ombox["class"])
+         for ombox in selects:
+             ombox.decompose()
+         return input_soup, ombox_classes
+
+     def table_filtration(self, input_soup: BeautifulSoup, title):
+         # Drop tables that are mostly markup with very little text content.
+         for table in input_soup.select("table"):
+             tds = len(table.find_all("td"))
+             texsize = len(table.get_text().replace(" ", ""))
+             if tds >= texsize and texsize < 50:
+                 print(table.get_text().replace(" ", ""))
+                 print("Removing table from", title, ". TD exceeds Content")
+                 table.decompose()
+         return input_soup
+
+     all_selectors = [
+         "style",  # Remove styling
+         "sup.reference",  # Seems to still exist across
+         "table.nomobile",  # Seems to still exist across
+     ]
+
+     def process_infobox(self, infobox: Tag):
+         return str(infobox)
+
+     def process_figures(self, figure: Tag):
+         figure_data = {}
+         fig_a = figure.find("a")
+         fig_cap = figure.find("figcaption")
+         if fig_a:
+             figure_data["file_url"] = fig_a.get("href", None)
+         else:
+             figure_data["file_url"] = None
+         if fig_cap:
+             figure_data["caption"] = fig_cap.get_text()
+         else:
+             figure_data["caption"] = None
+         if figure_data["caption"] == figure_data["file_url"] is None:
+             return None
+         return figure_data
+
+     def convert_soup(self, input_soup: BeautifulSoup):
+         [i.unwrap() for i in input_soup.select('[data-mw^="interface"]')]
+         [i.decompose() for i in input_soup.select(", ".join(self.all_selectors))]
+         title = input_soup.select_one("title").extract()
+         # titletext = title.get_text()
+
+         for i in input_soup.select(".mw-collapsible"):
+             hidden = i.select_one("div.hidden-content")
+             if hidden:
+                 # Expose collapsed content
+                 hidden["class"].remove("hidden-content")
+         for i in input_soup.select("[data-mw]"):
+             i["data-mw"] = ""
+
+         ifbs = [i.extract() for i in input_soup.select("table.infobox")]
+         ifbs += [i.extract() for i in input_soup.select("table.sidebar.vcard.hlist")]
+         ifbs += [i.extract() for i in input_soup.select("table.infobox.vcard")]
+         ifbs = [self.process_infobox(ifb) for ifb in ifbs]
+         figures = [
+             self.process_figures(fig.extract())
+             for fig in input_soup.select('figure[typeof^="mw:File/Thumb"]')
+         ]
+
+         return input_soup, ifbs, figures, title
+
+     def convert(self, wiki_data: bytes):
+         data = orjson.loads(wiki_data.rstrip(b"\n"))
+         try:
+             templates = [
+                 ":".join(template["name"].split(":")[1:])
+                 for template in data.get("templates", [])
+             ]
+             categories = [
+                 ":".join(category["name"].split(":")[1:])
+                 for category in data.get("categories", [])
+             ]
+             if not data["article_body"].get("wikitext"):
+                 return None
+             soup = BeautifulSoup(data["article_body"]["html"], "lxml")
+             is_stub = self.is_stub(soup)
+             soup, infobox, figures, title = self.convert_soup(soup)
+
+             # soup, issues, issue_selectors = self.remove_templates(soup, all_templates)
+             soup, amboxes = self.rital_ambox(soup)
+             soup, omboxes = self.rital_ombox(soup)
+             soup = self.table_filtration(soup, title)
+             text = (
+                 self.md.convert_soup(soup)
+                 .strip()
+                 .replace("\n\n", "\n")
+                 .replace("\n\n\n", "\n\n")
+             )
+             return orjson.dumps(
+                 {
+                     "id": data["identifier"],
+                     "title": data["name"],
+                     "url": data["url"],
+                     "stub": is_stub,
+                     "template": templates,
+                     "category": categories,
+                     "license": [lic["name"] for lic in data["license"]],
+                     "text": text,
+                     "wikitext": data["article_body"].get("wikitext"),
+                     "lang": data["in_language"]["identifier"],
+                     "abstract": data.get("abstract", ""),
+                     "boxes_filters": amboxes + omboxes,
+                     "infobox_html": infobox,
+                     "figures_dict": figures,
+                 }
+             )
+         except Exception as e:
+             print(f"Exception at Soup Conversion: {e} [{data['name']}]")
+             return None
+
+
+ queue = multiprocessing.Queue(maxsize=1024 * 1024)
+
+
+ def worker(worker_idx: int, prefix_path: pathlib.Path):
+     if prefix_path.parent:
+         prefix_path.parent.mkdir(exist_ok=True, parents=True)
+     processor = MultilangWikipediaProcessor()
+     prefix_path = prefix_path.with_name(
+         f"{prefix_path.name}-{str(worker_idx).zfill(2)}.jsonl"
+     )
+     with open(prefix_path, "wb") as f:
+         while True:
+             data = queue.get()
+             if data is None:
+                 break
+             # print(data[:16])
+             parsed = processor.convert(data)
+             if parsed:
+                 f.write(parsed)
+                 f.write(b"\n")
+
+
+ app = typer.Typer()
+
+
+ def err_cb(err: Exception):
+     print(traceback.format_exception(err)[0])
+
+
+ @app.command()
+ def main(folder: pathlib.Path, prefix_path: pathlib.Path, processes: int = 64):
+     # orjson.loads()
+     with multiprocessing.Pool(processes=processes) as pool:
+         workers = [
+             pool.apply_async(
+                 worker, args=(worker_idx, prefix_path), error_callback=err_cb
+             )
+             for worker_idx in range(processes)
+         ]
+         for file in pathlib.Path(folder).glob("*.ndjson"):
+             with open(file, "rb") as fp:
+                 pbar = tqdm.tqdm(desc=f"{file.name}")
+                 for line in fp:
+                     # print(line[:16])
+                     queue.put(line)
+                     pbar.update(1)
+                 print("processed", file.name)
+                 pbar.close()
+         for _ in range(processes):
+             queue.put(None)
+         while not queue.empty():
+             print("Waiting for empty queue")
+             time.sleep(5)
+         print("Queue is empty. Waiting for threads to finish.")
+         patience = 120
+         # processor_thr = 0.05
+         while True:
+             workers_done = 0
+             for running_worker in workers:
+                 if running_worker.ready():
+                     workers_done += 1
+             if workers_done == processes:
+                 break
+             if patience <= 0:
+                 # Screw it.
+                 pool.terminate()
+                 break
+             else:
+                 if workers_done > processes // 1.5:
+                     print("Waiting for dangling processes to complete.")
+                 time.sleep(10)
+                 patience -= 1
+
+         # pool.join()
+
+
+ if __name__ == "__main__":
+     app()
Scripts/build-jsonl-folder.py ADDED
@@ -0,0 +1,67 @@
+ import os
+
+ cur_dir = os.path.dirname(os.path.realpath(__file__))
+
+ # mkdir jsonl
+ jsonl_dir = os.path.join(cur_dir, 'jsonl')
+ if not os.path.exists(jsonl_dir):
+     os.makedirs(jsonl_dir)
+
+ # list files in jsonl-all directory
+ jsonl_all_dir = os.path.join(cur_dir, 'jsonl-all')
+
+ file_sizes = {}
+
+ for file in os.listdir(jsonl_all_dir):
+     file_path = os.path.join(jsonl_all_dir, file)
+     file_size = os.path.getsize(file_path)
+     file_sizes[file] = file_size
+
+ # sort files by size (keep dictionary)
+ sorted_files = sorted(file_sizes.items(), key=lambda x: x[1], reverse=True)
+
+ def widen_variance(file_sizes, reduction_factor):
+     sorted_files = sorted(file_sizes.items(), key=lambda x: x[1], reverse=True)
+     print(sorted_files)
+
+     for i, (file, size) in enumerate(sorted_files):
+         if i == 0:
+             continue
+
+         sorted_files[i] = (file, round(size * (reduction_factor / (i / 10))))
+
+     return sorted_files
+
+ # increase the file size distribution
+ targeted_file_sizes = widen_variance(file_sizes, 0.00005)
+
+ print("Sorted files by size:")
+ for file, size in sorted_files:
+     print(f"{file}: {size}")
+
+ print("Targeted file sizes:")
+ for file, size in targeted_file_sizes:
+     print(f"{file}: {size}")
+
+ # targeted to dict
+ targeted_file_sizes = dict(targeted_file_sizes)
+
+ for file in targeted_file_sizes:
+     size = targeted_file_sizes[file]
+     file_path = os.path.join(jsonl_all_dir, file)
+     file_path_target = os.path.join(jsonl_dir, file)
+
+     os.system(f"head -c {size} {file_path} > {file_path_target}")
+     print(f"Created {file_path_target} with size {size} bytes")
+
+     with open(file_path_target, 'rb+') as file:  # Open the file in read-write binary mode
+         file.seek(0, 2)  # Move to the end of the file
+         file_size = file.tell()  # Get the total file size
+         for i in range(file_size - 1, -1, -1):  # Start from the last byte and move backwards
+             file.seek(i)  # Move the cursor to the ith position from the start
+             if file.read(1) == b'\n':  # If a newline character is found
+                 file.truncate(i + 1)  # Truncate the file from this point
+                 break  # Exit the loop after truncating
+
+
+ print("Done!")
Scripts/dictation_processor.py ADDED
@@ -0,0 +1,188 @@
+ import multiprocessing
+ import pathlib
+ import traceback
+
+ from bs4 import BeautifulSoup
+ import orjson
+ import unidecode
+ from RojaStringRemixer import MultilangWikipediaProcessor
+
+ mp = MultilangWikipediaProcessor()
+
+ class WikitonaryProcessor(MultilangWikipediaProcessor):
+
+     def soup_section(self, input_soup: BeautifulSoup, title: str):
+         # Wiktionary is typically single layered
+         languages = []
+         lang_buffer = []
+         sub_cat_buffer = []
+         # print(input_soup)
+         # for html_element in
+
+         for sect in input_soup.select(".mw-body-content section"):
+             sect.unwrap()
+
+         [meta.decompose() for meta in input_soup.select(".mw-body-content meta")]
+         [meta.decompose() for meta in input_soup.select(".mw-body-content link")]
+         [meta.decompose() for meta in input_soup.select(".disambig-see-also")]
+         # print(input_soup.selec("section"))
+         # print([i.name for i in list(input_soup.select_one(".mw-body-content").children)])
+         for html_element in input_soup.select_one(".mw-body-content").children:
+             if html_element.decomposed or html_element is None:
+                 continue
+             if html_element.name == "h1":
+                 print("[?] Expecting lowest header element to be h1")
+                 continue
+             if html_element.name == "h2":
+                 # lang_buffer.append(sub_cat_buffer)
+                 if sub_cat_buffer:
+                     lang_buffer.append(sub_cat_buffer)
+                     sub_cat_buffer = [self.md.convert_soup(html_element).strip()]
+                 else:
+                     sub_cat_buffer = [self.md.convert_soup(html_element).strip()]
+                 if lang_buffer:
+                     languages.append(lang_buffer)
+                     lang_buffer = []
+
+             elif html_element.name == "h3":
+                 lang_buffer.append(sub_cat_buffer)
+                 sub_cat_buffer = [self.md.convert_soup(html_element).strip()]
+             else:
+                 if html_element.get_text().strip():
+                     sub_cat_buffer.append(self.md.convert_soup(html_element).strip())
+         if sub_cat_buffer:
+             lang_buffer.append(sub_cat_buffer)
+         if lang_buffer:
+             languages.append(lang_buffer)
+         # print(languages)
+         final_txt = {}
+         for root_lang in languages:
+             lang_data = []
+             for sub_pair in root_lang[1:]:
+                 concat = " \n".join(sub_pair[1:])
+                 lang_data.append(f"## {sub_pair[0]}\n\n{concat}")
+             lang_final = f"# {title.title()}: {root_lang[0][0]}\n\n" + "\n\n".join(lang_data)
+             txt = unidecode.unidecode(lang_final.replace("“", "\"").replace("”", "\"").replace("\t", " ")).encode('utf-8', errors='replace').decode('unicode_escape', errors='ignore')
+             final_txt[root_lang[0][0].lower()] = txt
+         # print(final_txt)
+         return final_txt
+
+     all_selectors = [
+         "style",  # Remove styling
+         "sup.reference",  # Seems to still exist across
+         "table.nomobile",  # Seems to still exist across
+         "div.sister-wikipedia",
+         ".floatright",
+     ]
+
+     def convert_soup(self, input_soup: BeautifulSoup):
+         # Remove Generic notices
+         [i.unwrap() for i in input_soup.select('[data-mw^="interface"]')]
+         [i.decompose() for i in input_soup.select(", ".join(self.all_selectors))]
+         [i.decompose() for i in input_soup.select("[class*=\"NavHead\"]")]
+         title = input_soup.select_one("title").extract()
+
+         for i in input_soup.select(".mw-collapsible"):
+             hidden = i.select_one("div.hidden-content")
+             if hidden:
+                 # Expose collapsed content
+                 hidden["class"].remove("hidden-content")
+         # Cleanup meta
+         for i in input_soup.select("[data-mw]"):
+             del i["data-mw"]
+         for i in input_soup.select("[rel]"):
+             del i["rel"]
+         for i in input_soup.select("link[href]"):
+             del i["href"]
+         for i in input_soup.select("link[typeof]"):
+             del i["typeof"]
+         for i in input_soup.select("[id]"):
+             del i["id"]
+         for i in input_soup.select("[about]"):
+             del i["about"]
+         for i in input_soup.select("[lang]"):
+             del i["lang"]
+
+         ifbs = [i.extract() for i in input_soup.select("table.infobox")]
+         ifbs += [i.extract() for i in input_soup.select("table.sidebar.vcard.hlist")]
+         ifbs += [i.extract() for i in input_soup.select("table.infobox.vcard")]
+         ifbs = [self.process_infobox(ifb) for ifb in ifbs]
+         [fig.decompose() for fig in input_soup.select('figure[typeof^="mw:File/Thumb"]')]
+         return input_soup, ifbs, None, title
+
+     def convert(self, wiki_data: bytes):
+         data = orjson.loads(wiki_data.rstrip(b"\n"))
+         try:
+             templates = [
+                 ":".join(template["name"].split(":")[1:])
+                 for template in data.get("templates", [])
+             ]
+             categories = [
+                 ":".join(category["name"].split(":")[1:])
+                 for category in data.get("categories", [])
+             ]
+             if not data["article_body"].get("wikitext"):
+                 return None
+             soup = BeautifulSoup(data["article_body"]["html"], "lxml")
+             is_stub = self.is_stub(soup)
+             soup, infobox, figures, title = self.convert_soup(soup)
+             sections = self.soup_section(soup, data["name"])
+             text = list(sections.values())[0][2:]
+
+             return orjson.dumps(
+                 {
+                     "id": data["identifier"],
+                     "title": data["name"],
+                     "url": data["url"],
+                     "text": text,
+                     "stub": is_stub,
+                     "template": templates,
+                     "category": categories,
+                     "license": [lic["name"] for lic in data["license"]],
+                     "wikitext": data["article_body"].get("wikitext"),
+                     "lang": data["in_language"]["identifier"],
+                     "abstract": data.get("abstract", ""),
+                     "sections": sections,
+                     "infobox_html": infobox,
+                     "figures_dict": figures,
+                 }
+             )
+         except Exception as e:
+             print(f"Exception at Soup Conversion: {e} [{data['name']}]\n{traceback.format_exception(e)[-1]}")
+             return None
+
+ mp = WikitonaryProcessor()
+
+ fs = [
+     pathlib.Path(f"wiktionary/enwiktionary_namespace_0_{i}.ndjson") for i in range(16)
+ ]
+
+
+ # for line in pathlib.Path("")
+ def main():
+     with open("en-wiktionary.jsonl", "wb") as fout:
+         with multiprocessing.Pool(32) as pool:
+             for file in fs:
+                 tasks = []
+                 with open(file, "rb") as f:
+                     for line in f:
+                         tasks.append(pool.apply_async(mp.convert, (line,)))
+                         if len(tasks) % 5000 == 0:
+                             for task in tasks:
+                                 r = task.get()
+                                 if r is None:
+                                     continue
+                                 fout.write(r + b"\n")
+                             tasks = []
+                 for task in tasks:
+                     r = task.get()
+                     if r is None:
+                         continue
+                     fout.write(r + b"\n")
+
+ # mp.convert(pathlib.Path("misbriefs.json").read_bytes())
+ main()
Scripts/sv-robot.py ADDED
@@ -0,0 +1,28 @@
+ import pathlib, typer, tqdm
+ import orjson, re
+
+ app = typer.Typer()
+
+ rgx = re.compile(r"\|Lsjbot\|")
+
+
+ @app.command()
+ def main(in_folder: pathlib.Path, output_file: pathlib.Path):
+     taints = 0
+     with open(output_file, "wb") as fout:
+         for file in pathlib.Path(in_folder).iterdir():
+             if file.is_file() and file.suffix.endswith(".jsonl"):
+                 with open(file, "rb") as f:
+                     for line in tqdm.tqdm(f):
+                         data = orjson.loads(line)
+                         if rgx.findall(data["wikitext"]):
+                             # print("TAINT:", data["title"], "Appears to be a lsjbot article generated.")
+                             taints += 1
+                         else:
+                             fout.write(line.rstrip())
+                             fout.write(b"\n")
+     print(f"Tainted LSJBot: {taints} found.")
+
+
+ if __name__ == "__main__":
+     app()
SuperWikipedia-15.png ADDED

Git LFS Details

  • SHA256: 24bc9ce2cd78e814cecd96e52159e296cdbf19ba1f81392d32beeac791785ed9
  • Pointer size: 131 Bytes
  • Size of remote file: 901 kB
data/arwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13f7a4e28d68fddef5dae900064f37f1991d28c6d0a7b36318928318ce820e02
+ size 2846803
data/dewiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:495625faca2c7a8d486afaed4f3bc4258f1a86d96827e490957c5e6e22646aae
+ size 12034401
data/en-wiktionary.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54d6a911690df2ceafe8faed9d3670c73f5459996b50ca1ff0e071984cd8574a
+ size 104758
data/enwiki-xaa.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4711038d2ffece3458b9dcee06f0032c3b8d792c4a7dccc57104d01e91c78e3f
+ size 32212214114
data/enwiki-xab.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd6d77ce2a528323ef314b7e5c1545044cf6aeb4052dd87bed61adf69976f54b
+ size 32212223948
data/enwiki-xac.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1183da4e4de2752c3f717f2bec31c3bc97be5eb3af98fd7495643c4610374ccf
+ size 32212249626
data/enwiki-xad.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:201017c21a2e984e5796df158c96f11825a2f2ce3e32a9c9f9d8df550b653f51
+ size 32212135111
data/enwiki-xae.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e8f6442bfaeee59ea0bff15ddd846630c87120eb644b658d144678cb4f1291c
+ size 19801453020
data/eswiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89f4d1743a040767dd11d4ab4de5de9a5857200db38bcb1f69ae8a89811fdd04
+ size 3416500
data/fawiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f158d8155c870b7531e174ca95e17c260b64adf284f829156498add5e5b5378
+ size 678892
data/frwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25604a360851826b6e6cb484a8b11fb8e88af7cc97908f078937edd4636abc06
+ size 5668208
data/hewiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e221996edab578c7b583294487110c32ef131e933e8757b3fbab922c1193a718
+ size 224832
data/hiwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc903f72e9a9a6c661aa800a2890bf6f585bbed50fac933d19de54768950a3c8
+ size 102979
data/idwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:beb4d4afb453e6c1e29c7991aaf41d994ddc89603a1a53fe251c3ca3122a7b73
+ size 281479
data/itwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c19ca26605ad42f1ffd2f1ddae4237c27b02f78de5159382052374357bc91b5
+ size 2050331
data/jawiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:595064cfa7527d10b962a51483d428bdd664f843e75064235858c9ca843cf5df
+ size 7859247
data/kowiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85f297f46af85de02a39fc445a1c816ab70ab1328fb58257ea31aef4f81ec49f
+ size 360340
data/nlwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca7187b06ef8c7fdcbf5d493c8a219193023510d126cdecbdb68b23f3cb969ee
+ size 950257
data/plwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0376e147532e3d71c688c5282d146400e8032b4d0cb8d4a64553f56f9edbb4e
+ size 1249925
data/ptwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba819adfa6fd0c79e9638b8c4a4b5f6deae05c4e2b1fe13d75b7dfed016738dc
+ size 769919
data/ruwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c655432bf2cd117c25a3826dff5056f6624e310a04d8dccdb432c2a74c8b889
+ size 39379542
data/simplewiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1f82b8a8317cfc654d44e0256315a71ceca61ceaaedd9c43d560be51e30e9a2
+ size 17689
data/svwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d9a64dec335b1fab46c5740fbb6d6ac8bc6a212b4b21ce2a59cd02119ae382b
+ size 1384773
data/trwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7db9bd33e46610d2e37e53fe7849f978919afb06c2e685f0d67e4caa0c9e735
+ size 247004
data/ukwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ba40caae6b49f30b6307a3cb43a8102306147a76ec37f0c5f18bdb7dd4828af
+ size 4424364
data/viwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1468fb872ba01186a54bfc9562c85a1f3f6ccb146ccb7c6bc4523c20af32e176
+ size 532044
data/zhwiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:291eaba79a7222d2408077b42f427c182431caceaaa40152b483d9bbee347f7a
+ size 1696036