Language samples
It looks like this is the only pre-training dataset that provides en and zh language codes.
How can I get samples from only those languages? (Is there a mapping that says something like "chunks 120-150, 151pt1 Chinese"?)
cc @bartowski. WIP log: https://gist.github.com/robbiemu/2796f81798e0fdcd891f9e1fd13b8097
-- edit: we can close this; I didn't realize how good language categorization libraries are. I just downloaded and sampled high-confidence ones. This leads to some overshoot, but since we're not likely to use a lot of data (nothing like the volumes you'd use in pre-training), I think it's fine. (Solution now in that gist.)
Additionally, for language competencies specifically (and not per register, like the community dataset and, I imagine, the post-training data in general), am I safe to just sample from a couple of sources here, like this one and sea-commoncrawl-high-quality? I mean that in the sense that my goal is to find representative sample data sufficient to measure loss across different languages when we quantize; roughly the sketch below.
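Concretely, something like this: pull a fixed-size sample per language and write one plain-text eval file each, then compare the full-precision and quantized models' loss on those files. The repo ids, the `text` field, and the sample size here are placeholders/assumptions, not the actual dataset schema:

```python
# pip install datasets
from itertools import islice

from datasets import load_dataset

# Hypothetical repo ids -- substitute the actual source datasets.
SOURCES = {
    "en": "some-org/pretraining-corpus",
    "zh": "some-org/sea-commoncrawl-high-quality",
}
SAMPLES_PER_LANG = 2000  # assumption: enough text for a stable loss estimate

for lang, repo in SOURCES.items():
    # Streaming avoids downloading a full corpus just to take a small sample.
    ds = load_dataset(repo, split="train", streaming=True)
    with open(f"eval.{lang}.txt", "w", encoding="utf-8") as out:
        for row in islice(ds, SAMPLES_PER_LANG):
            out.write(row["text"].replace("\n", " ") + "\n")
```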
Looking at the first chunk, it appears to contain shuffled, unmarked samples. But the fact that they are in JSON still gives me some hope: was the upstream data categorized by language, or is there another data source I should look at?
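For anyone checking the same thing, this is roughly how I inspected the chunk (the file name is hypothetical, and I'm assuming JSON-lines format):

```python
import json
from collections import Counter

# Tally the JSON keys across the first records of a chunk to see whether
# any field looks like a language tag ("lang", "language", "lang_code", ...).
key_counts = Counter()
with open("chunk_0000.jsonl", encoding="utf-8") as f:  # hypothetical name
    for i, line in enumerate(f):
        if i >= 1000:
            break
        key_counts.update(json.loads(line).keys())

print(key_counts.most_common())
```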
Solution: used lingua-language-detector, since it supports all three languages I was missing.
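For reference, the core of the approach as a minimal sketch (the confidence threshold and the `text` key are my choices; the full script is in the gist above):

```python
# pip install lingua-language-detector
import json

from lingua import Language, LanguageDetectorBuilder

# Restricting the detector to the languages of interest improves both
# speed and accuracy.
LANGUAGES = [Language.ENGLISH, Language.CHINESE]
detector = LanguageDetectorBuilder.from_languages(*LANGUAGES).build()

CONFIDENCE_THRESHOLD = 0.9  # assumption: tune to trade recall for precision


def high_confidence_language(text):
    """Return the detected Language if its top confidence clears the
    threshold, else None (the sample is discarded)."""
    values = detector.compute_language_confidence_values(text)
    top = values[0]  # sorted by confidence, highest first
    return top.language if top.value >= CONFIDENCE_THRESHOLD else None


def sample_by_language(jsonl_path, text_key="text"):
    """Bucket high-confidence samples by language; `text_key` is an
    assumption about the chunk schema."""
    buckets = {lang: [] for lang in LANGUAGES}
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            text = json.loads(line)[text_key]
            lang = high_confidence_language(text)
            if lang is not None:
                buckets[lang].append(text)
    return buckets
```

Calling `sample_by_language("chunk_0000.jsonl")` then yields per-language buckets to draw eval samples from; discarding low-confidence detections is what produces the overshoot mentioned above, since ambiguous samples are dropped rather than risk mislabeling.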
@robbiemu Thanks for the reminder! We will enhance the data card to make it clearer and more user-friendly.