I have extracted comments from Wikipedia, but it would be good if a native speaker checked the output. I did the version bump and the tests, but `make update-descriptive-statistics` failed here:
ValueError("SHA %s could not be resolved, git returned: %r" % (tokens[0], header_line.strip()))

Danish Foundation Models org

Hi @robvanderg - first of all, thanks for the PR!

Regarding the error: Hmm, odd. Can I ask you to paste the entire traceback? Then I will take a look at it (if it is too long, feel free to use a fold-down menu to hide it).

A few general comments:

  • Seems like you did not install the project (which is why you need the `sys` module)
  • Seems like your script uses specific dependencies (fasttext); you can specify these using script-level dependencies (e.g., as seen here)
  • Could I ask you to refactor the script to use an `if __name__ == "__main__":` guard, to allow us to import the script without running it?
  • Can you add a link to an example to the extended description of the dataset? Also add a few lines about what a "comment section" on Wikipedia contains.
  • You seem to use a specific git fork. Could you install that directly using the script-level dependencies, so that we can also lock the revision?
  • I would refactor the os.system calls to ideally call the functions directly within the Python implementation or use the subprocess module (a combined sketch of these points follows this list).
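To make the last few points concrete, here is a minimal sketch of what the script header could look like; the fork URL, revision, dump file name, and function name are placeholders, not the actual values from this PR (script-level dependencies use inline script metadata, which uv understands):

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "fasttext",
#     # hypothetical fork and revision; pin whichever fork/commit the script actually needs
#     "wikiextractor @ git+https://github.com/<user>/wikiextractor.git@<revision>",
# ]
# ///
import subprocess


def extract_comments() -> None:
    # prefer subprocess over os.system: no shell string parsing, and check=True raises on failure
    subprocess.run(
        ["python", "-m", "wikiextractor.WikiExtractor", "dawiki.xml.bz2", "-o", "extracted"],
        check=True,
    )


if __name__ == "__main__":
    # the guard lets other modules import this file without triggering the extraction
    extract_comments()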

Data

I did a check on some of the documents, and generally, I think they look good.

A few of the issues I found are:

  • Lack of formatting, e.g. the following is an extraction from this page:
[...] Vi forudsætter at verden er fuld af fornuftige personer, og at disse på kollektivt plan kan arbejde sig frem til en fornuftig konsensus på de enkelte artikler, på trods af nogle få egoistiske og destruktive elementer. Det kaldes optimisme.
Særlinge.
"Særlinge offentliggør konstant latterlige teorier på Internettet. De vil komme her og ødelægge alting."
[...]

Note that "Særlinge" is a header, even though nothing in the extraction marks it as one. If possible, I would love it if we could convert such headers to Markdown.
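For reference, a minimal sketch of the kind of conversion meant here, assuming the headers are still available as wikitext (== Title ==) at extraction time; the function name is illustrative:

import re

# '== Særlinge ==' (level 2) -> '## Særlinge', '=== X ===' -> '### X', etc.
HEADER_RE = re.compile(r"^(={2,6})\s*(.*?)\s*\1\s*$", re.MULTILINE)


def wikitext_headers_to_markdown(text: str) -> str:
    return HEADER_RE.sub(lambda m: "#" * len(m.group(1)) + " " + m.group(2), text)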

This lack of formatting also seems to prevent us from distinguishing between different comment types, e.g. in this example:

[...] jeg synes bare at vores fælles mål bør være kvalitet frem for kvantitet - jeg håber I er enige og vil tage ovenstående til efterretning.
- 15. nov 2003 kl.02:37 (CET)
Ja, og så er der ikke engang nogen, der har skrevet artiklerne Sisyfos og Sisyfosarbejde ordentligt endnu! ;-) Men det må indrømmes: man er og bliver selv den allerdårligste korrekturlæser at sætte på opgaven. 18. nov 2003 kl. 16:32 (CET)
Jeg er 100 % enig i hvad Kaare skriver (15. nov 2003 kl.02:37 (CET)).
Det er urimeligt at forlange at man skal være "ekspert" i hvad man bidrager til den danske (eller svenske) Wikipedia [...]
Sebastjan 19. nov. 2003 kl. 09:30 (CET)

There also seem to be some cases where text (I have only seen it with hyperlinked text) is missing, e.g., in the above, "Kåre Thor Olsen (Kaare)" is missing.

Documentation

We should add a note on personally identifiable information in this dataset. This should also be added to the main README.

I am not sure what the best course of action is here. Clearly, these people have a personal page that allows for redistribution - I will look into this more once our data manager is back from holiday, but since redistribution is allowed, I think we are fine.

By the way, this got me thinking: where are the about pages stored?

The error occurs on `data/danske-taler/descriptive_stats.json`:

(dynaword) rob@rob-itu:~/Projects/danish-dynaword$ make update-descriptive-statistics
--- 🚀 Recomputing Descriptive statistics ---
uv run src/dynaword/update_descriptive_statistics.py
Uninstalled 1 package in 21ms
Installed 1 package in 44ms
2025-07-22 08:24:39,126 - INFO - descriptive statistics for 'ai-aktindsigt' is already up to date, skipping.
2025-07-22 08:24:39,136 - INFO - descriptive statistics for 'cellar' is already up to date, skipping.
Traceback (most recent call last):
  File "/home/rob/Projects/danish-dynaword/src/dynaword/update_descriptive_statistics.py", line 167, in <module>
    main(
  File "/home/rob/Projects/danish-dynaword/src/dynaword/update_descriptive_statistics.py", line 159, in main
    update_dataset(dataset_name, force=force)
  File "/home/rob/Projects/danish-dynaword/src/dynaword/update_descriptive_statistics.py", line 87, in update_dataset
    elif check_is_ancestor(ancestor_rev=last_update, rev=rev):
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/src/dynaword/git_utilities.py", line 31, in check_is_ancestor
    return repo.is_ancestor(repo.commit(ancestor_rev), repo.commit(rev))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/repo/base.py", line 726, in commit
    return self.rev_parse(str(rev) + "^0")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/repo/fun.py", line 284, in rev_parse
    obj = name_to_object(repo, rev[:start])
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/repo/fun.py", line 205, in name_to_object
    return Object.new_from_sha(repo, hex_to_bin(hexsha))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/objects/base.py", line 149, in new_from_sha
    oinfo = repo.odb.info(sha1)
            ^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/db.py", line 41, in info
    hexsha, typename, size = self._git.get_object_header(bin_to_hex(binsha))
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/cmd.py", line 1679, in get_object_header
    return self.__get_object_header(cmd, ref)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/cmd.py", line 1663, in __get_object_header
    return self._parse_object_header(cmd.stdout.readline())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/git/cmd.py", line 1624, in _parse_object_header
    raise ValueError("SHA %s could not be resolved, git returned: %r" % (tokens[0], header_line.strip()))
ValueError: SHA b'bcb374cb39c593a23e26534d5f2f182dee3edceb' could not be resolved, git returned: b'bcb374cb39c593a23e26534d5f2f182dee3edceb missing'
make: *** [makefile:20: update-descriptive-statistics] Error 1

Updating the statistics through the Python command with a single dataset worked fine, though.

  • It was installed; I just hadn't activated the environment. I have added a note to CONTRIBUTING.md now.
  • Done
  • Done
  • Not sure what you mean by "Can you add a link to an example to the extended description of the dataset" - do you mean in the wiki-comments.md?
  • Do you mean the wikiextractor? Yes, you need a specific version made by me; otherwise you won't get the comments. It would indeed be neatest if it could be integrated (including installation as well as direct Python calls). I probably won't have time for that soon though.
  • I think the first steps make more sense in a .sh file to be honest, but I have now minimized the use of os.system to only the wikiextractor call.

Data

  • There was a parameter for this in the wikiextractor code apparently, just no way to set it. I have now enabled it, and headers get a ## prefix.
  • Yes, there are missing "entities"; this is a known issue and can be observed in many NLP datasets. The wikiextractor GitHub contains this issue: https://github.com/attardi/wikiextractor/issues/33 - I would actually be curious if there is a more up-to-date package for cleaning Wikipedia dumps somewhere. I have now excluded lines with a {{}} in them (see the sketch below), which makes the dataset smaller but removes some of those cases.
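A minimal sketch of the kind of filter described above, assuming "a {{}}" means any leftover double-brace template markup; the exact pattern used in the PR may differ:

import re

# matches leftover MediaWiki template markup such as '{{citation needed}}' or '{{}}'
TEMPLATE_RE = re.compile(r"\{\{.*?\}\}")


def drop_template_lines(text: str) -> str:
    # drop whole lines that still contain unexpanded template markup
    return "\n".join(line for line in text.splitlines() if not TEMPLATE_RE.search(line))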

By the way, this got me thinking: where are the about pages stored?

In the hjælp pages, I think. If you change `category == 'Wikipedia'` to `category == 'Hjælp'` in the code, they are included. We could also turn it into a `category in ['Hjælp', 'Wikipedia']`, but then perhaps the name should be changed to wiki-misc.

Question:

  • How can I change the sample in the readme? wikicomment_0 is not the most illustrative.
  • There are also still some unresolved links; you can easily find them by looking for "[". I am not sure why wikiextractor does not catch them. We could simply search for instances of [[.|.]] and then keep only the text after the | to resolve this; that should be correct in 99% of the cases (see the sketch after this list).
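A minimal sketch of the proposed heuristic, keeping only the display text of piped [[target|text]] links (and the title of plain [[target]] links); names are illustrative:

import re

# '[[Sisyfos|Sisyfosarbejde]]' -> 'Sisyfosarbejde'; '[[Sisyfos]]' -> 'Sisyfos'
PIPED_LINK_RE = re.compile(r"\[\[[^\[\]|]*\|([^\[\]]*)\]\]")
PLAIN_LINK_RE = re.compile(r"\[\[([^\[\]|]*)\]\]")


def resolve_links(text: str) -> str:
    text = PIPED_LINK_RE.sub(r"\1", text)
    return PLAIN_LINK_RE.sub(r"\1", text)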
Danish Foundation Models org

Hi @robvanderg , sorry for the delay on this!

I have fixed the issue in:
https://huggingface.co/datasets/danish-foundation-models/danish-dynaword/discussions/85

It should be merged soon, and then you should be able to merge from main (you will need to rebump the version). The fix also simplified the CI a bit (at least it makes it a lot faster).

Not sure what you mean by "Can you add a link to an example to the extended description of the dataset" - do you mean in the wiki-comments.md?

Yes! So simply add a link to an example of a comment section (so that people can go and see what it is)

Yes, there are missing "entities"; this is a known issue and can be observed in many NLP datasets. The wikiextractor GitHub contains this issue: https://github.com/attardi/wikiextractor/issues/33 - I would actually be curious if there is a more up-to-date package for cleaning Wikipedia dumps somewhere. I have now excluded lines with a {{}} in them, which makes the dataset smaller but removes some of those cases.

Oh, but isn't that weird? Then we will have a wiki page with some lines removed instead of just a word?

Also, can you add a description of this issue to a section like this: Opportunities for Improvement?

How can I change the sample in the readme? wikicomment_0 is not the most illustrative.

Shuffling the data is probably the easiest way to do it (the CI just takes the first one).
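For instance, a minimal sketch using the datasets library (the file name is a placeholder for the dataset's actual parquet file):

from datasets import load_dataset

ds = load_dataset("parquet", data_files="wiki-comments.parquet", split="train")
ds = ds.shuffle(seed=42)  # the CI picks the first row as the README sample
ds.to_parquet("wiki-comments.parquet")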

There are also still some unresolved links; you can easily find them by looking for "[". I am not sure why wikiextractor does not catch them. We could simply search for instances of [[.|.]] and then keep only the text after the | to resolve this; that should be correct in 99% of the cases.

I am not entirely sure if this just removes the problem or the link.


Is the GitHub repository the one where you make changes? If so, maybe we should "lock" the revision so that this script doesn't end up breaking in the future?

Otherwise, I think we are getting to a point where we can merge this PR

Also, you mentioned that it was easy to adapt this to Wikipedia articles as well. I would love an update of the current wiki dataset (which is currently quite outdated).

KennethEnevoldsen changed pull request status to open

I will update the missing things later (am on holiday currently).

I do not update the GitHub repository other than for this one. For the Wikipedia extension, I would like to find out a bit more about the missing entities, to see where they actually come from (the XML or wikiextractor) and how we can more accurately detect them if we can't avoid them (I would also compare some faulty sentences to the gigaword one to see if the issue is there as well). I will not have time for this very soon, though.

Danish Foundation Models org

Perfectly fine - no rush. Great if you get the chance to take a look at it.

Cannot merge
This branch has merge conflicts in the following files:
  • CHANGELOG.md
  • README.md
  • pyproject.toml
  • test_results.log
