Questions on Deduplication Strategy, Temporal Metadata, and Representation of Structured Content
I hope you're all doing well. I'm reaching out to better understand some aspects of the MegaMath dataset, especially regarding deduplication strategies, temporal metadata, and content categorization. Any insights from the community would be greatly appreciated.
1. Temporal Deduplication
Does MegaMath perform deduplication across different time points? For example:
If a webpage with the same URL (or near-identical content) appears in multiple years (e.g., 2013, 2014, 2015), is it deduplicated?
If so:
Which version is retained by default — earliest, latest, or highest-quality snapshot?
Are there specific tools or metrics used to evaluate content equivalence over time?
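To clarify what I have in mind by "content equivalence", here is a minimal sketch of the word-shingle Jaccard comparison commonly used for near-duplicate detection. I don't know what MegaMath actually uses; the shingle size and threshold below are illustrative assumptions.

```python
# Minimal sketch: word-shingle Jaccard similarity between two snapshots of
# the same URL. Shingle size and threshold are illustrative assumptions.

def shingles(text: str, n: int = 5) -> set:
    """Return the set of n-word shingles of a document."""
    words = text.split()
    if len(words) < n:
        return {tuple(words)}
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two documents."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

SAME_DOC_THRESHOLD = 0.8  # hypothetical cutoff for "near-identical"

def is_equivalent(snapshot_a: str, snapshot_b: str) -> bool:
    return jaccard(snapshot_a, snapshot_b) >= SAME_DOC_THRESHOLD
```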
2. Proposed Deduplication Strategy
We’re considering implementing a deduplication strategy based on the following rules (a rough sketch in code follows the list):
(1) Default Rule: Retain only the first occurrence of a webpage.
(2) Exception Rule: Keep a subsequent crawl if:
- The content has undergone significant modification (e.g., expanded depth/breadth), or
- The new version is of higher quality.
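To make the proposal concrete, here is a rough, self-contained sketch of rules (1) and (2), assuming snapshots arrive grouped by URL and sorted by crawl time. The similarity and quality functions are placeholders I chose for illustration, not anything MegaMath is known to use.

```python
# Rough sketch of rules (1) and (2) over one URL's snapshots, assumed sorted
# by crawl time. Thresholds and the quality heuristic are placeholder choices.

from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Snapshot:
    url: str
    crawl_time: str  # e.g., "2014-06-01"
    text: str

SIGNIFICANT_CHANGE = 0.5  # below this similarity, treat as a major revision
QUALITY_MARGIN = 1.1      # require a ~10% quality gain to swap versions

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def quality(text: str) -> float:
    # Placeholder: word count standing in for a real quality classifier.
    return float(len(text.split()))

def dedup_url_group(snapshots: list) -> list:
    kept = []
    for snap in snapshots:
        if not kept:
            kept.append(snap)      # rule (1): keep the first occurrence
            continue
        base = kept[-1]
        if similarity(base.text, snap.text) < SIGNIFICANT_CHANGE:
            kept.append(snap)      # rule (2): significantly modified content
        elif quality(snap.text) > quality(base.text) * QUALITY_MARGIN:
            kept[-1] = snap        # rule (2): swap in the higher-quality crawl
    return kept
```

One design question this raises: when a later crawl is only higher quality, should it replace the kept version (as above) or be kept alongside it as a second document?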
Could the community share thoughts on this approach?
Does this align with best practices?
Are there known pitfalls or alternative strategies we should consider?
Has something like this already been implemented in MegaMath, or is it under development?
If not, are there recommended tools or toolchains for implementing such a strategy at scale?
How can one define "significant content difference" or "higher quality" in practice? Some possibilities (sketched in code after this list):
- Semantic similarity thresholds?
- Content length or structural changes?
- Quality heuristics (e.g., readability, domain authority)?
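As a starting point, here is one way these criteria could be operationalized at scale, using MinHash from the datasketch library for similarity, a length ratio for structural change, and a crude math-density proxy for quality. Every threshold and heuristic here is an assumption for illustration.

```python
# Sketch: MinHash similarity (scalable via LSH), a length/structure delta,
# and a crude quality proxy. All thresholds are illustrative assumptions.

from datasketch import MinHash, MinHashLSH

NUM_PERM = 128

def minhash(text: str) -> MinHash:
    m = MinHash(num_perm=NUM_PERM)
    for token in text.split():
        m.update(token.encode("utf8"))
    return m

def significant_change(old: str, new: str) -> bool:
    # Structural/length change: the page roughly doubled or halved in size.
    ratio = len(new) / max(len(old), 1)
    if ratio > 2.0 or ratio < 0.5:
        return True
    # Lexical change: estimated Jaccard similarity below a cutoff.
    return minhash(old).jaccard(minhash(new)) < 0.5

def quality_score(text: str) -> float:
    # Crude proxy for math-heavy pages: density of math-like characters.
    mathy = sum(text.count(c) for c in "=+\\^_{}$")
    return mathy / max(len(text), 1)

# At corpus scale, an LSH index surfaces candidate duplicates without
# comparing every pair of snapshots:
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)
lsh.insert("example.org/page@2013", minhash("x^2 = 2 so x = sqrt(2)"))
print(lsh.query(minhash("x^2 = 2, hence x = sqrt(2)")))
```

Signals like readability or domain authority would need external metadata, which is partly why I'm asking what the pipeline already records.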
3. Temporal Metadata
Could someone also confirm whether the dataset includes timestamps or modification history for each entry, such as the following (a quick inspection sketch follows the list):
Original creation or generation time
Update or revision timestamps
Versioning information or change logs
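In case it is easier to answer by pointing at the data: assuming the release is loadable from the Hugging Face Hub, a few streamed records would show which fields exist. The repo id below is my guess, and a config name may also be required.

```python
# Stream a few records and list their fields to check for temporal metadata
# (e.g., "date", "crawl_time", "dump"). The repo id is an assumption.

from datasets import load_dataset

ds = load_dataset("LLM360/MegaMath", split="train", streaming=True)

for record in ds.take(3):
    print(sorted(record.keys()))
```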
4. Representation of Structured Content
Finally, I’d like to ask about how structured elements such as tables, figures, and mathematical formulas are represented in the dataset:
Are tables preserved in their original structure (e.g., HTML or Markdown format), or are they flattened into plain text?
Are figures and associated captions included? If so, are image URLs or binary data retained?
Are mathematical formulas encoded using LaTeX, MathML, or another format? Are they preserved accurately without loss of structure?
Overall, does the dataset maintain the semantic structure of complex documents, or is that structure simplified during preprocessing?
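To make this last question easy to answer empirically, here is a small audit sketch one could run over sampled documents to count which structural markers survive preprocessing. The regex patterns are heuristics I chose, not an official schema.

```python
# Count occurrences of structural markers in a document. The patterns are
# heuristic guesses at formats that might survive preprocessing.

import re

PATTERNS = {
    "latex_math": re.compile(r"\$\$.+?\$\$|\\\[.+?\\\]|\$[^$\n]+\$", re.S),
    "mathml": re.compile(r"<math[\s>]", re.I),
    "html_table": re.compile(r"<table[\s>]", re.I),
    "markdown_table": re.compile(r"^\|.+\|\s*$", re.M),
    "image_ref": re.compile(r"!\[[^\]]*\]\([^)]+\)|<img\s", re.I),
}

def structure_report(text: str) -> dict:
    """Count how often each structural marker appears in a document."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

sample = "| a | b |\n|---|---|\n| 1 | 2 |\n\nInline math $E = mc^2$."
print(structure_report(sample))
# {'latex_math': 1, 'mathml': 0, 'html_table': 0,
#  'markdown_table': 3, 'image_ref': 0}
```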
Thank you all very much for your time and contributions to this project. Looking forward to hearing your thoughts!