---
license: mit
---
# 🗿 Megalith-10m
### What is Megalith-10m?
![](megalith_banner.jpg)
Megalith-10m is a dataset of ~10 million links to Flickr images that were categorized as "photo" with [license info](https://www.flickr.com/services/api/flickr.photos.licenses.getInfo.htm) of:
* [No known copyright restrictions (Flickr commons)](https://www.flickr.com/commons/usage), or
* [United States Government Work](https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States), or
* [Public Domain Dedication (CC0)](https://creativecommons.org/publicdomain/zero/1.0/), or
* [Public Domain Mark](https://en.wikipedia.org/wiki/Public_Domain_Mark)
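If you want to check which numeric license IDs these names map to, the `flickr.photos.licenses.getInfo` endpoint linked above returns Flickr's full license table. Here's a minimal sketch in Python (it assumes you have your own Flickr API key in `FLICKR_API_KEY`, and that Flickr's license names still match the ones listed above):

```python
import os
import requests

# Fetch Flickr's license table (id -> name/url) via flickr.photos.licenses.getInfo.
resp = requests.get(
    "https://api.flickr.com/services/rest/",
    params={
        "method": "flickr.photos.licenses.getInfo",
        "api_key": os.environ["FLICKR_API_KEY"],  # your own API key
        "format": "json",
        "nojsoncallback": 1,
    },
)
licenses = resp.json()["licenses"]["license"]

# Keep the IDs whose names match the four licenses Megalith-10m accepts.
ALLOWED_NAMES = {
    "No known copyright restrictions",
    "United States Government Work",
    "Public Domain Dedication (CC0)",
    "Public Domain Mark",
}
allowed_ids = sorted(int(l["id"]) for l in licenses if l["name"] in ALLOWED_NAMES)
print(allowed_ids)  # expected: [7, 8, 9, 10]
```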
### What's the intended use of Megalith-10m?
Megalith-10m is intended to contain only links to wholesome, unedited, uncopyrighted photographs - the sort of images that we humans see when we walk around outside.
I collected Megalith-10m for the purpose of training neural networks, but you're welcome to use Megalith-10m for whatever you want.
Of course, I recommend conducting your own independent analysis of content and copyright status before using Megalith-linked images in Serious Projects.
### Where can I get text captions for Megalith-10m?
* [DrawThings.ai](https://drawthings.ai) have uploaded [`megalith-10m-sharecap`](https://huggingface.co/datasets/drawthingsai/megalith-10m-sharecap) (captions made with [ShareCaptioner](https://huggingface.co/Lin-Chen/ShareCaptioner))
* [AI Picasso](https://aipicasso.app) have uploaded [`megalith-10m-florence2`](https://huggingface.co/datasets/aipicasso/megalith-10m-florence2) (captions made with [Florence 2](https://huggingface.co/microsoft/Florence-2-large))
* [CaptionEmporium](https://huggingface.co/CaptionEmporium) have uploaded [`flickr-megalith-10m-internvl2-multi-caption`](https://huggingface.co/datasets/CaptionEmporium/flickr-megalith-10m-internvl2-multi-caption) (captions made with [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B), as well as shorter single-sentence captions made by summarizing the InternVL2/Florence2/ShareCaptioner results with [Llama3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct))
* [DrawThings.ai](https://drawthings.ai) is [working on](https://x.com/liuliu/status/1816318789795078307) further captioning with [MoonDream2](https://moondream.ai)
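All of these caption sets are regular Hugging Face `datasets`, so you can stream them without downloading everything up front. A minimal sketch (the split name and record schema are assumptions here; check each dataset card for the real layout):

```python
from datasets import load_dataset

# Stream the ShareCaptioner captions rather than downloading the whole dataset.
caps = load_dataset(
    "drawthingsai/megalith-10m-sharecap", split="train", streaming=True
)

# Print the first record to discover the schema (column names vary per dataset).
print(next(iter(caps)))
```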
### How can I efficiently download the images referenced by Megalith-10m?
* [DrawThings.ai](https://drawthings.ai) has archived the images linked by Megalith-10m in [`drawthingsai/megalith-10m`](https://huggingface.co/datasets/drawthingsai/megalith-10m)
* If you want to download Megalith-10m images directly from Flickr, I posted a sample [downloading command](https://huggingface.co/datasets/madebyollin/megalith-10m/discussions/2#6693f3a7e05c3f1e0e0d62c1) you can use with [img2dataset](https://github.com/rom1504/img2dataset/)
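For reference, here's roughly what an img2dataset invocation looks like via its Python API. This is a hedged sketch, not the exact command from the linked discussion: the input filename and `url_col` value are assumptions, so inspect the Megalith-10m parquet schema first.

```python
from img2dataset import download

download(
    url_list="megalith-10m.parquet",  # assumption: a local parquet of links
    input_format="parquet",
    url_col="url",                    # assumption: name of the URL column
    output_folder="megalith-images",
    output_format="webdataset",       # tar shards, convenient for training
    image_size=512,                   # target resize size in pixels
    resize_mode="keep_ratio",         # preserve aspect ratio while resizing
    processes_count=8,
    thread_count=64,
)
```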
### How was Megalith-10m collected?
I used the Flickr API to query for photos matching some basic criteria (SFW photo with CC0 / public domain license info), which gave me around 12 million links.
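Here's a sketch of what one page of such a query looks like against the Flickr REST API. The parameter values mirror the criteria described above (license IDs 7/8/9/10 correspond to the four accepted licenses), but treat the details as assumptions rather than the exact crawl configuration:

```python
import os
import requests

# One page of a flickr.photos.search query: SFW photos with PD-ish licenses.
resp = requests.get(
    "https://api.flickr.com/services/rest/",
    params={
        "method": "flickr.photos.search",
        "api_key": os.environ["FLICKR_API_KEY"],  # your own API key
        "license": "7,8,9,10",  # commons, US gov work, CC0, PD mark
        "content_type": 1,      # photos only
        "safe_search": 1,       # SFW only
        "extras": "license,url_o",
        "per_page": 500,        # Flickr's maximum page size
        "page": 1,
        "format": "json",
        "nojsoncallback": 1,
    },
)
photos = resp.json()["photos"]["photo"]
links = [p["url_o"] for p in photos if "url_o" in p]
```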
I then used various filtering strategies to exclude ~2 million image links which didn't appear to point to wholesome, minimally edited, public-domain photos.
These filtering strategies included:
1. Account-level filtering, based on
    1. Manual adjudication of the top 5000 most prolific accounts
    2. Repeated-watermark detection
2. Photo-level filtering, based on
    1. Image metadata
        1. Mention of copyright restrictions in the EXIF tags (see the sketch after this list)
        2. Mention of copyright restrictions in the text description
    2. Image content
        1. Duplicate detection
        2. CLIP-assisted checking (also sketched after this list) for
            1. Clearly non-photo images (illustrations, screenshots, 3d renders, etc.)
            2. Clearly non-wholesome images (violence, nudity, etc.)
        3. Minimum-resolution enforcement (at least 256x256 pixels)
        4. Manual spot-checking of some images and metadata
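To make the EXIF metadata check concrete, here's a minimal sketch of flagging copyright mentions. It's illustrative only: the phrase list is an assumption, not the actual filter used.

```python
from PIL import Image

# 0x8298 is the standard EXIF "Copyright" tag.
COPYRIGHT_TAG = 0x8298
# Assumption: phrases that would cast doubt on a public-domain license.
SUSPECT_PHRASES = ("all rights reserved", "rights reserved", "(c)", "©")

def exif_looks_restricted(path: str) -> bool:
    """Return True if the EXIF copyright field mentions rights restrictions."""
    exif = Image.open(path).getexif()
    value = str(exif.get(COPYRIGHT_TAG, "")).lower()
    return any(phrase in value for phrase in SUSPECT_PHRASES)
```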
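And here's a sketch of the CLIP-assisted content check, using the `openai/clip-vit-base-patch32` checkpoint from Hugging Face transformers as a stand-in (the prompt list and threshold are assumptions; the actual model and prompts used for filtering may differ):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Zero-shot labels: the first is what we want to keep, the rest are rejects.
LABELS = [
    "a photograph",
    "an illustration or drawing",
    "a screenshot of a computer interface",
    "a 3d render",
]

@torch.no_grad()
def looks_like_a_photo(path: str, threshold: float = 0.5) -> bool:
    """Zero-shot check: does CLIP assign most probability to 'a photograph'?"""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return probs[0].item() > threshold
```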
### What content does Megalith-10m contain?
The [demo notebook](./Megalith_Demo_Notebook.ipynb) shows a random sample of 100 images being loaded from the links in Megalith-10m.
Based on this random sample, I would estimate the following dataset statistics:
* 5-7% of images may have minor edits or annotations (timestamps, color grading, borders, etc.)
* 1-2% of images may be copyright-constrained (watermarks or text descriptions cast doubt on the license metadata)
* 1-2% of images may be non-wholesome (guns, suggestive poses, etc.)
* 1-2% of images may be non-photos (paintings, screenshots, etc.)
### Is 10 million images really enough to teach a neural network about the visual world?
For the parts of the visual world that are well-represented in Megalith-10m, definitely!
Projects like [CommonCanvas](https://arxiv.org/abs/2310.16825), [Mitsua Diffusion](https://huggingface.co/Mitsua/mitsua-diffusion-one), and [Matryoshka Diffusion](https://arxiv.org/abs/2310.15111)
have shown that you can train usable generative models on similarly sized image datasets.
Of course, many parts of the world aren't well-represented in Megalith-10m, so you'd need additional data to learn about those.
### What have people done with Megalith-10m?
1. AI Picasso have successfully trained a full text-to-image model, [CommonArt β](https://huggingface.co/aipicasso/commonart-beta), on Megalith-10m (and other open datasets).
2. I've successfully trained small [text-to-image models](https://x.com/madebyollin/status/1788282620981497981) on Megalith-10m for my own education.
3. Megalith-10m was among the datasets used to train [Janus](https://github.com/deepseek-ai/Janus), DeepSeek's autoregressive model for multimodal understanding and generation.