|
--- |
|
dataset_info: |
|
- config_name: default |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: filename |
|
dtype: string |
|
- name: mimetype |
|
dtype: string |
|
- name: width |
|
dtype: int64 |
|
- name: height |
|
dtype: int64 |
|
- name: file_url |
|
dtype: string |
|
- name: file_size |
|
dtype: int64 |
|
- name: small_url |
|
dtype: string |
|
- name: medium_url |
|
dtype: string |
|
- name: large_url |
|
dtype: string |
|
- name: hash |
|
dtype: string |
|
- name: source |
|
dtype: string |
|
- name: primary_tag |
|
dtype: string |
|
- name: tags |
|
sequence: string |
|
- name: tag_info |
|
struct: |
|
- name: character |
|
sequence: string |
|
- name: game |
|
sequence: string |
|
- name: group |
|
sequence: string |
|
- name: mangaka |
|
sequence: string |
|
- name: meta |
|
sequence: string |
|
- name: movie |
|
sequence: string |
|
- name: outfit |
|
sequence: string |
|
- name: series |
|
sequence: string |
|
- name: source |
|
sequence: string |
|
- name: source-copyright |
|
sequence: string |
|
- name: studio |
|
sequence: string |
|
- name: theme |
|
sequence: string |
|
- name: unknown |
|
sequence: string |
|
- name: vtuber |
|
sequence: string |
|
- name: image |
|
dtype: image |
|
splits: |
|
- name: train |
|
num_bytes: 5671866874980.54 |
|
num_examples: 3843124 |
|
download_size: 5691651895890 |
|
dataset_size: 5671866874980.54 |
|
- config_name: metadata |
|
features: |
|
- name: id |
|
dtype: int64 |
|
- name: filename |
|
dtype: string |
|
- name: mimetype |
|
dtype: string |
|
- name: width |
|
dtype: int64 |
|
- name: height |
|
dtype: int64 |
|
- name: file_url |
|
dtype: string |
|
- name: file_size |
|
dtype: int64 |
|
- name: small_url |
|
dtype: string |
|
- name: medium_url |
|
dtype: string |
|
- name: large_url |
|
dtype: string |
|
- name: hash |
|
dtype: string |
|
- name: source |
|
dtype: string |
|
- name: primary_tag |
|
dtype: string |
|
- name: tags |
|
sequence: string |
|
- name: tag_info |
|
struct: |
|
- name: character |
|
sequence: string |
|
- name: game |
|
sequence: string |
|
- name: group |
|
sequence: string |
|
- name: mangaka |
|
sequence: string |
|
- name: meta |
|
sequence: string |
|
- name: movie |
|
sequence: string |
|
- name: outfit |
|
sequence: string |
|
- name: series |
|
sequence: string |
|
- name: source |
|
sequence: string |
|
- name: source-copyright |
|
sequence: string |
|
- name: studio |
|
sequence: string |
|
- name: theme |
|
sequence: string |
|
- name: unknown |
|
sequence: string |
|
- name: vtuber |
|
sequence: string |
|
- name: mod |
|
dtype: int64 |
|
splits: |
|
- name: train |
|
num_bytes: 3537321017 |
|
num_examples: 3843124 |
|
download_size: 963680600 |
|
dataset_size: 3537321017 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
- config_name: metadata |
|
data_files: |
|
- split: train |
|
path: metadata/train-* |
|
task_categories: |
|
- text-to-image |
|
language: |
|
- en |
|
tags: |
|
- anime |
|
- image |
|
- mirror |
|
pretty_name: Zerochan |
|
size_categories: |
|
- 1M<n<10M |
|
--- |
|
|
|
# CommonArch - Zerochan |
|
|
|
## Description |
|
|
|
This dataset contains a dump of image resources from the website Zerochan, encompassing a wide variety of anime-style illustrations. |
|
It aims to provide a comprehensive collection for training anime-style image generation models and other related research. |
|
|
|
This dataset contains **3,843,124** illustrations scraped up to 2024-10-19 11:57:31 UTC, and it is regularly updated every three months via web crawling.
|
The next update is scheduled for the end of March 2025. |
|
|
|
## Dataset Details |
|
|
|
* **Number of Images:** 3,843,124 |
|
* **Image Quality:** Generally of "medium" quality. |
|
* **Image Format:** Includes various formats such as JPG, PNG, and WEBP. |
|
* **Storage Format:** Parquet |
|
|
|
This dataset is stored in Parquet format to facilitate integration with the Hugging Face ecosystem. |
|
The original image formats (JPG, PNG, WEBP) are preserved during storage. |
|
|
|
**Data Sampling:** The files have been re-ordered during conversion to ensure each file is a reasonably random sample of the parent dataset. |
|
You can easily obtain a random subset by loading only a portion of the files (e.g., `"hf://datasets/zenless-lab/zerochan/data/train-0000[12]-*.parquet"`). |
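
Since the shards are pre-shuffled, another option is to glob the shard list and sample it yourself. Below is a minimal sketch using `huggingface_hub`'s `HfFileSystem`; the shard count `k=5` is illustrative:

```python
import random

from datasets import load_dataset
from huggingface_hub import HfFileSystem

# List every Parquet shard of the default config without downloading anything.
fs = HfFileSystem()
shards = fs.glob("datasets/zenless-lab/zerochan/data/train-*.parquet")

# Shards are pre-shuffled, so any handful of them is a roughly random sample.
subset = [f"hf://{path}" for path in random.sample(shards, k=5)]
ds = load_dataset("parquet", data_files=subset, split="train")
```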
|
|
|
## Image Quality Distribution |
|
|
|
(*** WIP ***) |
|
|
|
## Data Origin and Processing |
|
|
|
This dataset is converted from DeepGHS's [deepghs/zerochan_full](https://huggingface.co/datasets/deepghs/zerochan_full).
|
The content is updated every 3 months through web crawling. |
|
|
|
## Columns |
|
|
|
The dataset contains the following columns (a quick way to inspect them without downloading the images is sketched after the list):
|
|
|
* **id**: Unique identifier (unsorted). |
|
* **filename**: Original filename from `deepghs/zerochan_full`, usually in the format `<id>.<ext>`. |
|
* **mimetype**: The MIME type of the downloaded image (e.g., `image/jpeg`, `image/png`).
|
* **width**: Original image width. |
|
* **height**: Original image height. |
|
* **file_url**: Original image URL. |
|
* **file_size**: Image file size (in bytes). |
|
* **small_url**: URL of the small preview image. |
|
* **medium_url**: URL of the medium preview image. |
|
* **large_url**: URL of the large preview image. |
|
* **hash**: Hash of the image. |
|
* **source**: Source of the image (only available for some rows). |
|
* **primary_tag**: Primary tag associated with the image. |
|
* **tags**: Original tags associated with the image. |
|
* **tag_info**: Structured tag information, grouped by category (character, game, series, studio, etc.).
|
* **image**: The dumped original image file (binary data). |
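
The card header also declares a lighter `metadata` config (the same columns minus `image`, plus an integer `mod` column), which makes schema exploration cheap. A minimal sketch, assuming the config resolves as declared:

```python
from datasets import load_dataset

# The "metadata" config omits the heavy `image` column (~964 MB download
# in total); streaming avoids even that.
meta = load_dataset("zenless-lab/zerochan", "metadata", split="train", streaming=True)

row = next(iter(meta))
print(row["id"], row["filename"], row["primary_tag"])
print(row["tag_info"]["character"])  # per-category tag lists
```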
|
|
|
## Intended Use |
|
|
|
This dataset is primarily intended for training anime-style image generation models. |
|
It can also be used for other research purposes related to image analysis and style transfer. |
|
|
|
## Usage |
|
The `image` column contains a dictionary with keys `bytes` and `path`. When using the `datasets` library, images are automatically decoded into PIL Images.
With other libraries, you need to decode the raw bytes yourself.
Here are some examples of how to load and use the dataset with different libraries:
|
|
|
**Using `datasets`:** |
|
|
|
```python |
|
from datasets import load_dataset |
|
ds = load_dataset("parquet", data_files="hf://datasets/zenless-lab/zerochan/data/train-0000[12]-*.parquet", split="train") |
|
print(ds) |
|
# Access the first image (automatically converted to PIL Image) |
|
image = ds[0]['image'] |
|
image.show() |
|
# Access the tags for the first image |
|
tags = ds[0]['tags'] |
|
print(tags) |
|
``` |
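
Since the default config's download size is roughly 5.7 TB, streaming may be preferable to a full download. A minimal sketch:

```python
from datasets import load_dataset

# Stream the default config instead of materializing ~5.7 TB on disk.
ds = load_dataset("zenless-lab/zerochan", split="train", streaming=True)

for example in ds.take(3):
    # Images are still decoded into PIL Images on the fly.
    print(example["id"], example["mimetype"], example["image"].size)
```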
|
|
|
**Using `Dask`:** |
|
|
|
```python |
|
import dask.dataframe as dd |
|
from PIL import Image |
|
import io |
|
def convert_image(row):
    # Each cell is a dict with "bytes" (raw file contents) and "path".
    if not isinstance(row, dict):
        return None
    img = Image.open(io.BytesIO(row["bytes"]))
    return img.convert("RGB")

df = dd.read_parquet("hf://datasets/zenless-lab/zerochan/data/train-0000[12]-*.parquet")
# meta tells Dask the mapped column holds arbitrary Python objects.
df["image"] = df["image"].map(convert_image, meta=("image", "object"))
print(df.head(10))  # head() computes only a few rows, avoiding a full download
|
``` |
|
|
|
**Using `Polars`:** |
|
|
|
```python |
|
import polars as pl |
|
from PIL import Image |
|
import io |
|
df = pl.read_parquet("hf://datasets/zenless-lab/zerochan/data/train-0000[12]-*.parquet")
df = df.with_columns(
    # Decode each struct cell's raw bytes into a PIL Image object.
    pl.col("image").map_elements(lambda x: Image.open(io.BytesIO(x["bytes"])), return_dtype=pl.Object)
)
print(df)
|
``` |
|
|
|
## License |
|
|
|
**Important Notice Regarding Licensing:** |
|
|
|
This dataset does not currently have a defined license. |
|
**Before using this dataset, it is your responsibility to ensure that your use complies with the copyright laws in your jurisdiction.** |
|
You should determine whether your use qualifies for a copyright exception or limitation, such as fair use (in the US), Article 30-4 of the Japanese Copyright Act,
or the corresponding provisions of the EU Directive on Copyright in the Digital Single Market.
|
**This is not legal advice; please consult with a legal professional to assess the risks associated with your intended use.** |
|
|
|
## Contributing |
|
|
|
Contributions to this dataset are welcome! You can contribute by: |
|
|
|
* Reporting issues or suggesting improvements. |
|
* Submitting new images (please ensure they comply with the website's terms of service). |
|
* Correcting or improving tags and metadata. |
|
|
|
## Contact |
|
|
|
If you have any questions or issues, please feel free to open an issue on the [Hugging Face Dataset repository](https://huggingface.co/datasets/zenless-lab/zerochan). |
|
|
|
## Future Updates |
|
|
|
The dataset will be updated approximately every three months with new images. |