---
dataset_info:
  features:
    - name: __key__
      dtype: string
    - name: image_tokens
      sequence: int64
    - name: text_tokens
      sequence: int64
    - name: text
      dtype: string
    - name: data
      dtype: string
  splits:
    - name: train
      num_bytes: 2727128395
      num_examples: 2905954
    - name: validation
      num_bytes: 12618157
      num_examples: 13443
  download_size: 964606495
  dataset_size: 2739746552
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

# cc3m_tokenized

Experiments for training autoregressive models for text-to-image generation.

This dataset is derived from Conceptual Captions (CC3M), which contains roughly 3.3M image-caption pairs. Images are tokenized with ByteDance's 1d-tokenizer, which encodes a 256×256 image into 32 tokens while still achieving state-of-the-art reconstruction fidelity. For text we train a BPE tokenizer on the image captions with the vocabulary size set to 30K: 4096 tokens were used to represent image codes, 9 to represent special tokens, and the remaining 25895 tokens for text.
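The 30K vocabulary split above can be sketched as a simple offset scheme. The ordering (special tokens first, then image codes, then text) is an assumption for illustration, not necessarily the layout used by this repo's tokenizer:

```python
# Hypothetical vocabulary layout: [9 special] [4096 image codes] [25895 text BPE].
NUM_SPECIAL = 9
NUM_IMAGE = 4096
NUM_TEXT = 25895
VOCAB_SIZE = NUM_SPECIAL + NUM_IMAGE + NUM_TEXT  # 30000, matching the README

def image_code_to_vocab_id(code: int) -> int:
    """Map a raw 1d-tokenizer code (0..4095) into the combined vocabulary."""
    assert 0 <= code < NUM_IMAGE
    return NUM_SPECIAL + code

def is_image_vocab_id(vocab_id: int) -> bool:
    """True if a combined-vocabulary id falls in the image-code range."""
    return NUM_SPECIAL <= vocab_id < NUM_SPECIAL + NUM_IMAGE
```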

## Visualization

(sample images omitted)

## Inference

To generate images, download and save the image_tokenizer and checkpoint-20000 in the root directory of this repo, then run infer.py with your prompt.
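The `<|image:N|>` serialization used by the dataset can be round-tripped with two small helpers. This is a sketch of the string format only (the helper names are hypothetical, not part of infer.py):

```python
import re

def format_prompt(text: str, image_tokens: list[int]) -> str:
    """Serialize a caption plus its image codes in the dataset's string format."""
    body = "".join(f"<|image:{t}|>" for t in image_tokens)
    return f"{text}<|startofimage|>{body}<|endofimage|>"

def parse_image_tokens(sample: str) -> list[int]:
    """Recover the raw image codes from a serialized sample."""
    return [int(m) for m in re.findall(r"<\|image:(\d+)\|>", sample)]
```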

## Training Procedure

For training we prompt the model to generate an image conditioned on a caption, for example: "a river has burst it 's banks and has spread out onto arable farmland alongside<|startofimage|><|image:2931|><|image:560|><|image:763|><|image:1539|><|image:3161|><|image:1997|><|image:3376|><|image:510|><|image:3036|><|image:1585|><|image:1853|><|image:1970|><|image:2687|><|image:1436|><|image:2213|><|image:3968|><|image:3999|><|image:877|><|image:725|><|image:3013|><|image:438|><|image:3159|><|image:2936|><|image:3003|><|image:2261|><|image:2137|><|image:3821|><|image:1513|><|image:3536|><|image:311|><|image:494|><|image:413|><|endofimage|>". We mask the logits to the image-token range when predicting image tokens, a technique that showed performance improvements for speech-to-text tasks, and apply the standard cross-entropy loss over the masked logits.
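The masked loss above can be written as cross-entropy restricted to an allowed vocabulary slice (disallowed logits are dropped, which is equivalent to setting them to -inf before the softmax). A minimal pure-Python sketch, not the repo's training code:

```python
import math

def masked_cross_entropy(logits, target, allowed):
    """Cross-entropy over only the `allowed` vocabulary indices.

    logits : per-token scores for the full vocabulary
    target : index of the correct token (must be in `allowed`)
    allowed: indices the model is permitted to predict (e.g. image-token range)
    """
    allowed = list(allowed)
    assert target in allowed
    kept = [logits[i] for i in allowed]
    # Numerically stable log-sum-exp over the allowed slice only.
    m = max(kept)
    log_z = m + math.log(sum(math.exp(x - m) for x in kept))
    return log_z - logits[target]
```

With uniform logits, restricting the softmax to 4 allowed tokens gives a loss of log 4 instead of log |V|, which is the effect the masking is after.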

Sample generations at training iterations 5000 through 11000 for the prompts "hard rock artist performing music", "football player during a match", "concept vector illustration showing a flag", "police officer and soldiers arrest military combatant", and "bird on a tree" (images omitted).