---
license: mit
dataset_info:
  features:
    - name: image
      dtype: image
    - name: bbox
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 4189624847.8344607
      num_examples: 19514
    - name: test
      num_bytes: 523649431.37584555
      num_examples: 2439
    - name: val
      num_bytes: 523864129.7896938
      num_examples: 2440
  download_size: 4642434734
  dataset_size: 5237138409
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: val
        path: data/val-*
task_categories:
  - image-to-text
tags:
  - code
pretty_name: vision2ui
size_categories:
  - 100M<n<1B
---

Automatically generating UI code from webpage design visions can significantly reduce the burden on developers and enable beginner developers or designers to generate Web pages directly from design diagrams. Prior research has achieved the generation of UI code from rudimentary design visions or sketches by designing deep neural networks. Inspired by the groundbreaking advancements of Multimodal Large Language Models (MLLMs), the automatic generation of UI code from high-fidelity design images is now emerging as a viable possibility. Nevertheless, our investigation reveals that existing MLLMs are hampered by the scarcity of authentic, high-quality, and large-scale datasets, leading to unsatisfactory performance in automated UI code generation. To bridge this gap, we present a novel dataset, termed VISION2UI, extracted from real-world scenarios and augmented with comprehensive layout information, tailored specifically for finetuning MLLMs for UI code generation. The dataset is derived through a series of operations: collecting, cleaning, and filtering pages from the open-source Common Crawl dataset. To uphold quality, a neural scorer trained on labeled samples is used to refine the data and retain higher-quality instances. This process currently yields 2,000 parallel samples pairing design visions with UI code (much more is coming soon).

The paper can be accessed at: https://arxiv.org/abs/2404.06369

Much more data is coming soon!
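As a quick-start sketch, the dataset can be loaded with the 🤗 Datasets library. The repository id below is a placeholder (substitute the actual Hub id of this dataset); the field names follow the `dataset_info` features declared above, and the exact serialization of the `bbox` string is not specified here, so it may require parsing on your side.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hugging Face Hub id of this dataset.
ds = load_dataset("webcode2m", split="train")

example = ds[0]
image = example["image"]  # PIL image of the rendered webpage (dtype: image)
bbox = example["bbox"]    # layout information serialized as a string
code = example["text"]    # corresponding UI code as a string

print(image.size)
print(code[:200])
```

The `test` and `val` splits can be loaded the same way by changing the `split` argument.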