Commit d530601 · 1 parent: 5c42c40
RyanLi0802 committed: added example usage to readme

Files changed (1):
  1. README.md +137 -2

README.md CHANGED
@@ -6,8 +6,143 @@ tags:
 
 The Sketch2Code dataset consists of 731 human-drawn sketches paired with 484 real-world webpages from the [Design2Code dataset](https://huggingface.co/datasets/SALT-NLP/Design2Code), serving to benchmark Vision-Language Models (VLMs) on converting rudimentary sketches into web design prototypes.
 
- Each example consists of a pair of source HTML and rendered webpage screenshot (stored in `webpages/` directory under name `{webpage_id}.html` and `{webpage_id}.png`), as well as 1 to 3 sketches drawn by human annotators (stored in `sketches/` directory under name {webpage_id}_{sketch_id}.png).
+ Each example consists of a source HTML file and its rendered screenshot (stored in the `webpages/` directory as `{webpage_id}.html` and `{webpage_id}.png`), along with 1 to 3 sketches drawn by human annotators (stored in the `sketches/` directory as `{webpage_id}_{sketch_id}.png`).
 
 Note that all images in these webpages are replaced by a blue placeholder image (rick.jpg).
 
- Please refer to our [Project Page](https://salt-nlp.github.io/Sketch2Code-Project-Page/) for more detailed information.
+ Please refer to our [Project Page](https://salt-nlp.github.io/Sketch2Code-Project-Page/) for more detailed information.
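+
+ For example, the naming convention makes it straightforward to pair each sketch with its ground-truth webpage. Below is a minimal sketch of the lookup (the helper name and the file name `104_2.png` are hypothetical; it assumes the `sketches/` and `webpages/` layout described above):
+ ```python
+ # Hypothetical helper: recover the ground-truth files for a given sketch.
+ def ground_truth_for(sketch_filename):
+     # e.g. "104_2.png" -> webpage_id "104", sketch_id "2"
+     webpage_id, _sketch_id = sketch_filename.rsplit(".", 1)[0].rsplit("_", 1)
+     return f"webpages/{webpage_id}.html", f"webpages/{webpage_id}.png"
+
+ print(ground_truth_for("104_2.png"))  # ('webpages/104.html', 'webpages/104.png')
+ ```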
+
+ ### Example Usage
+ You can download the full dataset through this [link](https://huggingface.co/datasets/SALT-NLP/Sketch2Code/resolve/main/sketch2code_dataset_v1.zip?download=true). After unzipping, all 731 sketches (`{webpage_id}_{sketch_id}.png`) and 484 webpage screenshots + HTMLs (`{webpage_id}.html` and `{webpage_id}.png`) will appear flattened under `sketch2code_dataset_v1_cleaned/`. We also include `rick.jpg`, which is used to render the image placeholders in the HTML code.
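+
+ If you prefer to script the download, the same archive can be fetched with `huggingface_hub` (a minimal sketch; the archive filename matches the link above):
+ ```python
+ import zipfile
+ from huggingface_hub import hf_hub_download
+
+ # Download the dataset archive (cached locally by huggingface_hub).
+ zip_path = hf_hub_download(
+     repo_id="SALT-NLP/Sketch2Code",
+     repo_type="dataset",
+     filename="sketch2code_dataset_v1.zip",
+ )
+
+ # Extract into the current directory; everything lands flattened under
+ # sketch2code_dataset_v1_cleaned/ as described above.
+ with zipfile.ZipFile(zip_path) as zf:
+     zf.extractall(".")
+ ```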
+
+ Alternatively, you may access the data online through `huggingface_hub`. Below is a sample script that accesses the data via `huggingface_hub` and generates predictions with Llava-1.6-8b:
+ ````python
+ import re
+ import torch
+
+ from PIL import Image
+ from tqdm import tqdm
+ from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
+ from huggingface_hub import HfApi, hf_hub_download
+
+
+ def extract_html(code):
+     # re.DOTALL allows the dot (.) to match newlines as well
+     matches = re.findall(r'```(.*?)```', code, re.DOTALL)
+     if matches:
+         return matches[-1]  # return the last fenced block found
+     return None
+
+
+ def cleanup_response(response):
+     if not response:
+         return None
+     if '<!DOCTYPE' not in response and '<html>' not in response:
+         # invalid html, return none
+         return None
+
+     ## simple post-processing: strip stray fences and a leading "html" tag
+     if response[:3] == "```":
+         response = response[3:].strip()
+     if response[-3:] == "```":
+         response = response[:-3].strip()
+     if response[:4] == "html":
+         response = response[4:].strip()
+
+     ## strip anything before '<!DOCTYPE'
+     if '<!DOCTYPE' in response:
+         response = response.split('<!DOCTYPE', 1)[1]
+         response = '<!DOCTYPE' + response
+
+     ## strip anything after '</html>'
+     if '</html>' in response:
+         response = response.split('</html>')[0] + '</html>'
+     return response
+
+
+ def llava_call(model, processor, user_message, image, history=None):
+     def parse_resp(text_output):
+         # keep only the text after the final "assistant" marker in the decoded output
+         idx = text_output.rfind("assistant")
+         if idx > -1:
+             return text_output[idx + len("assistant"):].strip()
+         return text_output
+
+     if not history:
+         conversation = [
+             {
+                 "role": "user",
+                 "content": [
+                     {"type": "text", "text": user_message},
+                     {"type": "image"},
+                 ],
+             },
+         ]
+     else:
+         conversation = history
+         conversation.append({
+             "role": "user",
+             "content": [
+                 {"type": "text", "text": user_message},
+             ],
+         })
+
+     prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
+     inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
+     output = parse_resp(processor.decode(
+         model.generate(**inputs, max_new_tokens=4096, do_sample=True,
+                        temperature=0.5, repetition_penalty=1.1)[0],
+         skip_special_tokens=True,
+     ))
+
+     conversation.append({
+         "role": "assistant",
+         "content": [
+             {"type": "text", "text": output},
+         ],
+     })
+     return output, conversation
+
+
+ api = HfApi(token="your_hf_access_token")
+ repo_id = "SALT-NLP/Sketch2Code"
+
+ files = api.list_repo_files(repo_id, repo_type="dataset")
+ sketch_files = [file for file in files if file.startswith('sketches/')][:5]  # running only the first 5 sketches
+
+ prompt = '''You are an expert web developer who specializes in HTML and CSS. A user will provide you with a sketch design of a webpage following the wireframing conventions, where images are represented as boxes with an "X" inside and text is replaced with curly lines. You need to return a single html file that uses HTML and CSS to produce a webpage that strictly follows the sketch layout. Include all CSS code in the HTML file itself. If it involves any images, use "rick.jpg" as the placeholder name. You should try your best to figure out what text should be placed in each text block. If you are unsure, you may use "lorem ipsum..." as the placeholder text. However, you must make sure that the positions and sizes of these placeholder text blocks match those on the provided sketch.
+
+ Do your best to reason out what each element in the sketch represents and write an HTML file with embedded CSS that implements the design. Do not hallucinate any dependencies to external files. Pay attention to things like size and position of all the elements, as well as the overall layout. You may assume that the page is static and ignore any user interactivity.
+
+ Here is a sketch design of a webpage. Could you write the HTML+CSS code of this webpage for me?
+
+ Please format your code as
+ ```
+ {{HTML_CSS_CODE}}
+ ```
+ Remember to use "rick.jpg" as the placeholder for any images'''
+
+ model_name = "llava-hf/llama3-llava-next-8b-hf"
+ processor = LlavaNextProcessor.from_pretrained(model_name)
+ model = LlavaNextForConditionalGeneration.from_pretrained(
+     model_name,
+     device_map="auto",
+     load_in_8bit=True,  # requires the bitsandbytes package
+     torch_dtype=torch.float16,
+ )
+
+ for sketch_file in tqdm(sketch_files):
+     # download each sketch individually instead of fetching the whole dataset
+     sketch_path = hf_hub_download(repo_id=repo_id, repo_type="dataset", filename=sketch_file)
+     sketch = Image.open(sketch_path)
+
+     agent_resp, _ = llava_call(model, processor, prompt, sketch)
+     html_response = cleanup_response(extract_html(agent_resp))
+
+     if not html_response:
+         html_response = "Error: HTML not Generated"
+
+     output_path = sketch_path.split('/')[-1].replace(".png", ".html")
+     with open(output_path, 'w', encoding='utf-8') as f:
+         f.write(html_response)
+
+     print(f"Output saved to {output_path}")
+ ````
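+
+ To eyeball a generated page against the reference screenshot, you can render the saved HTML with a headless browser. Below is a minimal sketch using Playwright (not a dependency of this dataset; any renderer works, and the file names are hypothetical). Make sure a copy of `rick.jpg` sits next to the HTML file so the image placeholders resolve:
+ ```python
+ import os
+ from playwright.sync_api import sync_playwright
+
+ def render_html(html_path, png_path):
+     # Render a local HTML file to a full-page screenshot.
+     with sync_playwright() as p:
+         browser = p.chromium.launch()
+         page = browser.new_page()
+         page.goto("file://" + os.path.abspath(html_path))
+         page.screenshot(path=png_path, full_page=True)
+         browser.close()
+
+ render_html("104_2.html", "104_2_pred.png")
+ ```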