---
license: openrail
datasets:
- gvecchio/MatSynth
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- material
- pbr
- svbrdf
- 3d
- texture
inference: false
---

# StableMaterials

**StableMaterials** is a diffusion-based model for generating photorealistic physically based rendering (PBR) materials. It combines semi-supervised learning with Latent Diffusion Models (LDMs) to produce high-resolution, tileable material maps from text or image prompts. StableMaterials infers both diffuse (Basecolor) and specular (Roughness, Metallic) properties, as well as the material mesostructure (Height, Normal). 🌟

<center>
<img src="https://gvecchio.com/stablematerials/static/images/teaser.jpg" style="border-radius:10px;">
</center>

⚠️ This repo contains the weights and the pipeline code for the **base model** in both the LDM and LCM versions. The refiner model, along with its pipeline and the inpainting pipeline, will be released shortly.

## Model Architecture

<center>
<img src="https://gvecchio.com/stablematerials/static/images/architecture.png" style="border-radius:10px;">
</center>

### 🧩 Base Model
The base model generates low-resolution (512x512) material maps using a compression VAE (Variational Autoencoder) followed by a latent diffusion process. The architecture builds on the MatFuse adaptation of the LDM paradigm, optimized for material map generation with a focus on diversity and high visual fidelity. 🖼️

### 🔑 Key Features
- **Semi-Supervised Learning**: The model is trained on both annotated and unannotated data, leveraging adversarial training to distill knowledge from large-scale pretrained image generation models. 📚
- **Knowledge Distillation**: Incorporates unannotated texture samples generated with the SDXL model into the training process, bridging the gap between the two data distributions. 🌐
- **Latent Consistency**: Employs a latent consistency model for fast generation, reducing the number of inference steps required to produce high-quality outputs. ⚡
- **Feature Rolling**: Introduces a novel tileability technique that rolls the feature maps of each convolutional and attention layer in the U-Net architecture. 🎢

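The feature-rolling idea can be sketched as a toy in PyTorch. This is an illustrative approximation, not the model's actual implementation: a feature map is rolled by a shift before a convolution and rolled back afterwards, so content near the borders is processed with wrap-around context (the circular padding here is an assumption of the sketch).

```python
import torch
import torch.nn as nn

def rolled_conv(x: torch.Tensor, conv: nn.Conv2d, shift: int) -> torch.Tensor:
    # Roll the feature map so its borders meet in the middle, convolve,
    # then roll back: border pixels are processed with wrap-around context.
    x = torch.roll(x, shifts=(shift, shift), dims=(-2, -1))
    x = conv(x)
    return torch.roll(x, shifts=(-shift, -shift), dims=(-2, -1))

# Toy usage: circular padding keeps the wrap-around consistent (an assumption)
conv = nn.Conv2d(8, 8, kernel_size=3, padding=1, padding_mode="circular")
feat = torch.randn(1, 8, 32, 32)
out = rolled_conv(feat, conv, shift=16)
print(out.shape)
```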
## Intended Use

StableMaterials is designed for generating high-quality, realistic PBR materials for computer graphics applications such as video game development, architectural visualization, and digital content creation. The model supports both text- and image-based prompting, allowing for versatile and intuitive material generation. 🕹️🏛️📸

## 🧑‍💻 Usage

To generate materials with the StableMaterials base model, use the following snippets:

### Standard model

```python
import torch

from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load the pipeline, enabling the execution of custom pipeline code
pipe = DiffusionPipeline.from_pretrained(
    "gvecchio/StableMaterials",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Text prompt example
material = pipe(
    prompt="Old rusty metal bars with peeling paint",
    guidance_scale=10.0,
    tileable=True,
    num_images_per_prompt=1,
    num_inference_steps=50,
).images[0]

# Image prompt example
material = pipe(
    prompt=load_image("path/to/input_image.jpg"),
    guidance_scale=10.0,
    tileable=True,
    num_images_per_prompt=1,
    num_inference_steps=50,
).images[0]

# The output includes basecolor, normal, height, roughness, and metallic maps
basecolor = material.basecolor
normal = material.normal
height = material.height
roughness = material.roughness
metallic = material.metallic
```
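The individual maps can then be written to disk. A minimal sketch, assuming each map attribute is a `PIL.Image` (adapt to the actual return type); the `SimpleNamespace` dummy below only stands in for a generated material:

```python
import os
import tempfile
import types

from PIL import Image

def save_material_maps(material, out_dir: str = ".") -> None:
    """Save each PBR map as a PNG, skipping maps that are missing."""
    for name in ("basecolor", "normal", "height", "roughness", "metallic"):
        img = getattr(material, name, None)
        if img is not None:
            img.save(os.path.join(out_dir, f"{name}.png"))

# Toy usage with placeholder images standing in for a generated material
dummy = types.SimpleNamespace(
    basecolor=Image.new("RGB", (8, 8)),
    normal=Image.new("RGB", (8, 8)),
    height=Image.new("L", (8, 8)),
    roughness=Image.new("L", (8, 8)),
    metallic=Image.new("L", (8, 8)),
)
out_dir = tempfile.mkdtemp()
save_material_maps(dummy, out_dir)
print(sorted(os.listdir(out_dir)))
# ['basecolor.png', 'height.png', 'metallic.png', 'normal.png', 'roughness.png']
```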

### Consistency model

```python
import torch

from diffusers import DiffusionPipeline, LCMScheduler, UNet2DConditionModel
from diffusers.utils import load_image

# Load the LCM-distilled U-Net
unet = UNet2DConditionModel.from_pretrained(
    "gvecchio/StableMaterials",
    subfolder="unet_lcm",
    torch_dtype=torch.float16,
)

# Load the pipeline, enabling the execution of custom pipeline code
pipe = DiffusionPipeline.from_pretrained(
    "gvecchio/StableMaterials",
    trust_remote_code=True,
    unet=unet,
    torch_dtype=torch.float16,
)

# Replace the default scheduler with the LCM scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

pipe.to("cuda")

# Text prompt example
material = pipe(
    prompt="Old rusty metal bars with peeling paint",
    guidance_scale=10.0,
    tileable=True,
    num_images_per_prompt=1,
    num_inference_steps=4,  # LCM enables fast generation in as few as 4 steps
).images[0]

# Image prompt example
material = pipe(
    prompt=load_image("path/to/input_image.jpg"),
    guidance_scale=10.0,
    tileable=True,
    num_images_per_prompt=1,
    num_inference_steps=4,
).images[0]

# The output includes basecolor, normal, height, roughness, and metallic maps
basecolor = material.basecolor
normal = material.normal
height = material.height
roughness = material.roughness
metallic = material.metallic
```
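Tileability can be inspected by pasting a generated map in a 2x2 grid and checking the seams. A minimal sketch with PIL; the solid-gray placeholder below stands in for a real map such as `material.basecolor`:

```python
from PIL import Image

def tile_2x2(img: Image.Image) -> Image.Image:
    """Paste an image in a 2x2 grid to inspect seam continuity."""
    w, h = img.size
    grid = Image.new(img.mode, (2 * w, 2 * h))
    for dx in (0, w):
        for dy in (0, h):
            grid.paste(img, (dx, dy))
    return grid

# Toy usage with a placeholder image
tiled = tile_2x2(Image.new("RGB", (64, 64), "gray"))
print(tiled.size)  # (128, 128)
```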

## 🗂️ Training Data

The model is trained on a combined dataset from MatSynth and Deschaintre et al., comprising 6,198 unique PBR materials. It also incorporates 4,000 texture-text pairs generated with the SDXL model from a variety of prompts. 🔍

## 🔧 Limitations

While StableMaterials shows robust performance, it has some limitations:
- It may struggle with complex prompts describing intricate spatial relationships. 🧩
- It may not accurately represent highly detailed patterns or figures. 🎨
- It occasionally generates incorrect reflectance properties for certain material types. ✨

Future updates aim to address these limitations by incorporating more diverse training prompts and improving the model's handling of complex textures.

## 📖 Citation

If you use this model in your research, please cite the following paper:

```
@article{vecchio2024stablematerials,
  title={StableMaterials: Enhancing Diversity in Material Generation via Semi-Supervised Learning},
  author={Vecchio, Giuseppe},
  journal={arXiv preprint arXiv:2406.09293},
  year={2024}
}
```