bardofcodes committed
Commit 7be9180 · 1 Parent(s): ae73afb
Files changed (3):
  1. README.md +126 -1
  2. assets/arch_1.png +0 -0
  3. assets/arch_2.png +0 -0
README.md CHANGED
@@ -10,4 +10,129 @@ tags:
  - Editing
  - Analogy
  - Patterns
---

# Pattern Analogies V1.0 Model Card

This repository contains TriFuser, a diffusion model trained for analogical editing of pattern images, introduced in our recent tech report **"Pattern Analogies: Learning to Perform Programmatic Image Edits by Analogy"**.


## Abstract

Pattern images are everywhere in the digital and physical worlds, and tools to edit them are valuable. But editing pattern images is tricky: desired edits are often *programmatic*, i.e., structure-aware edits that alter the underlying program which generates the pattern. One could attempt to infer this underlying program, but current methods for doing so struggle with complex images and produce unorganized programs that make editing tedious. In this work, we introduce a novel approach to performing programmatic edits on pattern images. By using a *pattern analogy*, a pair of simple patterns that demonstrates the intended edit, together with a learning-based generative model to execute these edits, our method allows users to edit patterns intuitively. To enable this paradigm, we introduce SplitWeaver, a domain-specific language that, combined with a framework for sampling synthetic pattern analogies, enables the creation of a large, high-quality synthetic training dataset. We also present TriFuser, a Latent Diffusion Model (LDM) designed to overcome critical issues that arise when naively deploying LDMs to this task. Extensive experiments on real-world, artist-sourced patterns reveal that our method faithfully performs the demonstrated edit while also generalizing to related pattern styles beyond its training distribution.

Please check out our [preprint]() for more information.

## Model Details

The TriFuser model uses the image-variation model of [Versatile Diffusion](https://huggingface.co/shi-labs/versatile-diffusion) as its starting point. It takes three images (A, A*, B) as input and generates an image B* as output, such that B* satisfies the analogical relation A : A* :: B : B*. The figures below show the architecture of TriFuser in detail; please see our preprint for more information.

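Concretely, the pipeline consumes an analogy as a list of (A, A*, B) triplets. The sketch below illustrates the expected input structure only; the solid-color placeholder images and the 512x512 size are illustrative assumptions, standing in for real pattern images:

```python
from PIL import Image

# Placeholder stand-ins for real pattern images (size is an assumption).
a = Image.new("RGB", (512, 512), "white")      # simple pattern A
a_star = Image.new("RGB", (512, 512), "red")   # edited simple pattern A*
b = Image.new("RGB", (512, 512), "blue")       # target pattern B

# The pipeline takes a list of (A, A*, B) triplets and returns B* images.
pipe_input = [(a, a_star, b)]
```

With a loaded pipeline, calling it on `pipe_input` returns candidate B* images, as in the full usage example later in this card.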
A single flow of Versatile Diffusion contains a VAE, a diffuser, and a context encoder, and thus handles one task (e.g., text-to-image) under one data type (e.g., image) and one context type (e.g., text). The multi-flow structure of Versatile Diffusion is shown in the diagrams below:

<p align="center">
<img src="https://huggingface.co/bardofcodes/pattern_analogies/resolve/main/assets/arch_1.png" width="99%">
</p>
<p align="center">
<img src="https://huggingface.co/bardofcodes/pattern_analogies/resolve/main/assets/arch_2.png" width="99%">
</p>

- **Developed by:** Aditya Ganeshan, Thibault Groueix, Paul Guerrero, Radomír Měch, Matthew Fisher and Daniel Ritchie
- **Model type:** Diffusion-based image-to-image generative model
- **Language(s):** English
- **License:** Adobe Research License
- **Resources for more information:** More information, along with the training code, will be released in this [GitHub Repository](https://github.com/bardofcodes/pattern_analogies).

## Citation

TBD

## Usage

You can use the model with the [🧨Diffusers library](https://github.com/huggingface/diffusers).

### PatternAnalogiesTrifuser

This repository also contains example inputs that demonstrate the model's capabilities. Set `EXAMPLE_ID` to a value from 0 to 9 to try the different examples.

```py
import requests
import torch as th
from io import BytesIO
import matplotlib.pyplot as plt
from PIL import Image, ImageOps
from diffusers import DiffusionPipeline

SEED = 1729
DEVICE = th.device("cuda")
DTYPE = th.float16
FIG_K = 3
EXAMPLE_ID = 0  # change to 0-9 to try the different examples

# Load the custom pipeline; the repository also hosts the pipeline code.
pretrained_path = "bardofcodes/pattern_analogies"
pipe = DiffusionPipeline.from_pretrained(
    pretrained_path,
    custom_pipeline=pretrained_path,
    trust_remote_code=True
)

# Download the example analogy triplet (A, A*, B).
img_urls = [
    f"https://huggingface.co/bardofcodes/pattern_analogies/resolve/main/examples/{EXAMPLE_ID}_a.png",
    f"https://huggingface.co/bardofcodes/pattern_analogies/resolve/main/examples/{EXAMPLE_ID}_a_star.png",
    f"https://huggingface.co/bardofcodes/pattern_analogies/resolve/main/examples/{EXAMPLE_ID}_b.png",
]
images = []
for url in img_urls:
    response = requests.get(url)
    image = Image.open(BytesIO(response.content)).convert("RGB")
    images.append(image)

pipe_input = [tuple(images)]

pipe = pipe.to(DEVICE, DTYPE)
var_images = pipe(pipe_input, num_inference_steps=50, num_images_per_prompt=3).images

# Plot the three inputs (top row) and the three generated variations (bottom row).
plt.figure(figsize=(3 * FIG_K, 2 * FIG_K))
plt.rcParams['legend.fontsize'] = 'large'
input_labels = ["A", "A*", "Target"]
for i in range(6):
    plt.subplot(2, 3, i + 1)
    if i < 3:
        val_image = images[i]
        label_str = input_labels[i]
    else:
        val_image = var_images[i - 3]
        label_str = f"Variation {i - 2}"
    val_image = ImageOps.expand(val_image, border=2, fill='black')
    plt.imshow(val_image)
    plt.scatter([], [], c="r", label=label_str)
    plt.legend(loc="lower right", framealpha=1)
    plt.axis('off')
plt.subplots_adjust(wspace=0.01, hspace=0.01)
plt.show()
```
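To keep the generated variations, you can save them to disk. A minimal sketch follows; the placeholder images here stand in for the pipeline's `var_images` output, since generating real ones requires a GPU run, and the file names are illustrative:

```python
from PIL import Image

# Placeholder stand-ins for the pipeline's var_images output.
var_images = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue")]

# Write each variation as a PNG, e.g. variation_1.png ... variation_3.png.
for i, img in enumerate(var_images, start=1):
    img.save(f"variation_{i}.png")
```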

### Full GitHub Repository

Will be released soon.

## Cautions, Biases, and Content Acknowledgment

We would like to raise users' awareness of the potential issues and concerns with this demo. Like previous large foundation models, our model could be problematic in some cases, partly due to imperfect training data and pretrained components (VAEs / context encoders) with limited scope. We welcome researchers and users to report issues via the Hugging Face community discussion feature or by emailing the authors.

That said, since our model targets the task of editing images, it is strongly guided by the user input. To the best of our knowledge, with sanitized inputs, our model consistently produces sanitized outputs.
assets/arch_1.png ADDED
assets/arch_2.png ADDED