wdcqc committed
Commit 8b593bc
1 Parent(s): 61fec7b

Upload README.md with huggingface_hub

Files changed (1): README.md +102 -0
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- landscape
widget:
- text: isometric scspace terrain, corgi
---

# DreamBooth model for StarCraft: Remastered terrain

This is a Stable Diffusion model fine-tuned with DreamBooth on StarCraft terrain images from the Space Platform tileset. It can be used by including the `instance_prompt` in your prompt: **isometric scspace terrain**

It was trained on 32x32 terrain images from 265 melee maps, including original Blizzard maps and maps downloaded from scmscx.com and broodwarmaps.com.

To run the demo, which can generate map files directly and with more coherence, use this notebook on Colab:

<ADD_NOTEBOOK_LINK_HERE>

In addition to DreamBooth, a custom VAE model (`AutoencoderTile`) is trained to encode and decode the latents to and from tileset probabilities ("waves"), which are then converted into StarCraft maps.

A WFC guidance step, inspired by the Wave Function Collapse algorithm, is also added to the pipeline. For more information about guidance, see [Fine-Tuning, Guidance and Conditioning](https://github.com/huggingface/diffusion-models-class/tree/main/unit2).
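
To build intuition for what WFC-style guidance optimizes, the toy sketch below (illustrative only, not this library's actual implementation; the `wave_entropy` helper and the shapes are made up for the example) treats a wave as a per-cell probability distribution over tiles and measures its entropy, which guidance would push downward so each cell commits to a coherent tile:

```python
import math
import numpy as np

# Illustrative only: a "wave" here is a (H, W, num_tiles) array of per-cell
# tile probabilities; WFC-style guidance favors low-entropy (confident) waves.
def wave_entropy(wave, eps=1e-8):
    """Mean per-cell Shannon entropy of the wave (hypothetical helper)."""
    return float(-(wave * np.log(wave + eps)).sum(axis=2).mean())

# A fully collapsed wave (one certain tile per cell) has near-zero entropy...
collapsed = np.zeros((4, 4, 8))
collapsed[..., 0] = 1.0

# ...while a maximally uncertain wave has entropy log(num_tiles).
uniform = np.full((4, 4, 8), 1.0 / 8)

print(wave_entropy(collapsed))  # close to 0
print(wave_entropy(uniform))    # close to log(8), about 2.079
```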

This model was created as part of the DreamBooth Hackathon. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!

## Description

This is a Stable Diffusion model fine-tuned on StarCraft terrain images for the landscape theme.

## Usage

First clone the git repository:

```bash
git clone https://github.com/wdcqc/WaveFunctionDiffusion.git
```

Then create a Jupyter notebook under the repository folder:

```python
# Load pipeline
from wfd.wf_diffusers import WaveFunctionDiffusionPipeline
from wfd.wf_diffusers import AutoencoderTile

wfc_data_path = "tile_data/wfc/platform_32x32.npz"

# Use CUDA (otherwise generation takes around 15 minutes)
device = "cuda"

tilenet = AutoencoderTile.from_pretrained(
    "wdcqc/starcraft-platform-terrain-32x32",
    subfolder="tile_vae"
).to(device)
pipeline = WaveFunctionDiffusionPipeline.from_pretrained(
    "wdcqc/starcraft-platform-terrain-32x32",
    tile_vae=tilenet,
    wfc_data_path=wfc_data_path
)
pipeline.to(device)

# Generate pipeline output
# The prompt must include the DreamBooth keyword "isometric scspace terrain"
pipeline_output = pipeline(
    "isometric scspace terrain, corgi",
    num_inference_steps=50,
    wfc_guidance_start_step=20,
    wfc_guidance_strength=5,
    wfc_guidance_final_steps=20,
    wfc_guidance_final_strength=10,
)
image = pipeline_output.images[0]

# Display raw generated image
from IPython.display import display
display(image)

# Display generated image as tiles
wave = pipeline_output.waves[0]
tile_result = wave.argmax(axis=2)

from wfd.scmap import demo_map_image
display(demo_map_image(tile_result, wfc_data_path=wfc_data_path))

# Generate map file
from wfd.scmap import tiles_to_scx
import random, time

tiles_to_scx(
    tile_result,
    "outputs/generated_{}_{:04d}.scx".format(
        time.strftime("%Y%m%d_%H%M%S"), random.randint(0, 9999)
    ),
    wfc_data_path=wfc_data_path
)

# Open the generated map file in the `outputs` folder with ScmDraft 2
```
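
The `wave.argmax(axis=2)` call above implies the wave is a (height, width, num_tiles) array collapsed to one tile index per cell. The standalone toy example below reproduces just that collapse step with a synthetic wave, so it can be run without the model (the 32x32 grid matches the terrain size; the tile count of 16 is an arbitrary choice for illustration):

```python
import numpy as np

# Synthetic "wave": a probability distribution over candidate tiles for each
# cell of a 32x32 map (the tile count of 16 is made up for this example)
rng = np.random.default_rng(0)
wave = rng.random((32, 32, 16))
wave /= wave.sum(axis=2, keepdims=True)  # normalize each cell to sum to 1

# Collapse each cell to its most likely tile, as the pipeline example does
tile_result = wave.argmax(axis=2)

print(tile_result.shape)  # (32, 32): one tile index per map cell
```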