---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: apache-2.0
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- FLUX.1-dev
- science
- materiomics
- bio-inspired
- materials science
- generative AI for science
datasets:
- lamm-mit/leaf-flux-images-and-captions
instance_prompt: <leaf microstructure>
widget: []
---
# FLUX.1 [dev] Fine-tuned with Leaf Images
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
Install ```diffusers```:
```bash
pip install -U diffusers
```
## Model description
These are LoRA adaptation weights for the FLUX.1 [dev] model (```black-forest-labs/FLUX.1-dev```). The base model is gated, and you must first request access to it before loading this LoRA adapter.
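Since access is gated, authenticate with a Hugging Face token that has been granted permission for the base model before loading the pipeline. A minimal sketch using ```huggingface_hub``` (the token string is a placeholder):
```python
from huggingface_hub import login

# Log in with a token that has access to black-forest-labs/FLUX.1-dev
# (equivalently, run `huggingface-cli login` in a terminal)
login(token="hf_...")  # placeholder token
```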
This LoRA adapter has rank=64 and alpha=64, and was trained for 4,000 steps. Earlier checkpoints are also available in this repository (you can select them via the ```weight_name``` parameter of ```load_lora_weights```; see the example below).
## Trigger keywords
The model was fine-tuned with a set of ~1,600 images of biological materials, structures, shapes and other images of nature, using the keyword ```<bioinspired>```.
You should include ```<bioinspired>``` in your prompt to trigger these features during image generation.
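For example (the prompt wording here is purely illustrative):
```python
# Purely illustrative prompt; the <bioinspired> token triggers the fine-tuned features
prompt = "A macro photograph of a <bioinspired> hierarchical lattice, reminiscent of leaf venation."
```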
## How to use
First, define some helper functions for saving images and assembling image grids:
```python
import os
from datetime import datetime
from PIL import Image
def generate_filename(base_name, extension=".png"):
    # Timestamp the filename so repeated runs do not overwrite earlier images
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{base_name}_{timestamp}{extension}"

def save_image(image, directory, base_name="image_grid"):
    # Save a single PIL image to `directory` with a timestamped filename
    filename = generate_filename(base_name)
    file_path = os.path.join(directory, filename)
    image.save(file_path)
    print(f"Image saved as {file_path}")

def image_grid(imgs, rows, cols, save=True, save_dir='generated_images', base_name="image_grid",
               save_individual_files=False):
    # Assemble a rows x cols grid from a list of PIL images and optionally save it
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
        if save_individual_files:
            save_image(img, save_dir, base_name=base_name + f'_{i}-of-{len(imgs)}_')

    if save and save_dir:
        save_image(grid, save_dir, base_name)

    return grid
```
### Text-to-image
Model loading:
```python
from diffusers import FluxPipeline
import torch
repo_id = 'lamm-mit/bioinspired-L-FLUX.1-dev'
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    max_sequence_length=512,
)
#pipeline.enable_model_cpu_offload()  # Uncomment to save VRAM by offloading parts of the model to CPU

adapter = 'leaf-flux.safetensors'             # Step 16000, final step
#adapter = 'leaf-flux-step-3000.safetensors'  # Step 3000
#adapter = 'leaf-flux-step-3500.safetensors'  # Step 3500

pipeline.load_lora_weights(repo_id, weight_name=adapter)  # weight_name is required since the repo contains multiple checkpoints
pipeline = pipeline.to('cuda')
```
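Generation is stochastic by default; to make results reproducible you can pass a seeded ```torch.Generator``` to the pipeline call. A minimal sketch (the prompt and seed value below are illustrative, not from the original examples):
```python
# Optional: seed the generator for reproducible outputs (seed value is arbitrary)
generator = torch.Generator(device='cuda').manual_seed(42)

image = pipeline(
    "a <bioinspired> ceramic vase with hierarchical surface patterning",  # illustrative prompt
    num_inference_steps=25,
    guidance_scale=3.5,
    generator=generator,
).images[0]
image.save('seeded_example.png')
```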
Image generation - Example #1:
```python
prompt="""Generate a futuristic, eco-friendly architectural concept utilizing a biomimetic composite material that integrates the structural efficiency of spider silk with the adaptive porosity of plant tissues. Utilize the following key features:
* Fibrous architecture inspired by spider silk, represented by sinuous lines and curved forms.
* Interconnected, spherical nodes reminiscent of plant cell walls, emphasizing growth and adaptation.
* Open cellular structures echoing the permeable nature of plant leaves, suggesting dynamic exchanges and self-regulation capabilities.
* Gradations of opacity and transparency inspired by the varying densities found in plant tissues, highlighting functional differentiation and multi-functionality.
"""
num_samples = 2
num_rows = 2
n_steps = 25
guidance_scale = 3.5

all_images = []
for _ in range(num_rows):
    images = pipeline(prompt,
                      num_inference_steps=n_steps,
                      num_images_per_prompt=num_samples,
                      guidance_scale=guidance_scale,
                      ).images
    all_images.extend(images)

grid = image_grid(all_images, num_rows, num_samples, save_individual_files=True)
grid
```

Image generation - Example #2:
```python
prompt="""A jar of round <bioinspired> cookies with a piece of white tape that says "Materiomics Cookies". Looks tasty. Old fashioned."""
num_samples = 2
num_rows = 2
n_steps = 25
guidance_scale = 15.

all_images = []
for _ in range(num_rows):
    images = pipeline(prompt,
                      num_inference_steps=n_steps,
                      num_images_per_prompt=num_samples,
                      guidance_scale=guidance_scale,
                      height=1024, width=1024,
                      ).images
    all_images.extend(images)

grid = image_grid(all_images, num_rows, num_samples, save_individual_files=True)
grid
```
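To compare outputs across training checkpoints, you can unload the current adapter and load an earlier one by filename. A minimal sketch, assuming the checkpoint filenames shown in the loading example above and that your installed ```diffusers``` version exposes ```unload_lora_weights``` for FLUX pipelines:
```python
# Swap in an earlier checkpoint (filename from the loading example above)
pipeline.unload_lora_weights()
pipeline.load_lora_weights(repo_id, weight_name='leaf-flux-step-3000.safetensors')
```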

## Citation

Please cite this work as:

```bibtex
@article{BioinspiredFluxBuehler2024,
  title   = {Fine-tuning image-generation models with biological patterns, shapes and topologies},
  author  = {Markus J. Buehler},
  journal = {arXiv: XXXX.YYYYY},
  year    = {2024},
}
```