---
base_model:
  - stabilityai/stable-diffusion-3.5-medium
tags:
  - art
license: other
license_name: stabilityai-ai-community
license_link: LICENSE
---
# Bokeh 3.5 Medium
<div align="center">
<img src="show.jpg" alt="00205_" />
</div> 

Bokeh 3.5 Medium uses **Stable Diffusion 3.5 Medium** as its foundation model and is post-trained on a 5M-image, high-resolution open-source dataset that underwent rigorous quality and **aesthetic screening**, resulting in **excellent image quality**, **high fidelity for natural images**, preservation of fine **details**, and enhanced **controllability**.

This model is released under the Stability Community License.
For more details, visit [Tensor.Art](https://tensor.art) or [TusiArt](https://tusiart.com) to explore additional resources and useful information.

## Overview

- Continued training on **SD3.5M**, utilizing carefully curated high-resolution training data to achieve excellent image quality.
- Trained with mixed short/long natural language captions.
  - **Short Captions:** Focus on the core subject content of the image.
  - **Long Captions:** Provide broader descriptions of the scene environment and atmosphere.
- **Recommended Resolutions:**  
  `1920x1024`, `1728x1152`, `1152x1728`, `1280x1664`, `1440x1440`
- Powerful customized **fine-tuning performance** that can be widely used for **downstream production tasks**.
- Achieves **8~10 step** image generation through strong distillation, producing high-resolution images in about 5 seconds on a 3090-class GPU with some quality loss. You can use the [8-step LoRA](bokeh_8steps_turboX_lora.safetensors) with the base checkpoint or the standalone [8-step checkpoint](bokeh_8steps_turboX.safetensors); see the sketch below.

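For the distilled variant, a minimal sketch of using the 8-step TurboX LoRA on top of the base checkpoint (assumes a recent `diffusers` release with SD3 LoRA support; the low guidance scale is a typical choice for distilled models, not an official recommendation):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the base Bokeh 3.5 Medium checkpoint, then attach the 8-step TurboX LoRA.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "tensorart/bokeh_3.5_medium", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "tensorart/bokeh_3.5_medium", weight_name="bokeh_8steps_turboX_lora.safetensors"
)
pipe.to("cuda")

image = pipe(
    "Close-up of a macaw, dimly lit environment",
    num_inference_steps=8,   # 8~10 steps per the distillation note above
    guidance_scale=1.5,      # assumed low CFG for the distilled model; tune as needed
    height=1440,
    width=1440,
).images[0]
image.save("macaw_turbo.jpg")
```
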
## Advantages

### 🖼️ High-Quality Image Generation
- **State-of-the-art visual fidelity** with improved detail extraction and **aesthetic consistency**.
- **Enhanced resolution support** up to roughly **2 megapixels (2,000,000 pixels)**, ensuring highly detailed image outputs.
- **Carefully curated dataset** ensures better composition, lighting, and overall artistic appeal.

### 🎯 Powerful Custom Fine-Tuning
- **Exceptional LoRA training support**, making it highly effective for:
  - Photography
  - 3D Rendering
  - Illustration
  - Concept Art

### ⚡ Efficient Inference & Training
- **Low hardware requirements for inference:**
  - **Medium model:** 9GB VRAM (without the T5 text encoder; see the sketch below)
  - **Full weights inference:** 16GB VRAM (suitable for local deployment)
- **LoRA fine-tuning VRAM requirement:** 12GB - 32GB

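The 9GB figure corresponds to loading the pipeline without the T5 text encoder. A minimal sketch using standard `diffusers` SD3 usage (not specific to this checkpoint):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Skip the memory-heavy T5 text encoder and its tokenizer to reduce VRAM usage.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "tensorart/bokeh_3.5_medium",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    "Close-up of a macaw, dimly lit environment",
    num_inference_steps=28,
    guidance_scale=4,
    height=1440,
    width=1440,
).images[0]
image.save("macaw_no_t5.jpg")
```
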
## Known Issues

- **Potential human anatomy inconsistencies.**
- **Limited ability to generate photorealistic images.**
- **Some concepts may suffer from aesthetic quality issues.**


## Prompting Guide

### Use a structured prompt combining:
- **Main subject** (e.g., `"Close-up of a macaw"`)  
- **Detailed features** (e.g., `"vivid feathers, sharp beak"`)  
- **Background environment** (e.g., `"dimly lit environment"`)  
- **Atmospheric description** (e.g., `"soft warm lighting, cinematic mood"`)   
- **Optimal token length:** **30-70 tokens**.  

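Assembled in that order, the components form a single comma-separated prompt. A small illustrative sketch (the variable names are purely for demonstration):

```python
# Compose a structured prompt from the four components described above.
subject = "Close-up of a macaw"
details = "vivid feathers, sharp beak"
background = "dimly lit environment"
atmosphere = "soft warm lighting, cinematic mood"

prompt = ", ".join([subject, details, background, atmosphere])
print(prompt)
# Close-up of a macaw, vivid feathers, sharp beak, dimly lit environment, soft warm lighting, cinematic mood
```
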
## Example Output
Using diffusers:
```python
import torch
from diffusers import StableDiffusion3Pipeline

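# Load the Bokeh 3.5 Medium weights in bfloat16 and move the pipeline to the GPU.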
pipe = StableDiffusion3Pipeline.from_pretrained("tensorart/bokeh_3.5_medium", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

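# 28 steps at CFG 4; height/width match a recommended resolution, and the negative prompt filters common artifacts.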
image = pipe(
    "Close-up of a macaw, dimly lit environment",
    num_inference_steps=28,
    guidance_scale=4,
    height=1920,
    width=1024,
    negative_prompt="anime,cartoon,bad hands,extra finger,blurred,text,watermark",
    negative_prompt_3=""
).images[0]
image.save("macaw.jpg")
```
Using ComfyUI:
To use these workflows in **ComfyUI**, download the corresponding JSON file and load it:

[Base Model Workflow](bk_workflow.json)

[8steps-TurboX Workflow](bokeh_turboX.json)

### 🔧 Training Tools
- **Kohya_ss:** [GitHub Repository](https://github.com/bmaltais/kohya_ss.git)
- **Simple Tuner:** [GitHub Repository](https://github.com/bghira/SimpleTuner)

## Contact
* Websites: https://tensor.art and https://tusiart.com
* Developed by: TensorArt
* ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044d493926de1f7ec709f4/nB79189jY20Qn2KD97Y0w.jpeg)