---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Gamzekocc_Fluxx
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
# 1. Load the base model
pipeline = AutoPipelineForText2Image.from_pretrained(
    "SG161222/Verus_Vision_1.0b",
    torch_dtype=torch.float16,
).to("cuda")

# 2. Load the LoRA and scale its influence down (here to 0.5)
pipeline.load_lora_weights(
    "codermert/gamzekocc_fluxx",
    weight_name="lora.safetensors",
    adapter_name="fluxx",
)
pipeline.set_adapters(["fluxx"], adapter_weights=[0.5])  # soften the LoRA effect

# 3. Combine the trigger word with style keywords in the prompt
image = pipeline(
    prompt="portrait of TOK, photorealistic, 8K"  # TOK is the trigger word
).images[0]
```
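If the full pipeline does not fit in GPU memory, diffusers' model CPU offloading is a common alternative to moving everything to CUDA (a minimal sketch; it would replace the `.to("cuda")` call above):
```py
pipeline.enable_model_cpu_offload()  # streams submodules to the GPU on demand
```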
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
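As a rough sketch of the fusing workflow mentioned there (assuming the same `pipeline` and the `fluxx` adapter loaded above), a LoRA can be merged into the base weights for slightly faster inference:
```py
# Merge the LoRA into the base model weights at the chosen strength
pipeline.fuse_lora(lora_scale=0.5)

# Run inference as usual, then undo the merge if the original weights are needed again
pipeline.unfuse_lora()
```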