---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
instance_prompt: TOK
---
# Gamzekocc_Fluxx
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words

You should use `TOK` to trigger the image generation.
## Use it with the 🧨 diffusers library
```python
from diffusers import AutoPipelineForText2Image
import torch

# 1. Load the FLUX.1-dev base model this LoRA was trained on
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# 2. Load the LoRA weights
pipeline.load_lora_weights(
    "codermert/gamzekocc_fluxx",
    weight_name="lora.safetensors",
    adapter_name="fluxx_style"
)

# 3. Generate an image (include the trigger word TOK in the prompt)
image = pipeline(
    prompt="portrait of TOK as a cyber ninja, ultra-detailed, 8K",
    num_inference_steps=30,
    guidance_scale=3.5
).images[0]
image.save("cyber_ninja.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
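
As a minimal sketch of adapter weighting and fusing (assuming the LoRA was loaded with `adapter_name="fluxx_style"` as above; the 0.8 scale is an illustrative value, not a recommendation):

```python
# Scale the LoRA's influence (1.0 = full strength); 0.8 here is only an example value
pipeline.set_adapters(["fluxx_style"], adapter_weights=[0.8])

# Optionally fuse the LoRA into the base weights to avoid the adapter overhead at inference time
pipeline.fuse_lora(lora_scale=0.8)

# ...generate images as usual, then restore the original base weights if needed
pipeline.unfuse_lora()
```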