---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: DHANUSH
---
# Tugce_Flux
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `tugce` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from fastapi import FastAPI, HTTPException
from fastapi.responses import Response
import io

import torch
from diffusers import AutoPipelineForText2Image

app = FastAPI()

# Load the base model and this repository's LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('codermert/tugce2-lora', weight_name='flux_train_replicate.safetensors')

@app.post("/generate_image")
async def generate_image(prompt: str, width: int, height: int):
    try:
        # Run the diffusion pipeline and take the first generated image
        image = pipeline(prompt, width=width, height=height).images[0]

        # Encode the PIL image as PNG and return the raw bytes
        img_byte_arr = io.BytesIO()
        image.save(img_byte_arr, format='PNG')
        return Response(content=img_byte_arr.getvalue(), media_type="image/png")
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
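If you don't need an HTTP endpoint, here is a minimal sketch of calling the pipeline directly. The prompt text is illustrative; only the trigger word `tugce` and the repository/weight names come from this card.
```py
import torch
from diffusers import AutoPipelineForText2Image

# Load the base model and this repository's LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('codermert/tugce2-lora', weight_name='flux_train_replicate.safetensors')

# Include the trigger word `tugce` in the prompt (the rest of the prompt is just an example)
image = pipeline('a portrait photo of tugce in a sunlit garden').images[0]
image.save('tugce.png')
```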
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
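As a rough sketch of the weighting and fusing mentioned above (the adapter name and scale values are arbitrary examples; see the linked docs for details):
```py
# Continuing from the pipeline above: load the LoRA under an explicit adapter name,
# scale its influence, then optionally fuse it into the base weights.
pipeline.load_lora_weights(
    'codermert/tugce2-lora',
    weight_name='flux_train_replicate.safetensors',
    adapter_name='tugce',  # arbitrary adapter name chosen for this example
)
pipeline.set_adapters(['tugce'], adapter_weights=[0.8])  # down-weight the LoRA to 0.8
pipeline.fuse_lora(lora_scale=0.8)  # optional: bake the scaled LoRA into the base model
```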