---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- image-generation
- flux
- safetensors
widget:
  - text: >-
      A young Indian Monk having coffee, sitting near a beach.
    output:
      url: https://raw.githubusercontent.com/Nitin13-git/Mahaveer-jain-Image-generation/refs/heads/main/Vivekananda(3).jpeg
  - text: >-
      A young Indian Monk having coffee.
    output:
      url: https://raw.githubusercontent.com/Nitin13-git/Mahaveer-jain-Image-generation/refs/heads/main/Vivekananda(4).jpeg
  - text: >-
      A young Indian Monk.
    output:
      url: https://raw.githubusercontent.com/Nitin13-git/Mahaveer-jain-Image-generation/refs/heads/main/Vivekananda(1).png
  - text: >-
      A young Indian Monk having coffee at the Eiffel Tower.
    output:
      url: images/example_986yl2x9g.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A young Indian Monk
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux_finetune_with_swami_Vivekananda 

<Gallery />

## Inference

```python

import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline in bfloat16
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Attach the LoRA weights from this repository and fuse them into the base model
pipe.load_lora_weights("Nitin9sitare/Flux_finetune_with_swami_Vivekananda_final", weight_name="Flux_finetune_with_swami_Vivekananda_final.safetensors")
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")

# Include the trigger phrase "A young Indian Monk" in the prompt
prompt = "A young Indian Monk with sunglasses in a cricket ground, holding a bat in his hand."

image = pipe(prompt,
             num_inference_steps=24,
             guidance_scale=3.5,
             width=512, height=512,
            ).images[0]
image.save("example.png")
```
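
If the full pipeline does not fit in GPU memory, a lighter-weight option is to let diffusers offload model components to the CPU instead of moving everything to CUDA at once. The sketch below is a suggestion rather than part of the original card; it assumes the same repository and a weight file named `Flux_finetune_with_swami_Vivekananda_final.safetensors` (check the Files & versions tab for the exact name).

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "Nitin9sitare/Flux_finetune_with_swami_Vivekananda_final",
    weight_name="Flux_finetune_with_swami_Vivekananda_final.safetensors",  # assumed filename
)

# Keep modules on the CPU and move each one to the GPU only while it runs,
# trading some speed for a much smaller peak VRAM footprint.
pipe.enable_model_cpu_offload()

image = pipe(
    "A young Indian Monk reading a book under a banyan tree",
    num_inference_steps=24,
    guidance_scale=3.5,
    width=512,
    height=512,
).images[0]
image.save("example_offload.png")
```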

## Model description
This model is built on FLUX, a recent text-to-image architecture that is well suited to generating personalized images. While popular tools like DALL·E and Midjourney are great for generic imagery, they fall short when it comes to producing high-quality images of a specific individual. FLUX makes it easy and affordable to fine-tune a model with just a small set of personal images.

This model demonstrates how FLUX can be fine-tuned with just 40 images, resulting in personalized image generation that captures unique facial features and characteristics with high fidelity. Whether you want to recreate images of yourself, historical figures, or specific individuals in creative settings, this approach opens up new possibilities.

The key trigger phrase for this model is "A young Indian Monk", inspired by the legendary figure Swami Vivekananda, which steers image generation towards that persona. Fine-tuning is efficient and cost-effective: training ran on NVIDIA H100 GPUs for less than USD 2.0 (~INR 170), making it well suited to creative projects such as personalized portraits or thematic photoshoots.
Model use:

- Trigger word: "A young Indian Monk"
- Fine-tuned to evoke the likeness of Swami Vivekananda.

Download the model weights in the Safetensors format from the Files & Versions tab to get started.



## Trigger words

You should use `A young Indian Monk` to trigger the image generation.
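
Keeping the trigger phrase at the start of the prompt and varying only the scene description tends to work well, as in the widget examples above. A minimal sketch, assuming `pipe` has already been set up as in the Inference section:

```python
trigger = "A young Indian Monk"

# Vary only the scene; the trigger phrase stays at the front of every prompt.
scenes = [
    "meditating on a Himalayan cliff at sunrise",
    "writing at a wooden desk by candlelight",
    "walking through a busy street, documentary-style photo",
]

for i, scene in enumerate(scenes):
    image = pipe(f"{trigger} {scene}",
                 num_inference_steps=24,
                 guidance_scale=3.5).images[0]
    image.save(f"trigger_example_{i}.png")
```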


## Download model

Weights for this model are available in Safetensors format.

[Download](/Nitin9sitare/Flux_finetune_with_swami_Vivekananda_final/tree/main) them in the Files & versions tab.
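
If you prefer to fetch the weights programmatically rather than through the web UI, the `huggingface_hub` client can download them directly. This is a sketch; the filename below is an assumption, so check the Files & versions tab for the exact `.safetensors` name.

```python
from huggingface_hub import hf_hub_download

# Download the LoRA weights into the local Hugging Face cache and return their path.
lora_path = hf_hub_download(
    repo_id="Nitin9sitare/Flux_finetune_with_swami_Vivekananda_final",
    filename="Flux_finetune_with_swami_Vivekananda_final.safetensors",  # assumed filename
)
print(lora_path)
```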