---
license: apache-2.0
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
- image-generation
- flux-diffusers
- photo
- realism
- character
- historical person
- poetry
- literature
- history
- archival
base_model: "AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace"
pipeline_tag: text-to-image
library_name: diffusers
emoji: 🔜
instance_prompt: Anna AKHMATOVA, blemished skin texture with slight wrinkles
widget:
- text: >-
    agitprop Constructivist poster of the poet Anna AKHMATOVA calling out "JOIN
    RCA!" in a speech bubble, over satirical cartoon of cool punky diverse
    teenage gen-z revolutionaries
  output:
    url: AkhmDedistilled1.jpg
- text: >-
    vintage side-view photograph of young Anna AKHMATOVA, classic analog color
    photography
  output:
    url: AnnaPoeticsWill.jpg
---
# Anna Akhmatova Flux Low-Rank Adapter (LoRA) Version 3 by SilverAgePoets.com
Trained on a dataset of 60 vintage photos (most of them colorized by us and/or by [Klimbim](https://klimbim2020.wordpress.com/)).
And capturing the legendary **poet**:
**Anna Andreevna Akhmatova**
*(b.06/26/1889-d.03/05/1966)*
For this LoRA we used highly detailed manually-composed paragraph captions.
It was trained for 1600 steps (a 1300-step checkpoint is also included) at a diffusion-transformer learning rate of 0.0004, with a LoRA dim/alpha of 32, a batch size of 1, and the AdamW8bit optimizer! Minimal synthetic data (just a few reluctant upscales), zero auto-generated captions!
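For reference, the hyperparameters above can be collected into a trainer config. The fragment below is our own sketch in an ai-toolkit-style layout; the field names are an assumption, and only the values come from this card.

```yaml
# Hedged sketch of the training setup described above (field names assumed).
network:
  type: lora
  linear: 32        # LoRA dim (rank-32)
  linear_alpha: 32  # alpha matches dim
train:
  steps: 1600       # a 1300-step checkpoint was also kept
  batch_size: 1
  optimizer: adamw8bit
  lr: 0.0004        # diffusion-transformer learning rate
```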
**VERSION 3 NOTE:**
This third version of the Akhmatova LoRA was trained on the **Colossus 2.1 Dedistilled Flux model by AfroMan4Peace**, available [here](https://huggingface.co/AlekseyCalvin/Colossus_2.1_dedistilled_by_AfroMan4peace) in a diffusers format and [here at CivitAI](https://civitai.com/models/833086/colossus-project-flux).
As of writing this blurb, we haven't yet tested this LoRA enough to say much concretely, but our other adapters trained over de-distilled modifications of FLUX have proven more versatile than most base-model-trained LoRAs in terms of compatibility and output variability.
In parallel, we've also trained yet another Akhmatova LoRA (version 2) over a regular version of Flux, to provide a basis for comparative testing. That version is available in a different repo [here](https://huggingface.co/AlekseyCalvin/Akhmatova_Flux_LoRA_SilverAgePoets_v2_regularFluxD).
**MORE INFO:**
This is a **rank-32 historical LoRA for Flux** (whether of a [Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), a [Schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), or a [Soon®](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) sort...)
Use it to diffusely diversify the presence of Akhmatova's deathless visage in our strange latter-day world! And once you're faced with this poet's iconic penetrating stare, do lend your ears to her as well: listen in to her voice! Wherefrom might this voice resound for you? A dusty paperback? Google search? Maybe a clip on YouTube? Or, say, your very memory reciting verses suddenly recalled?
In any case, we'll offer you some echoes to rely on, if you will: Namely, our **translations of Akhmatova's verse-works**, adapted from a proto-Soviet song-tongue into a Worldish one...
And found, along with many other poets' songs and tomes... Over **at [SilverAgePoets.com](https://www.silveragepoets.com/akhmatovamain)!**

## Trigger words

You should use `AKHMATOVA` or `Anna Akhmatova` or `vintage autochrome photograph of Anna Akhmatova` to summon the poet's latent spirit.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'AlekseyCalvin/Akhmatova_Flux_LoRA_SilverAgePoets_v2_regularFluxD',
    weight_name='lora.safetensors',
)
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
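As a small illustration of the trigger words above, a scene description can be prefixed with one of the trigger phrases programmatically. The helper below is our own sketch (not part of the card's pipeline); only the trigger strings themselves come from this card.

```python
# Trigger phrases listed in the "Trigger words" section of this card.
TRIGGERS = (
    "AKHMATOVA",
    "Anna Akhmatova",
    "vintage autochrome photograph of Anna Akhmatova",
)


def build_prompt(scene: str, trigger: str = TRIGGERS[2]) -> str:
    """Prefix a scene description with a trigger phrase (hypothetical helper)."""
    return f"{trigger}, {scene}"


# e.g. recreate the second widget example from the card's frontmatter:
prompt = build_prompt("classic analog color photography, side view")
print(prompt)
# → vintage autochrome photograph of Anna Akhmatova, classic analog color photography, side view
```

The resulting string can be passed directly as the `pipeline('your prompt')` argument in the diffusers snippet above.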