controlnet-dyamagishi/output
These are ControlNet weights trained on cagliostrolab/animagine-xl-3.1 with a new type of conditioning. You can find some example images below.
prompt: outdoors, scenery, cloud, multiple_girls, sky, day, tree, grass, architecture, 2girls, blue_sky, building, standing, skirt, long_hair, mountain, east_asian_architecture, from_behind, castle, facing_away, black_skirt, school_uniform, pagoda, waterfall, white_shirt, white_hair, shirt, cloudy_sky, bag
Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]
Model tree for dyamagishi/output
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Finetuned: Linaqruf/animagine-xl-2.0
- Finetuned: cagliostrolab/animagine-xl-3.0
- Finetuned: cagliostrolab/animagine-xl-3.1