AltDiffusionPipelineOutput

Output class for Alt Diffusion pipelines.

Parameters

nsfw_content_detected (List[bool]) —
List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
__call__

( *args **kwargs )

Call self as a function.
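Once a generation call returns, the nsfw_content_detected flags line up index for index with the images in the output. The following is a minimal sketch of consuming the output, assuming a diffusers version that still ships AltDiffusionPipeline and using BAAI/AltDiffusion-m9 as an example checkpoint id; images flagged by the safety checker are typically returned blacked out.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
).to("cuda")

output = pipe("a watercolor painting of a lighthouse at dawn")

# nsfw_content_detected may be None when the safety checker is disabled.
flags = output.nsfw_content_detected or [False] * len(output.images)
for i, (image, flagged) in enumerate(zip(output.images, flags)):
    if flagged:
        print(f"image {i} was flagged as potentially nsfw")
    else:
        image.save(f"image_{i}.png")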
AltDiffusionPipeline

class diffusers.AltDiffusionPipeline

( vae: AutoencoderKL
text_encoder: RobertaSeriesModelWithTransformation
tokenizer: XLMRobertaTokenizer
unet: UNet2DConditionModel
scheduler: KarrasDiffusionSchedulers
safety_checker: StableDiffusionSafetyChecker
feature_extractor: CLIPFeatureExtractor
requires_safety_checker: bool = True )
Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (RobertaSeriesModelWithTransformation) —
Frozen text-encoder. Alt Diffusion uses the text encoder of AltCLIP, a multilingual, XLM-RoBERTa-based variant of CLIP, rather than the original CLIP text encoder.

tokenizer (XLMRobertaTokenizer) —
Tokenizer of class XLMRobertaTokenizer.

unet (UNet2DConditionModel) —
Conditional U-Net architecture to denoise the encoded image latents.

scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler; a loaded pipeline's scheduler can also be swapped afterwards, as sketched after this parameter list.

safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.

feature_extractor (CLIPFeatureExtractor) —
Model that extracts features from generated images to be used as inputs for the safety_checker.
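Loading a pretrained checkpoint with from_pretrained wires all of the components above together, and each one stays accessible as an attribute of the resulting pipeline. A short sketch, assuming BAAI/AltDiffusion-m9 as the checkpoint id and a diffusers version that still ships AltDiffusionPipeline; the scheduler swap is one illustrative choice, not a recommendation.

from diffusers import AltDiffusionPipeline, DPMSolverMultistepScheduler

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")

# Each constructor argument listed above is exposed as a pipeline attribute.
print(type(pipe.text_encoder).__name__)  # e.g. RobertaSeriesModelWithTransformation
print(type(pipe.unet).__name__)          # e.g. UNet2DConditionModel

# Any scheduler implementing the KarrasDiffusionSchedulers interface can be dropped in.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)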
Pipeline for text-to-image generation using Alt Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
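A minimal end-to-end sketch, again assuming BAAI/AltDiffusion-m9 as the checkpoint id and a diffusers version that still exposes AltDiffusionPipeline. Because the text encoder is multilingual, the prompt does not have to be in English.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
).to("cuda")

# Chinese prompt, roughly "dark elf princess, highly detailed, fantasy, digital painting, concept art".
prompt = "黑暗精灵公主，非常详细，幻想，非常详细，数字绘画，概念艺术"
image = pipe(prompt).images[0]
image.save("alt_diffusion.png")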
__call__

( prompt: typing.Union[str, typing.List[str]] = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
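A hedged sketch of a call that exercises the parameters listed above, assuming the same BAAI/AltDiffusion-m9 checkpoint id: passing a seeded torch.Generator makes the run reproducible, and negative_prompt steers generation away from the listed terms.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion-m9", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)  # fixed seed for reproducibility
result = pipe(
    prompt="a photo of an astronaut riding a horse on mars",
    negative_prompt="blurry, low quality",
    height=512,
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,
    num_images_per_prompt=2,
    eta=0.0,
    generator=generator,
)
images = result.images  # list of PIL images, two per prompt here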