PrunaAI/FLUX.1-schnell-8bit
1 contributor · History: 2 commits
Latest commit: johnrachwanpruna, "Upload folder using huggingface_hub (#1)", 52504ca (verified), 4 months ago
.gitattributes · Safe · 1.52 kB · initial commit · 4 months ago
text_encoder_2.pt · pickle · 4.9 GB · LFS · Upload folder using huggingface_hub (#1) · 4 months ago
Detected Pickle imports (30): "optimum.quanto.nn.qlinear.QLinear", "transformers.models.t5.modeling_t5.T5LayerNorm", "torch._utils._rebuild_tensor_v3", "torch._utils._rebuild_parameter", "torch._utils._rebuild_wrapper_subclass", "torch.storage.UntypedStorage", "torch.nn.modules.sparse.Embedding", "transformers.models.t5.modeling_t5.T5DenseGatedActDense", "torch.nn.modules.container.ModuleList", "transformers.models.t5.modeling_t5.T5EncoderModel", "torch.BFloat16Storage", "transformers.models.t5.modeling_t5.T5Block", "collections.OrderedDict", "transformers.models.t5.modeling_t5.T5Stack", "torch.FloatStorage", "transformers.models.t5.modeling_t5.T5LayerSelfAttention", "torch.nn.modules.dropout.Dropout", "transformers.models.t5.modeling_t5.T5LayerFF", "optimum.quanto.tensor.qtype.qtype", "torch.serialization._get_layout", "__builtin__.set", "torch.bfloat16", "transformers.models.t5.modeling_t5.T5Attention", "optimum.quanto.tensor.qbytes.QBytesTensor", "transformers.activations.NewGELUActivation", "transformers.models.t5.configuration_t5.T5Config", "torch.float8_e4m3fn", "torch._tensor._rebuild_from_type_v2", "torch.device", "torch._utils._rebuild_tensor_v2"
transformer.pt · pickle · 11.9 GB · LFS · Upload folder using huggingface_hub (#1) · 4 months ago
Detected Pickle imports (41): "torch._utils._rebuild_tensor_v3", "torch.BFloat16Storage", "diffusers.models.embeddings.Timesteps", "torch._utils._rebuild_tensor_v2", "diffusers.models.normalization.AdaLayerNormZero", "optimum.quanto.tensor.qtype.qtype", "torch.nn.modules.normalization.LayerNorm", "diffusers.models.transformers.transformer_flux.FluxTransformer2DModel", "torch.serialization._get_layout", "torch.nn.modules.activation.SiLU", "diffusers.models.attention_processor.FluxSingleAttnProcessor2_0", "torch.nn.modules.container.ModuleList", "torch.float8_e4m3fn", "diffusers.models.embeddings.CombinedTimestepTextProjEmbeddings", "diffusers.models.embeddings.PixArtAlphaTextProjection", "diffusers.models.normalization.AdaLayerNormContinuous", "torch._utils._rebuild_wrapper_subclass", "torch.bfloat16", "torch.Size", "diffusers.models.activations.GELU", "torch._tensor._rebuild_from_type_v2", "collections.OrderedDict", "optimum.quanto.tensor.qbytes.QBytesTensor", "diffusers.models.transformers.transformer_flux.EmbedND", "__builtin__.set", "torch.storage.UntypedStorage", "diffusers.models.attention_processor.Attention", "torch.nn.modules.dropout.Dropout", "diffusers.models.attention_processor.FluxAttnProcessor2_0", "diffusers.configuration_utils.FrozenDict", "torch.FloatStorage", "diffusers.models.transformers.transformer_flux.FluxTransformerBlock", "diffusers.models.normalization.AdaLayerNormZeroSingle", "torch._utils._rebuild_parameter", "diffusers.models.normalization.RMSNorm", "diffusers.models.transformers.transformer_flux.FluxSingleTransformerBlock", "diffusers.models.attention.FeedForward", "torch.device", "torch.nn.modules.activation.GELU", "diffusers.models.embeddings.TimestepEmbedding", "optimum.quanto.nn.qlinear.QLinear"
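The "Detected Pickle imports" lists above are produced by the Hub's pickle scanner, which inspects a pickle's opcode stream for the globals it references without ever executing it. A minimal sketch of the same idea using Python's standard pickletools; the function name and the STACK_GLOBAL heuristic are illustrative, not the Hub's actual implementation:

```python
import pickle
import pickletools


def scan_pickle_imports(data: bytes) -> set:
    """Collect (module, name) pairs referenced by GLOBAL/STACK_GLOBAL
    opcodes, without unpickling (and thus without running any code)."""
    imports = set()
    recent_strings = []  # last string pushes feed STACK_GLOBAL (protocol >= 4)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # arg is "module name" as a single space-separated string
            module, name = arg.split(" ", 1)
            imports.add((module, name))
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
            recent_strings = recent_strings[-2:]
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
            imports.add((recent_strings[0], recent_strings[1]))
    return imports


# Example: pickling an OrderedDict references collections.OrderedDict,
# one of the imports detected in both .pt files above.
from collections import OrderedDict

payload = pickle.dumps(OrderedDict(a=1))
found = scan_pickle_imports(payload)
```

Scanning the opcode stream is the safe direction here: calling torch.load (or pickle.load) on an untrusted .pt file executes whatever the referenced globals' constructors do, which is why the Hub surfaces these import lists before you download.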