Latest commit: Update README.md (51f676f, verified)

| File | Size | Last commit message |
| --- | --- | --- |
| - | 1.52 kB | initial commit |
| - | 7.02 kB | Update README.md |
| - | 0 Bytes | Create config.json |
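The "Upload folder using huggingface_hub (#1)" commits on the two large checkpoints below indicate they were pushed with the huggingface_hub client. A minimal sketch of fetching them the same way; the repo id below is a placeholder, since the actual repository name is not shown in this listing:

```python
# Hypothetical download sketch: replace "your-namespace/flux-quanto" with the real repo id.
from huggingface_hub import hf_hub_download

repo_id = "your-namespace/flux-quanto"  # placeholder, not the actual repository name

text_encoder_path = hf_hub_download(repo_id=repo_id, filename="text_encoder_2.pt")
transformer_path = hf_hub_download(repo_id=repo_id, filename="transformer.pt")

print(text_encoder_path)
print(transformer_path)
```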
text_encoder_2.pt

Detected Pickle imports (30):
- "optimum.quanto.nn.qlinear.QLinear",
- "transformers.models.t5.modeling_t5.T5LayerNorm",
- "torch._utils._rebuild_tensor_v3",
- "torch._utils._rebuild_parameter",
- "torch._utils._rebuild_wrapper_subclass",
- "torch.storage.UntypedStorage",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.t5.modeling_t5.T5DenseGatedActDense",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.t5.modeling_t5.T5EncoderModel",
- "torch.BFloat16Storage",
- "transformers.models.t5.modeling_t5.T5Block",
- "collections.OrderedDict",
- "transformers.models.t5.modeling_t5.T5Stack",
- "torch.FloatStorage",
- "transformers.models.t5.modeling_t5.T5LayerSelfAttention",
- "torch.nn.modules.dropout.Dropout",
- "transformers.models.t5.modeling_t5.T5LayerFF",
- "optimum.quanto.tensor.qtype.qtype",
- "torch.serialization._get_layout",
- "__builtin__.set",
- "torch.bfloat16",
- "transformers.models.t5.modeling_t5.T5Attention",
- "optimum.quanto.tensor.qbytes.QBytesTensor",
- "transformers.activations.NewGELUActivation",
- "transformers.models.t5.configuration_t5.T5Config",
- "torch.float8_e4m3fn",
- "torch._tensor._rebuild_from_type_v2",
- "torch.device",
- "torch._utils._rebuild_tensor_v2"
Size: 4.9 GB · Last commit: Upload folder using huggingface_hub (#1)
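The import list above shows that text_encoder_2.pt is a whole pickled T5EncoderModel whose linear layers were replaced by optimum-quanto QLinear (bfloat16 / float8_e4m3fn weights), not a plain state dict. A hedged loading sketch, assuming the file comes from a source you trust (unpickling runs arbitrary code) and that transformers and optimum-quanto are installed:

```python
# Minimal sketch: load the pickled, quanto-quantized T5 encoder.
import torch
import transformers     # noqa: F401  fail fast if missing; pickle imports the T5 classes itself
import optimum.quanto   # noqa: F401  fail fast if missing; provides QLinear / QBytesTensor for unpickling

text_encoder_2 = torch.load(
    "text_encoder_2.pt",
    map_location="cpu",
    weights_only=False,  # required: this is a full pickled nn.Module, not just tensors
)
text_encoder_2.eval()
```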
transformer.pt

Detected Pickle imports (41):
- "torch._utils._rebuild_tensor_v3",
- "torch.BFloat16Storage",
- "diffusers.models.embeddings.Timesteps",
- "torch._utils._rebuild_tensor_v2",
- "diffusers.models.normalization.AdaLayerNormZero",
- "optimum.quanto.tensor.qtype.qtype",
- "torch.nn.modules.normalization.LayerNorm",
- "diffusers.models.transformers.transformer_flux.FluxTransformer2DModel",
- "torch.serialization._get_layout",
- "torch.nn.modules.activation.SiLU",
- "diffusers.models.attention_processor.FluxSingleAttnProcessor2_0",
- "torch.nn.modules.container.ModuleList",
- "torch.float8_e4m3fn",
- "diffusers.models.embeddings.CombinedTimestepTextProjEmbeddings",
- "diffusers.models.embeddings.PixArtAlphaTextProjection",
- "diffusers.models.normalization.AdaLayerNormContinuous",
- "torch._utils._rebuild_wrapper_subclass",
- "torch.bfloat16",
- "torch.Size",
- "diffusers.models.activations.GELU",
- "torch._tensor._rebuild_from_type_v2",
- "collections.OrderedDict",
- "optimum.quanto.tensor.qbytes.QBytesTensor",
- "diffusers.models.transformers.transformer_flux.EmbedND",
- "__builtin__.set",
- "torch.storage.UntypedStorage",
- "diffusers.models.attention_processor.Attention",
- "torch.nn.modules.dropout.Dropout",
- "diffusers.models.attention_processor.FluxAttnProcessor2_0",
- "diffusers.configuration_utils.FrozenDict",
- "torch.FloatStorage",
- "diffusers.models.transformers.transformer_flux.FluxTransformerBlock",
- "diffusers.models.normalization.AdaLayerNormZeroSingle",
- "torch._utils._rebuild_parameter",
- "diffusers.models.normalization.RMSNorm",
- "diffusers.models.transformers.transformer_flux.FluxSingleTransformerBlock",
- "diffusers.models.attention.FeedForward",
- "torch.device",
- "torch.nn.modules.activation.GELU",
- "diffusers.models.embeddings.TimestepEmbedding",
- "optimum.quanto.nn.qlinear.QLinear"
Size: 11.9 GB · Last commit: Upload folder using huggingface_hub (#1)
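transformer.pt is likewise a whole pickled FluxTransformer2DModel with quanto-quantized layers. A sketch of assembling both modules into a FluxPipeline; this is not the repository's documented recipe, the base model id is an assumption, and it needs a diffusers release that still provides the classes listed above (e.g. FluxSingleAttnProcessor2_0):

```python
# Hedged assembly sketch: plug the two pickled modules into a Flux pipeline.
import torch
from diffusers import FluxPipeline

transformer = torch.load("transformer.pt", map_location="cpu", weights_only=False)
text_encoder_2 = torch.load("text_encoder_2.pt", map_location="cpu", weights_only=False)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed base repo supplying VAE, CLIP text encoder, tokenizers, scheduler
    transformer=transformer,         # override with the quantized transformer loaded above
    text_encoder_2=text_encoder_2,   # override with the quantized T5 encoder loaded above
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe("a photo of a forest at dawn", num_inference_steps=28).images[0]
image.save("out.png")
```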