Update README.md · 601779d
1.52 kB · initial commit
98 Bytes · Update README.md
661 Bytes · Upload LlamaForCausalLM
embeddings.pkl
Detected Pickle imports (32):
- "tokenizers.AddedToken",
- "transformers.models.mpnet.modeling_mpnet.MPNetSelfAttention",
- "sentence_transformers.SentenceTransformer.SentenceTransformer",
- "torch.storage._load_from_bytes",
- "torch._C._nn.gelu",
- "transformers.models.mpnet.modeling_mpnet.MPNetEncoder",
- "transformers.models.mpnet.modeling_mpnet.MPNetModel",
- "transformers.models.mpnet.modeling_mpnet.MPNetPooler",
- "transformers.models.mpnet.configuration_mpnet.MPNetConfig",
- "transformers.models.mpnet.modeling_mpnet.MPNetAttention",
- "transformers.models.mpnet.modeling_mpnet.MPNetOutput",
- "transformers.models.mpnet.tokenization_mpnet_fast.MPNetTokenizerFast",
- "sentence_transformers.models.Normalize.Normalize",
- "sentence_transformers.models.Pooling.Pooling",
- "transformers.models.mpnet.modeling_mpnet.MPNetLayer",
- "torch.nn.modules.activation.Tanh",
- "torch.nn.modules.container.ModuleList",
- "langchain.embeddings.huggingface.HuggingFaceEmbeddings",
- "transformers.models.mpnet.modeling_mpnet.MPNetEmbeddings",
- "torch._utils._rebuild_parameter",
- "transformers.models.mpnet.modeling_mpnet.MPNetIntermediate",
- "tokenizers.Tokenizer",
- "transformers.activations.GELUActivation",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.nn.modules.linear.Linear",
- "sentence_transformers.models.Transformer.Transformer",
- "collections.OrderedDict",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.sparse.Embedding",
- "torch.device",
- "tokenizers.models.Model"
439 MB · Upload embeddings.pkl
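The scanner flags `embeddings.pkl` because unpickling executes code: every `module.name` in the list above is imported and called during `pickle.load`. The same kind of import inventory can be produced locally without executing the file, by walking the pickle opcode stream. A minimal sketch using Python's standard `pickletools` (the function name is illustrative, not part of any library):

```python
import pickletools

def list_pickle_imports(data: bytes) -> set[str]:
    """Enumerate the module.name references in a pickle stream
    without executing it (unlike pickle.load)."""
    imports = set()
    last_strings = []  # recent string pushes; STACK_GLOBAL consumes two
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            # arg is "module name" as one space-separated string
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(last_strings) >= 2:
            # protocol >= 4: module and name were pushed as strings
            imports.add(f"{last_strings[-2]}.{last_strings[-1]}")
        if isinstance(arg, str):
            last_strings.append(arg)
    return imports
```

Run over `embeddings.pkl`, this should reproduce a list like the 32 entries above; anything outside the expected `torch` / `transformers` / `sentence_transformers` / `tokenizers` / `langchain` / `collections` namespaces is a red flag. Safer still is to avoid pickle entirely and re-export the weights to a non-executable format such as safetensors.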
183 Bytes · Upload LlamaForCausalLM
9.88 GB · Upload LlamaForCausalLM
9.89 GB · Upload LlamaForCausalLM
7.18 GB · Upload LlamaForCausalLM
24 kB · Upload LlamaForCausalLM
414 Bytes · Upload tokenizer
1.84 MB · Upload tokenizer
500 kB · Upload tokenizer
1.71 kB · Upload tokenizer