model_finetuned_clear.pkl (435 MB)
Detected Pickle imports (33)
- "transformers.models.bert.configuration_bert.BertConfig",
- "torch.nn.modules.normalization.LayerNorm",
- "transformers.activations.GELUActivation",
- "collections.OrderedDict",
- "tokenizers.Tokenizer",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.bert.modeling_bert.BertModel",
- "torch.storage._load_from_bytes",
- "numpy.dtype",
- "numpy.ndarray",
- "sklearn.linear_model._logistic.LogisticRegression",
- "torch._C._nn.gelu",
- "transformers.models.bert.modeling_bert.BertSelfOutput",
- "transformers.models.bert.modeling_bert.BertPooler",
- "transformers.models.bert.modeling_bert.BertAttention",
- "torch.nn.modules.container.ModuleList",
- "__main__.BertEmbedder",
- "transformers.models.bert.modeling_bert.BertOutput",
- "torch.nn.modules.linear.Linear",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.bert.modeling_bert.BertEncoder",
- "numpy.core.multiarray._reconstruct",
- "torch._utils._rebuild_parameter",
- "__main__.PredictModel",
- "transformers.models.bert.modeling_bert.BertEmbeddings",
- "transformers.models.bert.tokenization_bert_fast.BertTokenizerFast",
- "torch.float32",
- "transformers.models.bert.modeling_bert.BertSelfAttention",
- "torch.nn.modules.activation.Tanh",
- "tokenizers.models.Model",
- "transformers.models.bert.modeling_bert.BertLayer",
- "transformers.models.bert.modeling_bert.BertIntermediate",
- "torch.nn.modules.dropout.Dropout"
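For context, a list like the one above comes from a static scan of the pickle's opcode stream, not from loading the file. A minimal sketch of such a scanner using only the standard library's `pickletools` (the function name and the STACK_GLOBAL string-tracking heuristic are my own, not the exact scanner Hugging Face runs):

```python
import pickletools

def detect_pickle_imports(data: bytes) -> set:
    """Collect module.name references from a pickle without executing it."""
    imports = set()
    strings = []  # recent unicode arguments, consulted for STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # protocols 0-1: argument is "module name" in one string
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # protocol 2+: module and name were pushed as the two preceding
            # strings (a heuristic; memo lookups are not resolved here)
            imports.add(f"{strings[-2]}.{strings[-1]}")
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return imports
```

Running something like this over `model_finetuned_clear.pkl` reproduces the list above; any import outside a known allow-list (especially `__main__.*` classes like `BertEmbedder` and `PredictModel`) is a reason not to call `pickle.load()` on the file from an untrusted source.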
How can I fix this?
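One common fix is to stop shipping everything as a single pickle and instead re-export each component in a format that does not execute arbitrary code: the `BertModel`/`BertTokenizerFast` via `save_pretrained(..., safe_serialization=True)` (which writes `model.safetensors`), and the `LogisticRegression` head as plain numbers. This has to be done once in the original, trusted environment where `__main__.BertEmbedder` and `__main__.PredictModel` are defined, since unpickling needs those classes. The sketch below assumes the unpickled object exposes `.model`, `.tokenizer`, and `.classifier` attributes — those names are guesses based on the import list, so adjust them to the real `PredictModel` definition:

```python
import json
import os

def export_safely(predict_model, out_dir):
    """Re-save a pickled BERT + LogisticRegression pipeline as safe artifacts.

    `predict_model` is the object returned by pickle.load() in the trusted
    training environment; attribute names here are assumptions.
    """
    os.makedirs(out_dir, exist_ok=True)
    # transformers writes model.safetensors instead of a pickle-based .bin
    predict_model.model.save_pretrained(out_dir, safe_serialization=True)
    predict_model.tokenizer.save_pretrained(out_dir)
    # the sklearn head is just three arrays; JSON needs no unpickling at all
    clf = predict_model.classifier
    with open(os.path.join(out_dir, "classifier.json"), "w") as f:
        json.dump(
            {
                "coef": clf.coef_.tolist(),
                "intercept": clf.intercept_.tolist(),
                "classes": clf.classes_.tolist(),
            },
            f,
        )
```

On the loading side, rebuild with `BertModel.from_pretrained(out_dir)` plus a fresh `LogisticRegression` whose `coef_`, `intercept_`, and `classes_` are set from the JSON; after that the repo no longer needs the `.pkl` at all and the pickle warning goes away.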