Latest commit: Update app.py (f9aa49d, verified)
File                   Size       Last commit
-                      1.52 kB    initial commit
-                      340 Bytes  initial commit
-                      15 kB      Update app.py
-                      18.3 MB    Upload 10 files
-                      17.1 MB    Upload 10 files
-                      67 Bytes   Upload 10 files
rf_model.pkl (7.2 MB, last commit: Upload 10 files)
Detected pickle imports (7):
- numpy.core.multiarray.scalar
- numpy.ndarray
- numpy.core.multiarray._reconstruct
- sklearn.tree._classes.DecisionTreeClassifier
- sklearn.ensemble._forest.RandomForestClassifier
- numpy.dtype
- sklearn.tree._tree.Tree
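The "Detected Pickle imports" report above comes from scanning the pickle opcode stream rather than loading the file. A minimal sketch of such a scan, using only the standard library's `pickletools` (the helper name and the STACK_GLOBAL resolution heuristic here are mine, not the Hub scanner's):

```python
import pickletools

def list_pickle_imports(data: bytes):
    """Collect every module.name a pickle would import, by walking its
    opcode stream with pickletools -- nothing is executed or loaded."""
    imports = set()
    strings = []  # recent string pushes; STACK_GLOBAL consumes the last two
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name == "GLOBAL":
            # protocol <= 3: the argument is "module name" in one string
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            # protocol 4+: module and name were pushed as separate strings
            imports.add(f"{strings[-2]}.{strings[-1]}")
    return sorted(imports)
```

Run over the bytes of rf_model.pkl, this should report the seven numpy/sklearn names listed above; any name outside those libraries would be a red flag.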
-                      18.1 MB    Upload 10 files
sql_tokenizer.pkl (1.12 MB, last commit: Upload 10 files)
Detected pickle imports (4):
- keras.src.legacy.preprocessing.text.Tokenizer
- collections.OrderedDict
- collections.defaultdict
- builtins.int
svm_model.pkl (234 kB, last commit: Upload 10 files)
Detected pickle imports (6):
- numpy.core.multiarray.scalar
- scipy.sparse._csr.csr_matrix
- numpy.ndarray
- numpy.core.multiarray._reconstruct
- sklearn.svm._classes.SVC
- numpy.dtype
tfidf_vectorizer.pkl (2.58 MB, last commit: Upload 10 files)
Detected pickle imports (8):
- numpy.core.multiarray.scalar
- scipy.sparse._csr.csr_matrix
- numpy.float64
- numpy.core.multiarray._reconstruct
- numpy.ndarray
- sklearn.feature_extraction.text.TfidfTransformer
- numpy.dtype
- sklearn.feature_extraction.text.TfidfVectorizer
tokenizer.pkl (952 kB, last commit: Upload 10 files)
Detected pickle imports (4):
- keras.src.legacy.preprocessing.text.Tokenizer
- collections.OrderedDict
- collections.defaultdict
- builtins.int
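Each scanner warning flags names that unpickling would import and call, so none of these files should be opened with a bare `pickle.load` on untrusted input. A standard mitigation is the allowlist pattern from the `pickle` module documentation; the SAFE set below is illustrative only, and a real one for this repo would need the numpy, scipy, sklearn, and keras names reported above:

```python
import io
import pickle

# Illustrative allowlist -- extend with the exact names the scanner reports.
SAFE = {
    ("collections", "OrderedDict"),
    ("builtins", "int"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not explicitly allowlisted."""

    def find_class(self, module, name):
        if (module, name) in SAFE:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked pickle import: {module}.{name}")

def restricted_loads(data: bytes):
    """Like pickle.loads, but only allowlisted globals can be imported."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

The longer-term fix is to stop shipping pickles at all where a safer format exists, e.g. exporting the Keras tokenizers with `to_json()` instead of pickling the Tokenizer objects.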