## Model Description
This repository contains two models, `model_encode.pth` and `model_fairtracks.pth`. Both models are used by the attribute-standardizer to standardize metadata according to the schema chosen by the user.
## Files Description
- `model_encode.pth`: the model trained on ENCODE metadata.
- `model_fairtracks.pth`: the model trained on FAIRTRACKS (BLUEPRINT) metadata.
- `vectorizer_encode.pkl`: a pickle file containing a serialized `CountVectorizer` instance from the scikit-learn library. It performs the Bag of Words encoding used as input to the model when the user selects the ENCODE schema (see the toy example after this list).
- `vectorizer_fairtracks.pkl`: a pickle file containing a serialized `CountVectorizer` instance from the scikit-learn library. It performs the Bag of Words encoding used as input to the model when the user selects the FAIRTRACKS schema.
- `label_encoder_encode.pkl`: a pickle file containing the unique label values derived from the training data. The model classifies its output into these labels for the ENCODE schema.
- `label_encoder_fairtracks.pkl`: a pickle file containing the unique label values derived from the training data. The model classifies its output into these labels for the FAIRTRACKS schema.
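For readers unfamiliar with the term, here is a minimal illustration of the Bag of Words encoding produced by a scikit-learn `CountVectorizer`. It uses toy strings only; the actual fitted vectorizers shipped in this repository are loaded from the `.pkl` files instead.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy example only: the real vectorizers in this repo were fitted on
# ENCODE / FAIRTRACKS metadata and should be loaded from the .pkl files.
vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(["cell line", "cell type", "treatment"])

print(vectorizer.get_feature_names_out())  # vocabulary learned from the toy corpus
print(bow.toarray())                       # one count vector per input string
```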
## Usage
To download the model files from the Hugging Face Hub:

```python
from huggingface_hub import hf_hub_download

model_fairtracks = hf_hub_download(repo_id="databio/attribute-standardizer-model6", filename="model_fairtracks.pth")
model_encode = hf_hub_download(repo_id="databio/attribute-standardizer-model6", filename="model_encode.pth")
```
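Once downloaded, the artifacts can be combined roughly as follows. This is a minimal sketch, not the attribute-standardizer's actual API: it assumes the `.pth` checkpoint deserializes into a callable PyTorch model and that the label encoder exposes `inverse_transform`; consult the bedmess repository for the real loading and inference code.

```python
import pickle

import torch
from huggingface_hub import hf_hub_download

repo_id = "databio/attribute-standardizer-model6"

# Fetch the ENCODE vectorizer and label encoder alongside the model.
vectorizer_path = hf_hub_download(repo_id=repo_id, filename="vectorizer_encode.pkl")
label_encoder_path = hf_hub_download(repo_id=repo_id, filename="label_encoder_encode.pkl")
model_path = hf_hub_download(repo_id=repo_id, filename="model_encode.pth")

with open(vectorizer_path, "rb") as f:
    vectorizer = pickle.load(f)        # fitted scikit-learn CountVectorizer
with open(label_encoder_path, "rb") as f:
    label_encoder = pickle.load(f)     # assumption: LabelEncoder-like object

# Assumption: the checkpoint holds a full serialized model; if it is only a
# state_dict, the network architecture from bedmess must be instantiated first.
model = torch.load(model_path, map_location="cpu")
model.eval()

# Bag of Words encoding of a raw metadata attribute, then prediction.
bow = vectorizer.transform(["cell line"]).toarray()
with torch.no_grad():
    logits = model(torch.tensor(bow, dtype=torch.float32))
predicted = label_encoder.inverse_transform(logits.argmax(dim=1).numpy())
print(predicted)
```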
To use these models, refer to the GitHub repository of bedmess.