What if I want to run xlm-roberta-flash-implementation locally?

#64
by Nucleon-17th - opened

The model needs to execute code from xlm-roberta-flash-implementation on the Hub, but I have no Internet connection where I deploy this model.
I found the repository on the Hub and cloned it, but I can't make AutoModel.from_pretrained load it from local files.
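
Roughly what I'm trying (the local path is just illustrative):

```python
from transformers import AutoModel

# Fails without network access: the auto_map in config.json points at
# jinaai/xlm-roberta-flash-implementation on the Hub, so transformers
# tries to download the custom modeling code even for a local model dir.
model = AutoModel.from_pretrained(
    "/models/jina-embeddings-v3",  # hypothetical local clone
    trust_remote_code=True,
)
```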

I am also running this in an air-gapped environment. You need to adjust the custom-code paths in the model's config.json: https://huggingface.co/jinaai/jina-embeddings-v3/blob/main/config.json
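
A minimal sketch of that adjustment, assuming you have cloned both jina-embeddings-v3 and xlm-roberta-flash-implementation locally (verify the exact module and class names against the config.json in your clone; the entries below reflect the repo at the time of writing): copy the `*.py` files from the xlm-roberta-flash-implementation clone into the jina-embeddings-v3 directory, then drop the `jinaai/xlm-roberta-flash-implementation--` prefix from the `auto_map` entries so transformers resolves the modules from the model directory itself:

```json
"auto_map": {
    "AutoConfig": "configuration_xlm_roberta.XLMRobertaFlashConfig",
    "AutoModel": "modeling_lora.XLMRobertaLoRA",
    "AutoModelForMaskedLM": "modeling_xlm_roberta.XLMRobertaForMaskedLM",
    "AutoModelForPreTraining": "modeling_xlm_roberta.XLMRobertaForPreTraining"
}
```

With that in place, loading should work fully offline (paths hypothetical):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "/models/jina-embeddings-v3",  # local dir with the copied .py files and edited config.json
    trust_remote_code=True,        # still needed to run the custom modeling code
    local_files_only=True,         # guarantee no Hub lookups
)
```

Setting `HF_HUB_OFFLINE=1` in the environment is a good extra safeguard in an air-gapped deployment.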

jupyterjazz changed discussion status to closed
