error when loading

#1
by manueltonneau - opened

Hi all, I'm trying to load your model but get this error:

OSError: Can't load tokenizer for 'Exqrch/IndoBERTweet-HateSpeech'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'Exqrch/IndoBERTweet-HateSpeech' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer.

could you please help?

This happens when running:

from transformers import pipeline
model = 'Exqrch/IndoBERTweet-HateSpeech'
pipe = pipeline(task='text-classification', model=model)

Hello!
Yeah, we didn't upload our own tokenizer; instead, we used the pre-existing one from "indolem/indobertweet-base-uncased".

You need to set up the tokenizer like this for it to run with pipeline:

from transformers import pipeline, AutoTokenizer

# Load the model and a compatible tokenizer
model = 'Exqrch/IndoBERTweet-HateSpeech'
tokenizer = AutoTokenizer.from_pretrained('indolem/indobertweet-base-uncased')

# Initialize the pipeline with the model and tokenizer
pipe = pipeline(task='text-classification', model=model, tokenizer=tokenizer)

# Example usage
text = "This is an example text"
result = pipe(text)
print(result)
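Equivalently, a shorter sketch: `pipeline` also accepts the tokenizer as a repo id string and loads it with `AutoTokenizer` internally, so the explicit `from_pretrained` call can be skipped (this downloads both repos from the Hub on first run):

```python
from transformers import pipeline

# Pass the tokenizer repo id directly; pipeline resolves it
# via AutoTokenizer.from_pretrained under the hood.
pipe = pipeline(
    task='text-classification',
    model='Exqrch/IndoBERTweet-HateSpeech',
    tokenizer='indolem/indobertweet-base-uncased',
)

result = pipe("This is an example text")
print(result)
```

Either way, the point is the same: because the model repo doesn't ship tokenizer files, the tokenizer must be pointed at the base model's repo explicitly.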
