Package model correctly
#2
by
tcapelle
- opened
This PR adds a custom model class, `MultiHeadDebertaForSequenceClassificationModel`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("celadon", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("celadon", trust_remote_code=True)
model.eval()

sample_text = "A very gender inappropriate comment"
inputs = tokenizer(sample_text, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])

categories = ['Race/Origin', 'Gender/Sex', 'Religion', 'Ability', 'Violence']
predictions = outputs.logits.argmax(dim=-1).squeeze().tolist()

# Print the classification results for each category
print(f"Text: {sample_text}")
for i, category in enumerate(categories):
    print(f"Prediction for Category {category}: {predictions[i]}")

# Text: A very gender inappropriate comment
# Prediction for Category Race/Origin: 0
# Prediction for Category Gender/Sex: 3
# Prediction for Category Religion: 0
# Prediction for Category Ability: 0
# Prediction for Category Violence: 0
```
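The per-head decoding the snippet performs can be sketched without `transformers`: the multi-head model produces one set of logits per category, and each category's prediction is that head's argmax. This is a minimal pure-Python sketch; the logit values and the assumption of 4 classes per head are made up for illustration (the real output above shows scores in at least the 0-3 range).

```python
categories = ['Race/Origin', 'Gender/Sex', 'Religion', 'Ability', 'Violence']

def decode_heads(logits_per_head):
    """Map each head's logits (one list per category) to its argmax class index."""
    return {
        cat: max(range(len(logits)), key=logits.__getitem__)
        for cat, logits in zip(categories, logits_per_head)
    }

# Fabricated logits: one row per category, 4 class indices (0-3) each.
fake_logits = [
    [2.1, 0.3, -1.0, -2.5],   # Race/Origin -> argmax 0
    [-1.2, 0.1, 1.4, 3.0],    # Gender/Sex  -> argmax 3
    [1.9, 0.2, -0.7, -1.1],   # Religion    -> argmax 0
    [2.4, -0.3, -1.5, -2.0],  # Ability     -> argmax 0
    [1.7, 0.0, -0.9, -1.8],   # Violence    -> argmax 0
]
print(decode_heads(fake_logits))
# {'Race/Origin': 0, 'Gender/Sex': 3, 'Religion': 0, 'Ability': 0, 'Violence': 0}
```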
We also added a pipeline config, so the model can now be used directly with `pipeline`:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="celadon", trust_remote_code=True)
result = pipe("A very gender inappropriate comment")
print(result)
# [{'Race/Origin': 0, 'Gender/Sex': 3, 'Religion': 0, 'Ability': 0, 'Violence': 0}]
```
No more custom loading!
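For context, remote custom pipelines like this are typically wired up through a `custom_pipelines` entry in the repo's `config.json` (field names follow the transformers custom-pipeline convention; the module/class path below is illustrative, not the actual file in this repo):

```json
{
  "custom_pipelines": {
    "text-classification": {
      "impl": "custom_pipeline.CustomTextClassificationPipeline",
      "pt": ["AutoModelForSequenceClassification"],
      "tf": []
    }
  }
}
```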
tcapelle changed pull request title from "Upload CustomTextClassificationPipeline" to "Package model correctly"
This is ready to be merged @catherinearnett