
This is a fine-tuned RoBERTa model that takes text (up to a few sentences) and predicts to what extent it contains empathic language.

Example classification:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("paragon-analytics/bert_empathy")
model = AutoModelForSequenceClassification.from_pretrained("paragon-analytics/bert_empathy")

def roberta(x):
    # Tokenize the input text and run it through the model.
    encoded_input = tokenizer(x, return_tensors='pt', truncation=True)
    with torch.no_grad():
        output = model(**encoded_input)
    # Convert the logits to probabilities and return the probability
    # of the positive (empathic) class.
    scores = torch.softmax(output.logits[0], dim=-1)
    return scores[1].item()
```
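
Assuming index 1 of the model's output corresponds to the empathic class (as the function above does), a quick sanity check might look like this; the inputs are illustrative, not from the training data:

```python
# Each call prints a score between 0 and 1, where higher values
# indicate more empathic language.
print(roberta("I'm so sorry you're going through this. I'm here for you."))
print(roberta("Submit the report by Friday."))
```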