---
license: mit
datasets:
  - bhavyagiri/imdb-spoiler
language:
  - en
metrics:
  - accuracy
  - f1
pipeline_tag: text-classification
tags:
  - text-classification
  - pytorch
  - roberta
  - spoiler-detection
  - binary-classification
widget:
  - text: Jack Ryan is so amazing
---

This model was fine-tuned from roberta-base on the imdb-spoiler dataset for binary text classification (spoiler vs. non-spoiler).

imdb-spoiler is a subset of a larger dataset of IMDB movie reviews, each labeled by whether the review contains a spoiler.

The model was loaded with AutoModelForSequenceClassification.from_pretrained and fine-tuned for 3 epochs with a learning rate of 2e-5 and a weight decay of 0.01.
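The stated hyperparameters can be expressed as a `TrainingArguments` configuration for the 🤗 Trainer. Only the epoch count, learning rate, and weight decay come from this card; the output directory and batch size below are illustrative assumptions, not the original training setup:

```python
from transformers import TrainingArguments

# Hyperparameters from the description above; the batch size and
# output directory are assumed values for illustration only.
training_args = TrainingArguments(
    output_dir="roberta-imdb-spoiler",  # assumed name
    num_train_epochs=3,
    learning_rate=2e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,  # assumed, not stated in the card
)
```

This object would then be passed to a `Trainer` together with the model, tokenizer, and the tokenized imdb-spoiler splits.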

Evaluation on the dataset's validation split gives:

- F1: 0.585
- Accuracy: 0.474
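For reference, accuracy and binary F1 can be computed as below. This is a pure-Python sketch of the metric definitions; the labels are made-up toy values, not the model's actual predictions, and the card does not state which F1 averaging was used for the reported score:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def binary_f1(y_true, y_pred, positive=1):
    # F1 for the positive (spoiler) class: harmonic mean of
    # precision and recall.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example (1 = spoiler, 0 = not a spoiler).
gold = [1, 0, 1, 1, 0, 0]
pred = [1, 0, 0, 1, 1, 0]
print(accuracy(gold, pred))   # 4 of 6 correct
print(binary_f1(gold, pred))
```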