---
license: mit
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: tweet_instruct_detect
    results: []
---

# tweet_instruct_detect

This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 on a dataset that combines manually labelled tweets (classified as either instructions or spam) with pre-processed instructions from the FLAN dataset; FLAN instructions shorter than 250 characters are used as positive instruction examples. It achieves the following results on the evaluation set:

- Loss: 0.1300
- Accuracy: 0.9761

## Model description

This model is trained to help determine if tweets are useful instructions. This can be used to filter the large corpus of tweet data online into useful instruction datasets for instruction fine-tuning.
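
A minimal usage sketch with the transformers pipeline API is shown below. The hub repo id (`jmete/tweet_instruct_detect`) and the label string are assumptions and may not match the published checkpoint exactly.

```python
from transformers import pipeline

# Hypothetical repo id; point this at wherever the fine-tuned checkpoint is hosted.
classify = pipeline("text-classification", model="jmete/tweet_instruct_detect")

tweets = [
    "Write a short poem about autumn rain",
    "lol check my bio for free followers!!!",
]

# Keep only tweets scored as instructions. The label name is an assumption;
# the checkpoint may instead expose generic ids such as LABEL_0 / LABEL_1.
instructions = [t for t in tweets if classify(t)[0]["label"] == "instruction"]
print(instructions)
```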

## Intended uses & limitations

Intended to be used to determine if tweets are useful instructions.

The model will be biased towards English data, and may be biased towards certain ways of phrasing "instructions". Instructions in this case may also be questions.

The current version of the model is fairly basic and can be confused by simple changes. For example, simply adding a "?" character biases it heavily towards the instruction class, even for an otherwise identical sentence, so it is highly sensitive to certain characters and ways of phrasing things. This can hopefully be improved with better training data or model tuning.
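
A quick way to observe this sensitivity is to score the same sentence with and without a trailing question mark (again assuming the hypothetical repo id above):

```python
from transformers import pipeline

classify = pipeline("text-classification", model="jmete/tweet_instruct_detect")

# The trailing "?" alone can shift the score sharply towards the instruction class.
print(classify("Tell me about the history of the Roman Empire"))
print(classify("Tell me about the history of the Roman Empire?"))
```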

## Training and evaluation data

The model was fine-tuned on a relatively small number of tweets and instructions.

- Train data: 749 examples
- Test data: 251 examples

Out of the total number of examples, 526 were manually labelled tweets, most of which were spam due to the high noise ratio in tweet data. Spam in this case can refer to actual spam, gibberish, or statements that are generally fine but not useful as an instruction or question.
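
A rough sketch of how such a mix could be assembled is shown below; the variable names, label strings, and cutoff handling are assumptions rather than the exact preprocessing code.

```python
# Sketch of the data mix described above; names and labels are assumptions.
# `flan_instructions`: instruction strings taken from the FLAN dataset.
# `labelled_tweets`:   (text, label) pairs with label "instruction" or "spam".

def build_examples(flan_instructions, labelled_tweets, max_chars=250):
    # Short FLAN instructions serve as additional positive (instruction) examples.
    examples = [{"text": t, "label": "instruction"}
                for t in flan_instructions if len(t) < max_chars]
    # Manually labelled tweets contribute both instruction and spam examples.
    examples += [{"text": t, "label": lab} for t, lab in labelled_tweets]
    return examples
```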

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
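
A minimal fine-tuning sketch with the Hugging Face Trainer using these settings might look as follows; the toy dataset, label mapping, and output directory are placeholders, not the actual training script.

```python
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/Multilingual-MiniLM-L12-H384"
# Note: this base checkpoint historically pairs a BERT encoder with the XLM-R
# sentencepiece tokenizer; if AutoTokenizer fails, load XLMRobertaTokenizer instead.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for the real tweet/FLAN data; label 1 = instruction, 0 = spam (assumed mapping).
raw = Dataset.from_dict({
    "text": ["Write a haiku about the ocean", "free followers click here!!!"],
    "label": [1, 0],
})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="tweet_instruct_detect",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",   # Adam betas/epsilon and the linear LR schedule are the defaults
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,       # swap in the real train/test splits here
    eval_dataset=tokenized,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```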

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 47   | 0.3832          | 0.9562   |
| No log        | 2.0   | 94   | 0.2004          | 0.9681   |
| No log        | 3.0   | 141  | 0.1501          | 0.9721   |
| No log        | 4.0   | 188  | 0.1362          | 0.9721   |
| No log        | 5.0   | 235  | 0.1300          | 0.9761   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2