FIN_BERT_sentiment

This model is a fine-tuned version of bert-base-uncased on the financial_phrasebank dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4905
  • F1: 0.8891
  • Acc: 0.8886
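
For quick inference, the checkpoint can be loaded directly as a text-classification pipeline. The sketch below is minimal; the LABEL_0/1/2 → Negative/Neutral/Positive mapping mirrors the label_map used in the longer example further down and is an assumption here, so verify it against this model's config.id2label before relying on it.

from transformers import pipeline

# Minimal sketch: load this checkpoint as a sentiment pipeline.
# The label-name mapping is an assumption; check config.id2label.
sentiment = pipeline("text-classification", model="Sharpaxis/FIN_BERT_sentiment")
label_map = {"LABEL_0": "Negative", "LABEL_1": "Neutral", "LABEL_2": "Positive"}

# Illustrative example sentence, not taken from the dataset.
result = sentiment("Operating profit rose compared with the previous quarter.")[0]
print(label_map.get(result["label"], result["label"]), round(result["score"], 4))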

Model description

A bert-base-uncased encoder fine-tuned for three-class financial sentiment classification (negative, neutral, positive) on the financial_phrasebank dataset.

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned and evaluated on the financial_phrasebank dataset; the specific agreement configuration and the train/validation split are not documented here.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 5
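
These settings correspond to the standard Trainer workflow. The sketch below shows how they might be passed to TrainingArguments; the output directory and the per-epoch evaluation strategy are assumptions added to make the snippet self-contained, not details taken from the original training script.

from transformers import TrainingArguments

# Illustrative reproduction of the hyperparameters listed above.
# output_dir and eval_strategy are assumptions, not stated in the card.
training_args = TrainingArguments(
    output_dir="FIN_BERT_sentiment",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",             # assumed; the results table reports per-epoch metrics
)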

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|---------------|-------|------|-----------------|--------|----------|
| 0.5295        | 1.0   | 211  | 0.3757          | 0.8731 | 0.8720   |
| 0.2174        | 2.0   | 422  | 0.3117          | 0.8911 | 0.8910   |
| 0.1129        | 3.0   | 633  | 0.4066          | 0.8886 | 0.8874   |
| 0.0459        | 4.0   | 844  | 0.4923          | 0.8896 | 0.8886   |
| 0.0275        | 5.0   | 1055 | 0.4905          | 0.8891 | 0.8886   |
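
The card does not state how F1 was averaged. A plausible compute_metrics helper that would produce both metric columns, assuming weighted F1 (an assumption, not documented), looks like this:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="weighted"),  # averaging mode is an assumption
        "acc": accuracy_score(labels, preds),
    }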

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1
  • Datasets 3.1.0
  • Tokenizers 0.20.3

Code to use the model as a pipeline classifier

import plotly.graph_objects as go
from IPython.display import display, HTML
import numpy as np
from transformers import pipeline

# Pipelines: a three-class sentiment classifier (top_k=None returns scores for
# every label) and a binary news-importance classifier.
classifier = pipeline("text-classification", model="Sharpaxis/Finance_DistilBERT_sentiment", top_k=None)
pipe = pipeline("text-classification", model="Sharpaxis/News_classification_distilbert")

def finance_text_predictor(text):
    text = str(text)
    out = classifier(text)[0]    # list of {label, score} dicts, one per sentiment class
    type_news = pipe(text)[0]    # top prediction of the news-importance classifier
    
    # Display news type and text in HTML
    if type_news['label'] == 'LABEL_1':
        display(HTML(f"""
        <div style="border: 2px solid red; padding: 10px; margin: 10px; background-color: #ffe6e6; color: black; font-weight: bold;">
            IMPORTANT TECH/FIN News<br>
            <div style="margin-top: 10px; font-weight: normal; font-size: 14px; color: darkred;">{text}</div>
        </div>
        """))
    elif type_news['label'] == 'LABEL_0':
        display(HTML(f"""
        <div style="border: 2px solid green; padding: 10px; margin: 10px; background-color: #e6ffe6; color: black; font-weight: bold;">
            NON IMPORTANT NEWS<br>
            <div style="margin-top: 10px; font-weight: normal; font-size: 14px; color: darkgreen;">{text}</div>
        </div>
        """))
    
    # Sentiment analysis scores
    scores = [sample['score'] for sample in out]
    labels = [sample['label'] for sample in out]
    label_map = {'LABEL_0': "Negative", 'LABEL_1': "Neutral", 'LABEL_2': "Positive"}
    sentiments = [label_map[label] for label in labels]
    
    print("SCORES")
    for i in range(len(scores)):
        print(f"{sentiments[i]} : {scores[i]:.4f}")
    
    print(f"Sentiment of text is {sentiments[np.argmax(scores)]}")
    
    # Bar chart for sentiment scores; map each sentiment to a fixed colour so the
    # bars stay consistent regardless of the order the pipeline returns labels in.
    color_map = {"Negative": "red", "Neutral": "blue", "Positive": "green"}
    fig = go.Figure(
        data=[go.Bar(x=sentiments, y=scores, marker=dict(color=[color_map[s] for s in sentiments]), width=0.3)]
    )
    fig.update_layout(
        title="Sentiment Analysis Scores",
        xaxis_title="Sentiments",
        yaxis_title="Scores",
        template="plotly_dark"
    )
    fig.show()
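
Example call (the headline is an illustrative placeholder, not a sample from the dataset):

finance_text_predictor("Company X reports quarterly revenue well above analyst expectations.")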