---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- f1
model-index:
- name: Finance_DistilBERT_sentiment
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: financial_phrasebank
      type: financial_phrasebank
      config: sentences_75agree
      split: train
      args: sentences_75agree
    metrics:
    - type: f1
      value: 0.9101001493367561
      name: F1
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Finance_DistilBERT_sentiment

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.9101
- Acc: 0.9088

## Model description

This model is [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) fine-tuned for three-class sentiment classification (negative, neutral, positive) of English financial news sentences, using the `sentences_75agree` configuration of the financial_phrasebank dataset.

## Intended uses & limitations

The model is intended for sentiment classification of short English financial texts such as news sentences and headlines. Because it was fine-tuned only on financial_phrasebank sentences, performance on other domains, other languages, or long documents is not guaranteed.

## Training and evaluation data

The model was trained and evaluated on the financial_phrasebank dataset (`sentences_75agree` configuration), which contains financial news sentences labeled as negative, neutral, or positive by annotators with at least 75% agreement.
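
As a hedged sketch (not taken from the original preprocessing code), the data can be loaded with the `datasets` library using the configuration named in the metadata above:

```python
from datasets import load_dataset

# "sentences_75agree" keeps only sentences with at least 75% annotator agreement;
# the corpus ships a single "train" split, so evaluation data must be split off manually.
dataset = load_dataset("financial_phrasebank", "sentences_75agree")
print(dataset["train"][0])  # {'sentence': '...', 'label': 0 | 1 | 2}
```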

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 12
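
As a rough sketch (not the original training script), these settings map onto the `transformers` `TrainingArguments`/`Trainer` API roughly as follows; the output directory and the dataset/tokenization code are placeholders:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Values copied from the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="Finance_DistilBERT_sentiment",
    learning_rate=4e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=12,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```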

### Training results (Final epoch)

| Training Loss | Epoch | Step | Validation Loss | F1     | Acc    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.0975        | 1.0   | 87   | 0.2763          | 0.9101 | 0.9088 |


### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
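
## How to use

The snippet below uses this model together with the separate `Sharpaxis/News_classification_distilbert` classifier: it flags whether a piece of financial text is important, prints the per-class sentiment scores, and plots them with Plotly. It is written for a Jupyter/IPython environment (it relies on `IPython.display`).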
```python
import numpy as np
import plotly.graph_objects as go
from IPython.display import HTML, display
from transformers import pipeline

# Sentiment pipeline (this model) and a separate news-importance classifier
classifier = pipeline("text-classification", model="Sharpaxis/Finance_DistilBERT_sentiment", top_k=None)
pipe = pipeline("text-classification", model="Sharpaxis/News_classification_distilbert")

def finance_text_predictor(text):
    text = str(text)
    # Sort by label so LABEL_0/1/2 always line up with the bar colors below
    out = sorted(classifier(text)[0], key=lambda sample: sample['label'])
    type_news = pipe(text)[0]
    
    # Display news type and text in HTML
    if type_news['label'] == 'LABEL_1':
        display(HTML(f"""
        <div style="border: 2px solid red; padding: 10px; margin: 10px; background-color: #ffe6e6; color: black; font-weight: bold;">
            IMPORTANT TECH/FIN News<br>
            <div style="margin-top: 10px; font-weight: normal; font-size: 14px; color: darkred;">{text}</div>
        </div>
        """))
    elif type_news['label'] == 'LABEL_0':
        display(HTML(f"""
        <div style="border: 2px solid green; padding: 10px; margin: 10px; background-color: #e6ffe6; color: black; font-weight: bold;">
            NON IMPORTANT NEWS<br>
            <div style="margin-top: 10px; font-weight: normal; font-size: 14px; color: darkgreen;">{text}</div>
        </div>
        """))
    
    # Sentiment analysis scores
    scores = [sample['score'] for sample in out]
    labels = [sample['label'] for sample in out]
    label_map = {'LABEL_0': "Negative", 'LABEL_1': "Neutral", 'LABEL_2': "Positive"}
    sentiments = [label_map[label] for label in labels]
    
    print("SCORES")
    for i in range(len(scores)):
        print(f"{sentiments[i]} : {scores[i]:.4f}")
    
    print(f"Sentiment of text is {sentiments[np.argmax(scores)]}")
    
    # Bar chart for sentiment scores
    fig = go.Figure(
        data=[go.Bar(x=sentiments, y=scores, marker=dict(color=["red", "blue", "green"]), width=0.3)]
    )
    fig.update_layout(
        title="Sentiment Analysis Scores",
        xaxis_title="Sentiments",
        yaxis_title="Scores",
        template="plotly_dark"
    )
    fig.show()
```
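
For example, calling the function on a single (hypothetical) headline prints the class scores, shows the importance banner, and renders the bar chart:

```python
finance_text_predictor("The company reported a 20% rise in quarterly revenue.")
```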