---
language: en
license: mit
tags:
- financial-qa
- distilgpt2
- fine-tuned
datasets:
- financial-qa
metrics:
- perplexity
---

# Financial QA Fine-Tuned Model

This model is a fine-tuned version of `distilgpt2` on financial question-answering data from Allstate's financial reports.

## Model description

The model was fine-tuned to answer questions about Allstate's financial reports and performance.
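
A minimal inference sketch using the `transformers` text-generation pipeline. The repository id and the `Question:`/`Answer:` prompt template below are placeholders, assuming the model was trained on plain concatenated QA text:

```python
from transformers import pipeline

# Hypothetical repository id; replace with the actual model path.
generator = pipeline("text-generation", model="your-username/financial-qa-distilgpt2")

prompt = "Question: What was Allstate's total revenue in 2022?\nAnswer:"
outputs = generator(prompt, max_new_tokens=50, do_sample=False)
print(outputs[0]["generated_text"])
```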

## Intended uses & limitations

This model is intended for answering factual questions about Allstate's financial reports for 2022-2023.
It should not be used for financial advice or decision-making without verification against the original sources.

## Training data

The model was trained on a custom dataset of financial QA pairs derived from Allstate's 10-K reports.
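
The exact serialization of the QA pairs is not documented here; the sketch below is one plausible format, assuming each pair was flattened into a single prompt/completion string for causal-LM training. The field names, template, and placeholder answer are all assumptions:

```python
# Hypothetical example of how a QA pair might be serialized for
# causal-LM fine-tuning; the template is an assumption, not the
# documented format.
qa_pair = {
    "question": "What was Allstate's total revenue in 2022?",
    "answer": "<answer text extracted from the 10-K>",
}

def format_example(pair):
    return f"Question: {pair['question']}\nAnswer: {pair['answer']}"

print(format_example(qa_pair))
```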

## Training procedure

The model was fine-tuned using the `Trainer` class from Hugging Face's Transformers library with the following parameters:
- Learning rate: `Trainer` default (5e-5)
- Batch size: 2
- Number of epochs: 3
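
A minimal sketch of this setup under the parameters above. The dataset construction and tokenization details are assumptions; the two inline examples are toy stand-ins for the custom QA data, which is not reproduced here:

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models define no pad token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Toy stand-in for the custom QA dataset (the real data is not public here).
texts = [
    "Question: What was Allstate's total revenue in 2022?\nAnswer: <from 10-K>",
    "Question: How many policies were in force in 2023?\nAnswer: <from 10-K>",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="financial-qa-distilgpt2",
    per_device_train_batch_size=2,  # batch size 2, as reported above
    num_train_epochs=3,             # 3 epochs, as reported above
    # learning_rate is left at the Trainer default (5e-5)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```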

## Evaluation results

The model achieved a final training loss of 0.44 and validation loss of 0.43.
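
Since the card lists perplexity as a metric, it can be recovered from the reported loss; assuming the losses are mean per-token cross-entropy, perplexity is its exponential:

```python
import math

val_loss = 0.43  # reported validation loss (mean cross-entropy, assumed)
print(f"validation perplexity = {math.exp(val_loss):.2f}")  # ~1.54
```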

## Limitations and bias

The model's knowledge is limited to Allstate's financial data; it cannot reliably answer questions about other companies or about financial topics outside its training data. It also inherits any biases present in the pretraining data of the base `distilgpt2` model.