---
language: en
license: mit
tags:
- financial-qa
- distilgpt2
- fine-tuned
datasets:
- financial-qa
metrics:
- perplexity
---
# Financial QA Fine-Tuned Model

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on financial question-answering data from Allstate's financial reports.
## Model description

The model was fine-tuned to answer questions about Allstate's financial reports and performance.
## Intended uses & limitations

This model is intended for answering factual questions about Allstate's financial reports for 2022-2023. It should not be used for financial advice or decision-making without verification against the original source documents.
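A minimal usage sketch is shown below. The repository ID `your-username/distilgpt2-financial-qa` is a placeholder for wherever this checkpoint is hosted, and the prompt format (a plain question that the model completes with an answer) is an assumption rather than a documented interface.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model ID -- replace with the actual repository for this checkpoint.
model_id = "your-username/distilgpt2-financial-qa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed prompt format: a plain question; the fine-tuned model generates the answer.
question = "What was Allstate's total revenue in 2022?"
inputs = tokenizer(question, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 models have no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generated answers should be checked against the underlying 10-K filings before being relied on.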
## Training data

The model was trained on a custom dataset of financial QA pairs derived from Allstate's 10-K reports.
## Training procedure

The model was fine-tuned using the `Trainer` class from Hugging Face's Transformers library with the following parameters (a setup sketch follows the list):

- Learning rate: default (5e-5, the `TrainingArguments` default)
- Batch size: 2
- Number of epochs: 3
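The exact training script is not included in this card; the following is a minimal sketch of a `Trainer` setup using the parameters above. The dataset contents, prompt format, and output path are illustrative assumptions.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative stand-in for the QA dataset; the real question/answer text and
# prompt format used for this model are not published in the card.
examples = [
    {"text": "Question: What was Allstate's total revenue in 2022?\nAnswer: ..."},
]
raw_dataset = Dataset.from_list(examples)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = raw_dataset.map(tokenize, batched=True, remove_columns=["text"])

training_args = TrainingArguments(
    output_dir="distilgpt2-financial-qa",  # hypothetical output path
    per_device_train_batch_size=2,
    num_train_epochs=3,
    # learning_rate is left at the 5e-5 default
)

trainer = Trainer(
    model=model,
    args=training_args,
    # Causal-LM collator (mlm=False) pads batches and builds labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    train_dataset=train_dataset,
)

trainer.train()
```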
## Evaluation results

The model achieved a final training loss of 0.44 and a validation loss of 0.43.
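Since perplexity is the listed metric: assuming these losses are mean token-level cross-entropy (the standard causal language-modeling loss), perplexity is simply its exponential, i.e. roughly exp(0.43) ≈ 1.54 on the validation set.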
## Limitations and bias

The model's knowledge is limited to Allstate's financial data; it cannot reliably answer questions about other companies or about financial topics outside its training data.