---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- huggingface
inference: true
license: apache-2.0
language:
- en
datasets:
- Josephgflowers/Finance-Instruct-500k
---
# Uploaded model
- **Developed by:** abhi9ab
- **License:** apache-2.0
- **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
# Model Card
The goal of this model is to enhance the base model's performance on financial tasks by fine-tuning it on a specialized financial dataset. LoRA (Low-Rank Adaptation) is used so that only small adapter matrices are trained, making the fine-tune feasible with limited resources.
---
## Model Details
- Base Model: [unsloth/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B)
- Model Type: Language Model (Distilled)
- Fine-Tuning Technique: LoRA (Low-Rank Adaptation)
- Fine-Tuned Model: DeepSeek-R1-Distill-Llama-8B-finance-v1
- Dataset: [Josephgflowers/Finance-Instruct-500k](https://huggingface.co/datasets/Josephgflowers/Finance-Instruct-500k) (reduced to 5k JSONL entries)
- Platform: Free-tier Kaggle Notebook
- Libraries: Hugging Face Transformers, Unsloth, and PyTorch
This model is a fine-tuned version of [unsloth/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B), using LoRA for efficient parameter adaptation. It was tuned on a reduced 5k-entry subset of the [Josephgflowers/Finance-Instruct-500k](https://huggingface.co/datasets/Josephgflowers/Finance-Instruct-500k) dataset to enhance performance on finance-related tasks. A sketch of the training setup is given below.
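For reference, here is a minimal sketch of how such a LoRA run can be set up with Unsloth and TRL. The rank, alpha, target modules, prompt template, and training hyperparameters are illustrative assumptions rather than the values actually used, and the `SFTTrainer` signature varies slightly across TRL versions:
```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit so it fits on a free-tier Kaggle GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: only these small low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # LoRA rank (assumed)
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

# Take a 5k-example subset and flatten each example into a single "text"
# field. The "user"/"assistant" column names are assumptions about the
# dataset schema; check dataset.column_names before relying on them.
dataset = load_dataset("Josephgflowers/Finance-Instruct-500k", split="train[:5000]")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['user']}\n\n### Response:\n{ex['assistant']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,    # assumed; scale to your budget
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```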
---
## Intended Use
The model is intended for tasks related to financial question answering, generation, and instructions that require domain-specific knowledge in finance. It can also be used in other natural language understanding and generation tasks that benefit from fine-tuning on a finance-specific dataset.
---
## Dataset
The model was fine-tuned on a subset of the Finance-Instruct-500k dataset from Hugging Face, reduced to 5,000 JSONL entries. The dataset contains financial questions and answers, providing a rich set of training examples.
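For instance, one way such a 5k-entry JSONL subset could be produced with the `datasets` library (the shuffle seed and output filename are arbitrary assumptions):
```python
from datasets import load_dataset

# Download the full 500k-example dataset and keep a 5k-example sample.
full = load_dataset("Josephgflowers/Finance-Instruct-500k", split="train")
subset = full.shuffle(seed=42).select(range(5000))

# datasets writes JSON Lines by default: one JSON object per line.
subset.to_json("finance_instruct_5k.jsonl")
```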
---
## Training Data
- Dataset Name: [Josephgflowers/Finance-Instruct-500k](https://huggingface.co/datasets/Josephgflowers/Finance-Instruct-500k)
- Data Size: 5k samples (subset from original dataset)
- Domain: Finance
- Task: Instruction-based fine-tuning for financial information retrieval and generation.
---
## Notes
- This fine-tuning was performed on the free-tier of Kaggle Notebook, so training time and available resources are limited.
- Ensure that your runtime in Colab/Kaggle is set to a GPU environment to speed up the training process (see the check after this list).
- The reduced 5k dataset is a smaller sample for experimentation. You can scale this up depending on your needs and available resources.
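A quick sanity check such as the following confirms the accelerator is actually attached before training starts:
```python
import torch

# Fails fast if the notebook is still on a CPU-only runtime.
assert torch.cuda.is_available(), "Switch the Colab/Kaggle runtime to a GPU accelerator."
print(torch.cuda.get_device_name(0))
```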
---
## Performance
The model handles financial instruction-following tasks well given the small 5k-sample training set, but it has not been scored on formal finance benchmarks; such an evaluation would be a natural next step.
---
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("abhi9ab/DeepSeek-R1-Distill-Llama-8B-finance-v1")
model = AutoModelForCausalLM.from_pretrained("abhi9ab/DeepSeek-R1-Distill-Llama-8B-finance-v1")

# Tokenize a finance-related prompt and generate a response.
inputs = tokenizer("Example finance-related query", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
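On a GPU runtime, loading the model in half precision keeps memory usage manageable for an 8B-parameter model. This variant assumes a CUDA device and the `accelerate` package:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "abhi9ab/DeepSeek-R1-Distill-Llama-8B-finance-v1",
    torch_dtype=torch.float16,  # half precision roughly halves memory vs. fp32
    device_map="auto",          # requires the accelerate package
)
```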
---
## Acknowledgements
- Josephgflowers for the dataset.
- Hugging Face Transformers library for model implementation and Unsloth for LoRA-based fine-tuning.
---