---
library_name: transformers
license: mit
language:
- en
---
### Model Description
This model generates a reply template based on the body of an email or message. It uses Microsoft's Phi-2 as the base model and was fine-tuned for 2 epochs on a Google Colab Tesla T4 GPU.
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Anupam Wagle
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Microsoft Phi-2
## Uses
Use this model to generate a reply message based on a previous email or message.
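A minimal inference sketch with 🤗 Transformers is shown below. The repository ID is a placeholder (substitute this model's actual Hub ID), and the generation settings are illustrative assumptions rather than documented defaults.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID; substitute the actual Hub ID of this model.
model_id = "your-username/phi2-email-template"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# An incoming email for which we want a reply template.
prompt = "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```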
## Bias, Risks, and Limitations
The model was fine-tuned on a small dataset for only 2 epochs, so generated replies may be unreliable or generic. For better results, increase the size of the dataset and the number of training epochs.
## Training Details
### Training Data
The format of the dataset used for fine-tuning is as follows:
```json
[
  {
    "input_email": "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash",
    "generated_email": "Hi Eve,\n\nThank you for the invitation. I'd love to come to the party tonight after 6 PM. Looking forward to it!\n\nBest,\nAdam"
  },
  ...
]
```
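The exact prompt template used during fine-tuning is not documented here, so the sketch below shows one plausible way (an assumption, not the author's recipe) to flatten a record into a single training string:

```python
# Flatten one dataset record into a training string.
# NOTE: this template is an assumption; the actual fine-tuning
# prompt format is not documented in this card.
def format_record(record: dict) -> str:
    return (
        "### Input email:\n"
        f"{record['input_email']}\n\n"
        "### Generated email:\n"
        f"{record['generated_email']}"
    )

record = {
    "input_email": "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash",
    "generated_email": "Hi Eve,\n\nThank you for the invitation. I'd love to come!\n\nBest,\nAdam",
}
print(format_record(record))
```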
## Technical Specifications
This model was fine-tuned on a Google Colab Tesla T4 GPU for a total of 2 epochs.
### Model Architecture and Objective
The base model is Microsoft's Phi-2, which was quantized using bitsandbytes. Its primary objective is to generate reply messages based on previous messages.
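A minimal sketch of loading the Phi-2 base model with 4-bit quantization via bitsandbytes follows; the specific quantization settings are assumptions, since the exact configuration used for fine-tuning is not documented here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization config; these values are assumptions, not the
# documented settings used when this model was fine-tuned.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the quantized base model (Microsoft Phi-2) onto the available device.
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
```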