Sentiment Analysis Model: Fine-Tuned DistilBERT

Overview

This repository contains a fine-tuned version of the distilbert-base-uncased model for sentiment analysis of tweets. The model classifies the sentiment of a sentence into two categories: positive (label 0) and negative (label 1).

Model Description

The fine-tuned model uses the distilbert-base-uncased architecture and was trained on a dataset of GPT-3.5-generated tweets. It takes a sentence as input and outputs a binary sentiment label: 0 for positive, 1 for negative.
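
Below is a minimal inference sketch using the Hugging Face transformers library. The repository id shown is a placeholder, not the actual id of this repo; replace it before running.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder repo id: substitute the actual id of this repository.
repo_id = "your-username/distilbert-tweet-sentiment"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Just got my coffee order for free, what a great start to the day!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Label convention in this model: 0 = positive, 1 = negative.
predicted = logits.argmax(dim=-1).item()
print("positive" if predicted == 0 else "negative")
```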

Training Data

The model was trained on a dataset of tweets generated and labeled by GPT-3.5. Each tweet was labeled as either positive or negative, providing the ground-truth labels for training.
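
For reference, the sketch below shows what a fine-tuning setup with the transformers Trainer could look like. The file name tweets.csv, the column names, the train/validation split, and the hyperparameters are illustrative assumptions, not the exact values used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Assumed data layout: a CSV with "text" and "label" columns (0 = positive, 1 = negative).
dataset = load_dataset("csv", data_files={"train": "tweets.csv"})["train"]
dataset = dataset.train_test_split(test_size=0.1)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

# Binary sentiment classification head on top of DistilBERT.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Hyperparameters below are placeholders for illustration only.
args = TrainingArguments(
    output_dir="distilbert-tweet-sentiment",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```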

Model Details

Format: Safetensors
Model size: 67M parameters
Tensor type: F32