Llama-movielens-mpnet
This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and is designed for use in recommender systems, both for content-based filtering and as side information for cold-start recommendation.
Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
```shell
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example product description", "Each product description is converted"]

model = SentenceTransformer('beeformer/Llama-movielens-mpnet')
embeddings = model.encode(sentences)
print(embeddings)
```
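For content-based filtering, the item embeddings produced by the model are typically compared with cosine similarity to rank candidate items. The sketch below uses random vectors in place of real `model.encode(...)` output so it runs without downloading the model; the helper `cosine_sim` is our own illustration, not part of the library.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding matrices (rows are items).
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T

# Dummy 768-dimensional embeddings standing in for model.encode(...) output.
rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(5, 768))

# Rank all items by similarity to item 0 (e.g. an item the user liked).
query = item_embeddings[:1]
scores = cosine_sim(query, item_embeddings)[0]
ranking = np.argsort(-scores)  # most similar items first; item 0 itself is rank 0
```

In practice the same ranking step applies directly to the embeddings returned by `model.encode`, since they live in the same vector space.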
Training procedure
Pre-training
We use the pretrained sentence-transformers/all-mpnet-base-v2 model. Please refer to its model card for more detailed information about the pre-training procedure.
Fine-tuning
We fine-tune the initial model without modifying its architecture or pre-trained parameters; however, we reduce the maximum processed sequence length to 384 tokens to shorten training time.
Dataset
We fine-tuned our model on the MovieLens-20M dataset, with item descriptions generated by the meta-llama/Meta-Llama-3.1-8B-Instruct model. For details, please see the dataset page beeformer/recsys-movielens-20m.
Evaluation Results
Table with results TBA.
Intended uses
This model was trained as a demonstration of the capabilities of the beeFormer training framework (link and details TBA) and is intended for research purposes only.
Citation
Preprint available here
TBA