LoRA-TMLR-2024's Collections:
Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
Continued Pretraining - Code (StarCoder-Python)
Instruction Finetuning - Math (MetaMathQA)
Continued Pretraining - Math (OpenWebMath)

Continued Pretraining - Math (OpenWebMath)
Updated Sep 27
Model and LoRA adapter checkpoints for Llama-2-7B trained on OpenWebMath for up to 20 billion tokens

Models in this collection:
LoRA-TMLR-2024/openwebmath-lora-rank-64-20B-tokens (Updated Sep 27)
LoRA-TMLR-2024/openwebmath-lora-rank-16-20B-tokens (Updated Sep 27)
LoRA-TMLR-2024/openwebmath-lora-rank-256-20B-tokens (Updated Sep 27)
LoRA-TMLR-2024/openwebmath-full-finetuning-lr-1e-05-20B-tokens (Updated Sep 27)
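
The rank-16/64/256 repos are LoRA adapter checkpoints, so the natural way to use one is to attach it to the Llama-2-7B base model with PEFT. Below is a minimal sketch, assuming the repos follow the standard PEFT adapter format; the full-finetuning repo is a complete model checkpoint and would instead load directly via `AutoModelForCausalLM.from_pretrained`. The prompt string is just an illustrative example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"  # gated repo; requires accepting Meta's license on the Hub
ADAPTER = "LoRA-TMLR-2024/openwebmath-lora-rank-64-20B-tokens"

# Load the Llama-2-7B base model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach the OpenWebMath LoRA adapter from this collection.
model = PeftModel.from_pretrained(base_model, ADAPTER)

# Optionally fold the low-rank updates into the base weights for plain inference.
model = model.merge_and_unload()

prompt = "The derivative of x^2 is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Calling `merge_and_unload()` is optional: it removes the PEFT wrapper and per-step adapter overhead at inference time, at the cost of no longer being able to detach or swap adapters.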