Nicholas Beerbower (nbeerbower) PRO
Various useful datasets with preference optimization

AI & ML interests
QLoRA finetuning and merging LLMs for fun
Recent Activity
- Liked a model 4 days ago: HuggingFaceTB/FineMath-Llama-3B
- Liked a dataset 7 days ago: interstellarninja/json-mode-dpo-prompts
- Liked a dataset 7 days ago: interstellarninja/tool-calls-dpo
models (137)
- nbeerbower/llama-3-gutenberg-8B • Text Generation • Updated • 140 downloads • 8 likes
- nbeerbower/SmolNemo-12B-FFT-experimental • Text Generation • Updated • 17 downloads
- nbeerbower/Nemo-Loony-12B-experimental • Text Generation • Updated • 14 downloads
- nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental • Text Generation • Updated • 22 downloads • 1 like
- nbeerbower/Mistral-Gutenberg-Doppel-7B-FFT • Text Generation • Updated • 52 downloads • 2 likes
- nbeerbower/Qwen2.5-Gutenberg-Doppel-14B • Text Generation • Updated • 142 downloads • 11 likes
- nbeerbower/Qwen2.5-Gutenberg-Doppel-32B • Text Generation • Updated • 215 downloads • 6 likes
- nbeerbower/Mistral-Nemo-Prism-12B • Text Generation • Updated • 36 downloads • 3 likes
- nbeerbower/Mistral-Nemo-Prism-12B-v2 • Text Generation • Updated • 79 downloads • 3 likes
- nbeerbower/Mistral-Nemo-Prism-12B-v3 • Text Generation • Updated • 12 downloads
datasets (8)
- nbeerbower/reddit-dpo • Viewer • Updated • 76.9k rows • 14 downloads • 1 like
- nbeerbower/cover-images • Viewer • Updated • 4 rows • 221 downloads • 1 like
- nbeerbower/gutenberg-moderne-dpo • Viewer • Updated • 346 rows • 59 downloads • 2 likes
- nbeerbower/gutenberg2-dpo • Viewer • Updated • 293 rows • 70 downloads • 18 likes
- nbeerbower/Schule-DPO • Viewer • Updated • 34 rows • 31 downloads • 1 like
- nbeerbower/Arkhaios-DPO • Viewer • Updated • 222 rows • 60 downloads • 8 likes
- nbeerbower/Purpura-DPO • Viewer • Updated • 230 rows • 35 downloads • 7 likes
- nbeerbower/bible-dpo • Viewer • Updated • 31.1k rows • 34 downloads
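The datasets above are preference-optimization (DPO) sets. DPO datasets conventionally store each example as a prompt plus a preferred ("chosen") and a dispreferred ("rejected") completion; a minimal sketch of that record shape, assuming those conventional field names rather than the exact schema of any dataset listed here:

```python
# Sketch of the record shape commonly used by DPO (Direct Preference
# Optimization) datasets. The field names "prompt", "chosen", and
# "rejected" follow the usual convention and are an assumption here.

def is_dpo_record(record: dict) -> bool:
    """Return True if the record carries a prompt plus a preferred and a
    dispreferred completion, all as non-empty strings."""
    required = ("prompt", "chosen", "rejected")
    return all(isinstance(record.get(k), str) and record[k] for k in required)

# Hypothetical example record, for illustration only.
example = {
    "prompt": "Write the opening line of a gothic novel.",
    "chosen": "The house had been watching the road for a hundred years.",
    "rejected": "Here is an opening line: It was a dark and stormy night.",
}

print(is_dpo_record(example))           # True
print(is_dpo_record({"prompt": "hi"}))  # False: no chosen/rejected pair
```

A DPO trainer then optimizes the model to score each "chosen" completion above its "rejected" counterpart for the same prompt.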