SakanaAI / DiscoPOP-zephyr-7b-gemma

Sakana AI · 36 likes · 175 followers

Text Generation · Transformers · Safetensors · gemma · conversational · text-generation-inference · Inference Endpoints
Dataset: argilla/dpo-mix-7k · Tags: alignment-handbook, Generated from Trainer
Paper: arxiv:2406.08414 · License: gemma
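Given the Text Generation / Transformers / Safetensors / conversational tags, the checkpoint should load through the standard transformers causal-LM API; from_pretrained resolves the sharded *.safetensors files listed further down via model.safetensors.index.json. A minimal sketch, assuming the tokenizer ships a chat template, accelerate is installed for device_map="auto", and bfloat16 weights fit on the available GPU (dtype, device placement, and sampling settings are illustrative, not taken from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SakanaAI/DiscoPOP-zephyr-7b-gemma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits the available hardware
    device_map="auto",
)

# The "conversational" tag implies a chat template in tokenizer_config.json.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```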
Files and versions
1 contributor · History: 1 commit
chrlu — "Duplicate from chrlu/zephyr-7b-gemma-log_ratio_modulated_loss" — commit 2a22bc1 (verified) — 7 months ago
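The same file and commit information can be retrieved programmatically with the huggingface_hub client. A small sketch, assuming huggingface_hub is installed and the repository is public (attribute names follow the library's GitCommitInfo dataclass):

```python
from huggingface_hub import HfApi

repo_id = "SakanaAI/DiscoPOP-zephyr-7b-gemma"
api = HfApi()

# List every file tracked in the repository (matches the table below).
for path in api.list_repo_files(repo_id):
    print(path)

# The commit history: a single commit with its hash and title.
for commit in api.list_repo_commits(repo_id):
    print(commit.commit_id[:7], commit.title)
```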
All of the following files were added in that single commit ("Duplicate from chrlu/zephyr-7b-gemma-log_ratio_modulated_loss", 7 months ago) and are marked "Safe" by the Hub's file scanner:

| File | Size | Storage |
|---|---|---|
| .gitattributes | 1.57 kB | |
| README.md | 1.4 kB | |
| all_results.json | 762 Bytes | |
| config.json | 714 Bytes | |
| eval_results.json | 566 Bytes | |
| generation_config.json | 132 Bytes | |
| model-00001-of-00004.safetensors | 5 GB | LFS |
| model-00002-of-00004.safetensors | 4.98 GB | LFS |
| model-00003-of-00004.safetensors | 4.98 GB | LFS |
| model-00004-of-00004.safetensors | 2.11 GB | LFS |
| model.safetensors.index.json | 20.9 kB | |
| special_tokens_map.json | 630 Bytes | |
| tokenizer.json | 17.5 MB | LFS |
| tokenizer_config.json | 1.89 kB | |
| train_results.json | 230 Bytes | |
| trainer_state.json | 6.44 kB | |

training_args.bin is listed separately below because the scanner flagged it as a pickle.
training_args.bin (6.33 kB, LFS) — added in the same commit, 7 months ago. This file is a Python pickle rather than Safetensors; the Hub's scanner detected 13 pickle imports:

- transformers.trainer_utils.SchedulerType
- torch.device
- accelerate.utils.dataclasses.DistributedType
- transformers.integrations.deepspeed.HfDeepSpeedConfig
- transformers.trainer_utils.HubStrategy
- alignment.configs.DPOConfig
- transformers.integrations.deepspeed.HfTrainerDeepSpeedConfig
- transformers.training_args.OptimizerNames
- accelerate.utils.dataclasses.DeepSpeedPlugin
- transformers.trainer_utils.IntervalStrategy
- transformers.trainer_pt_utils.AcceleratorConfig
- accelerate.state.PartialState
- torch.bfloat16
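Because training_args.bin is a pickled object graph referencing the classes above, fully deserializing it imports and reconstructs those classes. A minimal sketch of a cautious load, assuming a recent PyTorch where torch.load accepts (or defaults to) weights_only=True and refuses non-allowlisted globals; the trusted-load path additionally needs transformers, accelerate, deepspeed, and the alignment-handbook package that provides alignment.configs.DPOConfig:

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download("SakanaAI/DiscoPOP-zephyr-7b-gemma", "training_args.bin")

# Safe-by-default: weights_only=True refuses to rebuild arbitrary classes such as
# alignment.configs.DPOConfig, so this call is expected to raise instead of
# executing untrusted constructors.
try:
    torch.load(path, weights_only=True)
except Exception as err:
    print(f"Refused to unpickle: {err}")

# Only if you trust the repository and have the full training environment installed:
# args = torch.load(path, weights_only=False)
# print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```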