llama · text-generation-inference

Latest commit: Update README.md (ae1d1c4)
name · size · last commit message ("-" where the capture did not preserve a field)
configs · - · update README and add config file
- · 1.48 kB · initial commit
- · 3.41 kB · Update README.md
- · 582 Bytes · Change cache = true in config.json to significantly boost inference performance (#1)
- · 137 Bytes · first epoch pre-release
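The commit message "Change cache = true in config.json to significantly boost inference performance" most likely refers to the `use_cache` flag in a transformers-style config.json, which lets generation reuse past attention key/value states instead of recomputing them each step. A minimal sketch of such a fragment (the surrounding keys are illustrative assumptions, not this repo's actual config):

```python
# Hedged sketch of the config.json change the commit message describes.
# Assumption: a transformers-style config where the cache flag is
# `use_cache`; the other keys here are illustrative placeholders.
import json

config = {
    "model_type": "llama",  # assumed model family, per the repo tag
    "use_cache": True,      # reuse key/value states across decode steps
}
print(json.dumps(config, indent=2))
```

With the key/value cache enabled, each decode step attends over stored keys and values from earlier tokens rather than recomputing them, which is consistent with the commit's claim of a large inference speedup.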
pytorch_model-00001-of-00003.bin · 9.95 GB · push up second epoch of wizard-mega
pytorch_model-00002-of-00003.bin · 9.9 GB · push up second epoch of wizard-mega
pytorch_model-00003-of-00003.bin · 6.18 GB · push up second epoch of wizard-mega

All three shards: Detected Pickle imports (6): collections.OrderedDict, torch.Tensor, torch.FloatStorage, torch.BFloat16Storage, torch._tensor._rebuild_from_type_v2, torch._utils._rebuild_tensor_v2
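The "Detected Pickle imports" note comes from scanning each .bin shard's pickle opcode stream for the globals it would import at load time; unpickling untrusted files can execute arbitrary code, which is why these are flagged. A minimal stdlib sketch of that kind of scan (the helper name is mine and the STACK_GLOBAL handling is a simplified heuristic, not the Hub's actual scanner):

```python
import pickle
import pickletools
from collections import OrderedDict

def detect_pickle_imports(data: bytes) -> set:
    """Collect every module.name a pickle payload would import when
    unpickled, by walking its opcode stream (GLOBAL / STACK_GLOBAL).
    The STACK_GLOBAL branch uses a last-two-strings heuristic; a real
    scanner would track the pickle stack properly."""
    imports = set()
    ops = list(pickletools.genops(data))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # pickletools reports the argument as "module name".
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Module and qualname were pushed as the preceding strings.
            strs = [a for o, a, _ in ops[:i]
                    if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strs) >= 2:
                imports.add(f"{strs[-2]}.{strs[-1]}")
    return imports

# Demo: pickling an OrderedDict records collections.OrderedDict,
# one of the six imports flagged on the shards above.
payload = pickle.dumps(OrderedDict(a=1))
print(sorted(detect_pickle_imports(payload)))
```

A torch state_dict pickles the same way at much larger scale, which is how entries like torch.FloatStorage and torch._utils._rebuild_tensor_v2 end up in the shard scan results.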
- · 33.4 kB · first epoch pre-release
- · 411 Bytes · first epoch pre-release
- · 1.84 MB · first epoch pre-release
- · 500 kB · first epoch pre-release
- · 700 Bytes · first epoch pre-release