Latest commit: Update README.md · 9e5aed4 (verified)

| File | Size | Last commit message |
|------|------|---------------------|
| - | 1.52 kB | initial commit |
| - | 3.19 kB | Update README.md |
| - | 1.74 kB | Upload Qwen3ForCausalLM |
| - | 219 Bytes | Upload Qwen3ForCausalLM |
| pytorch_model-00001-of-00007.bin | 4.97 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00002-of-00007.bin | 4.97 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00003-of-00007.bin | 4.88 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00004-of-00007.bin | 4.88 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00005-of-00007.bin | 4.88 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00006-of-00007.bin | 4.88 GB | Upload Qwen3ForCausalLM |
| pytorch_model-00007-of-00007.bin | 4.88 GB | Upload Qwen3ForCausalLM |
| - | 58.3 kB | Upload Qwen3ForCausalLM |

Every `pytorch_model-*.bin` shard carries the same scanner warning, "Detected Pickle imports (20)". The per-shard lists differ only in ordering, so the identical set of 20 imports is given once here:

- "collections.OrderedDict"
- "torch.BFloat16Storage"
- "torch.FloatStorage"
- "torch.bfloat16"
- "torch.device"
- "torch.float8_e4m3fn"
- "torch.serialization._get_layout"
- "torch.storage.UntypedStorage"
- "torch._tensor._rebuild_from_type_v2"
- "torch._utils._rebuild_tensor_v2"
- "torch._utils._rebuild_tensor_v3"
- "torch._utils._rebuild_wrapper_subclass"
- "torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor"
- "torchao.dtypes.floatx.float8_layout.Float8AQTTensorImpl"
- "torchao.dtypes.floatx.float8_layout.Float8Layout"
- "torchao.float8.inference.Float8MMConfig"
- "torchao.quantization.granularity.PerRow"
- "torchao.quantization.linear_activation_quantized_tensor.LinearActivationQuantizedTensor"
- "torchao.quantization.quant_api._input_activation_quant_func_fp8"
- "torchao.quantization.quant_primitives.ZeroPointDomain"
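The warning above comes from a static pickle scanner: a `.bin` checkpoint is a Python pickle, and unpickling it can import (and, for a malicious file, execute) arbitrary code, so the scanner lists every `module.attribute` the file would pull in before you decide to trust it. A similar audit can be run locally with the standard library's `pickletools`, which walks the opcode stream without ever unpickling anything. This is a minimal sketch, not the Hub's actual scanner, and `pickle_imports` is a name chosen here for illustration:

```python
import pickle
import pickletools
from collections import OrderedDict


def pickle_imports(data: bytes) -> set[str]:
    """Statically list every module.name a pickle would import,
    without unpickling (and thus without executing) anything."""
    imports = set()
    strings = []  # arguments of recent string-pushing opcodes
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("GLOBAL", "INST"):
            # pickletools reports the pair as "module name"
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # module and qualname were pushed as the two preceding strings
            if len(strings) >= 2:
                imports.add(f"{strings[-2]}.{strings[-1]}")
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return imports


blob = pickle.dumps(OrderedDict(a=1))
print(pickle_imports(blob))  # contains 'collections.OrderedDict'
```

Run against one of these shards, a scan like this should surface exactly the `torch` and `torchao` globals listed above; loading the checkpoint then means either trusting those classes or, on recent PyTorch, allow-listing them via `torch.serialization.add_safe_globals` so that `torch.load(..., weights_only=True)` accepts them. The usual permanent remedy for the warning is converting the checkpoint to safetensors, which stores raw tensors and requires no code execution to load.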