# meta-llama/Meta-Llama-3-70B-Instruct - W8A8_FP8 Compression
This is a compressed version of meta-llama/Meta-Llama-3-70B-Instruct, produced with [llmcompressor](https://github.com/vllm-project/llm-compressor).
## Compression Configuration
- Base Model: meta-llama/Meta-Llama-3-70B-Instruct
- Compression Scheme: W8A8_FP8
- Dataset: HuggingFaceH4/ultrachat_200k
- Dataset Split: train_sft
- Number of Samples: 512
- Preprocessor: chat
- Maximum Sequence Length: 8192
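
The card does not include the exact recipe, but the configuration above maps onto llmcompressor's one-shot FP8 flow. Below is a minimal sketch of how it might be reproduced; the recipe details (`targets="Linear"`, `ignore=["lm_head"]`, static `"FP8"` scheme), the tokenization step, and the save path are assumptions rather than values taken from this card, and import paths vary across llmcompressor versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "meta-llama/Meta-Llama-3-70B-Instruct"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 8192
SAVE_DIR = "Meta-Llama-3-70B-Instruct_W8A8_FP8"  # assumed output path

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration set: 512 ultrachat_200k samples rendered through the chat template,
# matching the "chat" preprocessor listed above
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# FP8 weights and activations on all Linear layers; lm_head kept in full precision
recipe = QuantizationModifier(targets="Linear", scheme="FP8", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```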
## Sample Output

Sample generation failed when this card was built; instead of a prompt/output pair, the run aborted with the following CUDA error:

```
CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
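
The error above comes from the card-generation run, not necessarily from the checkpoint itself. Checkpoints in this format are typically served with vLLM on FP8-capable GPUs; a minimal inference sketch follows, where the repo id is taken from this card but the `tensor_parallel_size` and `max_model_len` values are assumptions (the FP8 weights of a 70B model alone occupy roughly 70 GB).

```python
from vllm import LLM, SamplingParams

# ~70 GB of FP8 weights, so the model is sharded across GPUs (4-way is assumed here)
llm = LLM(
    model="espressor/meta-llama.Meta-Llama-3-70B-Instruct_W8A8_FP8",
    tensor_parallel_size=4,  # adjust to the number of available GPUs
    max_model_len=8192,
)

params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Write a haiku about model compression."], params)
print(outputs[0].outputs[0].text)
```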
## Evaluation
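
No results are listed in this section yet. One way to fill it in would be EleutherAI's lm-evaluation-harness; the sketch below assumes its `lm_eval` Python API, and the task choice (gsm8k) is illustrative rather than a benchmark reported for this model.

```python
import lm_eval

# Illustrative run: gsm8k is an assumed task, not a result reported in this card
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=espressor/meta-llama.Meta-Llama-3-70B-Instruct_W8A8_FP8,"
        "dtype=auto,parallelize=True"  # parallelize=True shards the model across GPUs
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=4,
)
print(results["results"]["gsm8k"])
```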