# Llama-3.1-Nemotron-92B-Instruct-HF-late
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
### Models Merged

The following models were included in the merge:

- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
  - sources:
      - layer_range: [0, 55]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - sources:
      - layer_range: [50, 60]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - sources:
      - layer_range: [55, 65]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - sources:
      - layer_range: [60, 70]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - sources:
      - layer_range: [65, 75]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - sources:
      - layer_range: [70, 80]
        model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
```
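As a quick sanity check (not part of the original card), the slice definitions above can be tallied to see where the "92B" in the model name comes from: the passthrough merge stacks overlapping layer ranges of the 80-layer base model into a deeper network. The layer counts below follow directly from the config; the parameter estimate is a rough depth-proportional scaling that ignores embeddings and the output head.

```python
# Layer ranges copied from the merge config above (end index exclusive).
slices = [(0, 55), (50, 60), (55, 65), (60, 70), (65, 75), (70, 80)]

# Total decoder layers in the merged model.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 105 layers, vs. 80 in Llama-3.1-70B

# Rough size estimate: scale the ~70B base parameters by the depth ratio.
# This ignores the (shared) embeddings and LM head, so it is approximate.
approx_params_b = 70 * total_layers / 80
print(round(approx_params_b, 1))  # ~91.9B, consistent with the "92B" name
```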
### Model tree for mav23/Llama-3.1-Nemotron-92B-Instruct-HF-late-GGUF

- Base model: meta-llama/Llama-3.1-70B
- Finetuned from: meta-llama/Llama-3.1-70B-Instruct