meta-llama/Llama-2-7b-chat-hf - W4A16 Compression

This model is a W4A16-compressed version of meta-llama/Llama-2-7b-chat-hf produced with llmcompressor. An illustrative reproduction sketch follows the configuration list below.

Compression Configuration

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Compression Scheme: W4A16
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Dataset Split: train_sft
  • Number of Samples: 512
  • Preprocessor: chat
  • Maximum Sequence Length: 4096
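
The exact recipe used for this checkpoint is not published in this card. The sketch below shows how a run with the listed settings might look using llmcompressor's GPTQModifier and oneshot entry point; the GPTQ recipe, the lm_head exclusion, and the chat-template preprocessing (the likely meaning of "Preprocessor: chat") are assumptions that mirror the configuration above, not the author's verified script. Import paths may differ across llmcompressor versions.

```python
# Illustrative reproduction sketch -- recipe, lm_head exclusion, and preprocessing
# are assumptions mirroring the configuration list above, not the published script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot  # newer versions: from llmcompressor import oneshot

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096
SAVE_DIR = "Llama-2-7b-chat-hf_W4A16"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: train_sft split of ultrachat_200k, rendered through the chat template.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(
        ex["text"],
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    ),
    remove_columns=ds.column_names,
)

# W4A16: 4-bit weights, 16-bit activations; the lm_head is commonly left unquantized.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```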

Sample Output

Prompt:

<s>[INST] Who is Alan Turing? [/INST]

Output:

<s><s> [INST] Who is Alan Turing? [/INST]  Alan Turing (1912-1954) was a British mathematician, computer scientist, logician, and cryptographer who made significant contributions to the fields of computer science, artificial intelligence, and cryptography.

Turing was born in London, England, and grew up in a family of intellectuals. He was educated at Cambridge University, where he studied mathematics and logic, and later worked at the University of Manchester, where he developed the concept of the universal Turing machine, a theoretical model for a computer.

During World War II, Turing worked at Blet
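
The card does not show how the sample above was generated. As an illustration only, a compressed-tensors checkpoint like this one can typically be served with vLLM; the repository id is taken from this card's page, and the sampling settings are assumptions.

```python
# Minimal inference sketch (assumption: the checkpoint loads through vLLM's
# compressed-tensors support; sampling settings are illustrative).
from vllm import LLM, SamplingParams

llm = LLM(model="espressor/meta-llama.Llama-2-7b-chat-hf_W4A16")

# Llama-2 chat prompt format, matching the sample prompt above. The tokenizer also
# adds its own BOS token, consistent with the doubled <s><s> in the sample output.
prompt = "<s>[INST] Who is Alan Turing? [/INST]"

params = SamplingParams(temperature=0.0, max_tokens=128)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```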

Evaluation
