meta-llama/Llama-2-7b-chat-hf - W8A8_int8 Compression

This is a compressed version of meta-llama/Llama-2-7b-chat-hf produced with llmcompressor. The compression settings are listed below, followed by a sketch of the corresponding one-shot workflow.

Compression Configuration

  • Base Model: meta-llama/Llama-2-7b-chat-hf
  • Compression Scheme: W8A8_int8
  • Dataset: HuggingFaceH4/ultrachat_200k
  • Dataset Split: train_sft
  • Number of Samples: 512
  • Preprocessor: chat
  • Maximum Sequence Length: 4096
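
The card does not include the exact compression recipe, so the sketch below follows the standard llmcompressor one-shot W8A8-INT8 flow (SmoothQuant followed by GPTQ) with the settings listed above. The specific modifiers and their hyperparameters are assumptions, not a record of how this checkpoint was actually produced.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: chat-preprocessed ultrachat_200k, as listed in the configuration.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(ex["text"], max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False),
    remove_columns=ds.column_names,
)

# Assumed recipe: SmoothQuant to ease activation quantization, then GPTQ for INT8 weights and activations.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained("Llama-2-7b-chat-hf_W8A8_int8", save_compressed=True)
tokenizer.save_pretrained("Llama-2-7b-chat-hf_W8A8_int8")
```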

Sample Output

Prompt:

<s>[INST] Who is Alan Turing? [/INST]

Output:

<s><s> [INST] Who is Alan Turing? [/INST]  Alan Turing (1912-1954) was a British mathematician, computer scientist, and codebreaker who made significant contributions to the fields of computer science, artificial intelligence, and cryptography.

Turing was born in London, England, and grew up in a family of intellectuals. He was educated at Cambridge University, where he studied mathematics and logic, and later worked at the University of Manchester, where he made important contributions to the field of computer science.

During World War II, Turing worked at Bletchley Park, a top-secret government facility
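
The card does not state how this sample was generated. A minimal way to reproduce a similar completion, assuming the checkpoint is served with vLLM (which supports compressed-tensors W8A8 checkpoints), is sketched below; the sampling settings are illustrative.

```python
from vllm import LLM, SamplingParams

# Llama-2 chat prompt format as shown above; vLLM prepends the BOS token (<s>) itself.
prompt = "[INST] Who is Alan Turing? [/INST]"

llm = LLM(model="espressor/meta-llama.Llama-2-7b-chat-hf_W8A8_int8")
params = SamplingParams(temperature=0.0, max_tokens=128)

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```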

Evaluation
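
No evaluation results are reported for this checkpoint. One common way to benchmark it is lm-evaluation-harness; the sketch below is illustrative, and the task list is an assumption rather than a record of what was run for this card.

```python
import lm_eval

# Illustrative benchmark run via the vLLM backend; the tasks are hypothetical choices.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=espressor/meta-llama.Llama-2-7b-chat-hf_W8A8_int8,dtype=auto",
    tasks=["arc_challenge", "hellaswag", "winogrande"],
)
print(results["results"])
```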

Checkpoint Details

  • Format: Safetensors
  • Model size: 6.74B parameters
  • Tensor types: FP16, INT8

Model repository: espressor/meta-llama.Llama-2-7b-chat-hf_W8A8_int8, quantized from meta-llama/Llama-2-7b-chat-hf and calibrated on HuggingFaceH4/ultrachat_200k.