
mNLP-project/gpt2-dpo-quantized

Tags: Text Generation · Transformers · Safetensors · gpt2 · text-generation-inference · 4-bit precision · gptq
Files and versions
2 contributors · History: 51 commits

Latest commit by Luca-Engel: AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False (a1e2b1e, verified, about 1 year ago)
  • .gitattributes · 192 Bytes · Rename gptq_model-4bit-128g.safetensors to model.safetensors · about 1 year ago
  • README.md · 5.17 kB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • config.json · 1.24 kB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • gptq_model-4bit-128g.safetensors · 201 MB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • gptq_model-8bit-128g.safetensors · 243 MB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • merges.txt · 456 kB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • model.safetensors · 201 MB · Rename gptq_model-4bit-128g.safetensors to model.safetensors · about 1 year ago
  • quantize_config.json · 266 Bytes · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • special_tokens_map.json · 583 Bytes · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • tokenizer.json · 2.11 MB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • tokenizer_config.json · 476 Bytes · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
  • vocab.json · 798 kB · AutoGPTQ model for gpt2-dpo: 4bits, gr128, desc_act=False · about 1 year ago
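
The commit messages and quantize_config.json describe an AutoGPTQ quantization of the gpt2-dpo model: 4 bits, group size 128, desc_act=False, with model.safetensors being the renamed 4-bit checkpoint and an 8-bit variant (gptq_model-8bit-128g.safetensors) also present. Below is a minimal loading sketch, assuming the auto-gptq and transformers packages are installed; exact keyword arguments can vary across auto-gptq versions, and the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint and generate text.
# Assumes `pip install auto-gptq transformers` and a CUDA device;
# argument names may differ slightly between auto-gptq versions.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "mNLP-project/gpt2-dpo-quantized"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# model.safetensors holds the 4-bit weights (renamed from gptq_model-4bit-128g);
# quantize_config.json supplies bits / group_size / desc_act to the loader.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,
)

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

To load the 8-bit variant instead, auto-gptq's model_basename argument (e.g. model_basename="gptq_model-8bit-128g") can typically be passed to from_quantized; whether that argument is accepted depends on the installed auto-gptq version.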