name: pygmalion-2-7b
license: llama2
tags: 
- PygmalionAI
- TheBloke
- text-generation
- text2text-generation
- natural-language
- instruct
- Llama
- Meta
type: 
- 4GB
- llama2
- role-playing
- nsfw
config: 
- bf16
- 4-channel-in
- 4-channel-out
resolutions: 
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
language:
- en
size: 
- 4081004224
- 4783156928
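The two `size` entries are file sizes in bytes; converting them to GiB shows where the "4GB" tag comes from. A minimal sketch (the pairing of these sizes with the 4-bit and 5-bit quantizations is an assumption, not stated in the card):

```python
# File sizes in bytes, taken from the `size` field above.
SIZES_BYTES = [4081004224, 4783156928]

def bytes_to_gib(n: int) -> float:
    """Convert a byte count to binary gigabytes (GiB)."""
    return n / 2**30

for n in SIZES_BYTES:
    # Prints roughly 3.80 GiB and 4.45 GiB respectively.
    print(f"{n} bytes ≈ {bytes_to_gib(n):.2f} GiB")
```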
use: 
- fiction
- entertainment
shortcomings:
- misinformation
- profanity
sources:
funded_by: SpicyChat
train_hardware: 
pipeline_tag: text-generation
examples:
- "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses."
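The example above is the system portion of the Pygmalion-2 prompt format, with `{{char}}` and `{{persona}}` as substitution placeholders. A minimal sketch of filling the template and appending a user turn, assuming the `<|user|>`/`<|model|>` turn tokens of the Pygmalion-2 format (the function and variable names here are illustrative, not part of the card):

```python
# System template from the card's example, with {{char}}/{{persona}}
# rewritten as Python format fields.
SYSTEM_TEMPLATE = (
    "<|system|>Enter RP mode. Pretend to be {char} whose persona follows: "
    "{persona} You shall reply to the user while staying in character, "
    "and generate long responses."
)

def build_prompt(char: str, persona: str, user_message: str) -> str:
    """Fill the role-play template and append the user's first turn.

    The prompt ends with <|model|> so the model generates the
    character's reply next.
    """
    system = SYSTEM_TEMPLATE.format(char=char, persona=persona)
    return f"{system}<|user|>{user_message}<|model|>"
```

The resulting string would be passed as the raw prompt to whatever GGUF runtime loads the model (e.g. a llama.cpp-based loader).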
format: GGUF
params: 6.74B
architecture: llama
quantization:
- 4-bit
- 5-bit

Datasets used to train darkshapes/pygmalion-2-7b-gguf