Hypernova-experimental

Quantized to 4-bit with a group size of 128 using AutoGPTQ and 🤗 Optimum
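
As a rough sketch (assuming a recent version of transformers with optimum and auto-gptq installed, plus a CUDA GPU — these requirements are assumptions, not from the original card), the quantized weights can be loaded directly through 🤗 Transformers, which picks up the GPTQ settings stored with the model:

```python
# Minimal loading sketch; assumes `pip install transformers optimum auto-gptq`
# and a CUDA-capable GPU are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theNovaAI/Hypernova-experimental-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit, group-size-128 GPTQ parameters are read from the repo's
# quantization config, so no extra arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```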

I tried some new things this time around, and the outcome was very different from what I expected. This is an experimental model created for the development of NovaAI.

It is good at chatting and some roleplay (RP), but it sometimes mixes up characters and can occasionally struggle with context.

Prompt Template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
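
For illustration only (the helper and the example instruction below are hypothetical, not part of the original card), filling in the Alpaca template before tokenizing and generating might look like this:

```python
# Hypothetical helper for building Alpaca-style prompts.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a plain instruction in the Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Introduce yourself in one sentence."))
```

The resulting string is then tokenized and passed to `model.generate` as usual.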

Models Merged

The following models were included in the merge:

Some fine-tuning was done as well.
