
BabyDolphin-8B-LLaMA3-Uncensored

  • Developed by: babycommando
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This model, babydolphin-8b-llama3-uncensored, is an 8-billion-parameter model built on Meta's LLaMA 3 and fine-tuned on the FLAN1M-Alpaca-Uncensored split of the cognitivecomputations/dolphin dataset. It retains the standard LLaMA 3 transformer architecture, balancing generation quality against memory use and speed.

Model Description

babydolphin-8b-llama3-uncensored is designed to deliver strong language understanding and generation without the content filtering applied to most instruction-tuned models. It is well suited to applications that need high-quality text generation with minimal content restrictions.

Technical Details

  • Base Model: LLaMA 3
  • Parameters: 8 billion
  • Fine-tuning Dataset: cognitivecomputations/dolphin (FLAN1M-Alpaca-Uncensored)

Quantization and Configuration

This model is available in several GGUF quantizations to suit different deployment needs; a loading sketch follows the list:

  • f16: Fastest to convert and retains full accuracy, but inference is slow and memory-intensive.
  • q4_k_m: Recommended for general use; a good balance of speed, size, and accuracy.
  • q3_k_m: Suited to environments where model size and speed matter more than fine-grained accuracy.
  • q3_k_s: Smallest and fastest; best for very resource-constrained environments.
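
As a rough illustration, one of these GGUF files can be loaded with llama-cpp-python; the filename below is hypothetical and depends on which quantization you download:

from llama_cpp import Llama

# Load a local GGUF file (q4_k_m shown; the filename is hypothetical)
llm = Llama(
    model_path="babydolphin-8b-llama3-uncensored.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

output = llm("Hello, world!", max_tokens=64)
print(output["choices"][0]["text"])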

Intended Use

This model is intended for researchers and developers needing advanced natural language processing capabilities without censorship restrictions. It is particularly well-suited for generating text in scenarios where nuanced, unrestricted content generation is crucial.

How to Use

To run the model with Ollama, see the Ollama documentation on importing GGUF models.
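
As a sketch of that workflow, assuming one of the GGUF files has been downloaded locally (the filename below is illustrative), create a Modelfile pointing at it:

# Modelfile: point Ollama at the local GGUF file
FROM ./babydolphin-8b-llama3-uncensored.Q4_K_M.gguf

Then build and run the model:

ollama create babydolphin -f Modelfile
ollama run babydolphin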

Here is how to load and use the model in your projects using Hugging Face Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "babycommando/babydolphin-8b-llama3-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt and generate a continuation
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
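
Because the base checkpoint is a bitsandbytes 4-bit model, loading full-precision weights may use more memory than necessary. As a sketch, assuming the bitsandbytes and accelerate packages are installed, the model can instead be loaded quantized:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit on load, cutting memory use roughly 4x vs. f16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "babycommando/babydolphin-8b-llama3-uncensored",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)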

Training Loss Over 60 Epochs

(Training loss plot not reproduced here.)

This LLaMA model was trained 2x faster with Unsloth and Hugging Face's TRL library.
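
The full training script isn't published in this card, but a minimal sketch of what an Unsloth + TRL fine-tuning setup typically looks like follows; the data-file name, text field, LoRA settings, and hyperparameters are illustrative assumptions, and exact argument names vary across TRL versions:

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named in the card header
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# The dolphin repo ships JSONL files; this file name matches the card's description
dataset = load_dataset(
    "cognitivecomputations/dolphin",
    data_files="flan1m-alpaca-uncensored.jsonl",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # field name is an assumption
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=60,  # matches the loss plot above
    ),
)
trainer.train()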

