
Uploaded Model

  • Developed by: Alpha AI
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.2-3B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
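For reference, the sketch below shows what a typical Unsloth + TRL supervised fine-tuning setup for this base model looks like. It is a minimal illustration only: the dataset file, LoRA settings, and hyperparameters are assumptions, not the actual recipe used to train this model, and exact trainer arguments vary with the TRL version.

```python
# Minimal Unsloth + TRL fine-tuning sketch (illustrative, not the actual recipe).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit to keep memory usage low on consumer GPUs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Placeholder dataset: "conversations.jsonl" is a hypothetical file name;
# swap in your own conversational data.
dataset = load_dataset("json", data_files="conversations.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # older TRL API; newer versions use SFTConfig
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```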

AlphaAI-Chatty-INT2

Overview

AlphaAI-Chatty-INT2 is a fine-tuned meta-llama/Llama-3.2-3B-Instruct model optimized for empathic, chatty, and engaging conversations. Building on the foundations of our INT1 release, the INT2 version includes enhanced conversational capabilities that make it more context-aware, responsive, and personable. Trained on an improved proprietary conversational dataset, this model is particularly suitable for local deployments requiring a natural, interactive, and empathetic dialogue experience.

The model is available in GGUF format and has been quantized to different levels to support various hardware configurations.

This model is an upgrade to AlphaAI-Chatty-INT1. The previous release is available at https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1.

Model Details

  • Base Model: meta-llama/Llama-3.2-3B-Instruct
  • Fine-tuned By: Alpha AI
  • Training Framework: Unsloth

Quantization Levels Available

  • q4_k_m
  • q5_k_m
  • q8_0
  • 16-bit (full precision)

(Note: the 16-bit link for the previous INT1 release is https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1.)

Format: GGUF (Optimized for local deployments)
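As a quick-start sketch, the snippet below loads one of the GGUF quants locally with llama-cpp-python. The repo id matches this card, but the filename glob is an assumption; check the repository's file list for the exact quant file name.

```python
from llama_cpp import Llama

# Download and load the q4_k_m quant directly from the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="alpha-ai/AlphaAI-Chatty-INT2-GGUF",
    filename="*q4_k_m.gguf",  # glob pattern; assumed to match the q4_k_m file
    n_ctx=4096,               # context window size
    n_gpu_layers=-1,          # offload all layers to the GPU if one is available
)
```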

Use Cases

  • Conversational AI – Ideal for chatbots, virtual assistants, and customer support where empathetic and engaging interaction is crucial.
  • Local AI Deployments – Runs efficiently on local machines, eliminating the need for cloud-based inference (see the chat example below).
  • Research & Experimentation – Suitable for studying advanced conversational AI techniques and fine-tuning on specialized or proprietary datasets.
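Building on the loading sketch above, here is a short chat-completion call that illustrates the conversational use case; the system prompt, user message, and sampling settings are only examples.

```python
# Chat-style generation using the `llm` object created in the previous sketch.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a warm, empathetic assistant."},
        {"role": "user", "content": "I had a rough day at work. Can we talk?"},
    ],
    max_tokens=256,
    temperature=0.8,  # illustrative setting; tune for more or less creativity
)
print(response["choices"][0]["message"]["content"])
```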

Model Performance

AlphaAI-Chatty-INT2 has been further optimized to deliver:

  • Empathic and Context-Aware Responses – Improved understanding of user inputs with a focus on empathetic replies.
  • High Efficiency on Consumer Hardware – Maintains quick inference speeds even with more advanced conversation modeling.
  • Balanced Coherence and Creativity – Strikes an ideal balance for real-world dialogue applications, allowing for both coherent answers and creative flair.

Limitations & Biases

Like any AI system, this model may exhibit biases stemming from its training data. Users should employ it responsibly and consider additional fine-tuning if needed for sensitive or specialized applications.

License

Released under the Apache-2.0 license. For full details, please consult the license file in the Hugging Face repository.

Acknowledgments

Special thanks to the Unsloth team for their optimized training pipeline for LLaMA models. Additional appreciation goes to Hugging Face’s TRL library for enabling accelerated and efficient fine-tuning workflows.
