|
--- |
|
license: apache-2.0 |
|
tags: |
|
- unsloth |
|
- trl |
|
- sft |
|
--- |
|
|
|
# DogeGPT Meme Coin
|
The meme coin will be launched soon.

Join our socials to find out more (and invest early).

All other DogeGPTs are fake; check only the following socials for updates.

Share them and mention us on X (Twitter).
|
|
|
|
|
|
|
|
|
|
|
<p align="center"> |
|
<!-- Twitter Icon --> |
|
<a href="https://x.com/doge_gpt1" target="_blank"> |
|
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Follow on Twitter"> |
|
</a> |
|
|
|
<!-- YouTube Icon --> |
|
<a href="https://www.youtube.com/@dogegpt" target="_blank"> |
|
<img src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white" alt="Subscribe on YouTube"> |
|
</a> |
|
|
|
<!-- Website Icon --> |
|
<a href="https://dogegpt.org/" target="_blank"> |
|
<img src="https://img.shields.io/badge/Website-0A66C2?style=for-the-badge&logo=google-chrome&logoColor=white" alt="Visit Our Website"> |
|
</a> |
|
</p> |
|
|
|
|
|
# DogeGPT1-1B
|
|
|
 |
|
|
|
|
|
DogeGPT1-1B is an open-source **1.24B-parameter Large Language Model (LLM)** designed to bring the fun of meme coins and the power of AI together! Built on the **LLaMA architecture**, DogeGPT is tailored for conversational AI applications with a playful twist. Whether you're a meme coin enthusiast, a developer, or an AI explorer, DogeGPT is here to spark your creativity.
|
|
|
|
|
**3B- and 8B-parameter LLMs will be announced soon.**
|
|
|
--- |
|
|
|
## Model Overview
|
|
|
- **Model Name**: DogeGPT1-1B |
|
- **Architecture**: LLaMA |
|
- **Model Size**: 1.24B parameters |
|
- **Quantization Formats**: GGUF (2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit) |
|
- **License**: Apache 2.0 |
|
- **Tags**: `PyTorch`, `LLaMA`, `TRL`, `GGUF`, `conversational` |
|
|
|
|
--- |
|
|
|
## Features
|
|
|
- **Conversational AI**: Perfect for building chatbots, virtual assistants, or meme-themed conversational models. |
|
- **Quantization Support**: Includes efficient GGUF formats for deployment in resource-constrained environments (see the sketch after this list).
|
- **Open Source**: Fully available under the permissive Apache 2.0 license. |
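
As a minimal sketch of running one of the GGUF quantizations with `llama-cpp-python` (a common runtime for GGUF files; the filename below is an assumption, so substitute whichever quantization file you actually downloaded):

```python
from llama_cpp import Llama

# Load a local GGUF file; the filename is hypothetical,
# use the quantization file you actually downloaded.
llm = Llama(model_path="DogeGPT1-1B.Q4_K_M.gguf", n_ctx=2048)

# Run a short completion
output = llm("What is DogeGPT?", max_tokens=50)
print(output["choices"][0]["text"])
```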
|
|
|
--- |
|
|
|
## Getting Started
|
|
|
### Installation |
|
|
|
Install the required dependencies:
|
|
|
```bash |
|
pip install torch transformers huggingface_hub
|
``` |
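
Optionally, if you want the model files on disk first (for example, to pick a specific GGUF quantization), `huggingface_hub` can download the whole repository. A minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download the repository into the local Hugging Face cache
# and return the path to the downloaded files
local_dir = snapshot_download(repo_id="Doge-GPT/DogeGPT1-1B")
print(local_dir)
```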
|
|
|
### Usage Example |
|
|
|
Here's how to load DogeGPT1-1B using `transformers`:
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Doge-GPT/DogeGPT1-1B")
tokenizer = AutoTokenizer.from_pretrained("Doge-GPT/DogeGPT1-1B")

# Generate text
input_text = "What is DogeGPT?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
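
Greedy decoding (the default above) can be repetitive. For livelier, more meme-flavoured replies you can enable sampling; the `temperature` and `top_p` values below are illustrative starting points rather than tuned recommendations:

```python
# Continues from the snippet above: sample instead of greedy decoding
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,  # illustrative value; tune for your use case
    top_p=0.95,       # nucleus sampling cutoff, also illustrative
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```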
|
|