---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- uncensored
- transformers
- llama
- llama-3
- unsloth
pipeline_tag: text-generation
---

## Contributors

[DevsDoCode](https://huggingface.co/DevsDoCode) [OEvortex](https://huggingface.co/OEvortex)

# Finetune Meta Llama-3 8b to create an Uncensored Model with Devs Do Code!

Unleash the power of uncensored text generation with our model! We've fine-tuned the Meta Llama-3 8b model to create an uncensored variant intended for unrestricted, unfiltered text generation.

## Model Details

- **Model Name:** DevsDoCode/LLama-3-8b-Uncensored
- **Base Model:** meta-llama/Meta-Llama-3-8B
- **License:** Apache 2.0

## How to Use

You can load and run the uncensored model with the Hugging Face Transformers library. Here's a sample code snippet to get started:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DevsDoCode/LLama-3-8b-Uncensored"

# Llama-3 checkpoints load through the Auto classes (not the GPT-2 ones)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Now you can generate text using the model!
inputs = tokenizer("Tell me something interesting about space.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

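If you prefer the high-level API, the same checkpoint can also be wrapped in a Transformers `pipeline`. The sketch below is a minimal example; the prompt and generation settings are illustrative choices, not values prescribed by this model card. Both snippets assume a GPU environment with the `accelerate` package installed so that `device_map="auto"` works.

```python
import torch
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint.
generator = pipeline(
    "text-generation",
    model="DevsDoCode/LLama-3-8b-Uncensored",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt and sampling settings.
output = generator("Write a short story about a rogue AI.", max_new_tokens=200, do_sample=True)
print(output[0]["generated_text"])
```
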
## Notebooks

- **Finetuning Process:** [▶️ Start on Colab](https://colab.research.google.com/drive/1ZQ4E8O5QKuRfkSrjVg83uzcucDofNOpx?usp=sharing) (a sketch of a typical Unsloth fine-tuning setup is shown after this list)
- **Accessing the Model:** [▶️ Start on Colab](https://www.youtube.com/@devsdocode)

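For readers who want a rough idea of the fine-tuning process without opening the notebook, here is a minimal sketch of an Unsloth LoRA setup. The base model ID matches the Model Details above, but the sequence length, LoRA rank, and target modules are illustrative assumptions, not the exact configuration used to train this model; see the Colab notebook above for the actual process.

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit so it fits on a single consumer GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B",
    max_seq_length=2048,      # illustrative value
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                     # illustrative LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here, train with a standard SFT loop (e.g. trl's SFTTrainer) on the
# chosen dataset, then push the resulting model to the Hugging Face Hub.
```
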
## Social Media Handles

- [Telegram](https://t.me/devsdocode)
- [YouTube](https://www.youtube.com/@devsdocode)
- [Instagram](https://www.instagram.com/sree.shades_)
- [LinkedIn](https://www.linkedin.com/in/developer-sreejan/)
- [Discord](https://discord.gg/XM4Yt6y4UG)
- [Twitter](https://twitter.com/anand-sreejan)