---
language:
- en
pipeline_tag: text-generation
---

## This model was fine-tuned on a combination of 'uncensored' datasets available on Hugging Face, as well as the 'Uncensored_mini' dataset.

## In my opinion, this was a waste of time.

### The result was **"too toxic."** I prefer an LLM that maintains a level of respect when addressing the user, without being overly limited or censored.
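To download the model locally, the snippet below uses `snapshot_download` from `huggingface_hub`; a token is optional for public repositories but helps avoid rate limits: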
```python
from huggingface_hub import snapshot_download

# Optional: a Hugging Face token is not required for public repos,
# but is recommended to avoid rate limits
token = None

# Repository ID to download
repo_id = "ICEPVP8977/Uncensored_llama_3.2_3b_safetensors"

try:
    snapshot_download(repo_id=repo_id,
                      token=token,
                      local_dir="./model")
    print(f"Successfully downloaded {repo_id} to ./model")
except Exception as e:
    print(f"Error downloading repository: {e}")
```
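Then load the model and tokenizer with `transformers` (using `device_map="auto"` requires the `accelerate` package):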
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the downloaded weights in half precision and place them
# automatically on the available device(s)
model = AutoModelForCausalLM.from_pretrained("./model", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("./model")
```
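Finally, generate a response: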
```python
prompt = "Your_question_here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

max_new_tokens = 2000  # Set the maximum number of tokens in the response

outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
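The snippet above feeds the prompt in as raw text. If the underlying checkpoint is an instruct/chat variant of Llama 3.2 (whose tokenizer ships with a chat template), wrapping the question in that template usually yields better answers. A minimal sketch, under that assumption:

```python
# Sketch: assumes the tokenizer ships with a chat template,
# as standard Llama 3.2 Instruct checkpoints do.
messages = [
    {"role": "user", "content": "Your_question_here"},
]

# apply_chat_template formats the conversation and appends the
# assistant header so the model starts answering
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=2000)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```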