
Unlike our encrypted DistilBert, this model keeps its weights on Nesa's secure server; only the tokenizer is hosted on Hugging Face. You can still use the tokenizer to encode and decode text locally, then submit the token IDs for inference via the Nesa network!


###### Load the Tokenizer

```python
from transformers import AutoTokenizer

hf_token = "<HF TOKEN>"  # Replace with your Hugging Face access token
model_id = "nesaorg/Llama-3.2-1B-Instruct-Encrypted"

tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token, local_files_only=False)
```

###### Tokenize and Decode Text

```python
text = "I'm super excited to join Nesa's Equivariant Encryption initiative!"

# Encode text into token IDs
token_ids = tokenizer.encode(text)
print("Token IDs:", token_ids)

# Decode token IDs back to text
decoded_text = tokenizer.decode(token_ids)
print("Decoded Text:", decoded_text)
```
Example Output:

```
Token IDs: [128000, 1495, 1135, 2544, 6705, 284, 2219, 11659, 17098, 22968, 8707, 2544, 3539, 285, 34479]
Decoded Text: I'm super excited to join Nesa's Equivariant Encryption initiative!
```
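
The token IDs produced locally can then be submitted to the Nesa network for inference. The exact client interface is not shown here; the sketch below is purely illustrative and assumes a hypothetical HTTP endpoint (`NESA_API_URL`), a bearer-token credential (`NESA_API_KEY`), and a JSON payload shape that returns token IDs. Consult Nesa's documentation for the actual API.

```python
import requests

# Hypothetical endpoint and credentials -- replace with the values from your Nesa account.
NESA_API_URL = "<NESA INFERENCE ENDPOINT>"
NESA_API_KEY = "<NESA API KEY>"

# Send the locally produced token IDs to the network for inference (illustrative payload shape).
response = requests.post(
    NESA_API_URL,
    headers={"Authorization": f"Bearer {NESA_API_KEY}"},
    json={"model": model_id, "token_ids": token_ids},
)
response.raise_for_status()

# Decode the returned token IDs back to text with the same tokenizer.
output_ids = response.json()["token_ids"]
print("Model Output:", tokenizer.decode(output_ids, skip_special_tokens=True))
```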