Overview

nephra v1 is a model built primarily for roleplaying sessions, trained on a mix of roleplay and instruction-style datasets.

Model Details

Inference Guidelines

import transformers
import torch

model_id = "yodayo-ai/nephra_v1.0"

pipeline = transformers.pipeline(
  "text-generation",
  model=model_id,
  model_kwargs={"torch_dtype": torch.bfloat16},
  device_map="auto",
)

messages = [
  {"role": "system", "content": "You are to play the role of a cheerful assistant."},
  {"role": "user", "content": "Hi there, how's your day?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
  messages,
  tokenize=False,
  add_generation_prompt=True
)

outputs = pipeline(
  prompt,
  max_new_tokens=512,
  # Stop on either the Llama-3 turn delimiter or the model's own EOS token.
  eos_token_id=[
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    pipeline.tokenizer.eos_token_id,
  ],
  do_sample=True,
  # Sampling values below match the Recommended Settings section.
  temperature=1.12,
  min_p=0.075,
  repetition_penalty=1.1,
)
print(outputs[0]["generated_text"][len(prompt):])
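The sampler settings above pair a relatively high temperature (1.12) with min-p filtering: min-p keeps only tokens whose probability is at least `min_p` times the most likely token's probability, pruning the long tail that a high temperature would otherwise expose. A minimal pure-Python sketch of that filtering step (the function name and toy distribution are illustrative, not part of the transformers API):

```python
def min_p_filter(probs, min_p=0.075):
    """Zero out tokens whose probability falls below min_p * max(probs),
    then renormalize the surviving probabilities."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.075 and a top probability of 0.5, the cutoff is 0.0375,
# so the 0.01 tail token is removed while 0.04 survives.
print(min_p_filter([0.5, 0.3, 0.15, 0.04, 0.01]))
```

In practice transformers applies this internally when `min_p` is passed to generation, as in the snippet above.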

Recommended Settings

To guide the model toward high-quality responses, use the following settings:

Prompt Format: same as Llama-3-Instruct
Temperature: 1.12
min-p: 0.075
Repetition Penalty: 1.1
Custom Stopping Strings: "\n{{user}}", "<", "```" (the model occasionally produces broken generations; these strings cut them off)
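`apply_chat_template` handles the prompt formatting automatically, but for reference, the Llama-3-Instruct layout the model expects can be sketched by hand (a simplified reconstruction of the template, assuming no tool calls):

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Hand-rolled approximation of the Llama-3-Instruct chat template."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here.
        prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are to play the role of a cheerful assistant."},
    {"role": "user", "content": "Hi there, how's your day?"},
]
print(build_llama3_prompt(messages))
```

Prefer the tokenizer's own `apply_chat_template` in real use; this sketch only shows why the `<|eot_id|>` token appears in the `eos_token_id` list above.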

License

Nephra v1 is released under the Meta Llama 3 Community License Agreement.
