Qwen2-96M

Qwen2-96M is a small language model based on the Qwen2 architecture, trained from scratch on English datasets with a context length of 8192 tokens. With only 96 million parameters, it serves as a lightweight base model that can be fine-tuned for specific tasks (a fine-tuning sketch follows the usage example below).

Due to its compact size, the model has significant limitations in reasoning, factual knowledge, and general capabilities compared to larger models. It may produce incorrect, irrelevant, or nonsensical outputs. Additionally, as it was trained on internet text data, it may contain biases and potentially generate inappropriate content.

Usage

pip install transformers==4.49.0 torch==2.6.0
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch

model_path = "Felladrin/Qwen2-96M"
prompt = "I've been thinking about"

# Load the tokenizer and model, moving the model to GPU when available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)

# Print tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    streamer=streamer,
    max_length=tokenizer.model_max_length,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    do_sample=True,
    repetition_penalty=1.05,
)
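
The same generation can also be run through the high-level pipeline API. A minimal sketch, using the same prompt and sampling settings; the max_new_tokens cap is an arbitrary choice for illustration:

import torch
from transformers import pipeline

# device=0 selects the first GPU; -1 falls back to CPU.
generate = pipeline(
    "text-generation",
    model="Felladrin/Qwen2-96M",
    device=0 if torch.cuda.is_available() else -1,
)
output = generate(
    "I've been thinking about",
    max_new_tokens=128,  # arbitrary cap for this sketch
    do_sample=True,
    repetition_penalty=1.05,
)
print(output[0]["generated_text"])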
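
Fine-tuning

Since this is a base model intended for fine-tuning, the sketch below shows one way to continue training it with the Hugging Face Trainer on a causal language modeling objective. The dataset (wikitext-2) and all hyperparameters are illustrative assumptions, not the settings used to train this model.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_path = "Felladrin/Qwen2-96M"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Hypothetical dataset: any corpus with a "text" column can be swapped in.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal-LM collator (mlm=False) pads batches and builds the labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen2-96m-finetuned",  # illustrative output path
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()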
Model details: 96.2M parameters · BF16 tensors · Safetensors format