
LwQ-30B-Instruct

LwQ-30B-Instruct (Llama with Questions), based on the Llama 3.1 collection of multilingual large language models (LLMs), is a set of pre-trained and instruction-tuned generative models optimized for multilingual dialogue use cases. These models outperform many available open-source alternatives.

Model Architecture: Llama 3.1 is an auto-regressive language model utilizing an optimized transformer architecture. The tuned versions undergo supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to better align with human preferences for helpfulness and safety. LwQ-30B is trained on synthetic reasoning datasets for mathematical reasoning and context-based problem-solving, with a focus on following instructions or keywords embedded in the input.

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

import transformers
import torch

model_id = "prithivMLmods/LwQ-30B-Instruct"

# Build a text-generation pipeline in bfloat16, sharding the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the full chat history; the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1])
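The Auto classes route mentioned above is not shown in the card; the following is a minimal sketch of that path, assuming the repository ships a standard Llama 3.1 chat template usable with apply_chat_template (the prompt contents are illustrative).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/LwQ-30B-Instruct"

# Load the tokenizer and model in bfloat16, sharding across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Assumes the repo provides a chat template; apply it and generate.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))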

Intended Use

  1. Multilingual Dialogue Systems: LwQ-30B-Instruct is designed for creating conversational agents capable of engaging in dialogues across multiple languages, making it suitable for global customer support and multilingual chatbots.

  2. Instruction-Following Tasks: The model excels at tasks requiring adherence to specific instructions or keywords embedded in the input, such as form completion, task automation, and guided workflows.

  3. Mathematical Reasoning: With specialized training on synthetic reasoning datasets, LwQ-30B can perform complex mathematical reasoning and problem-solving, making it useful for educational platforms, tutoring systems, and research assistance (a prompt sketch follows this list).

  4. Context-Based Problem Solving: The model is optimized to handle contextually rich problems, allowing it to generate context-aware responses for applications such as summarization, question answering, and decision support.

  5. Content Generation: It can generate high-quality content, including articles, reports, summaries, and creative writing, across various domains and languages.

  6. Knowledge Retrieval: LwQ-30B can retrieve and synthesize information from its trained data to answer factual questions, assist in research, and support knowledge-intensive tasks.
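As referenced in the mathematical-reasoning item above, here is a short prompt sketch reusing the pipeline from the earlier example; the system prompt, question, and decoding settings are illustrative assumptions, not recommendations from the model authors.

# Reuses the `pipeline` object built in the quick-start example above.
math_messages = [
    {"role": "system", "content": "You are a careful math tutor. Show your reasoning step by step."},
    {"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed in km/h?"},
]

outputs = pipeline(
    math_messages,
    max_new_tokens=512,
    do_sample=False,  # greedy decoding; an assumption, not an official setting
)
# The last chat turn holds the assistant's answer.
print(outputs[0]["generated_text"][-1]["content"])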

Limitations

  1. Performance Variability Across Languages: While the model supports multiple languages, its performance may vary depending on the language, with better results for languages more prevalent in its training data.

  2. Handling of Niche Topics: The model may struggle to provide accurate information or generate high-quality content for highly specialized or niche topics not covered extensively in its training data.

  3. Complex Multi-Step Reasoning: Although trained on reasoning datasets, the model may still occasionally produce incorrect or incomplete results for multi-step or highly complex reasoning tasks.

  4. Bias and Ethical Concerns: Since LwQ-30B is trained on large, publicly available datasets, it may inherit biases present in the data, leading to potential ethical concerns or inappropriate outputs in certain contexts.

  5. Context Limitations: The model has a finite context window, which may lead to incomplete understanding or response generation for tasks requiring extensive context or very long input texts.

  6. Resource Intensive: As a large-scale model with 30 billion parameters, it requires substantial computational resources for both inference and deployment, limiting its use in resource-constrained environments.

  7. Instruction Ambiguity: The model’s performance can degrade when instructions are ambiguous, vague, or conflicting, potentially leading to outputs that do not align with user expectations.

Model size: 32.5B params · Tensor type: BF16 (Safetensors)