Virtuoso-Small

GGUF Available Here

Virtuoso-Small is the debut public release of the Virtuoso series of models by Arcee.ai, designed to bring cutting-edge generative AI capabilities to organizations and developers in a compact, efficient form. With 14 billion parameters, Virtuoso-Small is an accessible entry point for high-quality instruction-following, complex reasoning, and business-oriented generative AI tasks. Its larger siblings, Virtuoso-Forte and Virtuoso-Prime, offer even greater capabilities and are available via API at models.arcee.ai.
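
The snippet below is a minimal usage sketch with Hugging Face transformers. It assumes the full-precision weights are published under the repo id `arcee-ai/Virtuoso-Small` and that the model ships a standard chat template (it is derived from Qwen2.5-14B); adjust the repo id, dtype, and prompt to your setup.

```python
# Minimal sketch: instruction-following with Hugging Face transformers.
# The repo id and generation settings below are assumptions, not an official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Virtuoso-Small"  # assumed repo id for the full-precision weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful business assistant."},
    {"role": "user", "content": "Draft a short reply to a customer asking about a delayed order."},
]
# Build the prompt from the model's chat template and generate a response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```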

Performance Benchmarks

| Group | Metric | Value (↑) | Stderr |
|---|---|---|---|
| Leaderboard | Accuracy | 0.5194 | ± 0.0046 |
| | Normalized Accuracy | 0.5814 | ± 0.0051 |
| | Exact Match | 0.3006 | ± 0.0117 |
| | Instruction-Level Loose Accuracy | 0.8489 | N/A |
| | Instruction-Level Strict Accuracy | 0.8249 | N/A |
| | Prompt-Level Loose Accuracy | 0.7856 | ± 0.0177 |
| | Prompt-Level Strict Accuracy | 0.7523 | ± 0.0186 |
| Leaderboard-BBH | Normalized Accuracy | 0.6516 | ± 0.0058 |
| Leaderboard-GPQA | Normalized Accuracy | 0.3389 | ± 0.0137 |
| Leaderboard-Math-Hard | Exact Match | 0.3006 | ± 0.0117 |
| Leaderboard-MuSR | Normalized Accuracy | 0.4286 | ± 0.0175 |

Higher is better (↑) for all metrics listed.
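
These metric names match the Open LLM Leaderboard task group in lm-evaluation-harness, so a local re-run of one sub-benchmark might look like the sketch below. The task name, repo id, dtype, and batch size are assumptions; the exact evaluation configuration behind the table above is not stated on this page.

```python
# Hedged sketch: scoring one leaderboard-style task with lm-evaluation-harness
# (pip install lm-eval). All settings here are illustrative assumptions.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Virtuoso-Small,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],  # assumed task name for the instruction-following rows
    batch_size=8,
)
print(results["results"]["leaderboard_ifeval"])
```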

Key Features

  • Compact and Efficient: With 14 billion parameters, Virtuoso-Small provides a high-performance solution optimized for smaller hardware configurations without sacrificing quality.
  • Business-Oriented: Tailored for use cases such as customer support, content creation, and technical assistance, Virtuoso-Small meets the demands of modern enterprises.
  • Scalable Ecosystem: Part of the Virtuoso series, Virtuoso-Small is fully interoperable with its larger siblings, Forte and Prime, enabling seamless scaling as your needs grow.

Deployment Options

Virtuoso-Small is available under the Apache-2.0 license and can be deployed locally or accessed through an API at models.arcee.ai. For larger-scale or more demanding applications, consider Virtuoso-Forte or Virtuoso-Prime.
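
As one concrete illustration of local deployment, here is a minimal sketch using vLLM with the AWQ-quantized checkpoint hosted on this page (AMead10/Virtuoso-Small-AWQ). This is a possible serving path under those assumptions, not an official recommendation; any runtime with AWQ support should work similarly.

```python
# Minimal local-serving sketch with vLLM and the AWQ checkpoint from this page.
# Repo id and sampling settings are assumptions for illustration only.
from vllm import LLM, SamplingParams

llm = LLM(model="AMead10/Virtuoso-Small-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

prompts = ["Summarize the key benefits of a 14B-parameter model for on-prem deployment."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```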

Quantized checkpoint (Safetensors): 3.33B stored params, tensor types I32 and FP16.

Model tree for AMead10/Virtuoso-Small-AWQ

Base model: Qwen/Qwen2.5-14B (this repository is a quantized derivative of it)