Model Card for Minerva-7B-instruct-v1.0 in GGUF Format

Minerva is the first family of LLMs pretrained from scratch on Italian, developed by Sapienza NLP within the Future Artificial Intelligence Research (FAIR) project, in collaboration with CINECA and with additional contributions from Babelscape and the CREATIVE PRIN project. Notably, the Minerva models are truly open (data and model) Italian-English LLMs, with approximately half of the pretraining data consisting of Italian text. The full technical report is available at https://nlp.uniroma1.it/minerva/blog/2024/11/26/tech-report.

Description

This is the model card for the GGUF conversion of Minerva-7B-instruct-v1.0, a 7-billion-parameter model trained on almost 2.5 trillion tokens (1.14 trillion in Italian, 1.14 trillion in English, and 200 billion in code). This repository contains the model weights in float32 and float16 formats, as well as quantized versions in 8-bit, 6-bit, and 4-bit precision.

Important: This model is compatible with llama.cpp updated to at least commit 6fe624783166e7355cec915de0094e63cd3558eb (5 November 2024).
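As a sketch of how the quantized weights can be used with llama.cpp (the exact GGUF filename below is an assumption for illustration; check the repository's file listing for the real names):

```shell
# Download one of the quantized GGUF files from the Hugging Face Hub.
# The filename "Minerva-7B-instruct-v1.0.Q4_K_M.gguf" is hypothetical --
# substitute an actual file from the repository.
huggingface-cli download sapienzanlp/Minerva-7B-instruct-v1.0-GGUF \
    Minerva-7B-instruct-v1.0.Q4_K_M.gguf --local-dir .

# Run an interactive chat session with llama.cpp
# (requires a build at or after commit 6fe6247, 5 November 2024).
./llama-cli -m Minerva-7B-instruct-v1.0.Q4_K_M.gguf --conversation
```

Lower-bit quantizations trade some output quality for a smaller memory footprint; the 4-bit files are the usual starting point for consumer hardware.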

Format: GGUF
Model size: 7.4B parameters
Architecture: llama
Available precisions: 32-bit (float32), 16-bit (float16), 8-bit, 6-bit, 4-bit

