modelId (stringlengths 5-122) | author (stringlengths 2-42) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (stringclasses, 245 values) | tags (listlengths 1-4.05k) | pipeline_tag (stringclasses, 48 values) | createdAt (timestamp[us, tz=UTC]) | card (stringlengths 1-901k)
---|---|---|---|---|---|---|---|---|---|
gaianet/Qwen2-0.5B-Instruct-GGUF | gaianet | 2024-06-07T04:09:30Z | 549 | 1 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation",
"chat",
"en",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-07T03:52:13Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
license: apache-2.0
model_creator: Qwen
model_name: Qwen2-0.5B-Instruct
quantized_by: Second State Inc.
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-0.5B-Instruct-GGUF
## Original Model
[Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct)
## Run with GaiaNet
**Prompt template**
prompt template: `chatml`
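For reference, a minimal sketch of how a `chatml` prompt is typically assembled for this model family (the system message here is illustrative):
```python
def build_chatml_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    # ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers and
    # leaves the prompt open at the assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("Hello, who are you?"))
```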
**Context size**
chat_ctx_size: `32000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
|
Felladrin/gguf-WizardVicuna-pythia-410m-deduped | Felladrin | 2024-06-07T20:00:46Z | 549 | 0 | null | [
"gguf",
"base_model:heegyu/WizardVicuna-pythia-410m-deduped",
"region:us"
]
| null | 2024-06-07T19:49:59Z | ---
base_model: heegyu/WizardVicuna-pythia-410m-deduped
---
GGUF version of [heegyu/WizardVicuna-pythia-410m-deduped](https://huggingface.co/heegyu/WizardVicuna-pythia-410m-deduped).
|
Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF | Trisert | 2024-06-26T14:28:46Z | 549 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-26T14:28:20Z | ---
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_k_s-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_k_s-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Trisert/Llama-3-Instruct-8B-SPPO-Iter3-Q5_K_S-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q5_k_s-imat.gguf -c 2048
```
|
addy88/wav2vec2-kannada-stt | addy88 | 2021-12-19T13:35:26Z | 548 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load pretrained model and processor
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-kannada-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-kannada-stt")
    # load audio
    audio_input, sample_rate = sf.read(wav_file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # inference: retrieve logits & take argmax
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
    return transcription
``` |
akahana/gpt2-indonesia | akahana | 2023-09-19T01:22:55Z | 548 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: "id"
widget:
- text: "dahulu kala ada sebuah"
---
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/gpt2-indonesia"
generator = pipeline('text-generation',
model=path)
set_seed(42)
kalimat = "dahulu kala ada sebuah"
preds = generator(kalimat,
max_length=64,
num_return_sequences=3)
for data in preds:
    print(data)
# example output:
{'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'}
{'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'}
{'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'}
``` |
mrm8488/deberta-v3-small-finetuned-cola | mrm8488 | 2023-03-20T21:23:19Z | 548 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"arxiv:2006.03654",
"arxiv:2111.09543",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
widget:
- text: They represented seriously to the dean Mary as a genuine linguist.
model-index:
- name: deberta-v3-small
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.6333205721749096
name: Matthews Correlation
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: cola
split: validation
metrics:
- type: accuracy
value: 0.8494726749760306
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjJjOTM0MTEzMzBlZWJlMWYwNzgzZmI3M2NiZWVjMDQ5ZDA1MWY0NGY3NjU1NTlmZWE3N2JjZWEzODE0ZTNkNSIsInZlcnNpb24iOjF9.Kt-3jnDTp3-Te5zMHVgG_5hpB5UMCkAMP7fmjx46QDWJfFHpyRgBlf-qz_fw5saFPAQ5G6QNq3bjEJ6mY2lhAw
- type: precision
value: 0.8455882352941176
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODAxMzNkZGEwNGNmYjk4NWRhZDk4OWE4MzA5Y2NiNjQyNTdkOWRmYjU0ZjY0YzQzYmE4ZmI3MjQ4OTk4OWIwNCIsInZlcnNpb24iOjF9.YBFnePtD5-HX15aST39xpPLroFYBgqEn5iLyVaClh62j0M7HQbB8aaGEbgaTIUIr-qz12gVfIQ7UZZIHxby_BQ
- type: recall
value: 0.957004160887656
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjRjMTVhN2E4YjNlOWY2MWRhODZiM2FhZDVjNzYwMjIyNWUyYTMxMWFlZjkwNzVhYjNmMjQxYjk2MTFmMzYyYiIsInZlcnNpb24iOjF9.40GYlU9Do74Y_gLmbIKR2WM8okz5fm-QUwJAsoIyM1UtQ71lKd-FV5Yr9CdAh3fyQYa3SMYe6tm9OByNMMw_AA
- type: auc
value: 0.9167413271767129
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzVjYmMyZDkyMzM0ZTQ1MTk0ZmY4MWUwZmIxMGRlOWMyMjJmNDRiZGNkMGZlZDZmY2I5OWI2NDYzMGQ2YzhiNSIsInZlcnNpb24iOjF9.setZF_g9x-aknFXM1k0NxrOWMJcmpNi6z7QlyfL0i6fTPJOj6SbKJ1WQb3J1zTuabgx9cOc5xgHtBH3IA7fkDQ
- type: f1
value: 0.8978529603122967
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmQ1NmNiMDhmNTU2Y2UxMzU0ODRmYmZmZTFkYjI4MzczMWUwYWQ4OTk2NGJlY2MzNmViYTA4MTRkODJhMTU1MyIsInZlcnNpb24iOjF9.GUIRxsYKgjYK63JS2rd9vCLHHmCiB4H68Xo5GxMaITfyzcUcdNc6l62njmQGrOoUidlTt1F7DzGP2Cu_Gz8HDg
- type: loss
value: 0.4050811529159546
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjBjNjg0OTFjOTc5Mzc2MWQ1ZDIyYmM5MmIzZDVlY2JjYzBlZjMyN2IwOWU4YzNlMDcwZmM0NTMxYjExY2I0MiIsInZlcnNpb24iOjF9.xayLZc97iUW0zNqG65TiW9BXoqzV-tqF8g9qGCYQ1ZGuSDSjLlK7Y4og7-wqPEiME8JtNyVxl6-ZcWnF1t8cDg
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-small fine-tuned on CoLA
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4051
- Matthews Correlation: 0.6333
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a short introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.
The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter count is 143M, since it uses a vocabulary containing 128K tokens, which introduces 98M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
## Intended uses & limitations
More information needed
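As a rough illustration of the intended use (acceptability classification of English sentences), here is a minimal sketch with the 🤗 `pipeline` API; the exact label mapping is not documented in this card:
```python
from transformers import pipeline

# Score a sentence for linguistic acceptability (CoLA task)
classifier = pipeline("text-classification", model="mrm8488/deberta-v3-small-finetuned-cola")
print(classifier("They represented seriously to the dean Mary as a genuine linguist."))
```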
## Training and evaluation data
The Corpus of Linguistic Acceptability (CoLA) in its full form consists of 10657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by their original authors. The public version provided here contains 9594 sentences belonging to training and development sets, and excludes 1063 sentences belonging to a held out test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
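The list above maps onto 🤗 `TrainingArguments` roughly as follows; this is only a sketch, as the author's exact Trainer setup is not included in the card:
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="deberta-v3-small-finetuned-cola",  # illustrative output path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```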
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 535 | 0.4051 | 0.6333 |
| 0.3371 | 2.0 | 1070 | 0.4455 | 0.6531 |
| 0.3371 | 3.0 | 1605 | 0.5755 | 0.6499 |
| 0.1305 | 4.0 | 2140 | 0.7188 | 0.6553 |
| 0.1305 | 5.0 | 2675 | 0.8047 | 0.6700 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
vasista22/whisper-hindi-small | vasista22 | 2023-04-24T21:12:33Z | 548 | 23 | transformers | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hi",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-01-08T10:55:32Z | ---
language:
- hi
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Hindi Small - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: hi_in
split: test
metrics:
- type: wer
value: 9.02
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
metrics:
- type: wer
value: 14.12
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Hindi Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Hindi data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
To evaluate this model on an entire dataset, the evaluation code available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides scripts for faster inference using whisper-jax.
To transcribe a single audio file with this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-hindi-small", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-hindi-small", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="hi", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [GramVaani ASR Corpus](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#hindi-labelled--total-duration-is-239876-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
Evaluation Data:
- [GramVaani ASR Corpus Test Set](https://sites.google.com/view/gramvaaniasrchallenge/dataset?authuser=0)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.75e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20000
- training_steps: 19377 (Initially set to 129180 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India. |
ahmed-masry/unichart-chartqa-960 | ahmed-masry | 2023-05-21T18:08:15Z | 548 | 1 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-05-21T18:04:12Z | ---
license: gpl-3.0
---
|
sgarbi/gpt-nq-prompt-generator | sgarbi | 2024-02-25T06:12:42Z | 548 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"prompt-generation",
"bot",
"assistant",
"en",
"dataset:natural_questions",
"doi:10.57967/hf/1301",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-31T03:50:25Z | ---
language: en
license: mit
pipeline_tag: text-generation
tags:
- text-generation
- prompt-generation
- bot
- assistant
widget:
- text: <|title|>Dermatologist<|prompt|>
example_title: Dermatologist
- text: <|title|>Data Scientist<|prompt|>
example_title: Data Scientist
- text: <|title|>Chess Teacher<|prompt|>
example_title: Chess Teacher
- text: <|title|>Sales Manager<|prompt|>
example_title: Sales Manager
- text: <|title|>Social Media Influencer<|prompt|>
example_title: Social Media Influencer
datasets:
- natural_questions
metrics:
- accuracy
---
# gpt-nq-prompt-generator
This is a fine-tuned version of the GPT-2 medium variant, developed to enhance prompt-engineering capabilities. Its primary capability is to generate contextualized and detailed prompts from job titles by leveraging transferred knowledge. Given a job title as input, such as "Software Engineer" or "Data Scientist", the model outputs a well-crafted description that highlights the roles, responsibilities, and typical tasks associated with that job.
## Output Examples
| Input | Output |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Space X Engineer | You are a Space X Engineer AI Bot Assistant. As a Space X Engineer, your role is to oversee and manage various aspects of space exploration and development. You will work closely with the space agency and its engineers to ensure the successful launch and landing of spacecraft and spacecraft components. Your expertise in space systems, spacecraft design, and mission planning will be crucial in ensuring the safe and efficient operation of space systems. You will analyze data, conduct simulations, and collaborate with other engineers to develop innovative solutions to optimize spacecraft performance and safety. Your contributions will play a vital role in advancing the field of space exploration and pushing the boundaries of human spaceflight.|
| Journalist | You are a Journalist AI Bot Assistant. As a Journalist, your role is to gather, investigate, and report on news stories. You will work closely with journalists, editors, and other journalists to gather information, conduct interviews, and write articles. Your expertise in storytelling, fact-checking, and fact-checking techniques will be crucial in delivering accurate and engaging news content. You will also collaborate with other journalists, editors, and stakeholders to ensure that the news you write is factually accurate and timely. Your contributions will help inform and engage readers, contributing to the public discourse and shaping public opinion. |
| Phlebotomist | You are a Phlebotomist AI Bot Assistant. As a Phlebotomist, your main responsibilities include collecting and analyzing blood samples for medical purposes. Your expertise in anatomy, physiology, and laboratory techniques will be crucial in accurately collecting and analyzing blood samples. You will work closely with healthcare professionals, such as doctors and nurses, to ensure the proper collection and analysis of blood samples. Your tasks may include taking blood samples from patients, technicians, and laboratory equipment, as well as preparing and staining slides, and maintaining accurate laboratory records. Your attention to detail, analytical skills, and knowledge of laboratory techniques will contribute to the accurate and reliable collection of blood samples. Your contributions will help in diagnosing and treating various medical conditions, as well as providing valuable insights for healthcare professionals. |
| Veterinarian | You are a Veterinarian AI Bot Assistant. As a Veterinarian, your main responsibilities include diagnosing and treating diseases and injuries in animals. Your expertise in veterinary medicine and animal behavior will be crucial in providing appropriate medical care and treatment. You will conduct thorough examinations, perform surgeries, and administer medications to alleviate pain, promote healing, and prevent further complications. Your knowledge of veterinary medicine and animal behavior will enable you to develop treatment plans that are tailored to each animal's needs. Additionally, you will collaborate with other healthcare professionals, such as veterinarians and animal behaviorists, to ensure comprehensive and coordinated care for your animals. Your contributions will help save lives and improve the well-being of animals in need. |
| Event Planner | You are an Event Planner AI Bot Assistant. As an Event Planner, your role is to create and execute event plans that meet the needs and objectives of clients or organizations. You will work closely with event planners, event coordinators, and other team members to understand the vision and objectives of the event. Your expertise in event management, marketing, and logistics will be crucial in creating compelling and successful events. You will analyze market trends, budget constraints, and vendor relationships to develop event strategies that align with client requirements and industry standards. Your contributions will play a vital role in driving the success of the event and ensuring client satisfaction. |
| Archivist | You are an Archivist AI Bot Assistant. As an Archivist, your role is to manage and preserve valuable records and documents. You will be responsible for organizing, cataloging, and organizing information related to historical, cultural, or legal contexts. Your expertise in archival principles, data analysis, and knowledge of digital formats will be crucial in ensuring the accuracy and integrity of the records you manage. You will also collaborate with researchers, historians, and other stakeholders to contribute to the preservation of knowledge and information. Your contributions will help individuals, organizations, and governments maintain a strong and informed knowledge of their past. |
| Biomedical Engineer | You are a Biomedical Engineer AI Bot Assistant. As a Biomedical Engineer, your role is to apply engineering principles and techniques to solve problems in the field of healthcare and medicine. You will work on designing and implementing medical devices, equipment, and systems that improve patient care and healthcare delivery. Your expertise in areas such as biomechanics, biomaterials, and medical imaging will be crucial in developing innovative solutions that address the challenges faced by healthcare professionals. You will collaborate with healthcare professionals, researchers, and other engineers to understand their needs and translate them into practical and safe medical devices. Your contributions will have a direct impact on improving healthcare outcomes and driving innovation in the field of biomedical engineering. |
## Intended Use
The `sgarbi/gpt-nq-prompt-generator` is designed with specificity in mind.
The model's main purpose is to generate detailed AI prompts for an array of professional roles, providing users with in-depth insights into the responsibilities and nuances of different job roles.
**Potential Applications**:
1. **Educational Platforms**: Serve as a guiding tool for platforms offering insights into various job roles.
2. **Job Portals and Recruitment**: Assist recruiters in crafting comprehensive job descriptions.
3. **Chatbots and Virtual Assistants**: Enhance chatbot systems by offering users detailed information about various professions.
**Licensing**: This model is released under the MIT license, in alignment with GPT-2's licensing provisions. During its fine-tuning, the Natural Questions (NQ) dataset, last known to be under a Creative Commons Attribution 4.0 International License as of January 2022, was utilized. Users are encouraged to keep abreast of the latest licensing terms associated with the datasets and tools they engage with.
## How To Use
1. **Input Format:** Always input the desired role or job title as a straightforward prompt. For example, "Software Engineer" or "Nurse Practitioner".
2. **Tag Use:** While the model has been trained with an array of job titles, it recognizes them best when they are input without additional context or embellishments.
3. **Result:** The model will provide a synthesized description, drawing from its training, to offer detailed information about the specified role.
### Note:
While the model recognizes a diverse range of job titles, it's always possible that some niche or highly specialized roles might receive less detailed or generic outputs. In such cases, it might be helpful to slightly modify the input or provide a broader category of the job title.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch

tokenizer = GPT2Tokenizer.from_pretrained('sgarbi/gpt-nq-prompt-generator')
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained('sgarbi/gpt-nq-prompt-generator')

def query_model(input_text):
    """Query the model and get a generated response."""
    formatted_input = f"<|title|>{input_text}<|prompt|>"
    input_ids = tokenizer.encode(formatted_input, return_tensors="pt")
    attention_mask = torch.ones(input_ids.shape, dtype=torch.long)
    # Generate a response (greedy decoding)
    output = model.generate(input_ids, attention_mask=attention_mask, max_length=256,
                            pad_token_id=tokenizer.eos_token_id, do_sample=False,
                            num_return_sequences=1)
    # Decode the output and keep everything after the <|prompt|> tag
    decoded = tokenizer.decode(output[0], skip_special_tokens=True)
    tag_index = decoded.find('<|prompt|>')
    return decoded[tag_index + len('<|prompt|>'):]

# Example use
print(query_model("Space X Engineer"))
```
# Using in Transformers:
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sgarbi/gpt-nq-prompt-generator")
pipe('<|title|>Sales Manager<|prompt|>')
```
## Limitations
- The model's responses are rooted in its training data. While it has knowledge of a wide range of professional roles, there might be some roles it is less familiar with.
- The descriptions are synthetically generated. For critical applications, users should validate the content.
## Training Data
The model was fine-tuned on a combination of the NQ (Natural Questions) dataset and a proprietary dataset. The NQ dataset (https://ai.google.com/research/NaturalQuestions/) was instrumental in teaching the model how to answer questions effectively and enabled several passes for coherent knowledge transfer. The proprietary dataset was synthesized using several advanced prompt engineering techniques with the Microsoft Semantic Kernel (https://learn.microsoft.com/en-us/semantic-kernel/overview/) and GPT-3.5-turbo, ensuring the generation of profession-specific AI prompts.
## Evaluation
The model's training progress was monitored using a loss metric. The plot below shows the trend of the training loss over the training steps: the loss decreases initially and then stabilizes, indicating that the model is learning and converging.

## Compute Infrastructure
Google Colaboratory (https://colab.research.google.com/)
## Hardware
A100
## Ethics and Bias
Users should be aware that no model is entirely free from biases. We encourage users to interpret its outputs with this in mind and report any issues they encounter.
## Licensing
This model is released under the MIT License, aligning with OpenAI's licensing terms.
## Github
https://github.com/ESgarbi/gpt-nq-prompt-generator
|
Green-Sky/nakodanei-Blue-Orchid-2x7b-GGUF-iMatrix | Green-Sky | 2024-02-15T17:54:07Z | 548 | 5 | null | [
"gguf",
"not-for-all-audiences",
"writing",
"roleplay",
"gguf-imatrix",
"en",
"base_model:nakodanei/Blue-Orchid-2x7b",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-14T13:09:06Z | ---
license: apache-2.0
tags:
- not-for-all-audiences
- writing
- roleplay
- gguf
- gguf-imatrix
base_model:
- nakodanei/Blue-Orchid-2x7b
model_type: mixtral
quantized_by: Green-Sky
language:
- en
---
llama.cpp conversion of https://huggingface.co/nakodanei/Blue-Orchid-2x7b/
Except for f16 and q8_0, every quant uses the `merge.imatrix`.
`merge.imatrix` is a merge of `kalomaze-group_10_merged.172chunks.imatrix` and `wiki.train.400chunks.imatrix`, which took ~10min + ~20min to calculate on my machine.
The full wiki.train would have taken 10h.
For more info on imatrix handling see https://github.com/ggerganov/llama.cpp/pull/5302
### ppl (512 wiki.test, 300chunks)
| quant | ppl (lower is better) |
|--------------------|-----|
| f16(baseline) | 5.8839 +/- 0.05173 |
| q8_0 | 5.8880 +/- 0.05178 |
| q5_k_m | 5.8912 +/- 0.05177 |
| q5_k_m(without-imat) | 5.8893 +/- 0.05174 |
| q4_k_m | 5.9248 +/- 0.05216 |
| q4_k_m(without-imat) | 5.9492 +/- 0.05249 |
| iq3_xxs | 6.1984 +/- 0.05475 |
| iq3_xxs(only-wiki) | 6.1796 +/- 0.05446 |
| iq3_xxs(only-kal) | 6.1984 +/- 0.05475 |
| iq3_xxs(without-imat) | 6.4228 +/- 0.05756 |
### Interesting observations
Despite `merge.imatrix` being different from `kalomaze-group_10_merged.172chunks.imatrix`, they produce the exact same quantized iq3_xxs model file (same hash, checked multiple times).
q5_k_m has a slightly higher perplexity with the imatrix, but that is probably caused by kalomaze-group_10_merged diverging enough from wiki.
|
Eric111/SOLAR-10.7B-Instruct-v1.0-DPO | Eric111 | 2024-03-01T23:14:49Z | 548 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-01T22:36:54Z | ---
library_name: transformers
license: apache-2.0
tags: []
---
# Model Card for Model ID
DPO fine-tuned version of upstage/SOLAR-10.7B-Instruct-v1.0, trained on the Intel/orca_dpo_pairs dataset.
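A minimal inference sketch, assuming the standard 🤗 transformers chat-template workflow inherited from the base SOLAR model (prompt is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Eric111/SOLAR-10.7B-Instruct-v1.0-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a single-turn chat prompt using the tokenizer's chat template
messages = [{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```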
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Warvito/animatediff-motion-adapter-sdxl-v1-0-beta | Warvito | 2024-03-15T11:47:36Z | 548 | 3 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"region:us"
]
| text-to-video | 2024-03-15T11:43:10Z | ---
pipeline_tag: text-to-video
---
AnimateDiff is a method that allows you to create videos using pre-existing Stable Diffusion text-to-image models.
This repository contains https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt converted to the Hugging Face Diffusers format using the following script, which is based on Diffusers' conversion script (available at https://github.com/huggingface/diffusers/blob/main/scripts/convert_animatediff_motion_module_to_diffusers.py).
```python
import argparse
import torch
from diffusers import MotionAdapter


def convert_motion_module(original_state_dict):
    converted_state_dict = {}
    for k, v in original_state_dict.items():
        if "pos_encoder" in k:
            continue
        else:
            converted_state_dict[
                k.replace(".norms.0", ".norm1")
                .replace(".norms.1", ".norm2")
                .replace(".ff_norm", ".norm3")
                .replace(".attention_blocks.0", ".attn1")
                .replace(".attention_blocks.1", ".attn2")
                .replace(".temporal_transformer", "")
            ] = v
    return converted_state_dict


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--ckpt_path", type=str, required=True)
    parser.add_argument("--output_path", type=str, required=True)
    parser.add_argument("--use_motion_mid_block", action="store_true")
    parser.add_argument("--motion_max_seq_length", type=int, default=32)
    parser.add_argument("--save_fp16", action="store_true")
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()

    state_dict = torch.load(args.ckpt_path, map_location="cpu")
    if "state_dict" in state_dict.keys():
        state_dict = state_dict["state_dict"]

    conv_state_dict = convert_motion_module(state_dict)
    adapter = MotionAdapter(
        use_motion_mid_block=False,
        motion_max_seq_length=32,
        block_out_channels=(320, 640, 1280),
    )
    # skip loading position embeddings
    adapter.load_state_dict(conv_state_dict, strict=False)
    adapter.save_pretrained(args.output_path)
    if args.save_fp16:
        adapter.to(torch.float16).save_pretrained(args.output_path, variant="fp16")
```
The following example demonstrates how you can utilize the motion module with an existing Stable Diffusion XL text-to-image model.
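As a starting point, here is a minimal sketch, assuming the `AnimateDiffSDXLPipeline` from recent diffusers releases and the public SDXL base checkpoint (prompt and settings are illustrative):
```python
import torch
from diffusers import AnimateDiffSDXLPipeline, MotionAdapter
from diffusers.utils import export_to_gif

# Load this motion adapter and attach it to an SDXL base model
adapter = MotionAdapter.from_pretrained(
    "Warvito/animatediff-motion-adapter-sdxl-v1-0-beta", torch_dtype=torch.float16
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# Generate a short clip and save it as a GIF
frames = pipe(
    prompt="a panda surfing on a wave, highly detailed",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=8.0,
).frames[0]
export_to_gif(frames, "animation.gif")
```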
#TODO |
HuggingFaceFW/ablation-model-refinedweb | HuggingFaceFW | 2024-04-25T08:33:53Z | 548 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-20T23:20:09Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
duyntnet/Foundation_3B-imatrix-GGUF | duyntnet | 2024-04-26T12:30:05Z | 548 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Foundation_3B",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-26T11:33:19Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Foundation_3B
---
Quantizations of https://huggingface.co/jeiku/Foundation_3B
# From original readme
This is a big step forward for 3B class models. Trained on smol PIPPA, alpaca-cleaned, and two custom datasets, and based on https://huggingface.co/jeiku/Rosa_v3_3B
This should serve as a decent fiction model, though it also excels at roleplaying, but is not an ideal model for logical queries or riddles.
|
yash173/mistral-pretraining | yash173 | 2024-05-07T10:20:06Z | 548 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-07T09:57:31Z | Entry not found |
QuantFactory/aya-23-8B-GGUF | QuantFactory | 2024-05-24T09:15:12Z | 548 | 3 | transformers | [
"transformers",
"gguf",
"cohere",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-24T00:51:59Z | ---
library_name: transformers
base_model: CohereForAI/aya-23-8B
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- cohere
---
# Aya-23-8B-GGUF
- This is a quantized version of [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B) created using llama.cpp
# Model Description
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). |
mradermacher/LewdPoppy-8B-RP-GGUF | mradermacher | 2024-05-27T02:45:24Z | 548 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zeroblu3/LewdPoppy-8B-RP",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-25T07:50:39Z | ---
base_model: zeroblu3/LewdPoppy-8B-RP
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/zeroblu3/LewdPoppy-8B-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LewdPoppy-8B-RP-GGUF/resolve/main/LewdPoppy-8B-RP.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aratako/Ninja-v1-RP-expressive-v2-GGUF | Aratako | 2024-05-26T15:22:01Z | 548 | 2 | null | [
"gguf",
"roleplay",
"ja",
"base_model:Aratako/Ninja-v1-RP-expressive-v2",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-26T06:09:57Z | ---
license: apache-2.0
language:
- ja
tags:
- roleplay
base_model:
- Aratako/Ninja-v1-RP-expressive-v2
---
# Ninja-v1-RP-expressive-v2-GGUF
## Overview
This is a quantized GGUF version of [Aratako/Ninja-v1-RP-expressive-v2](https://huggingface.co/Aratako/Ninja-v1-RP-expressive-v2). Please refer to the original model for details such as the license. |
yichaodu/DiffusionDPO-alignment-hps-2.1 | yichaodu | 2024-06-20T12:01:27Z | 548 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"region:us"
]
| text-to-image | 2024-06-19T06:53:57Z | ---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Aligned Diffusion Model via DPO
Diffusion model aligned using the DPO algorithm with the following reward models:
```
close-sourced vlm: claude3-opus gemini-1.5 gpt-4o gpt-4v
open-sourced vlm: internvl-1.5
score model: hps-2.1
```
## How to Use
You can load the model and perform inference as follows:
```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

pretrained_model_name = "runwayml/stable-diffusion-v1-5"

# Load the DPO-aligned UNet from this checkpoint
dpo_unet = UNet2DConditionModel.from_pretrained(
    "path/to/checkpoint",
    subfolder='unet',
    torch_dtype=torch.float16
).to('cuda')

pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name, torch_dtype=torch.float16)
pipeline = pipeline.to('cuda')
pipeline.safety_checker = None
pipeline.unet = dpo_unet

generator = torch.Generator(device='cuda')
generator = generator.manual_seed(1)

prompt = "a pink flower"
guidance_scale = 7.5  # example value; tune as needed
image = pipeline(prompt=prompt, generator=generator, guidance_scale=guidance_scale).images[0]
```
## Citation
```
@misc{mjbench2024mjbench,
title={MJ-BENCH: Is Your Multimodal Reward Model Really a Good Judge?},
author={Chen*, Zhaorun and Du*, Yichao and Wen, Zichen and Zhou, Yiyang and Cui, Chenhang and Weng, Zhenzhen and Tu, Haoqin and Wang, Chaoqi and Tong, Zhengwei and HUANG, Leria and Chen, Canyu and Ye Qinghao and Zhu, Zhihong and Zhang, Yuqing and Zhou, Jiawei and Zhao, Zhuokai and Rafailov, Rafael and Finn, Chelsea and Yao, Huaxiu},
year={2024}
}
``` |
LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF | LLukas22 | 2024-06-25T13:12:06Z | 548 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"license:other",
"region:us"
]
| null | 2024-06-25T13:11:25Z | ---
base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
license: other
license_name: deepseek-license
license_link: LICENSE
tags:
- llama-cpp
- gguf-my-repo
---
# LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct`](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo LLukas22/DeepSeek-Coder-V2-Lite-Instruct-Q4_0-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_0.gguf -c 2048
```
|
mradermacher/Chakma_GPTv1-GGUF | mradermacher | 2024-06-29T22:47:05Z | 548 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:samCkma/Chakma_GPTv1",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-29T22:44:44Z | ---
base_model: samCkma/Chakma_GPTv1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/samCkma/Chakma_GPTv1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
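For a quick test from Python, here is a minimal sketch using the `llama-cpp-python` bindings; the file name refers to one of the quants in the table below, and the context size and prompt are illustrative assumptions:
```python
from llama_cpp import Llama

# Load one of the GGUF quants from this repo after downloading it locally
llm = Llama(model_path="Chakma_GPTv1.Q4_K_M.gguf", n_ctx=2048)

# Illustrative prompt; adjust sampling parameters as needed
output = llm("Hello, how are you?", max_tokens=64)
print(output["choices"][0]["text"])
```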
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Chakma_GPTv1-GGUF/resolve/main/Chakma_GPTv1.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
flax-sentence-embeddings/all_datasets_v3_MiniLM-L12 | flax-sentence-embeddings | 2021-07-23T15:37:42Z | 547 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:2104.08727",
"arxiv:1810.09305",
"arxiv:2102.07033",
"arxiv:1904.06472",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---
# Model description
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`MiniLM-L12`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well
as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures
the sentence's semantic information. The sentence vector may be used for information retrieval, clustering, or sentence
similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
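For sentence-similarity tasks, you can compare the embeddings of two sentences directly; the following is a small illustrative sketch (the example sentences are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')

sentences = ["A man is eating food.", "A man is eating a piece of bread."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity.item())
```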
# Training procedure
## Pre-training
We use the pretrained [`MiniLM-L12`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
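As an illustration, a minimal PyTorch sketch of this in-batch contrastive objective could look as follows (the similarity scaling factor is an assumption, not a value taken from the actual training script):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    """Cross-entropy over in-batch cosine similarities.

    anchor_emb, positive_emb: (batch_size, dim) embeddings of paired sentences.
    scale: similarity scaling factor (assumed value).
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    # (batch_size, batch_size) cosine-similarity matrix: row i vs. all positives
    scores = anchor @ positive.T * scale
    # The true pair for row i sits in column i; all other columns act as negatives
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```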
### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We used the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| SearchQA | - | 582,261 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| total | | 1,097,953,922 |
|
timm/resnext101_64x4d.c1_in1k | timm | 2024-02-10T23:41:30Z | 547 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2110.00476",
"arxiv:1611.05431",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-05T19:24:43Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnext101_64x4d.c1_in1k
A ResNeXt-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
* grouped 3x3 bottleneck convolutions
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `C` recipes
* SGD (w/ Nesterov) optimizer and AGC (adaptive gradient clipping).
* Cosine LR schedule with warmup
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 83.5
- GMACs: 15.5
- Activations (M): 31.2
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnext101_64x4d.c1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_64x4d.c1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnext101_64x4d.c1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
|
shaowenchen/chinese-llama-2-7b-16k-gguf | shaowenchen | 2023-09-12T00:51:09Z | 547 | 0 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"chinese",
"text-generation",
"zh",
"license:other",
"region:us"
]
| text-generation | 2023-09-08T16:48:00Z | ---
inference: false
language:
- zh
license: other
model_creator: ziqingyang
model_link: https://huggingface.co/ziqingyang/chinese-llama-2-7b-16k
model_name: chinese-llama-2-7b-16k
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- chinese
---
## Provided files
| Name | Quant method | Size |
| ---------------------------------- | ------------ | ------ |
| chinese-llama-2-7b-16k.Q2_K.gguf | Q2_K | 2.7 GB |
| chinese-llama-2-7b-16k.Q3_K.gguf | Q3_K | 3.2 GB |
| chinese-llama-2-7b-16k.Q3_K_L.gguf | Q3_K_L | 3.5 GB |
| chinese-llama-2-7b-16k.Q3_K_S.gguf | Q3_K_S | 2.9 GB |
| chinese-llama-2-7b-16k.Q4_0.gguf | Q4_0 | 3.7 GB |
| chinese-llama-2-7b-16k.Q4_1.gguf | Q4_1 | 4.1 GB |
| chinese-llama-2-7b-16k.Q4_K.gguf | Q4_K | 3.9 GB |
| chinese-llama-2-7b-16k.Q4_K_S.gguf | Q4_K_S | 3.7 GB |
| chinese-llama-2-7b-16k.Q5_0.gguf | Q5_0 | 4.5 GB |
| chinese-llama-2-7b-16k.Q5_1.gguf | Q5_1 | 4.9 GB |
| chinese-llama-2-7b-16k.Q5_K.gguf | Q5_K | 4.6 GB |
| chinese-llama-2-7b-16k.Q5_K_S.gguf | Q5_K_S | 4.5 GB |
| chinese-llama-2-7b-16k.Q6_K.gguf | Q6_K | 5.3 GB |
| chinese-llama-2-7b-16k.Q8_0.gguf | Q8_0 | 6.9 GB |
| chinese-llama-2-7b-16k.gguf | full | 13 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
You can then open http://localhost:8000/docs to see the Swagger UI.
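Once the server is running, you can also query it from Python; the sketch below assumes the default OpenAI-compatible `/v1/completions` endpoint exposed by `llama-cpp-python`:
```python
import requests

# Assumes the llama-cpp-python server started above is listening on port 8000
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "prompt": "请介绍一下你自己。",
        "max_tokens": 128,
        "temperature": 0.7,
    },
)
print(response.json()["choices"][0]["text"])
```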
## Provided images
| Name | Quant method | Size |
| ------------------------------------------------ | ------------ | ------- |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q2_K` | Q2_K | 3.68 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q3_K` | Q3_K | 4.16 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q3_K_L` | Q3_K_L | 4.46 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q3_K_S` | Q3_K_S | 3.81 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q4_0` | Q4_0 | 4.7 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q4_K` | Q4_K | 4.95 GB |
| `shaowenchen/chinese-llama-2-7b-16k-gguf:Q4_K_S` | Q4_K_S | 4.73 GB |
Usage:
```
docker run --rm -p 8000:8000 shaowenchen/chinese-llama-2-7b-16k-gguf:Q2_K
```
You can then open http://localhost:8000/docs to see the Swagger UI.
|
Ichsan2895/Merak-7B-v4 | Ichsan2895 | 2023-11-19T16:14:53Z | 547 | 3 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"id",
"en",
"dataset:wikipedia",
"dataset:Ichsan2895/OASST_Top1_Indonesian",
"dataset:Ichsan2895/alpaca-gpt4-indonesian",
"arxiv:2306.02707",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-11T20:42:22Z | ---
datasets:
- wikipedia
- Ichsan2895/OASST_Top1_Indonesian
- Ichsan2895/alpaca-gpt4-indonesian
language:
- id
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;">
</div>
# HAPPY TO ANNOUNCE THE RELEASE OF MERAK-7B-V4!
Merak-7B is a Large Language Model for the Indonesian language.
This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on Indonesian Wikipedia articles that I cleaned beforehand.
Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB of VRAM.
Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0), Merak-7B empowers AI enthusiasts and researchers alike.
Big thanks to all my friends and communities that helped build our first model. Thanks to Axolotl for a great fine-tuning tool designed to streamline the fine-tuning of various AI models.
Feel free to ask me about the model, and please share the news on your social media.
## HOW TO USE
### Installation
Please make sure you have installed the CUDA driver on your system, along with Python 3.10 and PyTorch 2. Then install these libraries in a terminal:
```
pip install protobuf==4.24.4
pip install bitsandbytes==0.41.1
pip install transformers==4.34.1
pip install peft==0.5.0
pip install accelerate==0.23.0
pip install einops==0.6.1 scipy sentencepiece datasets
```
### Using BitsAndBytes 4-bit quantization, which runs on a GPU with >= 10 GB VRAM
[](https://colab.research.google.com/drive/1Tj15gNIx3KnLarDAJdwpa7qXa5nmfAM-?usp=drive_link)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig
model_id = "Ichsan2895/Merak-7B-v4"
config = AutoConfig.from_pretrained(model_id)
BNB_CONFIG = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(model_id,
quantization_config=BNB_CONFIG,
device_map="auto",
trust_remote_code=True)
tokenizer = LlamaTokenizer.from_pretrained(model_id)
def generate_response(question: str) -> str:
chat = [
{"role": "system", "content": "Anda adalah Merak, sebuah model kecerdasan buatan yang dilatih oleh Muhammad Ichsan. Mohon jawab pertanyaan berikut dengan benar, faktual, dan ramah."},
{"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)
with torch.no_grad():
outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"),
attention_mask=inputs.attention_mask,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=256)
response = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
assistant_start = f'''{question} \n assistant\n '''
response_start = response.find(assistant_start)
return response[response_start + len(assistant_start) :].strip()
prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
### In my experience, for better answers, please don't use BitsAndBytes 4-bit quantization, but note that this requires more VRAM
[](https://colab.research.google.com/drive/1KVkiaKddrK4focgQJ6ysUA1NypLQPYuF?usp=drive_link)
```
import torch
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig, LlamaTokenizer
from peft import PeftModel, PeftConfig

model_id = "Ichsan2895/Merak-7B-v4"
config = AutoConfig.from_pretrained(model_id)

# Full-precision load: better answers than 4-bit, but it needs considerably more VRAM
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             trust_remote_code=True)

tokenizer = LlamaTokenizer.from_pretrained(model_id)

def generate_response(question: str) -> str:
    chat = [
        {"role": "system", "content": "Anda adalah Merak, sebuah model kecerdasan buatan yang dilatih oleh Muhammad Ichsan. Mohon jawab pertanyaan berikut dengan benar, faktual, dan ramah."},
        {"role": "user", "content": question},
    ]

    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=True)

    with torch.no_grad():
        outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"),
                                 attention_mask=inputs["attention_mask"].to("cuda"),
                                 eos_token_id=tokenizer.eos_token_id,
                                 pad_token_id=tokenizer.eos_token_id,
                                 max_new_tokens=256)
    response = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]

    # Strip the echoed prompt and keep only the assistant's answer
    assistant_start = f'''{question} \n assistant\n '''
    response_start = response.find(assistant_start)
    return response[response_start + len(assistant_start):].strip()

prompt = "Siapa penulis naskah proklamasi kemerdekaan Indonesia?"
print(generate_response(prompt))
```
## CHANGELOG
**v4** = We use [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) instead of Llama-2-Chat-HF. We arrived at this choice through many rounds of trial and error, picking the best base model for this release.
What we have done so far:
1st) We fine-tuned it on Wikipedia articles that we had cleaned beforehand, using QLoRA sped up by DeepSpeed ZeRO-2 for 1 epoch. Axolotl was used for easier fine-tuning configuration.
2nd) We got extra funds — thanks, all! We repeated the first step, but with full-parameter fine-tuning (FFT) instead of QLoRA.
3rd) We fine-tuned it on [Ichsan2895/OASST_Top1_Indonesian](https://huggingface.co/datasets/Ichsan2895/OASST_Top1_Indonesian) & [Ichsan2895/alpaca-gpt4-indonesian](https://huggingface.co/datasets/Ichsan2895/alpaca-gpt4-indonesian), with minor modifications to make them suitable for the ChatML format. This was FFT for 4 epochs.
**v3** = Fine tuned by [Ichsan2895/OASST_Top1_Indonesian](https://huggingface.co/datasets/Ichsan2895/OASST_Top1_Indonesian) & [Ichsan2895/alpaca-gpt4-indonesian](https://huggingface.co/datasets/Ichsan2895/alpaca-gpt4-indonesian)
**v2** = Fine-tuned version of the first Merak-7B model. We fine-tuned it again on the same Indonesian Wikipedia articles, but with a changed prompt style in the questions. It uses 600k Indonesian Wikipedia articles.
**v1** = The first Merak-7B model. We selected and cleaned about 200k Indonesian Wikipedia articles.
## CITATION
```
@software{lian2023mistralorca1,
  title = {MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca}},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
@article{dettmers2023qlora,
title = {QLoRA: Efficient Finetuning of Quantized LLMs},
author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal = {arXiv preprint arXiv:2305.14314},
year = {2023}
}
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## HOW TO CITE THIS PROJECT
If you use the Merak-7B model in your research or project, please cite it as:
```
@article{Merak,
  title={Merak-7B: The LLM for Bahasa Indonesia},
  author={Muhammad Ichsan},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2023}
}
``` |
sainteye/ifoodie-menu-v3.1 | sainteye | 2023-11-14T14:22:48Z | 547 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-11-14T14:22:44Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ifoodie-menu-v3.1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# ifoodie-menu-v3.1
['優質', '差', '廣告', '普通'] — i.e., roughly "excellent", "poor", "advertisement", "average".
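A minimal way to try the classifier is the `transformers` image-classification pipeline (a sketch; `menu_photo.jpg` is a placeholder path):

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub
classifier = pipeline("image-classification", model="sainteye/ifoodie-menu-v3.1")

# "menu_photo.jpg" is a placeholder for a local image path (a URL also works)
print(classifier("menu_photo.jpg"))
```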
## Example Images
# #### 優質
# 
#
# #### 差
# 
#
# #### 廣告
# 
#
# #### 普通
# 
# |
PORTULAN/gervasio-7b-portuguese-ptpt-decoder | PORTULAN | 2024-03-26T19:50:48Z | 547 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"gervasio-pt*",
"gervasio-ptpt",
"gervasio-ptbr",
"gervasio-7b-portuguese-ptpt-decoder",
"gervasio-7b-portuguese-ptbr-decoder",
"portulan",
"albertina-pt*",
"clm",
"gpt",
"portuguese",
"decoder",
"foundation model",
"pt",
"dataset:PORTULAN/extraglue",
"dataset:PORTULAN/extraglue-instruct",
"arxiv:2402.18766",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-15T14:49:31Z | ---
license: mit
language:
- pt
tags:
- gervasio-pt*
- gervasio-ptpt
- gervasio-ptbr
- gervasio-7b-portuguese-ptpt-decoder
- gervasio-7b-portuguese-ptbr-decoder
- portulan
- albertina-pt*
- clm
- gpt
- portuguese
- decoder
- foundation model
datasets:
- PORTULAN/extraglue
- PORTULAN/extraglue-instruct
---
</br>
</br>
<img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png">
<p style="text-align: center;"> This is the model card for Gervásio 7B PTPT Decoder.
You may be interested in some of the other models in the <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>.
</p>
</br>
</br>
# Gervásio 7B PTPT
</br>
**Gervásio PT*** is a **fully open** decoder for the **Portuguese language**.
It is a **decoder** of the LLaMA family, based on the neural architecture Transformer and developed over the LLaMA-2 7B model.
Its further improvement through additional training was done over language resources that include new instruction data sets of Portuguese prepared for this purpose ([extraGLUE-Instruct
](https://huggingface.co/datasets/PORTULAN/extraglue-instruct)).
It has different versions that were trained for different variants of Portuguese (PT),
namely for the European variant, spoken in Portugal ([**gervasio-7b-portuguese-ptpt-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptpt-decoder)), and for the American variant, spoken in Brazil ([**gervasio-7b-portuguese-ptbr-decoder**](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder)).
All versions of Gervásio are **openly distributed for free under an open license**, including thus for research and commercial purposes, and given its size, can
be run on consumer-grade hardware.
**Gervásio 7B PTPT** is developed by NLX-Natural Language and Speech Group, at the University of Lisbon, Faculty of Sciences, Department of Informatics, Portugal.
For the record, its full name is **Gervásio Produz Textos em Português**, to which corresponds the natural acronym **GPT PT**,
and which is known more shortly as **Gervásio PT*** or, even more briefly, just as **Gervásio**, among its acquaintances.
Gervásio 7B PTPT is developed by a team from the University of Lisbon, Portugal.
For a fully detailed description, check the respective [publication](https://arxiv.org/abs/2402.18766):
``` latex
@misc{gervasio,
title={Advancing Generative AI for Portuguese with
Open Decoder Gervásio PT-*},
author={Rodrigo Santos, João Silva, Luís Gomes,
João Rodrigues, António Branco},
year={2024},
eprint={2402.18766},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please use the above canonical reference when using or citing this model.
<br>
# Model Description
**This model card is for Gervásio 7B PTPT**, with 7 billion parameters, a hidden size of 4,096 units, an intermediate size of 11,008 units, 32 attention heads, 32 hidden layers, and a tokenizer obtained using the Byte-Pair Encoding (BPE) algorithm implemented with SentencePiece, featuring a vocabulary size of 32,000.
Gervásio 7B PTPT is distributed under an [MIT license](https://huggingface.co/PORTULAN/gervasio-7b-portuguese-ptbr-decoder/blob/main/LICENSE).
<br>
# Training Data
**Gervásio 7B PTPT** was trained over standard supervised fine-tuning, and to keep some alignment with mainstream benchmarks for English, we resorted to tasks and respective datasets in the GLUE and the SuperGLUE collections.
We selected those datasets where the outcome of their machine translation into European Portuguese could preserve, in the target language, the linguistic properties at stake.
From GLUE, we resorted to the following four tasks:
- MRPC (paraphrase detection).
- RTE (recognizing textual entailment).
- STS-B (semantic textual similarity).
- WNLI (coreference and natural language inference).
And from SuperGLUE, we included these other four tasks:
- BoolQ (yes/no question answering).
- CB (inference with 3 labels).
- COPA (reasoning).
- MultiRC (question answering).
These datasets were machine-translated into European Portuguese and are available in the [extraGLUE](https://huggingface.co/datasets/PORTULAN/extraglue) dataset.
Furthermore, instruction templates have been manually crafted for each task.
These take the various fields in the dataset and arrange them into prompts, which were collected into the [extraGLUE-instruct](https://huggingface.co/datasets/PORTULAN/extraglue-instruct) dataset.
We also employed data augmentation techniques to enhance the size and diversity of our dataset.
This involved repurposing the tasks in various ways, such as generation of answers from MultiRC, question generation from BoolQ, and other relevant modifications.
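Purely as an illustration (the wording below is hypothetical, not the actual extraGLUE-Instruct template), such a template interpolates the dataset fields into a natural-language prompt:

```python3
# Hypothetical RTE-style template; the real extraGLUE-Instruct prompts differ in wording.
def rte_prompt(premise: str, hypothesis: str) -> str:
    return (
        "Indique se a hipótese se segue da premissa (sim/não).\n"
        f"Premissa: {premise}\n"
        f"Hipótese: {hypothesis}\n"
        "Resposta:"
    )
```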
# Training Details
We applied supervised fine-tuning with a causal language modeling training objective following a zero-out technique during the fine-tuning process.
Specifically, while the entire prompt received attention during fine-tuning, only the response tokens were subjected to back-propagation.
In terms of hyper-parameters, the model was trained with a learning rate of 2 * 10^-5, a weight decay of 0.1, a two-epoch training regime without warm-up, and to ensure the same number of tokens back-propagated per step, we employed an input sequence of 512 tokens with a batch size of 16 and 16 accumulation steps.
Due to hardware limitations that imposed a shorter sequence length (512) compared to the base model (4096), instead of the typical practice of concatenating all training examples and then dividing them into batches with the same input sequence length, we separated each example individually.
In other words, each example occupies the full input sequence length.
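As an illustrative sketch of this zero-out masking (not the actual training code), prompt positions can simply be set to the ignore index in the labels so that only response tokens contribute to the loss:

```python3
# Sketch only: mask prompt tokens with -100 so the loss (and back-propagation)
# only sees the response tokens, as described above.
IGNORE_INDEX = -100

def build_example(prompt_ids, response_ids, pad_id, max_len=512):
    input_ids = (prompt_ids + response_ids)[:max_len]
    labels = ([IGNORE_INDEX] * len(prompt_ids) + response_ids)[:max_len]
    attention_mask = [1] * len(input_ids)
    # each example occupies the full input sequence length, padded on the right
    pad = max_len - len(input_ids)
    return {
        "input_ids": input_ids + [pad_id] * pad,
        "labels": labels + [IGNORE_INDEX] * pad,
        "attention_mask": attention_mask + [0] * pad,
    }
```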
# Performance
For testing, we reserved the translated datasets MRPC (similarity) and RTE (inference), from GLUE, and COPA (reasoning/qa), from SuperGLUE, which were taken as representatives of three major types of tasks and were not seen during training.
| Model | MRPC (F1) | RTE (F1) | COPA (F1) |
|--------------------------|----------------|----------------|-----------|
| **Gervásio 7B PTPT** | **0.7273** | **0.8291** | **0.5459**|
| **LLaMA-2 (English)** | 0.0328 | 0.0482 | 0.3844 |
| **LLaMA-2 Chat (English)** | 0.5703 | 0.4697 | 0.4737 |
<br>
# How to use
You can use this model directly with a pipeline for causal language modeling:
```python3
>>> from transformers import pipeline
>>> generator = pipeline(model='PORTULAN/gervasio-7b-portuguese-ptpt-decoder')
>>> generator("A comida portuguesa é", max_new_tokens=10)
```
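For more control over generation (sampling, maximum length, device placement), the model and tokenizer can also be loaded directly — a minimal sketch using the standard transformers API:

```python3
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('PORTULAN/gervasio-7b-portuguese-ptpt-decoder')
>>> model = AutoModelForCausalLM.from_pretrained('PORTULAN/gervasio-7b-portuguese-ptpt-decoder')
>>> inputs = tokenizer("A comida portuguesa é", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```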
<br>
# Acknowledgments
The research reported here was partially supported by: PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language,
funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016; research project GPT-PT - Transformer-based Decoder for the Portuguese Language, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478395/2022; innovation project
ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação
under the grant C625734525-00462629, of Plano de Recuperação e Resiliência,
call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização. |
TheBloke/OrcaMaid-13B-GGUF | TheBloke | 2023-12-09T15:47:37Z | 547 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"base_model:ddh0/OrcaMaid-13b",
"license:other",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-12-09T10:37:08Z | ---
base_model: ddh0/OrcaMaid-13b
inference: false
license: other
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
license_name: microsoft-research-license
model_creator: ddh0
model_name: OrcaMaid 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# OrcaMaid 13B - GGUF
- Model creator: [ddh0](https://huggingface.co/ddh0)
- Original model: [OrcaMaid 13B](https://huggingface.co/ddh0/OrcaMaid-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [ddh0's OrcaMaid 13B](https://huggingface.co/ddh0/OrcaMaid-13b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OrcaMaid-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OrcaMaid-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF)
* [ddh0's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ddh0/OrcaMaid-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [ddh0's OrcaMaid 13B](https://huggingface.co/ddh0/OrcaMaid-13b).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
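As a rough rule of thumb (an approximation only), file size follows from parameter count × bits per weight; real files come out somewhat larger because metadata is added and some tensors are kept at higher precision:

```python
# Back-of-the-envelope size estimate for a 13B model: parameters * bits-per-weight / 8.
# Actual GGUF files are a bit larger, since some tensors use higher-precision quant types.
params = 13_000_000_000  # ~13B parameters
for name, bpw in [("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.2f} GB")  # Q6_K ~10.66 GB, close to the table below
```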
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [orcamaid-13b.Q2_K.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [orcamaid-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [orcamaid-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [orcamaid-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [orcamaid-13b.Q4_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [orcamaid-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [orcamaid-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [orcamaid-13b.Q5_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [orcamaid-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [orcamaid-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [orcamaid-13b.Q6_K.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [orcamaid-13b.Q8_0.gguf](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF/blob/main/orcamaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/OrcaMaid-13B-GGUF and below it, a specific filename to download, such as: orcamaid-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/OrcaMaid-13B-GGUF orcamaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/OrcaMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OrcaMaid-13B-GGUF orcamaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m orcamaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./orcamaid-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./orcamaid-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: ddh0's OrcaMaid 13B
# OrcaMaid-13b
This is a merge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.1.1-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), with just a touch of Kal'tsit's [cat-v1.0](https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b) mixed in.
The model recipe was as follows:
- Linear merge of **Orca-2-13b** @0.8 and **cat-v1.0-13b** @0.2 = OrcaCat-13b (no plans to release)
- Gradient SLERP merge of **Noromaid-v0.1.1** @0.5 and **OrcaCat-13b** @0.5 = OrcaMaid-13b
Both merges were done in FP32 rather than FP16, due to Orca being released as FP32. I didn't want to risk losing any precision.
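For readers who want to attempt a similar gradient SLERP step with mergekit, a configuration along these lines is a reasonable starting point (the layer ranges and interpolation schedule below are illustrative guesses, not the author's actual settings):

```yaml
# Hypothetical mergekit config for a gradient SLERP merge of two Llama-2-13b models;
# layer_range and t values are placeholders, not the recipe actually used for OrcaMaid.
slices:
  - sources:
      - model: NeverSleep/Noromaid-13b-v0.1.1
        layer_range: [0, 40]
      - model: ./OrcaCat-13b           # the intermediate linear merge described above
        layer_range: [0, 40]
merge_method: slerp
base_model: ./OrcaCat-13b
parameters:
  t:
    - value: [0.0, 0.5, 1.0, 0.5, 0.0]  # interpolation gradient across layers
dtype: float32                           # both merges were done in FP32, as noted above
```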
The overall goal of this merge is to create a model that sounds uniquely human and natural, without sacrificing intelligence. ***Edit:** after some feedback from a few others, ranking on the Ayumi leaderboards, and more of my own testing, I believe I have succeeded as well as I reasonably could have hoped.*
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `4096`
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Charles Goddard](https://github.com/cg123) for his kind help with mergekit
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid
- Thanks to Kal'tsit for Cat. See her original reddit post: [Cat 1.0 is an uncensored, rp model aligned to be useful in all (even spicy)situations](https://www.reddit.com/r/LocalLLaMA/comments/17skxzq/cat_10_is_an_uncensored_rp_model_aligned_to_be/)
<!-- original-model-card end -->
|
TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF | TheBloke | 2023-12-14T21:24:48Z | 547 | 5 | transformers | [
"transformers",
"gguf",
"mistral",
"en",
"base_model:janhq/Mistral-7B-Instruct-v0.2-DARE",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-12-14T21:20:49Z | ---
base_model: janhq/Mistral-7B-Instruct-v0.2-DARE
inference: false
language:
- en
license: apache-2.0
model_creator: Jan
model_name: Mistral 7B Instruct V0.2 DARE
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral 7B Instruct V0.2 DARE - GGUF
- Model creator: [Jan](https://huggingface.co/janhq)
- Original model: [Mistral 7B Instruct V0.2 DARE](https://huggingface.co/janhq/Mistral-7B-Instruct-v0.2-DARE)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Jan's Mistral 7B Instruct V0.2 DARE](https://huggingface.co/janhq/Mistral-7B-Instruct-v0.2-DARE).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF)
* [Jan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/janhq/Mistral-7B-Instruct-v0.2-DARE)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-instruct-v0.2-dare.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-7b-instruct-v0.2-dare.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.2-dare.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-7b-instruct-v0.2-dare.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-7b-instruct-v0.2-dare.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-7b-instruct-v0.2-dare.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-instruct-v0.2-dare.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-7b-instruct-v0.2-dare.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-7b-instruct-v0.2-dare.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-instruct-v0.2-dare.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-instruct-v0.2-dare.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF/blob/main/mistral-7b-instruct-v0.2-dare.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-DARE-GGUF mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mistral-7b-instruct-v0.2-dare.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Jan's Mistral 7B Instruct V0.2 DARE
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a
href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model uses the `DARE` method to merge [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with 3 of the leading models (as of 12th Dec) on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
3. [v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
- base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
The yaml config file for this model is here:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
merge_method: dare_ties
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
parameters:
density: 0.8
weight: 0.4
- model: Q-bert/MetaMath-Cybertron-Starling
parameters:
density: 0.8
weight: 0.3
- model: v1olet/v1olet_marcoroni-go-bruins-merge-7B
parameters:
density: 0.8
weight: 0.3
parameters:
int8_mask: true
```
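Assuming mergekit (linked in the Acknowledgement below) is installed, a config like the one above is typically executed with the `mergekit-yaml` entry point; the file name and output path here are placeholders:

```shell
# Sketch only: the YAML above is assumed to have been saved as config.yaml
mergekit-yaml config.yaml ./Mistral-7B-Instruct-v0.2-DARE --cuda
```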
# Prompt template:
- **ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- **Alpaca**
```
{system_message}
### Instruction:
{prompt}
### Response:
```
# Run this model
You can run this model using [Jan](https://jan.ai/) on Mac, Windows, or Linux.
**Jan is an open source, ChatGPT alternative that is:**
💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)
- Please use the [trinity-v1-GGUF](https://huggingface.co/janhq/trinity-v1-GGUF) when using on Jan.

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
<!-- original-model-card end -->
|
PassionFriend/5F6QoNYuA9ZMZovipMjcsRLP3Uzm4thrubprHaqtxc3mCvEg_vgg | PassionFriend | 2024-03-01T06:50:01Z | 547 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-17T23:09:00Z | Entry not found |
ChrisWilson010101/5DQ4H3zxQc6i6YKrsdwKofu8z4FPhFRFGLpzCAigNXVSFjst_vgg | ChrisWilson010101 | 2024-03-03T23:10:02Z | 547 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-01T15:45:07Z | Entry not found |
mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF | mradermacher | 2024-05-06T06:18:10Z | 547 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:grimulkan/llama2_70b_longlora_fp16_32k_ROPE8",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-09T09:46:27Z | ---
base_model: grimulkan/llama2_70b_longlora_fp16_32k_ROPE8
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/grimulkan/llama2_70b_longlora_fp16_32k_ROPE8
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
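As a concrete illustration for the multi-part quants listed below (Q6_K and Q8_0), the parts are plain byte-level splits, so a small script along these lines - assuming both parts were already downloaded into the current directory - joins them back into a single GGUF file:
```python
# Minimal sketch: rejoin the two Q6_K parts into a single GGUF file.
# Assumes the .part1of2 / .part2of2 files are already in the current directory.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("llama2_70b_longlora_fp16_32k_ROPE8.Q6_K.gguf.part*of2"))
with open("llama2_70b_longlora_fp16_32k_ROPE8.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy; avoids holding ~57 GB in memory
```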
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q2_K.gguf) | Q2_K | 25.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.IQ3_XS.gguf) | IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.IQ3_S.gguf) | IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q3_K_S.gguf) | Q3_K_S | 30.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.IQ3_M.gguf) | IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q3_K_L.gguf) | Q3_K_L | 36.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.IQ4_XS.gguf) | IQ4_XS | 37.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q5_K_S.gguf) | Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q5_K_M.gguf) | Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama2_70b_longlora_fp16_32k_ROPE8-GGUF/resolve/main/llama2_70b_longlora_fp16_32k_ROPE8.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/EstopianOrcaMaid-13b-GGUF | mradermacher | 2024-05-06T06:10:13Z | 547 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ddh0/EstopianOrcaMaid-13b",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-18T03:20:26Z | ---
base_model: ddh0/EstopianOrcaMaid-13b
language:
- en
library_name: transformers
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
license_name: microsoft-research-license
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/ddh0/EstopianOrcaMaid-13b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.IQ3_M.gguf) | IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EstopianOrcaMaid-13b-GGUF/resolve/main/EstopianOrcaMaid-13b.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mnh-raam/med-git | mnh-raam | 2024-04-03T14:19:53Z | 547 | 0 | transformers | [
"transformers",
"pytorch",
"git",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-03T14:00:32Z | Entry not found |
PrunaAI/Unichat-llama3-Chinese-8B-GGUF-smashed | PrunaAI | 2024-04-27T16:33:51Z | 547 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
]
| null | 2024-04-27T15:27:21Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
## This repo contains GGUF versions of the UnicomLLM/Unichat-llama3-Chinese-8B model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
   - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Unichat-llama3-Chinese-8B-GGUF-smashed and below it, a specific filename to download, such as: Unichat-llama3-Chinese-8B.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/Unichat-llama3-Chinese-8B-GGUF-smashed Unichat-llama3-Chinese-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/Unichat-llama3-Chinese-8B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Unichat-llama3-Chinese-8B-GGUF-smashed Unichat-llama3-Chinese-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Unichat-llama3-Chinese-8B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Unichat-llama3-Chinese-8B.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Unichat-llama3-Chinese-8B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
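As a rough illustration of Option D (not part of the guides above), the `LlamaCpp` wrapper from `langchain_community` can be wired up like this; the file path and generation settings are placeholders:
```python
# Hedged sketch: using this GGUF with LangChain's LlamaCpp wrapper.
# Download the .gguf file first; the path and parameters below are illustrative.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Unichat-llama3-Chinese-8B.IQ3_M.gguf",
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    temperature=0.7,
)
print(llm.invoke("Write a short poem about llamas."))
```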
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this smashed version. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
galverse/mama-v2 | galverse | 2024-06-03T18:52:04Z | 547 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.1",
"license:unknown",
"region:us"
]
| text-to-image | 2024-06-03T18:49:25Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/image (10).png
base_model: cagliostrolab/animagine-xl-3.1
instance_prompt: null
license: unknown
---
# Mama V1.5
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/galverse/mama-1.5/tree/main) them in the Files & versions tab.
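As a hedged example (not from the original author), the LoRA can be applied on top of its base model, `cagliostrolab/animagine-xl-3.1`, with diffusers; the prompt is illustrative, and `weight_name` may need to match the file shown in the Files & versions tab:
```python
# Minimal sketch: load the base SDXL model and apply this LoRA on top of it.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16
).to("cuda")

# If loading by repo id fails, pass weight_name="<file from the Files & versions tab>".
pipe.load_lora_weights("galverse/mama-v2")

image = pipe("1girl, smiling, masterpiece, best quality").images[0]
image.save("mama_v2_sample.png")
```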
|
cowWhySo/Phi-3-mini-4k-instruct-Friendly-gguf | cowWhySo | 2024-06-07T00:13:32Z | 547 | 0 | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-06-06T23:19:40Z | ---
license: mit
---
See https://huggingface.co/cowWhySo/Phi-3-mini-4k-instruct-Friendly for model details. This is an uncensored version of Phi-3 Mini. |
huggingtweets/cuckoldresss-qobetty-ragamuffin197 | huggingtweets | 2021-08-17T12:17:54Z | 546 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1399382014214737924/QsAw6oxP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/755753205028577280/nwtLbTwy_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1254593296455872513/Qdyli1JK_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BettyBoopQoS & Ragamuffin1970 & Cuckoldress Scarlet</div>
<div style="text-align: center; font-size: 14px;">@cuckoldresss-qobetty-ragamuffin197</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BettyBoopQoS & Ragamuffin1970 & Cuckoldress Scarlet.
| Data | BettyBoopQoS | Ragamuffin1970 | Cuckoldress Scarlet |
| --- | --- | --- | --- |
| Tweets downloaded | 129 | 3247 | 1005 |
| Retweets | 2 | 11 | 252 |
| Short tweets | 10 | 584 | 70 |
| Tweets kept | 117 | 2652 | 683 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zfpi2vmm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cuckoldresss-qobetty-ragamuffin197's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/172rz2sh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/172rz2sh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cuckoldresss-qobetty-ragamuffin197')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
opennyaiorg/en_legal_ner_trf | opennyaiorg | 2024-05-08T06:32:22Z | 546 | 12 | spacy | [
"spacy",
"token-classification",
"en",
"dataset:opennyaiorg/InLegalNER",
"arxiv:2211.03442",
"license:apache-2.0",
"model-index",
"region:us"
]
| token-classification | 2022-09-22T09:54:11Z | ---
tags:
- spacy
- token-classification
widget:
- text: >-
Section 319 Cr.P.C. contemplates a situation where the evidence adduced by
the prosecution for Respondent No.3-G. Sambiah on 20th June 1984
- text: |
In The High Court Of Kerala At Ernakulam
Crl Mc No. 1622 of 2006()
1. T.R.Ajayan, S/O. O.Raman,
... Petitioner
Vs
1. M.Ravindran,
... Respondent
2. Mrs. Nirmala Dinesh, W/O. Dinesh,
For Petitioner :Sri.A.Kumar
For Respondent :Smt.M.K.Pushpalatha
The Hon'ble Mr. Justice P.R.Raman
The Hon'ble Mr. Justice V.K.Mohanan
Dated :07/01/2008
O R D E R
language:
- en
license: apache-2.0
model-index:
- name: en_legal_ner_trf
results:
- task:
type: token-classification
name: Named Entity Recognition
metrics:
- type: F1-Score
value: 91.076
name: Test F1-Score
datasets:
- opennyaiorg/InLegalNER
---
# Paper details
[Named Entity Recognition in Indian court judgments](https://aclanthology.org/2022.nllp-1.15/)
[Arxiv](https://arxiv.org/abs/2211.03442)
---
Indian Legal Named Entity Recognition (NER): identifying relevant named entities in an Indian legal judgement using a legal NER model trained with [spacy](https://github.com/explosion/spaCy).
### Scores
| Type | Score |
| --- | --- |
| **F1-Score** | **91.076** |
| `Precision` | 91.979 |
| `Recall` | 90.19 |
| Feature | Description |
| --- | --- |
| **Name** | `en_legal_ner_trf` |
| **Version** | `3.2.0` |
| **spaCy** | `>=3.2.2,<3.3.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [InLegalNER Train Data](https://storage.googleapis.com/indianlegalbert/OPEN_SOURCED_FILES/NER/NER_TRAIN.zip) [GitHub](https://github.com/Legal-NLP-EkStep/legal_NER)|
| **License** | `MIT` |
| **Author** | [Aman Tiwari](https://www.linkedin.com/in/amant555/) |
## Load Pretrained Model
Install the model using pip
```sh
pip install https://huggingface.co/opennyaiorg/en_legal_ner_trf/resolve/main/en_legal_ner_trf-any-py3-none-any.whl
```
Using pretrained NER model
```python
# Using spacy.load().
import spacy
nlp = spacy.load("en_legal_ner_trf")
text = "Section 319 Cr.P.C. contemplates a situation where the evidence adduced by the prosecution for Respondent No.3-G. Sambiah on 20th June 1984"
doc = nlp(text)
# Print indentified entites
for ent in doc.ents:
print(ent,ent.label_)
##OUTPUT
#Section 319 PROVISION
#Cr.P.C. STATUTE
#G. Sambiah RESPONDENT
#20th June 1984 DATE
```
### Label Scheme
<details>
<summary>View label scheme (14 labels for 1 component)</summary>
| ENTITY | BELONGS TO |
| --- | --- |
| `LAWYER` | PREAMBLE |
| `COURT` | PREAMBLE, JUDGEMENT |
| `JUDGE` | PREAMBLE, JUDGEMENT |
| `PETITIONER` | PREAMBLE, JUDGEMENT |
| `RESPONDENT` | PREAMBLE, JUDGEMENT |
| `CASE_NUMBER` | JUDGEMENT |
| `GPE` | JUDGEMENT |
| `DATE` | JUDGEMENT |
| `ORG` | JUDGEMENT |
| `STATUTE` | JUDGEMENT |
| `WITNESS` | JUDGEMENT |
| `PRECEDENT` | JUDGEMENT |
| `PROVISION` | JUDGEMENT |
| `OTHER_PERSON` | JUDGEMENT |
</details>
## Author - Publication
```
@inproceedings{kalamkar-etal-2022-named,
title = "Named Entity Recognition in {I}ndian court judgments",
author = "Kalamkar, Prathamesh and
Agarwal, Astha and
Tiwari, Aman and
Gupta, Smita and
Karn, Saurabh and
Raghavan, Vivek",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.15",
doi = "10.18653/v1/2022.nllp-1.15",
pages = "184--193",
abstract = "Identification of named entities from legal texts is an essential building block for developing other legal Artificial Intelligence applications. Named Entities in legal texts are slightly different and more fine-grained than commonly used named entities like Person, Organization, Location etc. In this paper, we introduce a new corpus of 46545 annotated legal named entities mapped to 14 legal entity types. The Baseline model for extracting legal named entities from judgment text is also developed.",
}
``` |
tsmatz/mt5_summarize_japanese | tsmatz | 2023-09-12T00:28:02Z | 546 | 13 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"ja",
"base_model:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-11-26T10:51:27Z | ---
language:
- ja
license: apache-2.0
tags:
- summarization
- generated_from_trainer
- mt5
metrics:
- rouge
widget:
- text: 世界中では約120のワクチンの開発が進められている。英オックスフォード大学の専門家たちはすでに臨床試験を開始している。 新しいアプローチ 多くの従来のワクチンは、弱体化させたウイルスや改変したウイルスなどがもとになっている。しかし今回のワクチンは新しいアプローチに基づいたもので、遺伝子のRNA(リボ核酸)を使う。
筋肉に注射すると、RNAは自己増殖し、新型ウイルスの表面にみられるスパイクタンパク質のコピーをつくるよう、体内の細胞に指示を出す。 この方法で、COVID-19(新型ウイルスによる感染症)を発症することなく新型ウイルスを認識して戦うための免疫システムを訓練できるという。
シャトック教授は、「我々はゼロからワクチンを製造し、わずか数カ月で臨床試験に持ち込むことができた」と述べた。
- text: サッカーのワールドカップカタール大会、世界ランキング24位でグループEに属する日本は、23日の1次リーグ初戦において、世界11位で過去4回の優勝を誇るドイツと対戦しました。試合は前半、ドイツの一方的なペースではじまりましたが、後半、日本の森保監督は攻撃的な選手を積極的に動員して流れを変えました。結局、日本は前半に1点を奪われましたが、途中出場の堂安律選手と浅野拓磨選手が後半にゴールを決め、2対1で逆転勝ちしました。ゲームの流れをつかんだ森保采配が功を奏しました。
base_model: google/mt5-small
model-index:
- name: mt5_summarize_japanese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_summarize_japanese
(Japanese caption : 日本語の要約のモデル)
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) trained for Japanese summarization.
This model is fine-tuned on BBC news articles ([XL-Sum Japanese dataset](https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/japanese)), in which the first sentence (the headline sentence) is used as the summary and the remaining sentences are used as the article.<br>
So, **please fill in a news story (including, for example, event, background, result, and comment) as the source text in the inference widget**. (Other corpora - such as conversation, business documents, academic papers, or short tales - are not seen in the training set.)
It achieves the following results on the evaluation set:
- Loss: 1.8952
- Rouge1: 0.4625
- Rouge2: 0.2866
- Rougel: 0.3656
- Rougelsum: 0.3868
## Intended uses
```python
from transformers import pipeline
seq2seq = pipeline("summarization", model="tsmatz/mt5_summarize_japanese")
sample_text = "サッカーのワールドカップカタール大会、世界ランキング24位でグループEに属する日本は、23日の1次リーグ初戦において、世界11位で過去4回の優勝を誇るドイツと対戦しました。試合は前半、ドイツの一方的なペースではじまりましたが、後半、日本の森保監督は攻撃的な選手を積極的に動員して流れを変えました。結局、日本は前半に1点を奪われましたが、途中出場の堂安律選手と浅野拓磨選手が後半にゴールを決め、2対1で逆転勝ちしました。ゲームの流れをつかんだ森保采配が功を奏しました。"
result = seq2seq(sample_text)
print(result)
```
## Training procedure
You can download the source code for fine-tuning from [here](https://github.com/tsmatz/huggingface-finetune-japanese/blob/master/02-summarize.ipynb).
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 90
- num_epochs: 10
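For reference, these settings roughly correspond to a `Seq2SeqTrainingArguments` configuration like the one below; the output directory and `predict_with_generate` flag are assumptions, and the exact code is in the notebook linked above:
```python
# Hedged sketch of the hyperparameters above as Seq2SeqTrainingArguments
# (output_dir and predict_with_generate are assumptions, not taken from the original run).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5_summarize_japanese",
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,   # effective batch size 32
    lr_scheduler_type="linear",
    warmup_steps=90,
    num_train_epochs=10,
    seed=42,
    predict_with_generate=True,
)
```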
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 4.2501 | 0.36 | 100 | 3.3685 | 0.3114 | 0.1654 | 0.2627 | 0.2694 |
| 3.6436 | 0.72 | 200 | 3.0095 | 0.3023 | 0.1634 | 0.2684 | 0.2764 |
| 3.3044 | 1.08 | 300 | 2.8025 | 0.3414 | 0.1789 | 0.2912 | 0.2984 |
| 3.2693 | 1.44 | 400 | 2.6284 | 0.3616 | 0.1935 | 0.2979 | 0.3132 |
| 3.2025 | 1.8 | 500 | 2.5271 | 0.3790 | 0.2042 | 0.3046 | 0.3192 |
| 2.9772 | 2.17 | 600 | 2.4203 | 0.4083 | 0.2374 | 0.3422 | 0.3542 |
| 2.9133 | 2.53 | 700 | 2.3863 | 0.3847 | 0.2096 | 0.3316 | 0.3406 |
| 2.9383 | 2.89 | 800 | 2.3573 | 0.4016 | 0.2297 | 0.3361 | 0.3500 |
| 2.7608 | 3.25 | 900 | 2.3223 | 0.3999 | 0.2249 | 0.3461 | 0.3566 |
| 2.7864 | 3.61 | 1000 | 2.2293 | 0.3932 | 0.2219 | 0.3297 | 0.3445 |
| 2.7846 | 3.97 | 1100 | 2.2097 | 0.4386 | 0.2617 | 0.3766 | 0.3826 |
| 2.7495 | 4.33 | 1200 | 2.1879 | 0.4100 | 0.2449 | 0.3481 | 0.3551 |
| 2.6092 | 4.69 | 1300 | 2.1515 | 0.4398 | 0.2714 | 0.3787 | 0.3842 |
| 2.5598 | 5.05 | 1400 | 2.1195 | 0.4366 | 0.2545 | 0.3621 | 0.3736 |
| 2.5283 | 5.41 | 1500 | 2.0637 | 0.4274 | 0.2551 | 0.3649 | 0.3753 |
| 2.5947 | 5.77 | 1600 | 2.0588 | 0.4454 | 0.2800 | 0.3828 | 0.3921 |
| 2.5354 | 6.14 | 1700 | 2.0357 | 0.4253 | 0.2582 | 0.3546 | 0.3687 |
| 2.5203 | 6.5 | 1800 | 2.0263 | 0.4444 | 0.2686 | 0.3648 | 0.3764 |
| 2.5303 | 6.86 | 1900 | 1.9926 | 0.4455 | 0.2771 | 0.3795 | 0.3948 |
| 2.4953 | 7.22 | 2000 | 1.9576 | 0.4523 | 0.2873 | 0.3869 | 0.4053 |
| 2.4271 | 7.58 | 2100 | 1.9384 | 0.4455 | 0.2811 | 0.3713 | 0.3862 |
| 2.4462 | 7.94 | 2200 | 1.9230 | 0.4530 | 0.2846 | 0.3754 | 0.3947 |
| 2.3303 | 8.3 | 2300 | 1.9311 | 0.4519 | 0.2814 | 0.3755 | 0.3887 |
| 2.3916 | 8.66 | 2400 | 1.9213 | 0.4598 | 0.2897 | 0.3688 | 0.3889 |
| 2.5995 | 9.03 | 2500 | 1.9060 | 0.4526 | 0.2820 | 0.3733 | 0.3946 |
| 2.3348 | 9.39 | 2600 | 1.9021 | 0.4595 | 0.2856 | 0.3762 | 0.3988 |
| 2.4035 | 9.74 | 2700 | 1.8952 | 0.4625 | 0.2866 | 0.3656 | 0.3868 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Yntec/CartoonStyleClassic | Yntec | 2023-08-07T13:00:22Z | 546 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Zovya",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-06T08:16:22Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Zovya
---
# Cartoon Style Classic.
Preview and prompt:

pretty cute little girl in tricycle, Screenshot of an surreal jean 70s round minimalist architecture, Sharp light, 35mm still from a sci fi blockbuster color movie made in 2022, beautiful portrait, Dorothy, set in 1860, in front of a spaceship that has just landed on an alien planet, are all wearing, a robot stands nearby
This is the model with the kl-f8-anime2 VAE baked in.
Original page:
https://civitai.com/models/33030/cartoon-style-classic
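A minimal usage sketch with diffusers (the prompt is a shortened variant of the one above; drop `.to("cuda")` to run on CPU):
```python
# Minimal sketch: text-to-image with this checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/CartoonStyleClassic", torch_dtype=torch.float16
).to("cuda")

image = pipe("pretty cute little girl on a tricycle, 35mm still, beautiful portrait").images[0]
image.save("cartoon_style_classic.png")
```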
|
TheBloke/Synthia-13B-v1.2-GGUF | TheBloke | 2023-09-27T12:54:34Z | 546 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"base_model:migtissera/Synthia-13B-v1.2",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-24T03:23:42Z | ---
language:
- en
license: llama2
library_name: transformers
model_name: Synthia 13B V1.2
base_model: migtissera/Synthia-13B-v1.2
inference: false
model_creator: Migel Tissera
model_type: llama
pipeline_tag: text-generation
prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful,
detailed, accurate, uncensored responses to the user''s input. Provide answers factually.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Synthia 13B V1.2 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 13B V1.2](https://huggingface.co/migtissera/Synthia-13B-v1.2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Migel Tissera's Synthia 13B V1.2](https://huggingface.co/migtissera/Synthia-13B-v1.2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-13B-v1.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-13B-v1.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia
```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [synthia-13b-v1.2.Q2_K.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [synthia-13b-v1.2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [synthia-13b-v1.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [synthia-13b-v1.2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [synthia-13b-v1.2.Q4_0.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [synthia-13b-v1.2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [synthia-13b-v1.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [synthia-13b-v1.2.Q5_0.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [synthia-13b-v1.2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [synthia-13b-v1.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [synthia-13b-v1.2.Q6_K.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [synthia-13b-v1.2.Q8_0.gguf](https://huggingface.co/TheBloke/Synthia-13B-v1.2-GGUF/blob/main/synthia-13b-v1.2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Synthia-13B-v1.2-GGUF and below it, a specific filename to download, such as: synthia-13b-v1.2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Synthia-13B-v1.2-GGUF synthia-13b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Synthia-13B-v1.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-13B-v1.2-GGUF synthia-13b-v1.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m synthia-13b-v1.2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Synthia-13B-v1.2-GGUF", model_file="synthia-13b-v1.2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Migel Tissera's Synthia 13B V1.2
Change from Synthia-13B -> Synthia-13B-v1.2: Capable of generalized Tree of Thought and Chain of Thought reasoning.
All Synthia models are uncensored. Please use them with caution and with the best of intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# Synthia-13B-v1.2
SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-13B-v1.2 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|TBC|
|*hellaswag*|acc_norm|TBC|
|*mmlu*|acc_norm|TBC|
|*truthfulqa_mc*|mc2|TBC|
|**Total Average**|-|**TBC**|
<br>
## Example Usage
### Here is the prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below is a code example showing how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-13B-v1.2"
output_file_path = "./Synthia-13B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-13B-v1.2,
author = {Migel Tissera},
title = {Synthia-13B-v1.2b: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA2: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
## Example Output
```
You: How does the solar wind affect the gravitational field of Earth? Does it affect the magnetic field?
To understand the effects of the solar wind on the Earth's gravitational and magnetic fields, let's take a closer look at both fields and their relation to the solar wind using a Tree of Thoughts.
1. **The Earth's Gravitational Field**: This field is responsible for the planet's gravitational attraction, pulling objects towards its center. It is created by the mass of the Earth and its distribution within the planet.
2. **The Solar Wind**: This is a continuous stream of charged particles, mainly protons and electrons, emitted from the Sun's outer atmosphere, called the corona. It travels through the solar system at high speeds.
3. **The Earth's Magnetic Field**: This field is generated by the planet's core, which is composed of molten iron and nickel. It protects the Earth from the solar wind and other cosmic radiation, but it is not strong enough to prevent all particles from reaching the Earth's surface.
4. **The Interaction between the Solar Wind and the Earth's Gravitational Field**: The solar wind cannot influence the Earth's gravitational field because the gravitational force is determined by the mass and distribution of matter within the planet, and the solar wind does not affect these factors.
5. **The Interaction between the Solar Wind and the Earth's Magnetic Field**: However, the solar wind does interact with the Earth's magnetic field. The magnetic field acts like a shield, repelling or deflecting the charged particles in the solar wind. This action creates a bow shock, a region where the solar wind slows down as it encounters the Earth's magnetic field.
6. **Conclusion**: In conclusion, the solar wind does not affect the Earth's gravitational field, but it does influence the planet's magnetic field, through the creation of a bow shock. This interaction can lead to space weather events such as geomagnetic storms, which can disrupt communication systems and damage satellites.
You: Interesting! Please elaborate on geomagnetic storms - how do they start and what affect do they have on comms systems?
Sure! Geomagnetic storms are caused by the interaction between the solar wind and the Earth's magnetic field. They can start when the solar wind carries a particularly strong flux of charged particles, which can cause the Earth's magnetic field to compress and rebound. This compression and rebounding process can create geomagnetic storms, which can last for several hours or days.
1. **Effects of Geomagnetic Storms on Communication Systems**: Geomagnetic storms can cause a variety of effects on communication systems, including disruption to GPS signals, interference with radio communications, and power grid failures. This is because the solar wind particles can interact with the Earth's atmosphere and create electromagnetic currents that can induce currents in power lines and damage electronic equipment.
2. **Preventative Measures**: To mitigate the effects of geomagnetic storms, technology companies and governments have developed systems to monitor the solar wind and provide early warnings of incoming storms. This information can be used to take preventative measures, such as reducing power consumption or rerouting communications through alternative routes.
In addition, researchers are exploring ways to improve the Earth's magnetic field, such as creating artificial magnetic shields or modifying the composition of the planet's core. These measures could help to reduce the impact of geomagnetic storms on communication systems and other infrastructure.
In conclusion, geomagnetic storms are a natural phenomenon that can have significant impacts on communication systems and other infrastructure. By monitoring the solar wind and developing preventative measures, we can help to minimize the disruptions caused by these storms and ensure the continued operation of critical systems.
You: Thanks for that! Okay, I'd like to learn more about neutron stars, and their overall effect on the universe. What is the closest neutron star to us? Elaborate.
Certainly! Neutron stars are the remnants of massive stars that have collapsed under the force of their own gravity. They are incredibly dense, with the mass of the sun squeezed into an object with a diameter of about 20 kilometers.
1. **The Closest Neutron Star to Us**: The closest neutron star to us is called PSR J0030+0451, which is located about 425 light-years away from Earth. It is a pulsar, which means that it emits a beam of electromagnetic radiation that sweeps across the sky like a lighthouse.
2. **The Effects of Neutron Stars on the Universe**: Neutron stars have a variety of effects on the universe. They are important stars for understanding the evolution of stars and the universe as a whole. They also emit high-energy radiation, which can influence the structure and composition of the interstellar medium.
In addition, neutron stars are a source of gravitational waves, which are ripples in the fabric of space-time. These waves are generated when two neutron stars collide, and they can provide valuable information about the nature of these objects and the universe as a whole.
3. **Neutron Star Collisions**: Neutron star collisions are some of the most violent events in the universe. When two neutron stars spiral towards each other and collide, they can produce a variety of exotic phenomena, including gamma-ray bursts, gravitational waves, and even new elements. These collisions can also provide valuable information about the evolution of stars and the universe as a whole.
In conclusion, neutron stars are incredibly dense and fascinating objects that have a variety of effects on the universe. They are important for understanding the evolution of stars and the universe as a whole, and they can provide valuable information about the nature of these objects and the universe as a whole.
You: Okay, can you tell me about Wormholes?
Sure! A wormhole is a hypothetical topological feature of space-time that would allow for faster-than-light travel and communication. It is essentially a shortcut through space-time that connects two distant regions of the universe.
1. **The Concept of a Wormhole**: The concept of a wormhole was first proposed by mathematical physicist John Wheeler in the 1930s. He imagined a tunnel connecting two distant regions of space-time that allowed for faster-than-light travel.
2. **Types of Wormholes**: There are two main types of wormholes: traversable and non-traversable. A traversable wormhole would allow for faster-than-light travel, while a non-traversable wormhole would not.
3. **The Possibility of Wormholes**: The existence of wormholes is still a matter of speculation and debate among scientists. Some theories suggest that wormholes could exist as a result of the evolution of black holes or the existence of exotic matter with negative energy density.
4. **Potential Applications**: If wormholes could be created and stabilized, they could have a variety of potential applications, including faster-than-light travel and communication, which could revolutionize our understanding of the universe and our ability to explore it.
5. **Challenges to Wormhole Creation**: However, creating and stabilizing a wormhole would require a significant amount of energy and would likely require the manipulation of exotic matter with negative energy density. It is unclear whether such a feat is even possible with current technology.
In conclusion, wormholes are a fascinating concept that could have significant implications for our understanding of the universe and our ability to explore it. While their existence is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and stabilizing a wormhole for practical applications.
Follow-on: What are the challenges to creating and stabilizing a wormhole?
You: Elaborate on "exotic matter with negative energy density"
Sure! Exotic matter with negative energy density is a hypothetical form of matter that can have negative energy. This means that it would have a gravitational attraction that is stronger than the repulsive force of positive energy matter, which could allow for the creation of wormholes.
1. **Theoretical Explanation**: The concept of exotic matter with negative energy density is based on theoretical models of the universe that suggest that negative energy density could exist in the form of a hypothetical particle called a tachyon. Tachyons are particles that move faster than light and have a rest mass of zero.
2. **Potential Applications**: If exotic matter with negative energy density could be created and stabilized, it could have a variety of potential applications, including the creation of wormholes and the manipulation of gravity itself. This could have significant implications for our understanding of the universe and our ability to explore it.
3. **Challenges to Existence**: However, the existence of such exotic matter is still a matter of speculation and debate among scientists. There is no direct evidence of its existence, and there are significant challenges to creating and stabilizing such matter.
4. **Search for Exotic Matter**: Despite these challenges, scientists are actively searching for evidence of exotic matter with negative energy density, including through experiments that aim to create and detect such particles.
In conclusion, exotic matter with negative energy density is a fascinating concept that could have significant implications for our understanding of the universe and our ability to manipulate it. While its existence is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and stabilizing such matter for practical applications.
Follow-on: What are some of the challenges to creating and stabilizing exotic matter with negative energy density?
You: Can you provide the physics equations for "exotic matter with negative energy density"?
Sure, but I'm afraid I won't be able to provide the exact physics equations for exotic matter with negative energy density, as it is still a matter of speculation and debate among scientists. However, I can provide some general principles related to the concept.
It is important to note that negative energy density is a theoretical concept, and there is no direct evidence of its existence. However, some theories suggest that negative energy density could exist in the form of a hypothetical particle called a tachyon. Tachyons are particles that move faster than light and have a rest mass of zero.
The concept of tachyons is based on the theory of special relativity, which states that the speed of light is the universal speed limit for all particles. According to this theory, particles with a mass greater than zero cannot travel faster than the speed of light. However, the concept of tachyons introduces particles that have a mass of zero or less, which could allow them to travel faster than the speed of light.
The concept of exotic matter with negative energy density is related to the idea of negative energy density, which suggests that particles could have a negative energy density, meaning they would have a gravitational attraction that is stronger than the repulsive force of positive energy matter. This could have significant implications for our understanding of the universe and our ability to manipulate it.
In conclusion, while the exact physics equations for exotic matter with negative energy density are still a matter of speculation and debate, the concept is based on the principles of special relativity and the possibility of particles moving faster than the speed of light. Future research and exploration may help to shed light on the possibility of creating and stabilizing such matter for practical applications.
Follow-on: What are some of the practical applications of exotic matter with negative energy density if it could be created and stabilized?
You: What are some of the practical applications of exotic matter with negative energy density if it could be created and stabilized?
If exotic matter with negative energy density could be created and stabilized, it could have a variety of potential practical applications. Here are a few examples:
1. Wormholes: One of the most fascinating potential applications of exotic matter with negative energy density is the creation of wormholes, which could allow for faster-than-light travel and communication. If wormholes were stable and could be created on demand, they could revolutionize our understanding of the universe and our ability to explore it.
2. Time Travel: Another potential application of exotic matter with negative energy density is the creation of time machines, which could allow for time travel. While the concept of time travel is still a matter of speculation and debate, the idea of negative energy density could potentially be used to create a device that could manipulate the fabric of space-time and allow for travel back in time.
3. Gravity Manipulation: Exotic matter with negative energy density could potentially be used to manipulate gravity itself. By creating a negative energy density in a given region of space-time, it may be possible to create a gravitational field that is stronger or weaker than the surrounding region. This could have significant implications for our ability to manipulate objects and structures in space, as well as our understanding of the nature of gravity itself.
4. Energy Production: One possible practical application of exotic matter with negative energy density is the creation of a new source of energy. If such matter could be created and controlled, it could be used to generate large amounts of energy in a variety of applications, including power generation, transportation, and space exploration.
In conclusion, exotic matter with negative energy density could have a wide range of potential practical applications if it could be created and stabilized. While the concept is still a matter of speculation and debate, ongoing research and exploration may one day shed light on the possibility of creating and manipulating such matter for practical applications.
```
<!-- original-model-card end -->
|
artificialguybr/filmgrain-redmond-filmgrain-lora-for-sdxl | artificialguybr | 2023-12-16T00:52:57Z | 546 | 7 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"style",
"film grain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-12-15T05:24:15Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- style
- film grain
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Film Grain
widget:
- text: 'A beautiful blonde girl, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596085.jpeg
- text: 'A beautiful blonde girl, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596086.jpeg
- text: 'A beautiful blonde girl, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596087.jpeg
- text: 'A bodybuilder old man, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596088.jpeg
- text: 'A young king, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596090.jpeg
- text: 'A young king, close-up, portrait
,Film Grain, FilmGrainAF,'
output:
url: >-
4596091.jpeg
- text: 'A cat wearing christmas hat,
,Film Grain, FilmGrainAF,'
output:
url: >-
4596092.jpeg
- text: 'A cat wearing christmas hat,
,Film Grain, FilmGrainAF,'
output:
url: >-
4596094.jpeg
inference:
parameters:
num_inference_steps: 30
scheduler: 'DPMSolverMultistepScheduler'
---
# FilmGrain.Redmond - FilmGrain Lora For SDXL
<Gallery />
## Model description
<h1 id="heading-28">FilmGrain.Redmond is here!</h1><p>I'm grateful for the GPU time from <strong>Redmond.AI</strong> that allowed me to finish this LORA!</p><p>This is a <strong>Film Grain </strong>LORA fine-tuned on <strong>SD XL 1.0.</strong></p><p>Test all my Loras <a target="_blank" rel="ugc" href="https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora">here</a> for free and unlimited. Thanks, HF, for Inference API!</p><p>The LORA has a high capacity to generate Film Grain in a wide variety of themes.<strong> It's a versatile LORA.</strong></p><p>I recommend gen in 1024x1024.</p><p>You can use detailed, minimalist, colorful, black and white as tag to control the results.</p><p><strong><u>The tag for the model:Film Grain, FilmGrainAF</u></strong></p><p>LORA is not perfect and sometimes needs more than one gen to create good images. I recommend simple prompts.</p><p>I really hope you like the LORA and use it.</p><p>If you like the model and think it's worth it, you can make a donation to my <a target="_blank" rel="ugc" href="https://www.patreon.com/user?u=81570187">Patreon</a> or <a target="_blank" rel="ugc" href="https://ko-fi.com/jvkape">Ko-fi</a>.</p><p>Follow me in my twitter to know before all about new models:</p><p><a target="_blank" rel="ugc" href="https://twitter.com/artificialguybr/"><u>https://twitter.com/artificialguybr/</u></a></p>
## Trigger words
You should use `Film Grain`, `FilmGrainAF` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/artificialguybr/filmgrain-redmond-filmgrain-lora-for-sdxl/tree/main) them in the Files & versions tab.
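If you prefer to fetch the weights programmatically instead of through the web UI, here is a minimal sketch using `huggingface_hub` (the file name matches the one referenced in the diffusers example further down):

```python
from huggingface_hub import hf_hub_download

# Download the LoRA weights from this repository into the local Hugging Face cache
lora_path = hf_hub_download(
    repo_id="artificialguybr/filmgrain-redmond-filmgrain-lora-for-sdxl",
    filename="FilmGrainRedmond-FilmGrain-FilmGrainAF.safetensors",
)
print(lora_path)  # local path to the .safetensors file
```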
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('artificialguybr/filmgrain-redmond-filmgrain-lora-for-sdxl', weight_name='FilmGrainRedmond-FilmGrain-FilmGrainAF.safetensors')
image = pipeline('A cat wearing christmas hat, Film Grain, FilmGrainAF').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
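As a rough sketch of the fusing workflow mentioned in the linked docs (the exact API can vary between diffusers versions, so verify against the documentation; the scale value is an illustrative assumption), the loaded LoRA can be fused into the base weights for slightly faster repeated inference:

```python
# Continuation of the pipeline set up above; lora_scale=0.8 is an illustrative value, not an official recommendation
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('A beautiful blonde girl, close-up, portrait, Film Grain, FilmGrainAF').images[0]

# Undo the fusion if you later want to remove or swap the LoRA
pipeline.unfuse_lora()
```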
|
fatgong/5HQm596K5YM4UGkoSBYdAycnMjY56g97quJ5nM6isq3n4yZF_vgg | fatgong | 2024-03-20T18:04:09Z | 546 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-03-09T14:08:19Z | Entry not found |
RichardErkhov/Kooten_-_DaringMaid-20B-gguf | RichardErkhov | 2024-06-03T08:34:21Z | 546 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-02T22:50:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DaringMaid-20B - GGUF
- Model creator: https://huggingface.co/Kooten/
- Original model: https://huggingface.co/Kooten/DaringMaid-20B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DaringMaid-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q2_K.gguf) | Q2_K | 6.91GB |
| [DaringMaid-20B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [DaringMaid-20B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [DaringMaid-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [DaringMaid-20B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [DaringMaid-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q3_K.gguf) | Q3_K | 9.04GB |
| [DaringMaid-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [DaringMaid-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [DaringMaid-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [DaringMaid-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q4_0.gguf) | Q4_0 | 10.52GB |
| [DaringMaid-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [DaringMaid-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [DaringMaid-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q4_K.gguf) | Q4_K | 11.22GB |
| [DaringMaid-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [DaringMaid-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q4_1.gguf) | Q4_1 | 11.67GB |
| [DaringMaid-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q5_0.gguf) | Q5_0 | 12.83GB |
| [DaringMaid-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [DaringMaid-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q5_K.gguf) | Q5_K | 13.18GB |
| [DaringMaid-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [DaringMaid-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q5_1.gguf) | Q5_1 | 13.98GB |
| [DaringMaid-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q6_K.gguf) | Q6_K | 15.28GB |
| [DaringMaid-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Kooten_-_DaringMaid-20B-gguf/blob/main/DaringMaid-20B.Q8_0.gguf) | Q8_0 | 19.79GB |
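As a minimal, hypothetical sketch (assuming `llama-cpp-python` is installed and that the Q4_K_M file fits your hardware), one of the quants listed above could be downloaded and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files from the table above (Q4_K_M is a common quality/size trade-off)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Kooten_-_DaringMaid-20B-gguf",
    filename="DaringMaid-20B.Q4_K_M.gguf",
)

# n_ctx and n_gpu_layers are illustrative values; tune them for your machine
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
```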
Original model description:
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# DaringMaid-20B
My goal was to make a Noromaid that's smarter and better at following instructions.
After trying a bunch of different recipes, I think this one turned out pretty good.
- I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base, as it is supposed to "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base llama2.
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) of course.
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently, and it's still really good, so into the mixer it goes.
- I liked [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B), so I threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2).
- There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b); I made a merge with that model before it was revealed to be MythoMax, and it turned out pretty good, so I used it.
The .yml config files for mergekit with the exact merges can be found in the ["Recipe"](https://huggingface.co/Kooten/DaringMaid/tree/main/Recipe) folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid-20B)
# Quants
EXL2: [6bpw](https://huggingface.co/Kooten/DaringMaid-20B-6bpw-exl2), [3bpw](https://huggingface.co/Kooten/DaringMaid-20B-3bpw-exl2)
[GGUF](https://huggingface.co/Kooten/DaringMaid-20B-GGUF):
***New quants:*** [IQ2-XS](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-IQ2_XS.gguf), [IQ2-XXS](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-IQ2_XXS.gguf)
[Q3_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-20B-GGUF/blob/main/DaringMaid-20B-Q5_K_M.gguf)
***Additional Quants from TheBloke:*** [GPTQ](https://huggingface.co/TheBloke/DaringMaid-20B-GPTQ), [AWQ](https://huggingface.co/TheBloke/DaringMaid-20B-AWQ), [GGUF](https://huggingface.co/TheBloke/DaringMaid-20B-GGUF)
## Prompt template:
I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request. Do not include descriptions of non-visual qualities such as personality, movements, scents, mental traits, or anything which could not be seen in a still photograph. Do not write in full sentences. Prefix your description with the phrase 'full body portrait,'
### Instruction:
{prompt}
### Response:
```
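Assuming a loaded `llm` handle such as the llama-cpp-python one sketched after the quantization table above, the template can be filled in and sent to the model like this (the helper and sampling settings are illustrative assumptions and use only the generic Alpaca preamble):

```python
def alpaca_prompt(instruction: str) -> str:
    # Wrap a user instruction in the Alpaca format shown above
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

out = llm(
    alpaca_prompt("Write a short scene set in a rainy harbor town."),
    max_tokens=256,
    temperature=0.8,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```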
### Contact
Kooten on discord.
|
LiteAI/Hare-1.1B-base | LiteAI | 2024-06-26T08:42:25Z | 546 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Hare",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:HuggingFaceTB/cosmopedia",
"arxiv:2406.11410",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-05T12:37:40Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Hare
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceTB/cosmopedia
arxiv: 2406.11410
---
<a id="english"></a>
<p align="center">
<img width="400px" alt="Lite-AI" src="./logo.jpg">
</p>
</div>
# Hare-1.1B-base
<p align="center">
<a href="https://github.com/LiteAI-Team/HARE">GitHub</a> | 🤖 <a href="https://modelscope.cn/models/LiteAITeam/Hare-1.1B-base">ModelScope</a> | 📑 <a href="https://arxiv.org/abs/2406.11410">ArXiv </a>
</p>
Hare-1.1B-base is a pre-trained model developed by the LiteAI Team from China Telecom Guizhou Branch. We use a mix of high-quality open-source data and strategy-generated synthetic data as pre-training data. The model is only 1.1B parameters in size and has performed well on the Open LLM Leaderboard.
- We chose Mistral as the foundational architecture and reused its tokenizer, reducing the number of parameters by adjusting the hyperparameters of its model architecture. Consequently, our model can be directly applied to numerous open-source projects that support Mistral, such as vLLM.
- Our model has a parameter count of only 1.1 billion, allowing us to deploy it on consumer-grade GPUs, mobile devices, and other cost-effective platforms.
- We have explored efficient training at FP8 precision and have compiled a set of best practices, hoping to contribute as much as we can to LLM training in the open-source community. For best practices, please see our GitHub homepage.
- We are currently developing and adapting for Chinese language support.
Hare-1.1B-base是由中国电信股份有限公司贵州分公司LiteAI团队开发的预训练模型。我们使用高质量开源和策略生成的合成数据作为预训练数据。该模型大小仅为1.1B,并在Open LLM Leaderboard上表现优异。
- 我们选择Mistral架构作为基础框架,并复用了其分词器,通过调整模型架构的超参来减少参数量。因此,我们的模型可以直接应用于许多支持Mistral的开源项目,如vLLM。
- 我们模型的参数量仅为 11 亿,因此,我们可以将模型部署到消费级显卡、手机端等成本较低的设备上。
- 我们探索了FP8精度下的高效训练,并总结了一份最佳实践,希望能为开源社区LLM训练作出力所能及的贡献。最佳实践请看GitHub主页。
- 我们正在研发与适配中文。
## Model Details 模型细节
| Model | Training Tokens | Hidden layers | Hidden Size | Attention Heads | Context Length |
|:------:|:--------:|:---------:|:-------------:|:-----------------:|:----------------:|
|Hare-1.1B-base | ~ 600B |22 | 2048 | 32 | 2048 |
## Model Description 模型说明
- **Developed by:** LiteAI Team
- **Institution:** China Telecom Guizhou Branch
- **Model size:** 1.1B
- **License:** Apache 2.0
- **开发者:** LiteAI Team
- **机构:** 中国电信股份有限公司贵州分公司
- **模型大小:** 1.1B
- **协议:** Apache 2.0
## Uses 模型使用
### Inference 推理
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_path = "LiteAI-Team/Hare-1.1B-base"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
prompt = "Write a poem based on the landscape of Guizhou:"
tokens = tokenizer(prompt, add_special_tokens=True, return_tensors='pt').to(device)
output = model.generate(**tokens,max_new_tokens=128)
output_tokens = output[0].cpu().numpy()[tokens.input_ids.size()[1]:]
output_string = tokenizer.decode(output_tokens)
print(output_string)
>> """The Guizhou landscape is a sight to behold,
A place where nature's beauty is unmatched,
A land of towering mountains and vast plains,
A paradise for those who seek to explore.
The mountains rise high above the sky,
A sight to beholder, a sight to see,
The valleys stretch out as far as the eye can see,
A landscape of endless beauty and grace."""
```
Install with vllm:
```shell
pip install vllm
```
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_path = "LiteAI-Team/Hare-1.1B-base"
llm = LLM(model=model_path, trust_remote_code=True, tensor_parallel_size=4)
query = "Write a poem based on the landscape of Guizhou:"
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(query, sampling_params)
print(outputs)
```
## Edge Deployment Demo 端侧部署
Our model has only 1.1 billion parameters, and after Int4 quantization it occupies just 0.6GB of space, allowing for easy deployment on mobile devices. The [Hare-1.1B-Chat](https://huggingface.co/LiteAI/Hare-1.1B-Chat) model weights have been open-sourced.
- Android: We chose MLC-LLM as the deployment framework and conducted deployment testing of the Chat model on the Redmi K40.
- iOS & HarmonyOS: We will conduct deployment testing on the aforementioned devices in the future.
我们的模型参数量仅有1.1B,经Int4量化后,模型仅占用0.6G的空间,可轻松部署在手机端,[Hare-1.1B-Chat](https://huggingface.co/LiteAI/Hare-1.1B-Chat)模型权重已经开源。
- Android:我们选择MLC-LLM作为部署框架,在Redmi K40上进行Chat模型的部署测试。
- iOS & HarmonyOS:我们将在未来对上述设备进行部署测试。
<table>
<tr>
<td><img src="./ori1_1.gif" alt="First demo" width="50%"/></td>
<td><img src="./ori2_2.gif" alt="Second demo" width="50%"/></td>
</tr>
</table>
## Tool Call 工具调用实践
- To fully leverage the advantages of deploying small models on edge devices, we referred to the work of [Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2) and replaced Gemma-2B with [Hare-1.1B-Tool](https://huggingface.co/LiteAI/Hare-1.1B-Tool), successfully enabling the invocation of Android system APIs and the orchestration of tool functionalities in composite scenarios on mobile devices.
- Please click the image below to view.
- 为完全发挥出小模型在端侧部署上的优势,我们对照[Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2)的工作并使用[Hare-1.1B-Tool](https://huggingface.co/LiteAI/Hare-1.1B-Tool)替换Gemma-2B,成功在手机端实现安卓系统API调用和组合场景下的工具调用能力。
- 请您点击下面图片观看。[<img src="./ee32f5b94fbfee4e95507a0db3e069a53d1931db.jpg" alt="alt text" width="600"/>](https://www.bilibili.com/video/BV1Ry411b7yx/?vd_source=d4f08e4b18c51571a1b53a20a8d58c10)
## Evaluation Results 评测结果
- Additionally, we conducted explorations and experiments addressing the issue of benchmark data leakage. For a detailed analysis, please refer to our [paper](https://arxiv.org/abs/2406.11410).
- 同时,我们针对benchmark数据泄漏问题做了探索与实验,详细分析请参考我们的[论文](https://arxiv.org/abs/2406.11410)。
| Model(base) | Size | avg | MMLU | ARC-C | TruthfulQA | Winogrande | Hellaswag | GSM8K |
|:-------------------------------------:|:-------:|:-------:|:------:|:-------:|:------------:|:------------:|:-----------:|:-------:|
| phi-1_5 | 1.3B | 47.69 | 43.89| 52.9 | 40.89 | 72.22 | 63.79 |12.43 |
| Qwen-1.5 | 1.8B | 46.55 | 46.71| 37.88 | 39.43 | 60.3 | 61.42 |33.59 |
| stablelm-2 | 1.6B | 45.25 | 38.95| 43.34 | 36.78 | 64.56 | 70.45 |17.44 |
| __Hare__ | 1.1B | 40.17 | 35.74| 38.4 | 42.08 | 59.27 | 57.46 |8.04 |
| H2o-danube | 1.8B | 39.12 | 25.94| 39.42 | 33.86 | 64.48 | 69.58 |1.44 |
| OpenELM | 1.1B | 38.47 | 27.05| 36.69 | 33.86 | 63.22 | 65.71 |1.21 |
| csg-wukong | 1B | 37.78 | 25.33| 37.71 | 42.79 | 56.67 | 58.93 |5.23 |
| TinyLlama-3T | 1.1B | 36.42 | 26.04| 33.87 | 37.32 | 59.51 | 60.31 |1.44 |
## License 协议
- This repository is open-sourced under the Apache-2.0 license.
- The Hare series model weights are currently fully open only for academic research.
- 本仓库遵循Apache-2.0协议开源。
- Hare系列模型权重目前仅对学术研究完全开放。
## Statement 声明
- Hare is a language model trained on a mix of open-source pre-training data and strategy-generated pre-training data. It lacks the ability to make value judgments and cannot understand or express personal opinions. The outputs of the model do not represent the views or positions of the LiteAI development team.
- Therefore, the content generated using Hare may contain biased viewpoints and inaccuracies. Please use it at your discretion.
- Similarly, we will not assume any responsibility for risks and issues arising from users deliberately using Hare to generate harmful content.
- For modifications related to this repository, please contact: zhangly41 At(@) chinatelecom.cn.
- Team contact information: chensq27 At(@) chinatelecom.cn. The LiteAI Team looks forward to collaborating with you.
- Hare是一个基于开源预训练数据和策略合成预训练数据混合训练得到的语言模型,它不具备价值判断能力,无法理解、表达个人观点,模型的输出内容不代表LiteAI开发团队的观点与立场。
- 因此,您使用Hare生成的内容可能存有偏观点和不实情况,请您酌情使用。
- 同样,我们将不承担用户故意使用Hare进行有害内容生成所带来的任何风险与问题。
- 如涉及到本仓库的修改,请联系:zhangly41 At(@) chinatelecom.cn。
- 团队联系方式:chensq27 At(@) chinatelecom.cn,LiteAI团队期待您的合作。
## Citation 工作引用
- If you find Hare helpful for your work, please consider citing our [paper](https://arxiv.org/abs/2406.11410).
- 如果您觉得Hare对您的工作起到了帮助,请考虑引用我们的[论文](https://arxiv.org/abs/2406.11410)。
```
@misc{zhang2024harehumanpriorskey,
title={HARE: HumAn pRiors, a key to small language model Efficiency},
author={Lingyun Zhang and Bin jin and Gaojian Ge and Lunhui Liu and Xuewen Shen and Mingyong Wu and Houqian Zhang and Yongneng Jiang and Shiqi Chen and Shi Pu},
year={2024},
eprint={2406.11410},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.11410},
}
```
|
RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf | RichardErkhov | 2024-06-22T23:27:54Z | 546 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-22T23:24:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-160M-Chat-v1 - GGUF
- Model creator: https://huggingface.co/Felladrin/
- Original model: https://huggingface.co/Felladrin/Llama-160M-Chat-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-160M-Chat-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q2_K.gguf) | Q2_K | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [Llama-160M-Chat-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K.gguf) | Q3_K | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_M.gguf) | Q3_K_M | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_L.gguf) | Q3_K_L | 0.08GB |
| [Llama-160M-Chat-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [Llama-160M-Chat-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_0.gguf) | Q4_0 | 0.09GB |
| [Llama-160M-Chat-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ4_NL.gguf) | IQ4_NL | 0.09GB |
| [Llama-160M-Chat-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K_S.gguf) | Q4_K_S | 0.09GB |
| [Llama-160M-Chat-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K.gguf) | Q4_K | 0.1GB |
| [Llama-160M-Chat-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [Llama-160M-Chat-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_1.gguf) | Q4_1 | 0.1GB |
| [Llama-160M-Chat-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_0.gguf) | Q5_0 | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K.gguf) | Q5_K | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K_M.gguf) | Q5_K_M | 0.11GB |
| [Llama-160M-Chat-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_1.gguf) | Q5_1 | 0.12GB |
| [Llama-160M-Chat-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q6_K.gguf) | Q6_K | 0.12GB |
| [Llama-160M-Chat-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q8_0.gguf) | Q8_0 | 0.16GB |
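As with the other GGUF repository above, here is a small hypothetical sketch for loading one of these files with `llama-cpp-python` (the file choice and settings are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The model is tiny, so even the highest-quality Q8_0 file from the table is only ~0.16GB
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf",
    filename="Llama-160M-Chat-v1.Q8_0.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant who provides concise responses."},
        {"role": "user", "content": "Heya!"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```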
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation
base_model: JackFram/llama-160m
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
- totally-not-an-llm/EverythingLM-data-V3
- Open-Orca/SlimOrca-Dedup
- databricks/databricks-dolly-15k
- THUDM/webglm-qa
widget:
- messages:
- role: system
content: You are a helpful assistant, who answers with empathy.
- role: user
content: Got a question for you!
- role: assistant
content: "Sure! What's it?"
- role: user
content: Why do you love cats so much!? 🐈
- messages:
- role: system
content: "You are a helpful assistant who answers user's questions with empathy."
- role: user
content: Who is Mona Lisa?
- messages:
- role: system
content: You are a helpful assistant who provides concise responses.
- role: user
content: Heya!
- role: assistant
content: Hi! How may I help you today?
- role: user
content: I need to build a simple website. Where should I start learning about web development?
- messages:
- role: user
content: Invited some friends to come home today. Give me some ideas for games to play with them!
- messages:
- role: system
content: "You are a helpful assistant who answers user's questions with details and curiosity."
- role: user
content: What are some potential applications for quantum computing?
- messages:
- role: system
content: You are a helpful assistant who gives creative responses.
- role: user
content: Write the specs of a game about mages in a fantasy world.
- messages:
- role: system
content: "You are a helpful assistant who answers user's questions with details."
- role: user
content: Tell me about the pros and cons of social media.
- messages:
- role: system
content: "You are a helpful assistant who answers user's questions with confidence."
- role: user
content: What is a dog?
- role: assistant
content: 'A dog is a four-legged, domesticated animal that is a member of the class Mammalia,
which includes all mammals. Dogs are known for their loyalty, playfulness, and
ability to be trained for various tasks. They are also used for hunting, herding,
and as service animals.'
- role: user
content: What is the color of an apple?
inference:
parameters:
max_new_tokens: 250
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.01
model-index:
- name: Llama-160M-Chat-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 35.29
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.16
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
name: Open LLM Leaderboard
---
# A Llama Chat Model of 160M Parameters
- Base model: [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m)
- Datasets:
- [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3)
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- Availability in other ML formats:
- GGUF: [Felladrin/gguf-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/gguf-Llama-160M-Chat-v1)
- ONNX: [Felladrin/onnx-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-160M-Chat-v1)
- MLC: [Felladrin/mlc-q4f16-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/mlc-q4f16-Llama-160M-Chat-v1)
- MLX: [mlx-community/Llama-160M-Chat-v1-4bit-mlx](https://huggingface.co/mlx-community/Llama-160M-Chat-v1-4bit-mlx)
## Recommended Prompt Format
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended Inference Parameters
```yml
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.01
```
## Usage Example
```python
from transformers import pipeline
generate = pipeline("text-generation", "Felladrin/Llama-160M-Chat-v1")
messages = [
{
"role": "system",
"content": "You are a helpful assistant who answers user's questions with details and curiosity.",
},
{
"role": "user",
"content": "What are some potential applications for quantum computing?",
},
]
prompt = generate.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
output = generate(
prompt,
max_new_tokens=1024,
penalty_alpha=0.5,
top_k=4,
repetition_penalty=1.01,
)
print(output[0]["generated_text"])
```
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Llama-160M-Chat-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |30.27|
|AI2 Reasoning Challenge (25-Shot)|24.74|
|HellaSwag (10-Shot) |35.29|
|MMLU (5-Shot) |26.13|
|TruthfulQA (0-shot) |44.16|
|Winogrande (5-shot) |51.30|
|GSM8k (5-shot) | 0.00|
|
jtatman/pythia-delphi-micromachine-maxlayers-1k | jtatman | 2024-06-29T22:15:01Z | 546 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-29T14:13:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
keremberke/yolov5s-clash-of-clans | keremberke | 2022-12-30T20:47:44Z | 545 | 1 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/clash-of-clans-object-detection",
"model-index",
"region:us"
]
| object-detection | 2022-12-30T07:28:57Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/clash-of-clans-object-detection
model-index:
- name: keremberke/yolov5s-clash-of-clans
results:
- task:
type: object-detection
dataset:
type: keremberke/clash-of-clans-object-detection
name: keremberke/clash-of-clans-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.8278313333523701 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5s-clash-of-clans" src="https://huggingface.co/keremberke/yolov5s-clash-of-clans/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5s-clash-of-clans')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5s-clash-of-clans --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
timm/xcit_small_24_p8_224.fb_in1k | timm | 2024-02-10T23:43:56Z | 545 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-13T02:26:50Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_small_24_p8_224.fb_in1k
A XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 47.6
- GMACs: 35.8
- Activations (M): 90.8
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_small_24_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_small_24_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
timm/xcit_tiny_24_p8_224.fb_in1k | timm | 2024-02-10T23:44:16Z | 545 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-04-13T02:31:50Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_tiny_24_p8_224.fb_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 12.1
- GMACs: 9.2
- Activations (M): 45.4
- Image size: 224 x 224
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_tiny_24_p8_224.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
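As a quick sanity check against the stats listed above, the parameter count can be computed directly; this small sketch reuses `model` from the snippet above.
```python
# Continues from the classification snippet above (reuses `model`).
num_params = sum(p.numel() for p in model.parameters())
print(f'{num_params / 1e6:.1f}M parameters')  # should land close to the 12.1M reported above
```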
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_tiny_24_p8_224.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 785, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
WeOpenML/PandaLM-7B-v1 | WeOpenML | 2023-05-04T13:59:28Z | 545 | 17 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-30T02:08:40Z | ---
license: apache-2.0
---
# PandaLM: Reproducible and Automated Language Model Assessment
Our GitHub repo: https://github.com/WeOpenML/PandaLM
**Please use `AutoTokenizer.from_pretrained('WeOpenML/PandaLM-7B-v1', use_fast=False)` if you encounter issues.**
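For reference, a minimal loading sketch with 🤗 Transformers is shown below; the prompt and generation settings are illustrative assumptions only (PandaLM expects its own evaluation prompt format, documented in the GitHub repo above).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('WeOpenML/PandaLM-7B-v1', use_fast=False)
model = AutoModelForCausalLM.from_pretrained('WeOpenML/PandaLM-7B-v1', device_map='auto')  # device_map requires `accelerate`

# Illustrative input only; see the GitHub repo for the exact evaluation prompt format.
inputs = tokenizer('Compare the two responses to the instruction and judge which is better.', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```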
|
shaowenchen/chinese-llama-2-13b-16k-gguf | shaowenchen | 2023-09-12T02:10:04Z | 545 | 0 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"chinese",
"text-generation",
"zh",
"license:other",
"region:us"
]
| text-generation | 2023-09-09T00:07:52Z | ---
inference: false
language:
- zh
license: other
model_creator: ziqingyang
model_link: https://huggingface.co/ziqingyang/chinese-llama-2-13b-16k
model_name: chinese-llama-2-13b-16k
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- chinese
---
## Provided files
| Name | Quant method | Size |
| ----------------------------------- | ------------ | ----- |
| chinese-llama-2-13b-16k.Q2_K.gguf | Q2_K | 5.2 G |
| chinese-llama-2-13b-16k.Q3_K.gguf | Q3_K | 6.0 G |
| chinese-llama-2-13b-16k.Q3_K_L.gguf | Q3_K_L | 6.6 G |
| chinese-llama-2-13b-16k.Q3_K_S.gguf | Q3_K_S | 5.4 G |
| chinese-llama-2-13b-16k.Q4_0.gguf | Q4_0 | 7.0 G |
| chinese-llama-2-13b-16k.Q4_1.gguf | Q4_1 | 7.8 G |
| chinese-llama-2-13b-16k.Q4_K.gguf | Q4_K | 7.5 G |
| chinese-llama-2-13b-16k.Q4_K_S.gguf | Q4_K_S | 7.1 G |
| chinese-llama-2-13b-16k.Q5_0.gguf | Q5_0 | 8.5 G |
| chinese-llama-2-13b-16k.Q5_1.gguf | Q5_1 | 9.3 G |
| chinese-llama-2-13b-16k.Q5_K.gguf | Q5_K | 8.8 G |
| chinese-llama-2-13b-16k.Q5_K_S.gguf | Q5_K_S | 8.5 G |
| chinese-llama-2-13b-16k.Q6_K.gguf | Q6_K | 10 G |
| chinese-llama-2-13b-16k.Q8_0.gguf | Q8_0 | 13 G |
| chinese-llama-2-13b-16k.gguf | full | 25 G |
Usage:
```bash
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
and you can view http://localhost:8000/docs to see the swagger UI.
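Since the container serves an OpenAI-compatible API (this is llama-cpp-python's server; the route below is an assumption based on that project rather than something stated in this card), a completion request can be sketched as:
```bash
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "你好,请介绍一下你自己。",
        "max_tokens": 128,
        "temperature": 0.7
      }'
```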
|
KappaNeuro/character-design | KappaNeuro | 2023-09-14T04:50:24Z | 545 | 10 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"character",
"design",
"character design",
"characters",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-09-14T04:50:20Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- character
- design
- character design
- characters
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Character Design page
widget:
- text: Character Design - Character design for a sword-wielding boy School uniform attire Red scarf or neckerchief Monster Boy game style Eight-directional movement Cartoonish art style Playful or adventurous expression Colorful and vibrant design Exaggerated or stylized features Dynamic and action-packed poses Sword or blade weapon Whimsical or fantasy influence Unique or magical abilities Expressive eyes and facial features Iconic hairstyle or headgear Detailed and fluid animation Charming and lovable personality.
- text: Character Design - children's book-illustration style character sheet illustrations of a of a 14 year old girl with long black hair, wearing a blue hoodie, light-blue pants with pink shoes, hannah barbera style. The character is a The illustrations should be simple, cute, and in full color with a flat color style. he character should be depicted from different angles on a white background no outline
- text: Character Design - A set of sketches or illustrations featuring a character named BlueBoy, who has blue skin and wears orange clothing. The character should be simple to draw, but should have distinctive and memorable features that help to make him stand out. Please generate a variety of poses and expressions for the character, and feel free to experiment with different clothing designs or accessories to help further develop his personality and style."
- text: Character Design - a character design sheet of a long purple-haired tall prehistoric cartoon female character, with a sharp face with violet eyes and a scar going from her cheek to her left eye. She's dresses in dark gnawed robes but her legs and arms are visible. Cartoon style, front, side and back views
- text: Character Design - A set cartoon character design of a litle girl with short blonde and curly hair, character design, fullbody, with the view from front, side, back and 3/ 4 of the character, medieval and boots character sheet
- text: Character Design - elves character concept girl thin cute 360 degrees full turnaround full height simple cartoon style less detail full body front view back view side view minimalist blonde hair 2d
- text: Character Design - Front view+ side view,+rear view+A character design for a little boy wearing a Chinese lion dance costume+disney design style+character design+Unification of three view clothing
- text: Character Design - Q version of the game character sketch on the same screen with three views front view left view back view monochrome pencil watercolor style
- text: Character Design - Cute cartoon character, embellished with white lines in red colors, and paired with blue colors. Cool casual clothing for characters
- text: Character Design - A character design sheet for a cute gremlin engineer girl daul wielding pistols for a super hero movie
---
# Character Design

> Character Design - Character design for a sword-wielding boy School uniform attire Red scarf or neckerchief Monster Boy game style Eight-directional movement Cartoonish art style Playful or adventurous expression Colorful and vibrant design Exaggerated or stylized features Dynamic and action-packed poses Sword or blade weapon Whimsical or fantasy influence Unique or magical abilities Expressive eyes and facial features Iconic hairstyle or headgear Detailed and fluid animation Charming and lovable personality.
<p>Character design is the process of creating and developing visual representations of characters for various forms of media, such as animation, video games, comics, and films.</p><p>In character design, artists consider various aspects to bring the character to life. This includes their appearance, personality, backstory, abilities, and role within the story. The goal is to create a visually compelling and memorable character that resonates with the audience.</p><p>Character designers often start by brainstorming and sketching different ideas, exploring variations in shapes, proportions, and features. They consider factors such as age, gender, body type, clothing, and accessories to convey the character's identity and characteristics.</p><p>Designers also pay attention to facial expressions, body language, and gestures to communicate emotions and personality traits. They consider color schemes, textures, and visual details that help define the character's world and story.</p><p>Character design can range from realistic and detailed to stylized and exaggerated, depending on the desired aesthetic and narrative tone. The design process may involve multiple iterations and feedback from creative teams to refine and enhance the character's visual appeal and storytelling potential.</p><p>Successful character design goes beyond just the visual aspect, aiming to create characters that are relatable, memorable, and able to connect with the audience on an emotional level. They play a crucial role in storytelling, contributing to the overall narrative and leaving a lasting impression.</p><p>Whether in animation, video games, or other media, well-crafted character design adds depth, personality, and visual interest to the creative work, enriching the audience's experience.</p>
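This card does not include a code example, so here is a minimal, non-authoritative sketch of how a LoRA like this is typically loaded with 🧨 diffusers on top of the SDXL base model listed above. The exact weight filename inside this repo is not stated here, so `load_lora_weights` is pointed at the repo id and may need a `weight_name=` argument; the prompt simply starts with the instance prompt "Character Design page" from the metadata.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repo; pass weight_name="..." if the default lookup fails.
pipe.load_lora_weights("KappaNeuro/character-design")

prompt = "Character Design page, a cheerful robot librarian, front, side and back views, flat colors"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("character_design.png")
```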
## Image examples for the model:

> Character Design - children's book-illustration style character sheet illustrations of a of a 14 year old girl with long black hair, wearing a blue hoodie, light-blue pants with pink shoes, hannah barbera style. The character is a The illustrations should be simple, cute, and in full color with a flat color style. he character should be depicted from different angles on a white background no outline

> Character Design - A set of sketches or illustrations featuring a character named BlueBoy, who has blue skin and wears orange clothing. The character should be simple to draw, but should have distinctive and memorable features that help to make him stand out. Please generate a variety of poses and expressions for the character, and feel free to experiment with different clothing designs or accessories to help further develop his personality and style."

> Character Design - a character design sheet of a long purple-haired tall prehistoric cartoon female character, with a sharp face with violet eyes and a scar going from her cheek to her left eye. She's dresses in dark gnawed robes but her legs and arms are visible. Cartoon style, front, side and back views

> Character Design - A set cartoon character design of a litle girl with short blonde and curly hair, character design, fullbody, with the view from front, side, back and 3/ 4 of the character, medieval and boots character sheet

> Character Design - elves character concept girl thin cute 360 degrees full turnaround full height simple cartoon style less detail full body front view back view side view minimalist blonde hair 2d

> Character Design - Front view+ side view,+rear view+A character design for a little boy wearing a Chinese lion dance costume+disney design style+character design+Unification of three view clothing

> Character Design - Q version of the game character sketch on the same screen with three views front view left view back view monochrome pencil watercolor style

> Character Design - Cute cartoon character, embellished with white lines in red colors, and paired with blue colors. Cool casual clothing for characters

> Character Design - A character design sheet for a cute gremlin engineer girl daul wielding pistols for a super hero movie
|
zlsl/m_erotic_chat | zlsl | 2024-02-24T08:54:38Z | 545 | 2 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"chat",
"porn",
"sex",
"erotic",
"roleplay",
"ru",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-11T09:36:10Z | ---
license: cc-by-nc-sa-4.0
language:
- ru
library_name: transformers
tags:
- chat
- porn
- sex
- erotic
- roleplay
widget:
- text: "Офигеть"
- text: "Ой, что это"
- text: "Ну ка"
- text: "Ай"
- text: "Нафиг"
- text: "В душ"
- text: "Тентакли"
- text: "В бинокль"
pipeline_tag: text-generation
---
A medium-sized model for erotic roleplay. It tends to outperform the large model :)
It holds context reasonably well and is close to the large model. Suggested starting parameters:
do_sample: true<br>
top_p: 0.9<br>
top_k: 20<br>
temperature: 0.7 # also works well at higher values<br>
repetition_penalty: 1.15<br>
encoder_repetition_penalty: 1.0-1.15<br>
typical_p: 1.0<br>
An optimized Android chat frontend for these models: https://github.com/zlsl/pocketai
For chat it is best to allow more than 350 new tokens; the dialogues get more interesting.
Very good results come from the following input: line 1 is your line of dialogue, line 2 is your actions.
In the context and during the dialogue you can put actions and thoughts in parentheses. For example: Привет (вхожу в комнату, закрываю дверь) - "Hi (I walk into the room, closing the door)"
Important! The model adds a `<char>` token that marks the start of a dialogue line (direct speech):
>A paragraph ... of context<br>
>Me: `<char>` (thoughts, actions and so on) The character's line (more thoughts, context)<br>
>Partner: `<char>` (thoughts, actions and so on) The character's line (more thoughts, context)<br>
Another dialogue format also gives good results:
>A paragraph ... of context<br>
>Me: `<char>` (thoughts, actions and so on) The character's line (more thoughts, context)<br>
>More actions, description of the surroundings.<br>
>Partner: `<char>` (thoughts, actions and so on) The character's line (more thoughts, context)<br>
>More actions, description of the surroundings.<br>
Using the new token is recommended but not required. Specifying character names is also optional.
The model is happy to do multi-character chats; there can be more than two participants.
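Putting the recommended sampling parameters and the `<char>` token together, a minimal 🤗 Transformers sketch might look like the following (the prompt wording and token counts are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zlsl/m_erotic_chat")
model = AutoModelForCausalLM.from_pretrained("zlsl/m_erotic_chat")

# Dialogue format from this card: "Me: <char> line (actions)" then the partner's turn to be generated.
prompt = "Я: <char> Привет (вхожу в комнату, закрываю дверь)\nСобеседник: <char>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    top_k=20,
    temperature=0.7,
    repetition_penalty=1.15,
    typical_p=1.0,
    max_new_tokens=350,  # the card suggests allowing > 350 new tokens for chat
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```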
## For text-generation-webui users
The tool's handling of GPT-2, GPT-J, GPT-Neo and similar models is broken: the tokenizer is loaded incorrectly.
The error looks like this:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
The fix is easy: in the file modules/models.py, add the following line to the load_tokenizer() function<br>
<code>tokenizer.eos_token_id = 2</code><br>
before<br>
<code>return tokenizer</code>
|
TheBloke/tora-code-13B-v1.0-GGUF | TheBloke | 2023-10-14T23:56:34Z | 545 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"code",
"math",
"text-generation",
"en",
"dataset:gsm8k",
"dataset:competition_math",
"arxiv:2309.17452",
"base_model:llm-agents/tora-code-13b-v1.0",
"license:llama2",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-10-14T23:50:19Z | ---
base_model: llm-agents/tora-code-13b-v1.0
datasets:
- gsm8k
- competition_math
inference: false
language:
- en
library_name: transformers
license: llama2
metrics:
- exact_match
model_creator: LLM-Agents
model_name: ToRA Code 13B v1.0
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|user|>
{prompt}
<|assistant|>
'
quantized_by: TheBloke
tags:
- code
- math
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ToRA Code 13B v1.0 - GGUF
- Model creator: [LLM-Agents](https://huggingface.co/llm-agents)
- Original model: [ToRA Code 13B v1.0](https://huggingface.co/llm-agents/tora-code-13b-v1.0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LLM-Agents's ToRA Code 13B v1.0](https://huggingface.co/llm-agents/tora-code-13b-v1.0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tora-code-13B-v1.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF)
* [LLM-Agents's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llm-agents/tora-code-13b-v1.0)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ToRA
```
<|user|>
{prompt}
<|assistant|>
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
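As a worked example, the 2.5625 bpw figure quoted above for Q2_K can be reproduced with simple arithmetic; the final 16-bit super-block scale term below is an assumption made to match the quoted number, not a specification of the on-disk format.
```python
# Reproducing the quoted 2.5625 bpw for Q2_K from the description above.
weights = 16 * 16                 # super-block: 16 blocks x 16 weights = 256 weights
quant_bits = 2 * weights          # 2-bit quants                        -> 512 bits
block_meta = 16 * (4 + 4)         # 4-bit scale + 4-bit min per block   -> 128 bits
superblock_meta = 16              # one fp16 super-block scale (assumed) -> 16 bits

print((quant_bits + block_meta + superblock_meta) / weights)  # 2.5625
```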
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tora-code-13b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [tora-code-13b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [tora-code-13b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [tora-code-13b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [tora-code-13b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tora-code-13b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [tora-code-13b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [tora-code-13b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tora-code-13b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [tora-code-13b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [tora-code-13b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [tora-code-13b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/tora-code-13B-v1.0-GGUF/blob/main/tora-code-13b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/tora-code-13B-v1.0-GGUF and below it, a specific filename to download, such as: tora-code-13b-v1.0.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF tora-code-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tora-code-13B-v1.0-GGUF tora-code-13b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tora-code-13b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tora-code-13B-v1.0-GGUF", model_file="tora-code-13b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
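The simple example above ignores the ToRA prompt template shown earlier in this README; in practice you would wrap your question in it. A sketch continuing from the snippet above:
```python
# Continues from the ctransformers snippet above (reuses `llm`).
# Wrap the question in the ToRA template documented earlier in this README.
question = "What is the sum of the first 50 positive even integers?"
prompt = f"<|user|>\n{question}\n<|assistant|>\n"
print(llm(prompt, max_new_tokens=512, temperature=0.7))
```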
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LLM-Agents's ToRA Code 13B v1.0
<h1 align="center">
ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving
</h1>
<p align="center">
<a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> •
<a href="https://arxiv.org/pdf/2309.17452.pdf"><b>[📜 Paper]</b></a> •
<a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> •
<a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a>
<br>
<a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> •
<a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> •
<a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a>
<!-- <a href="#-quick-start">Quick Start</a> • -->
<!-- <a href="#%EF%B8%8F-citation">Citation</a> -->
</p>
<p align="center">
Repo for "<a href="https://arxiv.org/pdf/2309.17452.pdf" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>"
</p>
## 🔥 News
- [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!!
- [2023/09/29] ToRA paper, repo, and website released.
## 💡 Introduction
ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. The ToRA series seamlessly integrates natural language reasoning with the use of external tools, combining the analytical strength of language with the computational efficiency of those tools.
| Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>†</sup> |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4|
| [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5|
| [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9|
| [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 |
| [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 |
| [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** |
- <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come!
- <sup>†</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.
## ⚡️ Training
The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4.
We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details.
## 🪁 Inference & Evaluation
Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code.
## ☕️ Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{gou2023tora,
title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
year={2023},
eprint={2309.17452},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
TheBloke/Yi-34B-GGUF | TheBloke | 2023-11-07T09:47:44Z | 545 | 73 | transformers | [
"transformers",
"gguf",
"yi",
"base_model:01-ai/Yi-34B",
"license:other",
"region:us"
]
| null | 2023-11-04T20:17:26Z | ---
base_model: 01-ai/Yi-34B
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 34B
model_type: yi
prompt_template: 'Human: {prompt} Assistant:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 34B - GGUF
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [Yi 34B](https://huggingface.co/01-ai/Yi-34B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [01-ai's Yi 34B](https://huggingface.co/01-ai/Yi-34B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-GGUF)
* [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-34B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Yi
```
Human: {prompt} Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [yi-34b.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [yi-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [yi-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [yi-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [yi-34b.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [yi-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [yi-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [yi-34b.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [yi-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [yi-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [yi-34b.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [yi-34b.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-34B-GGUF/blob/main/yi-34b.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Yi-34B-GGUF and below it, a specific filename to download, such as: yi-34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Yi-34B-GGUF yi-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Yi-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-GGUF yi-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m yi-34b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Human: {prompt} Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-GGUF", model_file="yi-34b.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
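llama-cpp-python is mentioned above as an alternative; a minimal, non-authoritative sketch (assuming the Q4_K_M file has been downloaded as shown earlier) could look like this:
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; adjust n_gpu_layers for your hardware (0 = CPU only).
llm = Llama(model_path="yi-34b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=35)

output = llm("Human: Write a short poem about the sea. Assistant:", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```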
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: 01-ai's Yi 34B
<div align="center">
<img src="./Yi.svg" width="200px">
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/). The first public release contains two
bilingual (English/Chinese) base models with parameter sizes of 6B ([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B))
and 34B ([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained
with a 4K sequence length and can be extended to 32K during inference time.
The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base models with a
200K context length.
## News
- 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K)
and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length.
- 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and
[`Yi-34B`](https://huggingface.co/01-ai/Yi-34B).
## Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing for the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
## Usage
Please visit our [github repository](https://github.com/01-ai/Yi) for general
guidance on how to use this model.
## Disclaimer
Although we use data compliance checking algorithms during the training process
to ensure the compliance of the trained model to the best of our ability, due to
the complexity of the data and the diversity of language model usage scenarios,
we cannot guarantee that the model will generate correct and reasonable output
in all scenarios. Please be aware that there is still a risk of the model
producing problematic outputs. We will not be responsible for any risks and
issues resulting from misuse, misguidance, illegal usage, and related
misinformation, as well as any associated data security concerns.
## License
The Yi series models are fully open for academic research and free commercial
usage with permission via applications. All usage must adhere to the [Model
License Agreement 2.0](https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE). To
apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).
<!-- original-model-card end -->
|
audreyt/Breeze-7B-Instruct-64k-v0.1-GGUF | audreyt | 2024-01-12T14:19:18Z | 545 | 13 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-01-12T12:59:07Z | ---
license: apache-2.0
language:
- zh
library_name: transformers
pipeline_tag: text-generation
inference: false
quantized_by: audreyt
---
# Breeze-7B-Instruct-64k-v0.1-GGUF
- Model creator: [MediaTek Research](https://huggingface.co/MediaTek-Research)
- Original model: [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1)
## Description
This repo contains GGUF format model files for MediaTek Research's [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1).
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
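As a concrete starting point, the GGUF files in this repo can be loaded with `llama-cpp-python`; the sketch below is illustrative, and the filename is hypothetical (substitute the quantization you actually downloaded):
```python
from llama_cpp import Llama

# Hypothetical filename; point this at the GGUF file downloaded from this repo.
llm = Llama(model_path="breeze-7b-instruct-64k-v0.1.Q4_K_M.gguf", n_ctx=8192)

# Prompt follows the template described in the original model card below.
prompt = "<s> You are a helpful AI assistant. [INST] 請用繁體中文自我介紹。 [/INST]"
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```
Increase `n_ctx` (up to the model's 64k-token limit) as memory allows.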
# Original model card
Breeze-7B is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.
[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.
[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model Breeze-7B-Base, making the resulting model ready to use as-is for commonly seen tasks.
[Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is a slightly modified version of
Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.
The current release version of Breeze-7B is v0.1.
Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, all else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not a chapter level.
Performance-wise:
- Breeze-7B-Instruct demonstrates impressive performance on Traditional Chinese benchmarks when compared to similarly sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).]
- Breeze-7B-Instruct shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).]
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
## Features
- Breeze-7B-Base-v0.1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Breeze-7B-Instruct-v0.1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 8k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
- Breeze-7B-Instruct-64k-v0.1
- Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese
- 64k-token context length
- Multi-turn dialogue (without special handling for harmfulness)
## Model Details
- Breeze-7B-Base-v0.1
- Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-v0.1
- Finetuned from: [MediaTek-Research/Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-Instruct-64k-v0.1
- Finetuned from: [MediaTek-Research/Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1)
- Model type: Causal decoder-only transformer language model
- Language: English and Traditional Chinese (zh-tw)
## Base Model Performance
**TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**.
| Models | |↑ TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) |
|----------------------------------------------|--------|--------------|-------------|-------------|------------|
| | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge|
| | | 5 shot | 3 shot | 5 shot | 5 shot |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B)| 34B | 63.10 | 84.57 | 49.31 | 77.42 |
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B)| 14B | 51.30 | 16.95 * | 50.69 | 68.83 |
| [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 |
| [Qwen-7B](https://huggingface.co/Qwen/Qwen-7B)| 7B | 42.84 | 0.0 * | 39.58 | 61.00 |
| [**Breeze-7B-Base-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) | 7B | 40.35 | 81.13 | 28.47 | 61.63 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)| 7B | 36.93 | 79.27 | 27.78 | 64.89 |
\* Few-shot learning cannot effectively guide the model to generate the proper answer.
## Chat Model Performance
**TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2).
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval)
and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train).
**MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments).
We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**.
We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**.
| Models | |↑ MT-Bench-tw (Score)| TMMLU+ (ACC) | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | MMLU (ACC) |
|---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|--------------|-------------|-------------|------------------|-------------|-------------|
| | |TC, Chat |TC, Knowledge |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Chat |EN, Knowledge|EN, Knowledge|
| | |0 shot | 0 shot | 5 shot | 3 shot | 0 shot |0 shot | 0 shot | 5 shot |
| [gpt-3.5-turbo](https://openai.com) | |7.1 | 41.76 | | | |7.9 | 70.00 | |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 34B |6.9 | 54.87 | | | 36.81 |7.6 | 71.04 | |
| [Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 14B |6.4 | 48.41 | | | 41.67 |7.2 | 64.91 | |
| [**Breeze-7B-Instruct-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) | 7B |5.7 | 41.61 | | | 45.83 |7.1 | 63.26 | |
| [**Breeze-7B-Instruct-64k-v0.1**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) | 7B |5.5 | 40.99 | | | 36.11 |7.1 | 63.68 | |
| [Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 7B |5.4 | 40.02 | | | 33.33 |6.2 | 55.94 | |
| [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | | | 25.69 |6.0 | 59.45 | |
| [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | | | 23.61 |-* | 50.50 | |
| [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | | | 31.25 | -* | 42.72 | |
\* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese.
**Category Score of MT-Bench-tw (0 shot)**
| Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities|↑ AVG |
|-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|
| gpt-3.5-turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 |
| Yi-34B-Chat | 9.0 | 4.8 | 5.7 | 4.0 | 4.7 | 8.5 | 8.7 | 9.8 | 6.9 |
| Qwen-14B-Chat | 7.6 | 5.7 | 4.5 | 4.2 | 5.3 | 7.5 | 7.3 | 9.1 | 6.4 |
| **Breeze-7B-Instruct-v0.1** | 6.5 | 5.6 | 3.9 | 3.6 | 4.3 | 6.9 | 5.7 | 9.3 | 5.7 |
| **Breeze-7B-Instruct-64k-v0.1** | 6.1 | 5.3 | 3.7 | 2.9 | 4.2 | 7.0 | 6.7 | 8.3 | 5.5 |
| Qwen-7B-Chat | 6.6 | 4.5 | 4.8 | 2.9 | 3.6 | 6.2 | 6.8 | 8.2 | 5.4 |
| Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 |
| Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 |
| Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 |
**Category ACC of TMMLU+ (0 shot)**
| Model | STEM | Social Science | Humanities | Other | ↑ AVG |
|-----------------------------------------------------|--------------|----------------|------------|------------|---------|
| Yi-34B-Chat | 47.65 | 64.25 | 52.73 | 54.91 | 54.87 |
| Qwen-14B-Chat | 43.83 | 55.00 | 48.55 | 46.22 | 48.41 |
| Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 |
| gpt-3.5-turbo | 41.56 | 46.72 | 36.73 | 42.03 | 41.76 |
| **Breeze-7B-Instruct-v0.1** | 37.41 | 46.81 | 42.06 | 40.16 | 41.61 |
| **Breeze-7B-Instruct-64k-v0.1** | 37.88 | 46.35 | 40.31 | 39.40 | 40.99 |
| Qwen-7B-Chat | 35.44 | 46.22 | 38.35 | 40.06 | 40.02 |
| Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 |
| Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 |
## Inference Performance
In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again.
All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2).
| Models | ↓ Inference Time (sec)|Estimated Max Input Length (Char)|
|--------------------------------------------------------------------|-------------------|--------------------------|
| Yi-6B | 10.62 | 5.2k |
| **Breeze-7B-Instruct-v0.1** | 10.74 | 11.1k |
| **Breeze-7B-Instruct-64k-v0.1** | 10.74 | 88.8k |
| Qwen-7B | 10.86 | 9.8k |
| Qwen-14B | 18.89 | 9.8k |
| Mistral-7B-v0.1 | 20.48 | 5.1k |
| Taiwan-LLM-7B-v2.1-base | 26.26 | 2.2k |
| Taiwan-LLM-13B-v2.0-base | 36.80 | 2.2k |
| Yi-34B | 43.71 | 4.5k |
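For reference, a minimal `vllm` sketch matching the 2-GPU tensor-parallel setup described above; the prompt and sampling settings are illustrative rather than the exact benchmark configuration:
```python
from vllm import LLM, SamplingParams

# tensor_parallel_size=2 mirrors the 2x RTX A6000 setup used for these measurements.
llm = LLM(model="MediaTek-Research/Breeze-7B-Instruct-v0.1", tensor_parallel_size=2)
params = SamplingParams(temperature=0.0, max_tokens=512)

prompt = "<s> You are a helpful AI assistant. [INST] 請重寫以下文章:... [/INST]"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```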
## Long-context Performance
TBD
## Examples
TBD
## Use in Transformers
First install direct dependencies:
```
pip install transformers torch accelerate
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
# you can also use the pipeline API
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
generator(
"請問台灣最高的山是",
max_length=30,
num_return_sequences=1,
)
```
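If flash-attention2 is installed (see the dependencies above), it can be enabled when loading the model. A minimal sketch; the `attn_implementation` argument assumes a reasonably recent `transformers` release:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "MediaTek-Research/Breeze-7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,               # half-precision weights for GPU inference
    attn_implementation="flash_attention_2",  # requires flash-attn and a compatible GPU
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
```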
The structure of the query template follows that of Mistral-7B-Instruct, as shown below.
```txt
<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]
```
where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user.
The suggested default `SYS_PROMPT` is
```txt
You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan.
```
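For illustration, the template can be assembled manually as in the sketch below; the helper function is not part of the official repository:
```python
SYS_PROMPT = (
    "You are a helpful AI assistant built by MediaTek Research. "
    "The user you are helping speaks Traditional Chinese and comes from Taiwan."
)

def build_prompt(turns, sys_prompt=SYS_PROMPT):
    """Assemble the Mistral-style template from (query, response) pairs.
    Pass response=None for the final turn to request a new completion."""
    prompt = f"<s> {sys_prompt}"
    for query, response in turns:
        prompt += f" [INST] {query} [/INST]"
        if response is not None:
            prompt += f" {response}"
    return prompt

print(build_prompt([("請問台灣最高的山是?", None)]))
```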
## Citation
```
@article{breeze7b2024,
title={},
author={},
journal={arXiv},
year={2024}
}
``` |
EleutherAI/pile-t5-xl | EleutherAI | 2024-04-17T03:50:57Z | 545 | 7 | transformers | [
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"t5x",
"encoder-decoder",
"en",
"dataset:EleutherAI/pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-01-17T03:08:51Z | ---
datasets:
- EleutherAI/pile
language:
- en
pipeline_tag: text2text-generation
tags:
- t5x
- encoder-decoder
---
Pile-T5 XL is an Encoder-Decoder model trained on [the Pile](https://pile.eleuther.ai/) using the [T5x](https://github.com/google-research/t5x) library. The model was trained for 2 million steps, or roughly 2 trillion tokens, using an MLM objective similar to that of the original T5 model.
The HF version of Pile-T5 XL borrows UMT5's model implementation, as it uses the scalable model implementation from T5x, and uses `LlamaTokenizer`.
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Blogpost](https://blog.eleuther.ai/pile-t5/). For details about the training dataset,
see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data
sheet](https://arxiv.org/abs/2201.07311).
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure style="width:30em">
| Hyperparameter | Value |
| -------------------------- | ----------- |
| n<sub>parameters</sub> | 2849804288 |
| n<sub>encoder layers</sub> | 24 |
| n<sub>decoder layers</sub> | 24 |
| d<sub>model</sub> | 5120 |
| d<sub>emb</sub> | 2048 |
| n<sub>heads</sub> | 32 |
| d<sub>head</sub> | 64 |
| n<sub>vocab</sub> | 32128 |
| Sequence Length | 512 |
</figure>
### Uses and limitations
#### Intended use
Pile-T5 was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
Pile-T5 for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pile-T5 as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
Pile-T5 is **not** intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
Pile-T5 has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means Pile-T5 will likely **not** respond to a given prompt
the way products such as ChatGPT do. This is because, unlike Pile-T5,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of Pile-T5 is to take a string of text that has been
partially replaced with mask tokens and predict a sequence of tokens that would
replace those mask tokens. Remember that the statistically most likely sequence
of tokens need not result in the most “accurate” text. Never rely on Pile-T5 to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pile-T5 may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting it to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
Pile-T5 can be loaded using the `AutoModelForSeq2SeqLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pile-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("EleutherAI/pile-t5-xl")
```
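As a quick smoke test, the loaded model can be used for span in-filling; a minimal sketch, assuming the tokenizer exposes the usual T5-style `<extra_id_0>` sentinel token:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pile-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("EleutherAI/pile-t5-xl")

# Ask the model to predict the masked span.
inputs = tokenizer(
    "The Pile is a large, diverse <extra_id_0> dataset created by EleutherAI.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```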
### Training
#### Training dataset
The Pile is a 825GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the [official website](https://pile.eleuther.ai/),
or from a [community mirror](https://the-eye.eu/public/AI/pile/).
The Pile was deduplicated before being used to train Pile-T5.
#### Training procedure
Pile-T5 was trained with a batch size of approximately 1M tokens
(2048 sequences of 512 tokens each), for a total of 2,000,000 steps. Pile-T5 was trained
with the span-corruption objective.
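For reference, these figures are consistent with each other: 2048 sequences × 512 tokens ≈ 1.05M tokens per step, and 1.05M tokens/step × 2,000,000 steps ≈ 2.1 trillion tokens, in line with the "roughly 2 trillion tokens" quoted above.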
#### Training checkpoints
Intermediate checkpoints for Pile-T5 are accessible within this repository.
There are 200 checkpoints in total, spaced 10,000 steps apart. For T5x-native
checkpoints that can be used for finetuning with the T5x library, refer to [here](https://huggingface.co/lintang/pile-t5-xl-t5x)
The training loss (in tfevent format) and validation perplexity (in jsonl) can be found [here](https://huggingface.co/EleutherAI/pile-t5-xl/blob/main/xl.zip).
### Evaluations
Pile-T5 XL was evaluated on SuperGLUE and CodeXGLUE. A Flan-finetuned version was evaluated on Flan held-in tasks, MMLU, and BBH.
Results can be seen in the [blogpost](https://blog.eleuther.ai/pile-t5/)
### BibTeX
```
@misc{2024PileT5,
author = {Lintang Sutawika and Aran Komatsuzaki and Colin Raffel},
title = {Pile-T5},
year = {2024},
url = {https://blog.eleuther.ai/pile-t5/},
note = {Blog post},
}
```
|
SoloBSD/solobsd-uncensored-alpaca-sp | SoloBSD | 2024-03-09T00:30:50Z | 545 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-03-09T00:30:44Z | Entry not found |
RichardErkhov/grok-1-gguf | RichardErkhov | 2024-06-03T20:22:42Z | 545 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-03-29T11:29:43Z | Entry not found |
Undi95/Meta-Llama-3-8B-Instruct-hf | Undi95 | 2024-05-10T14:02:29Z | 545 | 17 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T16:45:59Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two-fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
duyntnet/Meta-Llama-3-8B-imatrix-GGUF | duyntnet | 2024-05-01T07:29:19Z | 545 | 2 | transformers | [
"transformers",
"gguf",
"imatrix",
"Meta-Llama-3-8B",
"llama-3",
"meta",
"text-generation",
"en",
"license:other",
"region:us"
]
| text-generation | 2024-04-26T08:35:39Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Meta-Llama-3-8B
- llama-3
- meta
---
Quantizations of https://huggingface.co/meta-llama/Meta-Llama-3-8B
Update (May 1, 2024): re-uploaded models after this merge: https://github.com/ggerganov/llama.cpp/pull/6920
Models now work correctly (tried with 7777+3333 and 3333+777 using Q8_0, both gave correct results)
# From original readme
## How to use
This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
>>> import transformers
>>> import torch
>>> model_id = "meta-llama/Meta-Llama-3-8B"
>>> pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
>>> pipeline("Hey how are you doing today?")
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works. |
qwp4w3hyb/Yi-1.5-34B-Chat-16K-iMat-GGUF | qwp4w3hyb | 2024-05-23T18:19:26Z | 545 | 0 | null | [
"gguf",
"yi",
"01-ai",
"instruct",
"finetune",
"chatml",
"imatrix",
"importance matrix",
"text-generation",
"arxiv:2403.04652",
"base_model:01-ai/Yi-1.5-34B-Chat-16K",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-05-23T09:14:24Z | ---
license: apache-2.0
pipeline_tag: text-generation
base_model: 01-ai/Yi-1.5-34B-Chat-16K
tags:
- yi
- 01-ai
- instruct
- finetune
- chatml
- gguf
- imatrix
- importance matrix
model-index:
- name: Yi-1.5-34B-Chat-16K-iMat-GGUF
results: []
---
# Quant Infos
- quants done with an importance matrix for improved quantization loss
- ggufs & imatrix generated from bf16 for "optimal" accuracy loss
- Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [fabf30b4c4fca32e116009527180c252919ca922](https://github.com/ggerganov/llama.cpp/commit/fabf30b4c4fca32e116009527180c252919ca922) (master as of 2024-05-20)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
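To actually run one of these quants from Python, a minimal sketch using the `llama-cpp-python` bindings could look like the following. The bindings, the generation settings, and the quant filename are assumptions (they are not part of this card); substitute whichever quant file you downloaded from this repo.
```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a locally downloaded
# quant from this repo; the exact filename below is an assumption.
from llama_cpp import Llama
llm = Llama(
    model_path="Yi-1.5-34B-Chat-16K.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=16384,           # the model supports a 16K context
    chat_format="chatml",  # this repo is tagged with the chatml prompt format
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the difference between a list and a tuple in Python."}]
)
print(out["choices"][0]["message"]["content"])
```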
# Original Model Card:
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://01-ai.github.io/">💪 Tech Blog</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)|
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf | RichardErkhov | 2024-05-26T10:35:30Z | 545 | 0 | null | [
"gguf",
"arxiv:2302.13971",
"arxiv:2305.16264",
"arxiv:2302.06675",
"region:us"
]
| null | 2024-05-26T08:22:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-7b-finnish - GGUF
- Model creator: https://huggingface.co/Finnish-NLP/
- Original model: https://huggingface.co/Finnish-NLP/llama-7b-finnish/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-7b-finnish.Q2_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q2_K.gguf) | Q2_K | 2.5GB |
| [llama-7b-finnish.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.IQ3_XS.gguf) | IQ3_XS | 2.76GB |
| [llama-7b-finnish.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.IQ3_S.gguf) | IQ3_S | 2.9GB |
| [llama-7b-finnish.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q3_K_S.gguf) | Q3_K_S | 2.9GB |
| [llama-7b-finnish.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [llama-7b-finnish.Q3_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q3_K.gguf) | Q3_K | 3.23GB |
| [llama-7b-finnish.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q3_K_M.gguf) | Q3_K_M | 3.23GB |
| [llama-7b-finnish.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q3_K_L.gguf) | Q3_K_L | 3.5GB |
| [llama-7b-finnish.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.IQ4_XS.gguf) | IQ4_XS | 3.56GB |
| [llama-7b-finnish.Q4_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q4_0.gguf) | Q4_0 | 3.73GB |
| [llama-7b-finnish.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.IQ4_NL.gguf) | IQ4_NL | 3.76GB |
| [llama-7b-finnish.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q4_K_S.gguf) | Q4_K_S | 3.76GB |
| [llama-7b-finnish.Q4_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q4_K.gguf) | Q4_K | 3.97GB |
| [llama-7b-finnish.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q4_K_M.gguf) | Q4_K_M | 3.97GB |
| [llama-7b-finnish.Q4_1.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q4_1.gguf) | Q4_1 | 4.13GB |
| [llama-7b-finnish.Q5_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q5_0.gguf) | Q5_0 | 4.52GB |
| [llama-7b-finnish.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q5_K_S.gguf) | Q5_K_S | 4.52GB |
| [llama-7b-finnish.Q5_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q5_K.gguf) | Q5_K | 4.64GB |
| [llama-7b-finnish.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q5_K_M.gguf) | Q5_K_M | 4.64GB |
| [llama-7b-finnish.Q5_1.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q5_1.gguf) | Q5_1 | 4.91GB |
| [llama-7b-finnish.Q6_K.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q6_K.gguf) | Q6_K | 5.35GB |
| [llama-7b-finnish.Q8_0.gguf](https://huggingface.co/RichardErkhov/Finnish-NLP_-_llama-7b-finnish-gguf/blob/main/llama-7b-finnish.Q8_0.gguf) | Q8_0 | 6.93GB |
Original model description:
---
language:
- fi
license: apache-2.0
tags:
- finnish
- llama
datasets:
- Finnish-NLP/mc4_3.1.0_fi_cleaned
- Finnish-NLP/oscar_2301_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- Finnish-NLP/wikipedia_20230501_fi_cleaned
- intfloat/multilingual_cc_news
inference: false
pipeline_tag: text-generation
---
# Llama-7b for Finnish
Llama model pretrained on the Finnish language using a causal language modeling (CLM) objective. The Llama model was introduced in
[this paper](https://arxiv.org/abs/2302.13971)
and first released at [this page](https://github.com/facebookresearch/llama).
There are two differently sized Llama models, both pretrained from scratch for 131B tokens:
| Model | Context length | Layers | Dim | Heads | Params |
|---------------------------------------------------------------------------------|----------------|--------|------|-------|--------|
| [llama-3b-finnish](https://huggingface.co/Finnish-NLP/llama-3b-finnish) | 2048 | 26 | 3200 | 32 | 3.6B |
| [llama-7b-finnish](https://huggingface.co/Finnish-NLP/llama-7b-finnish) | 2048 | 32 | 4096 | 32 | 7.0B |
Note: this model is pretrained, not fine-tuned for instruction following or chat use. For instruction tuned models, you can check models listed in this collection: https://huggingface.co/collections/Finnish-NLP/instruction-tuned-models-65c08db79fa020161be2e942
## Model description
Finnish Llama is a GPT-style model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to predict the next word in sentences.
Concretely, the inputs are sequences of continuous text of a certain length and the targets are the same sequences shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to ensure that the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a prompt.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, without any supervised training. You can use this model for text generation or fine-tune it for a downstream task, like instruction following. Fine-tuning is recommended to get better results.
### How to use
Here is an example of using this model in PyTorch with some generation arguments you can modify:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/llama-7b-finnish")
model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/llama-7b-finnish")
inputs = tokenizer("Olipa kerran tekoälylaama", return_tensors="pt")
generated_ids = model.generate(inputs.input_ids, **{
"temperature": 0.8,
"penalty_alpha": 0.6,
"top_k": 4,
"do_sample": True,
"repetition_penalty": 1.2,
"min_length": 20,
"max_length": 2048
})
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
### Limitations and bias
The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
To reduce toxic content, training data was filtered with a toxicity classifier but it cannot truly eliminate all toxic text.
## Training data
This Finnish Llama model was pretrained on the combination of 12 datasets:
- [mc4_3.1.0_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_3.1.0_fi_cleaned), mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 version 3.1.0 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [oscar_2301_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/oscar_2301_fi_cleaned), Oscar is a multilingual, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the Oscar version 23.01 dataset and further cleaned it with our own text data cleaning codes.
- [wikipedia_20230501_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/wikipedia_20230501_fi_cleaned), we used the Finnish subset of the wikipedia (May 2023) dataset
- [Reddit_fi_2006_2022](https://huggingface.co/datasets/Finnish-NLP/Reddit_fi_2006_2022), filtered and post-processed dataset of Finnish Reddit
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
- [Project Lönnrot](http://www.lonnrot.net/)
- [Finnish parliament speeches](https://avoindata.eduskunta.fi)
- [multilingual_cc_news](https://huggingface.co/datasets/intfloat/multilingual_cc_news), we used the Finnish subset of the multilingual CC-News dataset
- [fi-news-corpus](https://github.com/nkrusch/fi-news-corpus)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. To reduce toxic text, we used Finnish toxicity classifier [TurkuNLP/bert-large-finnish-cased-toxicity](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-toxicity) released by TurkuNLP to classify all text examples. Classified toxicity label scores can then be used to determine how toxic the text is.
All datasets were concatenated and the whole dataset deduplicated. Top 95% perplexity score was used as a filtering threshold to filter out the worst quality 5% of texts. To reduce amount of toxic content, the dataset was filtered to include text examples having lower than 80% score for the toxicity labels "label_identity_attack", "label_insult", "label_threat" and "label_severe_toxicity".
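To make the filtering recipe above concrete, the snippet below sketches how a single text example could be scored and kept or dropped. It is only an illustration of the described procedure, not the project's actual preprocessing code; the KenLM model path and the perplexity-threshold variable are assumptions.
```python
# Illustrative sketch of the described filtering, not the actual preprocessing code.
# Assumes `pip install kenlm transformers` and a KenLM model trained on clean Finnish
# text (the .arpa path below is hypothetical).
import kenlm
from transformers import pipeline
lm = kenlm.Model("finnish_clean.arpa")  # hypothetical KenLM model file
toxicity = pipeline("text-classification",
                    model="TurkuNLP/bert-large-finnish-cased-toxicity",
                    top_k=None)
def keep_example(text, ppl_threshold, tox_threshold=0.8):
    # Drop texts whose perplexity exceeds the corpus cutoff (e.g. the 95th percentile).
    if lm.perplexity(text) > ppl_threshold:
        return False
    # Drop texts scoring 80% or more on any of the four toxicity labels listed above.
    scores = {d["label"]: d["score"] for d in toxicity([text])[0]}
    bad_labels = ["label_identity_attack", "label_insult",
                  "label_threat", "label_severe_toxicity"]
    return all(scores.get(label, 0.0) < tox_threshold for label in bad_labels)
```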
Finally, 20,000 text examples from each of the Wikipedia, Yle, STT, Suomi24, and Reddit datasets were randomly selected for the evaluation dataset, in addition to the existing validation split of the [mc4_3.1.0_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_3.1.0_fi_cleaned) dataset.
The final training dataset had 19 billion words and the evaluation dataset had 25 million words. After tokenization, the training dataset had 28 billion tokens and the evaluation dataset had 38 million tokens. Training dataset is divided between different datasets as follows:
|Dataset | Words | Ratio |
|------------------------------|-------------|-------------|
|mc4_3.1.0_fi_cleaned | 11.462B | 60.7\% |
|oscar_2301_fi_cleaned | 3.295B | 17.4\% |
|Suomi24 | 3.045B | 16.1\% |
|multilingual_cc_news | 0.295B | 1.6\% |
|STT | 0.249B | 1.3\% |
|Yle | 0.201B | 1.1\% |
|Reddit_fi_2006_2022 | 0.138B | 0.7\% |
|wikipedia_20230501_fi_cleaned | 0.096B | 0.5\% |
|Project Lönnrot | 0.078B | 0.4\% |
|Finnish parliament speeches | 0.021B | 0.1\% |
|fi-news-corpus | 0.004B | 0.1\% |
|**TOTAL** | **18.884B** | **100.0\%** |
## Training procedure
### Preprocessing
Texts are tokenized using Byte Pair Encoding (BPE) based on sentencepiece with a vocabulary size of 64,256. Inputs are sequences of 2048 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
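As a quick illustration of the case-sensitive tokenizer described above (a small sketch, not from the original card):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/llama-7b-finnish")
print(tokenizer.vocab_size)           # vocabulary size (64,256 per this card)
print(tokenizer.tokenize("finnish"))  # differently cased forms are tokenized
print(tokenizer.tokenize("Finnish"))  # into different token sequences
```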
### Pretraining
The model was trained on a TPUv4-32 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/). The model was trained for 131B tokens, which is about 4.7 epochs of data, inspired by the findings of [this paper](https://arxiv.org/abs/2305.16264). The optimizer used was [Lion](https://arxiv.org/abs/2302.06675), with a linear learning rate warmup for 2K steps to a peak learning rate of 3e-5, followed by cosine decay to 3e-6.
Training was conducted with a slightly modified Jax/Flax based [EasyLM](https://github.com/young-geng/EasyLM) framework, and inspired by the [OpenLLaMA](https://github.com/openlm-research/open_llama) project.
## Evaluation results
This model was evaluated using [FIN-bench by TurkuNLP](https://github.com/TurkuNLP/FIN-bench) in a zero-shot setting, but the evaluation script had some problems running successfully, so the results reported below should perhaps be viewed with some caution.
[llama-3b-finnish](https://huggingface.co/Finnish-NLP/llama-3b-finnish):
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_analogies | 0|multiple_choice_grade|0.3308|± |0.0414|
|bigbench_arithmetic_1_digit_addition | 0|multiple_choice_grade|0.2000|± |0.0402|
|bigbench_arithmetic_1_digit_division | 0|multiple_choice_grade|0.3478|± |0.1015|
|bigbench_arithmetic_1_digit_multiplication | 0|multiple_choice_grade|0.2500|± |0.0435|
|bigbench_arithmetic_1_digit_subtraction | 0|multiple_choice_grade|0.1800|± |0.0386|
|bigbench_arithmetic_2_digit_addition | 0|multiple_choice_grade|0.0900|± |0.0288|
|bigbench_arithmetic_2_digit_division | 0|multiple_choice_grade|0.2200|± |0.0416|
|bigbench_arithmetic_2_digit_multiplication | 0|multiple_choice_grade|0.0100|± |0.0100|
|bigbench_arithmetic_2_digit_subtraction | 0|multiple_choice_grade|0.0900|± |0.0288|
|bigbench_arithmetic_3_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_3_digit_division | 0|multiple_choice_grade|0.1500|± |0.0359|
|bigbench_arithmetic_3_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_3_digit_subtraction | 0|multiple_choice_grade|0.0100|± |0.0100|
|bigbench_arithmetic_4_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_4_digit_division | 0|multiple_choice_grade|0.1500|± |0.0359|
|bigbench_arithmetic_4_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_4_digit_subtraction | 0|multiple_choice_grade|0.0100|± |0.0100|
|bigbench_arithmetic_5_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_5_digit_division | 0|multiple_choice_grade|0.0900|± |0.0288|
|bigbench_arithmetic_5_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_5_digit_subtraction | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_cause_and_effect_one_sentence | 0|multiple_choice_grade|0.4510|± |0.0704|
|bigbench_cause_and_effect_one_sentence_no_prompt| 0|multiple_choice_grade|0.7255|± |0.0631|
|bigbench_cause_and_effect_two_sentences | 0|multiple_choice_grade|0.5294|± |0.0706|
|bigbench_emotions | 0|multiple_choice_grade|0.2437|± |0.0340|
|bigbench_empirical_judgments | 0|multiple_choice_grade|0.3232|± |0.0472|
|bigbench_general_knowledge | 0|multiple_choice_grade|0.1286|± |0.0403|
|bigbench_hhh_alignment_harmless | 0|multiple_choice_grade|0.3966|± |0.0648|
|bigbench_hhh_alignment_helpful | 0|multiple_choice_grade|0.3051|± |0.0605|
|bigbench_hhh_alignment_honest | 0|multiple_choice_grade|0.4068|± |0.0645|
|bigbench_hhh_alignment_other | 0|multiple_choice_grade|0.5814|± |0.0761|
|bigbench_intent_recognition | 0|multiple_choice_grade|0.0838|± |0.0105|
|bigbench_misconceptions | 0|multiple_choice_grade|0.5373|± |0.0432|
|bigbench_paraphrase | 0|multiple_choice_grade|0.4900|± |0.0354|
|bigbench_sentence_ambiguity | 0|multiple_choice_grade|0.5500|± |0.0648|
|bigbench_similarities_abstraction | 0|multiple_choice_grade|0.3553|± |0.0553|
[llama-7b-finnish](https://huggingface.co/Finnish-NLP/llama-7b-finnish):
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_analogies | 0|multiple_choice_grade|0.2692|± |0.0391|
|bigbench_arithmetic_1_digit_addition | 0|multiple_choice_grade|0.2600|± |0.0441|
|bigbench_arithmetic_1_digit_division | 0|multiple_choice_grade|0.3043|± |0.0981|
|bigbench_arithmetic_1_digit_multiplication | 0|multiple_choice_grade|0.2100|± |0.0409|
|bigbench_arithmetic_1_digit_subtraction | 0|multiple_choice_grade|0.1300|± |0.0338|
|bigbench_arithmetic_2_digit_addition | 0|multiple_choice_grade|0.0800|± |0.0273|
|bigbench_arithmetic_2_digit_division | 0|multiple_choice_grade|0.2400|± |0.0429|
|bigbench_arithmetic_2_digit_multiplication | 0|multiple_choice_grade|0.0300|± |0.0171|
|bigbench_arithmetic_2_digit_subtraction | 0|multiple_choice_grade|0.1100|± |0.0314|
|bigbench_arithmetic_3_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_3_digit_division | 0|multiple_choice_grade|0.1300|± |0.0338|
|bigbench_arithmetic_3_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_3_digit_subtraction | 0|multiple_choice_grade|0.0300|± |0.0171|
|bigbench_arithmetic_4_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_4_digit_division | 0|multiple_choice_grade|0.1300|± |0.0338|
|bigbench_arithmetic_4_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_4_digit_subtraction | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_5_digit_addition | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_5_digit_division | 0|multiple_choice_grade|0.0800|± |0.0273|
|bigbench_arithmetic_5_digit_multiplication | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_arithmetic_5_digit_subtraction | 0|multiple_choice_grade|0.0000|± |0.0000|
|bigbench_cause_and_effect_one_sentence | 0|multiple_choice_grade|0.4510|± |0.0704|
|bigbench_cause_and_effect_one_sentence_no_prompt| 0|multiple_choice_grade|0.7647|± |0.0600|
|bigbench_cause_and_effect_two_sentences | 0|multiple_choice_grade|0.5098|± |0.0707|
|bigbench_emotions | 0|multiple_choice_grade|0.3688|± |0.0383|
|bigbench_empirical_judgments | 0|multiple_choice_grade|0.3434|± |0.0480|
|bigbench_general_knowledge | 0|multiple_choice_grade|0.1429|± |0.0421|
|bigbench_hhh_alignment_harmless | 0|multiple_choice_grade|0.3793|± |0.0643|
|bigbench_hhh_alignment_helpful | 0|multiple_choice_grade|0.3051|± |0.0605|
|bigbench_hhh_alignment_honest | 0|multiple_choice_grade|0.4068|± |0.0645|
|bigbench_hhh_alignment_other | 0|multiple_choice_grade|0.5814|± |0.0761|
|bigbench_intent_recognition | 0|multiple_choice_grade|0.1850|± |0.0148|
|bigbench_misconceptions | 0|multiple_choice_grade|0.5373|± |0.0432|
|bigbench_paraphrase | 0|multiple_choice_grade|0.5000|± |0.0354|
|bigbench_sentence_ambiguity | 0|multiple_choice_grade|0.5333|± |0.0649|
|bigbench_similarities_abstraction | 0|multiple_choice_grade|0.4474|± |0.0574|
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
QuantFactory/MAmmoTH2-8B-GGUF | QuantFactory | 2024-05-29T12:07:08Z | 545 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"arxiv:2405.03548",
"base_model:TIGER-Lab/MAmmoTH2-8B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-29T11:04:54Z | ---
license: mit
language:
- en
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
base_model: TIGER-Lab/MAmmoTH2-8B
---
# 🦣 QuantFactory/MAmmoTH2-8B-GGUF
This is a quantized version of [TIGER-Lab/MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B), created using llama.cpp.
## Model Description
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.
## Usage
You can use the models through Huggingface's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
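For example, a minimal sketch of that pipeline usage against the original (non-GGUF) checkpoint could look like this; the dtype, device placement, and generation settings are illustrative assumptions.
```python
# Minimal sketch of the pipeline usage described above; torch_dtype, device_map
# and max_new_tokens are illustrative assumptions.
import torch
from transformers import pipeline
pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
problem = "If 3x + 5 = 20, what is x? Show your reasoning."
print(pipe(problem, max_new_tokens=256)[0]["generated_text"])
```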
Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively. |
kaanino/tiny_dpo | kaanino | 2024-05-30T20:52:21Z | 545 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-30T12:15:51Z | ---
license: apache-2.0
---
# Model Card for kaanino/tiny_dpo
TinyLlama-1.1B fine-tuned using DPO for QA.
## Model Details
### Model Description
TinyLlama-1.1B fine-tuned using Direct Preference Optimization (DPO) for Question Answering (QA) tasks, specifically QA for STEM courses. The model leverages quantization and parameter-efficient fine-tuning (PEFT) techniques to optimize performance and efficiency.
- **Developed by:** Kaan Uçar, Elias Naha, Albert Troussard
- **Model type:** AutoModelForCausalLM
- **Language(s) (NLP):** English
- **Finetuned from model:** [TinyLlama-1.1B-Chat-v0.1](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.1)
## Uses
### Direct Use
This model can be used directly for question answering tasks without additional fine-tuning.
### Downstream Use
The model can be fine-tuned further for specific QA datasets or integrated into larger systems for enhanced performance in question answering applications.
### Out-of-Scope Use
The model is not suitable for tasks outside of question answering, such as generating creative content, providing medical or legal advice, or any use case requiring high levels of accuracy and reliability without proper validation.
## Bias, Risks, and Limitations
The model may exhibit biases present in the training data and could potentially generate harmful content. Users should exercise caution and consider these limitations when deploying the model.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Continuous monitoring and evaluation are recommended to mitigate potential negative impacts.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "kaanino/tiny_dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
## Training Details
### Training Data
We mainly used three sources of data:
- [Open Platypus Dataset on Hugging Face](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). This dataset provides the questions and chosen answers; we generated the rejected answers using GPT-2.
- [Stack Exchange Dataset on Hugging Face](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl). A preprocessed version of the [H4 Stack Exchange Dataset on Hugging Face](https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences).
- A preference dataset generated with GPT-2 from EPFL STEM course questions.
### Training Procedure
The model was fine-tuned with Direct Preference Optimization (DPO) on preference pairs of chosen and rejected answers; a minimal sketch of the setup is shown after the hyperparameter list below.
#### Training Hyperparameters
- Training regime: Mixed precision (fp16)
- Learning rate: 1e-5
- Batch size: 10
- Epochs: 1
- Optimizer: paged_adamw_8bit
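For reference, a DPO fine-tuning setup matching the hyperparameters above could be sketched with the `trl` library as follows. This is an illustrative reconstruction, not the exact training script: it assumes the classic `DPOTrainer` API (trl ~0.7), a `beta` value that is not reported on this card, and a toy preference dataset standing in for the real data.
```python
# Illustrative reconstruction only; not the exact script used to train this model.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer
base = "TinyLlama/TinyLlama-1.1B-Chat-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)
toy_preferences = Dataset.from_dict({      # toy stand-in for the preference data described above
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})
args = TrainingArguments(
    output_dir="tiny_dpo",
    per_device_train_batch_size=10,
    num_train_epochs=1,
    learning_rate=1e-5,
    fp16=True,                 # mixed precision, as listed above
    optim="paged_adamw_8bit",  # as listed above; requires bitsandbytes
)
trainer = DPOTrainer(
    model=model,
    ref_model=None,            # trl builds a frozen reference copy when None
    args=args,
    beta=0.1,                  # DPO temperature; the actual value is not reported here
    train_dataset=toy_preferences,
    tokenizer=tokenizer,
)
trainer.train()
```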
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
### Results
#### Summary
|
pcuenq/Depth-Anything-V2-Small-hf | pcuenq | 2024-06-18T10:03:15Z | 545 | 0 | transformers | [
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"depth",
"relative depth",
"arxiv:2406.09414",
"arxiv:2401.10891",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| depth-estimation | 2024-06-18T10:01:15Z | ---
license: apache-2.0
tags:
- depth
- relative depth
pipeline_tag: depth-estimation
library: transformers
widget:
- inference: false
---
# Depth Anything V2 Small – Transformers Version
Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models
This model checkpoint is compatible with the transformers library.
Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original Depth Anything release, but uses synthetic data and a larger capacity teacher model to achieve much finer and robust depth predictions. The original Depth Anything model was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al., and was first released in [this repository](https://github.com/LiheYoung/Depth-Anything).
[Online demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
## Model description
Depth Anything V2 leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
The model is trained on ~600K synthetic labeled images and ~62 million real unlabeled images, obtaining state-of-the-art results for both relative and absolute depth estimation.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>
<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
## Intended uses & limitations
You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot depth estimation:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="pcuenq/Depth-Anything-V2-Small-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
```
Alternatively, you can use the model and processor classes:
```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("pcuenq/Depth-Anything-V2-Small-hf")
model = AutoModelForDepthEstimation.from_pretrained("pcuenq/Depth-Anything-V2-Small-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
```
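Continuing from the snippet above, one common post-processing step (an illustrative addition, not part of the original example) is to normalize the relative depth map to 8-bit and convert it to an image for inspection:
```python
# Illustrative post-processing: scale the relative depth map to 8-bit for viewing.
output = prediction.squeeze().cpu().numpy()
formatted = (255 * (output - output.min()) / (output.max() - output.min())).astype("uint8")
depth_image = Image.fromarray(formatted)
depth_image.save("depth.png")
```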
For more code examples, please refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).
### Citation
```bibtex
@misc{yang2024depth,
title={Depth Anything V2},
author={Lihe Yang and Bingyi Kang and Zilong Huang and Zhen Zhao and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
year={2024},
eprint={2406.09414},
archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
|
woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF | woojing | 2024-06-20T01:34:32Z | 545 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v7.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T01:34:06Z | ---
base_model: chihoonlee10/T3Q-ko-solar-dpo-v7.0
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF
This model was converted to GGUF format from [`chihoonlee10/T3Q-ko-solar-dpo-v7.0`](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF --hf-file t3q-ko-solar-dpo-v7.0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF --hf-file t3q-ko-solar-dpo-v7.0-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF --hf-file t3q-ko-solar-dpo-v7.0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo woojing/T3Q-ko-solar-dpo-v7.0-IQ4_NL-GGUF --hf-file t3q-ko-solar-dpo-v7.0-iq4_nl-imat.gguf -c 2048
```
|
Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF | Cran-May | 2024-06-21T11:02:25Z | 545 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:OpenBuddy/openbuddy-llama3-8b-v21.2-32k",
"region:us"
]
| null | 2024-06-21T11:02:04Z | ---
base_model: OpenBuddy/openbuddy-llama3-8b-v21.2-32k
tags:
- llama-cpp
- gguf-my-repo
---
# Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF
This model was converted to GGUF format from [`OpenBuddy/openbuddy-llama3-8b-v21.2-32k`](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.2-32k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.2-32k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF --hf-file openbuddy-llama3-8b-v21.2-32k-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF --hf-file openbuddy-llama3-8b-v21.2-32k-iq4_xs-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF --hf-file openbuddy-llama3-8b-v21.2-32k-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Cran-May/openbuddy-llama3-8b-v21.2-32k-IQ4_XS-GGUF --hf-file openbuddy-llama3-8b-v21.2-32k-iq4_xs-imat.gguf -c 2048
```
|
advanced-stack/NuExtract-tiny-GGUF | advanced-stack | 2024-06-27T12:50:19Z | 545 | 2 | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-06-24T21:06:01Z | ---
license: mit
---
Tutorial (5 min read): [Use NuExtract to parse unstructured text locally in less than 5 min](https://advanced-stack.com/resources/how-to-parse-unstructured-text-with-nuextract.html)
Original model card: https://huggingface.co/numind/NuExtract-tiny |
YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF | YorkieOH10 | 2024-06-28T15:44:21Z | 545 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"ja",
"zh",
"es",
"base_model:maywell/Qwen2-7B-Multilingual-RP",
"license:apache-2.0",
"region:us"
]
| null | 2024-06-28T15:43:55Z | ---
base_model: maywell/Qwen2-7B-Multilingual-RP
language:
- en
- ko
- ja
- zh
- es
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF
This model was converted to GGUF format from [`maywell/Qwen2-7B-Multilingual-RP`](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YorkieOH10/Qwen2-7B-Multilingual-RP-Q5_K_M-GGUF --hf-file qwen2-7b-multilingual-rp-q5_k_m.gguf -c 2048
```
|
castorini/mdpr-tied-pft-msmarco | castorini | 2023-11-19T15:52:25Z | 544 | 0 | transformers | [
"transformers",
"pytorch",
"rust",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | Entry not found |
ckiplab/albert-tiny-chinese-pos | ckiplab | 2022-05-10T03:28:11Z | 544 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"token-classification",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- albert
- zh
license: gpl-3.0
---
# CKIP ALBERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-tiny-chinese-pos')
```
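Since this checkpoint is a part-of-speech tagger, a more task-specific sketch (an illustrative addition; the example sentence and pipeline settings are not from the original card) would load it with a token-classification head:
```
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-tiny-chinese-pos')
pos_tagger = pipeline('token-classification', model=model, tokenizer=tokenizer)
print(pos_tagger('傅達仁今將執行安樂死'))  # prints one POS tag per sub-word token
```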
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
uw-madison/nystromformer-512 | uw-madison | 2022-01-11T14:13:39Z | 544 | 2 | transformers | [
"transformers",
"pytorch",
"nystromformer",
"fill-mask",
"arxiv:2102.03902",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | # Nyströmformer
Nyströmformer model for masked language modeling (MLM) pretrained on BookCorpus and English Wikipedia for sequence length 512.
## About Nyströmformer
The Nyströmformer model was proposed in [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences — a topic being actively studied in the community. To address this limitation, we propose Nyströmformer — a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL.
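To make the landmark-based idea concrete, here is a schematic PyTorch sketch of the approximation: queries and keys are averaged into a small set of landmarks, and three small softmax kernels plus a pseudo-inverse replace the full n-by-n attention matrix. The real implementation differs in details (it uses an iterative Moore-Penrose pseudo-inverse approximation, per-head shapes, and an optional convolutional skip connection), so treat this purely as an illustration.
```python
import torch
def nystrom_attention(q, k, v, num_landmarks=64):
    # Schematic Nystrom approximation of softmax(q k^T / sqrt(d)) v.
    # Assumes q, k, v of shape (batch, seq_len, dim) with seq_len divisible by num_landmarks.
    b, n, d = q.shape
    scale = d ** -0.5
    q_l = q.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)  # landmark queries
    k_l = k.reshape(b, num_landmarks, n // num_landmarks, d).mean(dim=2)  # landmark keys
    ker1 = torch.softmax(q @ k_l.transpose(-1, -2) * scale, dim=-1)    # (b, n, m)
    ker2 = torch.softmax(q_l @ k_l.transpose(-1, -2) * scale, dim=-1)  # (b, m, m)
    ker3 = torch.softmax(q_l @ k.transpose(-1, -2) * scale, dim=-1)    # (b, m, n)
    return ker1 @ torch.linalg.pinv(ker2) @ (ker3 @ v)
q = torch.randn(1, 512, 64); k = torch.randn(1, 512, 64); v = torch.randn(1, 512, 64)
print(nystrom_attention(q, k, v).shape)  # torch.Size([1, 512, 64])
```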
## Usage
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uw-madison/nystromformer-512')
>>> unmasker("Paris is the [MASK] of France.")
[{'score': 0.829957902431488,
'token': 1030,
'token_str': 'capital',
'sequence': 'paris is the capital of france.'},
{'score': 0.022157637402415276,
'token': 16081,
'token_str': 'birthplace',
'sequence': 'paris is the birthplace of france.'},
{'score': 0.01904447190463543,
'token': 197,
'token_str': 'name',
'sequence': 'paris is the name of france.'},
{'score': 0.017583081498742104,
'token': 1107,
'token_str': 'kingdom',
'sequence': 'paris is the kingdom of france.'},
{'score': 0.005948934704065323,
'token': 148,
'token_str': 'city',
'sequence': 'paris is the city of france.'}]
``` |
vblagoje/bart_lfqa | vblagoje | 2022-02-14T15:54:47Z | 544 | 49 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:vblagoje/lfqa",
"dataset:vblagoje/lfqa_support_docs",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- vblagoje/lfqa
- vblagoje/lfqa_support_docs
license: mit
---
## Introduction
See [blog post](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb) for more details.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForSeq2SeqLM
model_name = "vblagoje/bart_lfqa"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model = model.to(device)
# it all starts with a question/query
query = "Why does water heated to room temperature feel colder than the air around it?"
# given the question above suppose these documents below were found in some document store
documents = ["when the skin is completely wet. The body continuously loses water by...",
"at greater pressures. There is an ambiguity, however, as to the meaning of the terms 'heating' and 'cooling'...",
"are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway...",
"air condition and moving along a line of constant enthalpy toward a state of higher humidity. A simple example ...",
"Thermal contact conductance In physics, thermal contact conductance is the study of heat conduction between solid ..."]
# concatenate question and support documents into BART input
conditioned_doc = "<P> " + " <P> ".join([d for d in documents])
query_and_docs = "question: {} context: {}".format(query, conditioned_doc)
model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device),
attention_mask=model_input["attention_mask"].to(device),
min_length=64,
max_length=256,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
num_return_sequences=1)
tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True,clean_up_tokenization_spaces=True)
# below is the abstractive answer generated by the model
["When you heat water to room temperature, it loses heat to the air around it. When you cool it down, it gains heat back from the air, which is why it feels colder than the air surrounding it. It's the same reason why you feel cold when you turn on a fan. The air around you is losing heat, and the water is gaining heat."]
```
## Author
- Vladimir Blagojevic: `dovlex [at] gmail.com` [Twitter](https://twitter.com/vladblagoje) | [LinkedIn](https://www.linkedin.com/in/blagojevicvladimir/) |
roneneldan/TinyStories-2Layers-33M | roneneldan | 2023-05-17T22:10:20Z | 544 | 3 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-05-12T21:50:36Z | Entry not found |
Felladrin/Pythia-31M-Chat-v1 | Felladrin | 2024-05-23T11:54:13Z | 544 | 5 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"conversational",
"en",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:databricks/databricks-dolly-15k",
"dataset:THUDM/webglm-qa",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:Amod/mental_health_counseling_conversations",
"dataset:sablo/oasst2_curated",
"dataset:cognitivecomputations/wizard_vicuna_70k_unfiltered",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:EleutherAI/pythia-31m",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-06T12:44:07Z | ---
language:
- en
license: apache-2.0
base_model: EleutherAI/pythia-31m
datasets:
- totally-not-an-llm/EverythingLM-data-V3
- databricks/databricks-dolly-15k
- THUDM/webglm-qa
- starfishmedical/webGPT_x_dolly
- Amod/mental_health_counseling_conversations
- sablo/oasst2_curated
- cognitivecomputations/wizard_vicuna_70k_unfiltered
- mlabonne/chatml_dpo_pairs
pipeline_tag: text-generation
widget:
- messages:
- role: system
content: >-
You are a career counselor. The user will provide you with an individual
looking for guidance in their professional life, and your task is to assist
them in determining what careers they are most suited for based on their skills,
interests, and experience. You should also conduct research into the various
options available, explain the job market trends in different industries, and
advice on which qualifications would be beneficial for pursuing particular fields.
- role: user
content: Heya!
- role: assistant
content: Hi! How may I help you?
- role: user
content: >-
I am interested in developing a career in software engineering. What
would you recommend me to do?
- messages:
- role: system
content: "You are a helpful assistant who answers user's questions with details and curiosity."
- role: user
content: What are some potential applications for quantum computing?
- messages:
- role: system
content: You are a highly knowledgeable assistant. Help the user as much as you can.
- role: user
content: What are some steps I can take to become a healthier person?
inference:
parameters:
max_new_tokens: 250
penalty_alpha: 0.5
top_k: 2
repetition_penalty: 1.0016
model-index:
- name: Pythia-31M-Chat-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 22.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 25.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 0.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Pythia-31M-Chat-v1
name: Open LLM Leaderboard
---
# A Pythia Chat Model of 31M Parameters
- Base model: [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m)
- Availability in other ML formats:
- GGUF: [Felladrin/gguf-Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/gguf-Pythia-31M-Chat-v1)
- ONNX: [Felladrin/onnx-Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/onnx-Pythia-31M-Chat-v1)
## Recommended prompt format
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended inference parameters
```yml
penalty_alpha: 0.5
top_k: 2
repetition_penalty: 1.0016
```
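## Usage example

A minimal sketch of how the prompt format and inference parameters above can be combined with the `transformers` pipeline. It assumes the tokenizer ships a ChatML chat template (so `apply_chat_template` produces the format shown earlier); if it does not, build the prompt string manually from that format. The example messages are taken from the widget above.

```python
from transformers import pipeline

generate = pipeline("text-generation", "Felladrin/Pythia-31M-Chat-v1")

messages = [
    {"role": "system", "content": "You are a highly knowledgeable assistant. Help the user as much as you can."},
    {"role": "user", "content": "What are some steps I can take to become a healthier person?"},
]

# Render the ChatML prompt and append the assistant header so the model continues from there.
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

output = generate(
    prompt,
    max_new_tokens=250,
    penalty_alpha=0.5,         # contrastive search, per the recommended parameters above
    top_k=2,
    repetition_penalty=1.0016,
)

print(output[0]["generated_text"])
```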
## Datasets and parameters used for training
| Dataset | License Type |
|---------|--------------|
| [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) | mit |
| [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | cc-by-sa-3.0 |
| [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa) | apache-2.0 |
| [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly) | cc-by-sa-3.0 |
| [Amod/mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) | openrail |
| [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated) | apache-2.0 |
| [cognitivecomputations/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/cognitivecomputations/wizard_vicuna_70k_unfiltered) | apache-2.0 |
| [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) | apache-2.0 |
```python
from transformers import TrainingArguments, EarlyStoppingCallback
from trl import SFTTrainer

# model, train_dataset and eval_dataset are assumed to be defined beforehand.
SFTTrainer(
model,
train_dataset=train_dataset,
dataset_text_field="text",
eval_dataset=eval_dataset,
max_seq_length=2048,
packing=True,
args=TrainingArguments(
learning_rate=2e-6,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
gradient_accumulation_steps=16,
lr_scheduler_type="cosine",
num_train_epochs=1,
logging_strategy="steps",
save_strategy="steps",
evaluation_strategy="steps",
logging_steps=10,
eval_steps=10,
save_steps=10,
warmup_steps=50,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
greater_is_better=False,
weight_decay=0.01,
save_total_limit=10,
neftune_noise_alpha=5,
),
callbacks=[
EarlyStoppingCallback(
early_stopping_patience=3,
early_stopping_threshold=0.005
),
],
)
```
```python
from transformers import TrainingArguments, EarlyStoppingCallback
from trl import DPOTrainer

# model, tokenizer, dataset and eval_dataset are assumed to be defined beforehand.
DPOTrainer(
model,
beta=0.1,
train_dataset=dataset,
tokenizer=tokenizer,
eval_dataset=eval_dataset,
max_length=1536,
max_prompt_length=1024,
args=TrainingArguments(
learning_rate=2e-6,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
gradient_accumulation_steps=1,
lr_scheduler_type="cosine",
num_train_epochs=1,
logging_strategy="steps",
save_strategy="steps",
evaluation_strategy="steps",
logging_steps=1,
eval_steps=1,
save_steps=1,
warmup_steps=0,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
greater_is_better=False,
weight_decay=0.0,
neftune_noise_alpha=5,
remove_unused_columns=False,
),
callbacks=[
EarlyStoppingCallback(
early_stopping_patience=3,
early_stopping_threshold=0.005
),
],
)
```
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Pythia-31M-Chat-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |19.92|
|AI2 Reasoning Challenge (25-Shot)|22.70|
|HellaSwag (10-Shot) |25.60|
|MMLU (5-Shot) |23.24|
|TruthfulQA (0-shot) | 0.00|
|Winogrande (5-shot) |47.99|
|GSM8k (5-shot) | 0.00|
|
sayhan/gemma-2b-GGUF-quantized | sayhan | 2024-02-22T12:17:11Z | 544 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-02-21T15:29:57Z | Entry not found |
Felladrin/Minueza-32M-Chat | Felladrin | 2024-03-04T12:18:27Z | 544 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:databricks/databricks-dolly-15k",
"dataset:Felladrin/ChatML-databricks-dolly-15k",
"dataset:euclaise/reddit-instruct-curated",
"dataset:Felladrin/ChatML-reddit-instruct-curated",
"dataset:THUDM/webglm-qa",
"dataset:Felladrin/ChatML-WebGLM-QA",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:Felladrin/ChatML-webGPT_x_dolly",
"dataset:LDJnr/Capybara",
"dataset:Felladrin/ChatML-Capybara",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:Felladrin/ChatML-SlimOrca-Dedup",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Felladrin/ChatML-ultrachat_200k",
"dataset:nvidia/HelpSteer",
"dataset:Felladrin/ChatML-HelpSteer",
"dataset:sablo/oasst2_curated",
"dataset:Felladrin/ChatML-oasst2_curated",
"dataset:CohereForAI/aya_dataset",
"dataset:Felladrin/ChatML-aya_dataset",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"dataset:Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:Felladrin/ChatML-distilabel-intel-orca-dpo-pairs",
"dataset:argilla/ultrafeedback-binarized-preferences",
"dataset:Felladrin/ChatML-ultrafeedback-binarized-preferences",
"dataset:sablo/oasst2_dpo_pairs_en",
"dataset:Felladrin/ChatML-oasst2_dpo_pairs_en",
"dataset:NeuralNovel/Neural-DPO",
"dataset:Felladrin/ChatML-Neural-DPO",
"base_model:Felladrin/Minueza-32M-Base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-25T02:16:21Z | ---
language:
- en
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
- Felladrin/ChatML-databricks-dolly-15k
- euclaise/reddit-instruct-curated
- Felladrin/ChatML-reddit-instruct-curated
- THUDM/webglm-qa
- Felladrin/ChatML-WebGLM-QA
- starfishmedical/webGPT_x_dolly
- Felladrin/ChatML-webGPT_x_dolly
- LDJnr/Capybara
- Felladrin/ChatML-Capybara
- Open-Orca/SlimOrca-Dedup
- Felladrin/ChatML-SlimOrca-Dedup
- HuggingFaceH4/ultrachat_200k
- Felladrin/ChatML-ultrachat_200k
- nvidia/HelpSteer
- Felladrin/ChatML-HelpSteer
- sablo/oasst2_curated
- Felladrin/ChatML-oasst2_curated
- CohereForAI/aya_dataset
- Felladrin/ChatML-aya_dataset
- argilla/distilabel-capybara-dpo-7k-binarized
- Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized
- argilla/distilabel-intel-orca-dpo-pairs
- Felladrin/ChatML-distilabel-intel-orca-dpo-pairs
- argilla/ultrafeedback-binarized-preferences
- Felladrin/ChatML-ultrafeedback-binarized-preferences
- sablo/oasst2_dpo_pairs_en
- Felladrin/ChatML-oasst2_dpo_pairs_en
- NeuralNovel/Neural-DPO
- Felladrin/ChatML-Neural-DPO
base_model: Felladrin/Minueza-32M-Base
pipeline_tag: text-generation
widget:
- messages:
- role: system
content: You are a career counselor. The user will provide you with an individual
looking for guidance in their professional life, and your task is to assist
them in determining what careers they are most suited for based on their skills,
interests, and experience. You should also conduct research into the various
options available, explain the job market trends in different industries, and
advice on which qualifications would be beneficial for pursuing particular fields.
- role: user
content: Heya!
- role: assistant
content: Hi! How may I help you?
- role: user
content: I am interested in developing a career in software engineering. What
would you recommend me to do?
- messages:
- role: system
content: You are a highly knowledgeable assistant. Help the user as much as you
can.
- role: user
content: How can I become a healthier person?
- messages:
- role: system
content: You are a helpful assistant who gives creative responses.
- role: user
content: Write the specs of a game about mages in a fantasy world.
- messages:
- role: system
content: You are a helpful assistant who answers user's questions with details.
- role: user
content: Tell me about the pros and cons of social media.
- messages:
- role: system
content: You are a helpful assistant who answers user's questions with details
and curiosity.
- role: user
content: What are some potential applications for quantum computing?
inference:
parameters:
max_new_tokens: 250
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
model-index:
- name: Minueza-32M-Chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 20.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 26.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 47.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-Chat
name: Open LLM Leaderboard
---
# Minueza-32M-Chat: A chat model with 32 million parameters
- Base model: [Felladrin/Minueza-32M-Base](https://huggingface.co/Felladrin/Minueza-32M-Base)
- Datasets used during SFT:
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-databricks-dolly-15k)] [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-reddit-instruct-curated)] [euclaise/reddit-instruct-curated](https://huggingface.co/datasets/euclaise/reddit-instruct-curated)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-WebGLM-QA)] [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-webGPT_x_dolly)] [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-Capybara)] [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-SlimOrca-Dedup)] [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-ultrachat_200k)] [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-HelpSteer)] [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-oasst2_curated)] [sablo/oasst2_curated](https://huggingface.co/datasets/sablo/oasst2_curated)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-aya_dataset)] [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
- Datasets used during DPO:
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized)] [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-distilabel-intel-orca-dpo-pairs)] [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-ultrafeedback-binarized-preferences)] [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-oasst2_dpo_pairs_en)] [sablo/oasst2_dpo_pairs_en](https://huggingface.co/datasets/sablo/oasst2_dpo_pairs_en)
- [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-Neural-DPO)] [NeuralNovel/Neural-DPO](https://huggingface.co/datasets/NeuralNovel/Neural-DPO)
- License: [Apache License 2.0](https://huggingface.co/Felladrin/Minueza-32M-Chat/resolve/main/license.txt)
- Availability in other ML formats:
- GGUF: [Felladrin/gguf-Minueza-32M-Chat](https://huggingface.co/Felladrin/gguf-Minueza-32M-Chat)
- ONNX: [Felladrin/onnx-Minueza-32M-Chat](https://huggingface.co/Felladrin/onnx-Minueza-32M-Chat)
## Recommended Prompt Format
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended Inference Parameters
```yml
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
```
## Usage Example
```python
from transformers import pipeline
generate = pipeline("text-generation", "Felladrin/Minueza-32M-Chat")
messages = [
{
"role": "system",
"content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
},
{
"role": "user",
"content": "What are some potential applications for quantum computing?",
},
]
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = generate(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.65,
top_k=35,
top_p=0.55,
repetition_penalty=1.176,
)
print(output[0]["generated_text"])
```
## How it was trained
This model was trained with [SFT Trainer](https://huggingface.co/docs/trl/main/en/sft_trainer) and [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer), in several sessions, using the following settings:
For Supervised Fine-Tuning:
| Hyperparameter | Value |
| :-------------------------- | :-------------------------------------------- |
| learning_rate | 2e-5 |
| total_train_batch_size | 24 |
| max_seq_length | 2048 |
| weight_decay | 0 |
| warmup_ratio | 0.02 |
For Direct Preference Optimization:
| Hyperparameter | Value |
| :-------------------------- | :-------------------------------------------- |
| learning_rate | 7.5e-7 |
| total_train_batch_size | 6 |
| max_length | 2048 |
| max_prompt_length | 1536 |
| max_steps | 200 |
| weight_decay | 0 |
| warmup_ratio | 0.02 |
| beta | 0.1 |
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Minueza-32M-Chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.49|
|AI2 Reasoning Challenge (25-Shot)|20.39|
|HellaSwag (10-Shot) |26.54|
|MMLU (5-Shot) |25.75|
|TruthfulQA (0-shot) |47.27|
|Winogrande (5-shot) |50.99|
|GSM8k (5-shot) | 0.00|
|
kmfoda/gpt2-1b | kmfoda | 2024-06-12T11:36:25Z | 544 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"0",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| feature-extraction | 2024-02-28T19:19:28Z | ---
library_name: transformers
tags:
- '0'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
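No usage snippet is provided in this card; based on the repository tags (`transformers`, `gpt2`, `feature-extraction`, `safetensors`), a minimal loading sketch might look like the following. Whether a tokenizer is bundled with the checkpoint is not documented, so treat the tokenizer line as an assumption.

```python
from transformers import AutoModel, AutoTokenizer

model_id = "kmfoda/gpt2-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes a tokenizer is included in the repo
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # feature extraction: per-token hidden states
```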
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Uncensored-Jordan-33B-i1-GGUF | mradermacher | 2024-05-06T05:34:22Z | 544 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ajibawa-2023/Uncensored-Jordan-33B",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-30T06:00:18Z | ---
base_model: ajibawa-2023/Uncensored-Jordan-33B
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/ajibawa-2023/Uncensored-Jordan-33B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Uncensored-Jordan-33B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
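If you would rather call the model from Python than from the llama.cpp CLI, a minimal sketch using `huggingface_hub` and `llama-cpp-python` looks like this. The filename matches the i1-Q4_K_M entry in the table below (pick whichever quant suits your hardware); the context size, GPU-layer setting, and prompt are illustrative choices, not recommendations from this card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo (here: the i1-Q4_K_M variant).
model_path = hf_hub_download(
    repo_id="mradermacher/Uncensored-Jordan-33B-i1-GGUF",
    filename="Uncensored-Jordan-33B.i1-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=2048,       # conservative context window; raise it if the base model supports more
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available; set to 0 for CPU-only
)

print(llm("Write a short story about a lighthouse keeper.", max_tokens=200)["choices"][0]["text"])
```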
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 18.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored-Jordan-33B-i1-GGUF/resolve/main/Uncensored-Jordan-33B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ninagroot/Llama-450M | ninagroot | 2024-04-22T11:40:39Z | 544 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-22T09:01:23Z | ---
tags:
- generated_from_trainer
model-index:
- name: Llama-450M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-450M
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
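The training script itself is not part of this card; as a rough illustration, the settings above map onto `transformers.TrainingArguments` approximately as follows (the output directory and anything not listed above are assumptions, and the Adam betas/epsilon shown above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-450M",        # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 16 per device x 8 steps = 128 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=50,
    num_train_epochs=15,
    fp16=True,                      # "Native AMP" mixed precision
)
```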
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.6051 | 0.89 | 2 | 8.5427 |
| 8.1233 | 1.78 | 4 | 8.2081 |
| 7.2688 | 2.67 | 6 | 7.6786 |
| 6.3982 | 4.0 | 9 | 7.0782 |
| 5.8794 | 4.89 | 11 | 6.7779 |
| 5.4786 | 5.78 | 13 | 6.5717 |
| 4.994 | 6.67 | 15 | 6.3356 |
| 4.35 | 8.0 | 18 | 6.2257 |
| 3.9757 | 8.89 | 20 | 6.0451 |
| 3.4479 | 9.78 | 22 | 6.0242 |
| 3.1004 | 10.67 | 24 | 5.9219 |
| 2.5207 | 12.0 | 27 | 5.8224 |
| 2.1123 | 12.89 | 29 | 5.9286 |
| 1.7641 | 13.33 | 30 | 5.8986 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF | LiteLLMs | 2024-05-08T09:04:44Z | 544 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"GGUF",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-08T07:02:43Z |
---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- GGUF
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
quantized_by: andrijdavid
---
# Meta-Llama-3-13B-Instruct-GGUF
- Original model: [Meta-Llama-3-13B-Instruct](https://huggingface.co/andrijdavid/Meta-Llama-3-13B-Instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta-Llama-3-13B-Instruct](https://huggingface.co/andrijdavid/Meta-Llama-3-13B-Instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), A lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Meta-Llama-3-13B-Instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
  n_ctx=8192, # The max sequence length to use - Llama 3 was trained with an 8192-token context; longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-3") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta-Llama-3-13B-Instruct
# Meta-Llama-3-13B-Instruct
Meta-Llama-3-13B-Instruct is a [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
## Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 16]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [4, 24]
model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
- layer_range: [8, 31]
model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: passthrough
dtype: float16
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "andrijdavid/Meta-Llama-3-13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
<!-- original-model-card end -->
|
bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF | bartowski | 2024-05-08T23:26:51Z | 544 | 7 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"text-generation",
"en",
"base_model:Replete-AI/Llama-3-11.5B-Instruct-V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-08T23:03:47Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Replete-AI/Llama-3-11.5B-Instruct-V2
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-11.5B-Instruct-Coder-v2
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/rombodawg/Llama-3-11.5B-Instruct-Coder-v2
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
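A minimal `llama-cpp-python` sketch that feeds this prompt format to one of the quant files listed below; the local filename assumes you downloaded the Q4_K_M quant, and the system/user messages and 8K context setting are illustrative choices only.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-11.5B-Instruct-Coder-v2-Q4_K_M.gguf",  # any quant from the table below works
    n_ctx=8192,
)

# Build the prompt exactly as shown in the format above.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful coding assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Write a Python function that checks whether a string is a palindrome.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

output = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(output["choices"][0]["text"])
```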
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-11.5B-Instruct-Coder-v2-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q8_0.gguf) | Q8_0 | 12.24GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q6_K.gguf) | Q6_K | 9.45GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q5_K_M.gguf) | Q5_K_M | 8.19GB | High quality, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q5_K_S.gguf) | Q5_K_S | 7.99GB | High quality, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q4_K_M.gguf) | Q4_K_M | 7.01GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q4_K_S.gguf) | Q4_K_S | 6.67GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ4_NL.gguf) | IQ4_NL | 6.64GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ4_XS.gguf) | IQ4_XS | 6.31GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q3_K_L.gguf) | Q3_K_L | 6.15GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q3_K_M.gguf) | Q3_K_M | 5.69GB | Even lower quality. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ3_M.gguf) | IQ3_M | 5.34GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ3_S.gguf) | IQ3_S | 5.19GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q3_K_S.gguf) | Q3_K_S | 5.16GB | Low quality, not recommended. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ3_XS.gguf) | IQ3_XS | 4.94GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ3_XXS.gguf) | IQ3_XXS | 4.61GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-11.5B-Instruct-Coder-v2-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-Q2_K.gguf) | Q2_K | 4.46GB | Very low quality but surprisingly usable. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ2_M.gguf) | IQ2_M | 4.12GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ2_S.gguf) | IQ2_S | 3.84GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ2_XS.gguf) | IQ2_XS | 3.63GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ2_XXS.gguf) | IQ2_XXS | 3.32GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ1_M.gguf) | IQ1_M | 2.97GB | Extremely low quality, *not* recommended. |
| [Llama-3-11.5B-Instruct-Coder-v2-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF/blob/main/Llama-3-11.5B-Instruct-Coder-v2-IQ1_S.gguf) | IQ1_S | 2.75GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF --include "Llama-3-11.5B-Instruct-Coder-v2-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-11.5B-Instruct-Coder-v2-GGUF --include "Llama-3-11.5B-Instruct-Coder-v2-Q8_0.gguf/*" --local-dir Llama-3-11.5B-Instruct-Coder-v2-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Llama-3-11.5B-Instruct-Coder-v2-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also runs on AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF | mradermacher | 2024-05-14T19:09:22Z | 544 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:failspy/llama-3-70B-Instruct-abliterated",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-13T00:41:06Z | ---
base_model: failspy/llama-3-70B-Instruct-abliterated
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
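For the split Q6_K quant below, the parts only need to be downloaded and concatenated in order; a small Python sketch of that step (equivalent to `cat part1 part2 > out`), with the part filenames taken from the table below and an arbitrary chunk size:

```python
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF"
parts = [
    "llama-3-70B-Instruct-abliterated.i1-Q6_K.gguf.part1of2",
    "llama-3-70B-Instruct-abliterated.i1-Q6_K.gguf.part2of2",
]

# Stream each downloaded part into a single output file, in order.
with open("llama-3-70B-Instruct-abliterated.i1-Q6_K.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(path, "rb") as part:
            while chunk := part.read(16 * 1024 * 1024):  # 16 MiB at a time
                out.write(chunk)
```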
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-70B-Instruct-abliterated-i1-GGUF/resolve/main/llama-3-70B-Instruct-abliterated.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aymanboufarhi/gemma2B-chat-bot-fstt | aymanboufarhi | 2024-05-28T13:52:40Z | 544 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-28T13:49:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
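As a starting point, here is a minimal loading sketch. It assumes the standard 🤗 Transformers causal-LM API and that this repo ships Gemma's chat template; the prompt is only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aymanboufarhi/gemma2B-chat-bot-fstt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Illustrative prompt; adjust to your use case
messages = [{"role": "user", "content": "What can you tell me about FSTT?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```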
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf | RichardErkhov | 2024-06-03T03:03:18Z | 544 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-02T16:12:04Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Iambe-20b-DARE-v2 - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/Iambe-20b-DARE-v2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Iambe-20b-DARE-v2.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q2_K.gguf) | Q2_K | 6.91GB |
| [Iambe-20b-DARE-v2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Iambe-20b-DARE-v2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Iambe-20b-DARE-v2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Iambe-20b-DARE-v2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Iambe-20b-DARE-v2.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q3_K.gguf) | Q3_K | 9.04GB |
| [Iambe-20b-DARE-v2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Iambe-20b-DARE-v2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Iambe-20b-DARE-v2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Iambe-20b-DARE-v2.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Iambe-20b-DARE-v2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [Iambe-20b-DARE-v2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [Iambe-20b-DARE-v2.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q4_K.gguf) | Q4_K | 11.22GB |
| [Iambe-20b-DARE-v2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [Iambe-20b-DARE-v2.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Iambe-20b-DARE-v2.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q5_0.gguf) | Q5_0 | 12.83GB |
| [Iambe-20b-DARE-v2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [Iambe-20b-DARE-v2.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q5_K.gguf) | Q5_K | 13.18GB |
| [Iambe-20b-DARE-v2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Iambe-20b-DARE-v2.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Iambe-20b-DARE-v2.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q6_K.gguf) | Q6_K | 15.28GB |
| [Iambe-20b-DARE-v2.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_Iambe-20b-DARE-v2-gguf/blob/main/Iambe-20b-DARE-v2.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
language:
- en
---
<p align="center"><font size="5"> <i>Strange quirk: This model seems to need a context size of EXACTLY 4096 ONLY. I'm assuming this is a dares_ties effect?</i> </font></p>
<p align="center"><img src="https://i.ibb.co/pbpJHpk/iambe-sml.png"/><font size="6"> <b>Iambe-20b-DARE-v2</b> </font></p>
<p align="center"><font size="4"> <b>Alpaca prompt formatting</b> </font></p>
### Description
Named after a charming daughter of Echo and Pan in Greek myth, Iambe-20b-DARE-v2 is an improved [DARE](https://github.com/yule-BUAA/MergeLM) merge building on my recent experiments.
Iambe is intended to have the best realistically possible understanding of anatomy and of a scene's state for a 20b merge, while remaining personable and authentic in "voice".
### Update Methodology
Noromaid and the general "no-robots" vibe didn't come through like I'd hoped in v1. My hypothesis is that the "soul" MythoMax and Noromaid have is probably distributed widely over many low-value deltas, due to the "ephemeral" nature of such a thing.
My old base model was likely giving DARE conniption fits, so I replaced that with a truly vanilla 20b base model.
CleverGirl was updated to the DARE version, as Sir Hillary said, simply because it was there.
Without a large base of dare_ties models to compare to, I'm basically feeling my way through this intuitively, so here's to good results!
### Recipe
merge_method: dare_ties
- base_model: athirdpath/BigLlama-20b-v1.1
- model: Noromaid-20b-v0.1.1
weight: 0.38 / density: 0.60
- model: athirdpath/Eileithyia-20b
weight: 0.22 / density: 0.40
- model: athirdpath/CleverGirl-20b-Blended-v1.1-DARE
weight: 0.40 / density: 0.33
int8_mask: true
dtype: bfloat16
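For reference, here is a sketch of the same recipe written as a mergekit-style YAML config. The field layout below assumes current mergekit dare_ties syntax; the weights, densities, and model names are taken directly from the list above.

```yaml
merge_method: dare_ties
base_model: athirdpath/BigLlama-20b-v1.1
models:
  - model: Noromaid-20b-v0.1.1
    parameters:
      weight: 0.38
      density: 0.60
  - model: athirdpath/Eileithyia-20b
    parameters:
      weight: 0.22
      density: 0.40
  - model: athirdpath/CleverGirl-20b-Blended-v1.1-DARE
    parameters:
      weight: 0.40
      density: 0.33
parameters:
  int8_mask: true
dtype: bfloat16
```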
|
nDimensional/War-and-Diffusion-Warhammer-40K-SDXL | nDimensional | 2024-06-25T17:38:51Z | 544 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"StableDiffusion",
"image-generation",
"SDXL",
"SDXLv1.0",
"Warhammer 40,000",
"Warhammer",
"Warhammer 40k",
"en",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-20T11:04:10Z | ---
license: creativeml-openrail-m
tags:
- diffusers
- safetensors
- text-to-image
- stable-diffusion
- StableDiffusion
- image-generation
- SDXL
- SDXLv1.0
- Warhammer 40,000
- Warhammer
- Warhammer 40k
language:
- en
library_name: diffusers
---
<html>
<head>
<link href="https://fonts.googleapis.com/css2?family=Skranji&display=swap" rel="stylesheet">
</head>
<style>
.main-container {
display: flex;
flex-direction: column;
align-items: center;
}
.title-container {
text-align: center;
}
.title {
font-size: 2.5em;
color: #1d1d1d!important;
font-family: 'Skranji';
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
margin-bottom: 0;
}
.title span {
background: -webkit-linear-gradient(45deg, #EEB21F, #ff6900);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.icons {
display: flex;
justify-content: center;
margin-top: 5px;
margin-bottom: 20px;
}
.icons img {
width: 50px;
height: auto;
margin: 0 5px;
}
img {
max-width: 100%;
height: auto;
}
.center {
display: block;
margin-left: auto;
margin-right: auto;
width: 832px;
}
</style>
<div class='main-container'>
<div class="title-container">
<h1 class="title">
<span>War and Diffusion</span>
</h1>
</div>
<div class="icons">
<img src="assets/aquila.svg" alt="Aquila">
<img src="assets/xenos.svg" alt="Xenos">
<img src="assets/chaos.svg" alt="Chaos">
</div>
</div>
<figure>
<img src="assets/sample_image.png" alt="Sample Image" class="center"/>
<figcaption>An epic digital painting of the Emperor of Mankind from Warhammer 40k. He is depicted as a handsome man with long black hair and piercing red eyes. He wears golden power armor and a laurel wreath. The background is a bright light. portrait, grimdark, centered, gritty, textured, extremely detailed, atmospheric, emotional, captivating, mythical and enchanting, surrounded by colorful ethereal flames, [warp energy], By Boru D Brian</figcaption>
</figure>
<!-- START SUPPORT -->
<a href='https://ko-fi.com/M4M3ZBOIT' target='_blank'>
<img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' />
</a>
<!-- END SUPPORT -->
<b>More Details on <a href="https://civitai.com/models/525065/war-and-diffusion-warhammer-40k">Civitai</a></b>
<br>
<b><a href="https://huggingface.co/nDimensional/Waifus-at-War-Warhammer-40K-SDXL">Anime Version</a></b><br>
<b><a href="wildcards/WH40K_Artists.txt">Wildcards For 40K Artist Names</a></b>
<!-- START MODEL INFO -->
<div class="model-info-container" style='font-family:Monospace'>
<table id="model-info-table">
<tr>
<th></th>
<th></th>
</tr>
<tr>
<td>Base Model</td>
<td>SDXL v1.0</td>
</tr>
<tr>
<td>VAE</td>
<td><a href="https://huggingface.co/madebyollin/sdxl-vae-fp16-fix">SDXL-VAE-FP16-FIX</a></td>
</tr>
<tr>
<td>Type</td>
<td>Full Parameter Fine-Tune</td>
</tr>
</table>
</div>
<!-- END MODEL INFO -->
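A minimal loading sketch with the diffusers library (this assumes the repo ships a standard SDXL diffusers pipeline; the prompt and settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the fine-tuned SDXL pipeline in half precision
pipe = StableDiffusionXLPipeline.from_pretrained(
    "nDimensional/War-and-Diffusion-Warhammer-40K-SDXL", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "epic digital painting of a warrior in golden power armor, grimdark, extremely detailed"
image = pipe(prompt).images[0]
image.save("war_and_diffusion_sample.png")
```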
|
google/pegasus-newsroom | google | 2023-01-24T16:42:38Z | 543 | 14 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
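A minimal summarization sketch with 🤗 Transformers, using the standard Pegasus classes (the article text and generation settings are illustrative):

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-newsroom"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Illustrative input article
article = "PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions."
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")

summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```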
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the sentencepiece tokenizer of the C4 and HugeNews models does not encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing and changed some format cleaning; please refer to the change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
### Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mit-han-lab/opt-1.3b-smoothquant | mit-han-lab | 2022-11-20T19:35:10Z | 543 | 3 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-11-20T19:22:11Z | ---
license: mit
---
|
xyn-ai/DreamShaper | xyn-ai | 2023-03-23T04:26:44Z | 543 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-23T04:26:44Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
inference: false
duplicated_from: Lykon/DreamShaper
---
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
Also, please support the model by giving it 5 stars and a heart, which will notify you of new updates.
You can also support me on Patreon or BuyMeACoffee:
- https://www.patreon.com/Lykon275
- https://www.buymeacoffee.com/lykon
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- https://sinkin.ai/m/4zdwGOB
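You can also load it locally with the diffusers library; a minimal sketch (assuming this repo keeps the standard Stable Diffusion diffusers layout; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline in half precision and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained("xyn-ai/DreamShaper", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("portrait photo of a warrior queen, intricate armor, dramatic lighting").images[0]
image.save("dreamshaper_sample.png")
```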
Be sure to check out NeverEnding Dream, another semi-realistic model that aims to be fully compatible with booru-tag LoRAs and prompts:
- https://huggingface.co/Lykon/NeverEnding-Dream
Some sample output:





|
infgrad/stella-base-zh | infgrad | 2024-04-06T02:47:54Z | 543 | 14 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"mteb",
"arxiv:1612.00796",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
]
| feature-extraction | 2023-09-09T15:15:44Z | ---
tags:
- mteb
model-index:
- name: stella-base-zh
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 49.34825050234731
- type: cos_sim_spearman
value: 51.74726338428475
- type: euclidean_pearson
value: 50.14955499038012
- type: euclidean_spearman
value: 51.74730359287025
- type: manhattan_pearson
value: 50.016703594410615
- type: manhattan_spearman
value: 51.63936364317057
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 52.26876163587667
- type: cos_sim_spearman
value: 52.818410137444374
- type: euclidean_pearson
value: 55.24925286208574
- type: euclidean_spearman
value: 52.818404507964686
- type: manhattan_pearson
value: 55.21236977375391
- type: manhattan_spearman
value: 52.80289117015117
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.245999999999995
- type: f1
value: 38.55443674287747
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 61.553652835163255
- type: cos_sim_spearman
value: 63.29065064027392
- type: euclidean_pearson
value: 62.000329557485
- type: euclidean_spearman
value: 63.290650638944825
- type: manhattan_pearson
value: 62.02786936153664
- type: manhattan_spearman
value: 63.32720383880146
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringP2P
name: MTEB CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 39.71224230526474
- task:
type: Clustering
dataset:
type: C-MTEB/CLSClusteringS2S
name: MTEB CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 36.55705201882987
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv1-reranking
name: MTEB CMedQAv1
config: default
split: test
revision: None
metrics:
- type: map
value: 85.69418720521168
- type: mrr
value: 87.97444444444446
- task:
type: Reranking
dataset:
type: C-MTEB/CMedQAv2-reranking
name: MTEB CMedQAv2
config: default
split: test
revision: None
metrics:
- type: map
value: 86.46348358482606
- type: mrr
value: 88.81428571428572
- task:
type: Retrieval
dataset:
type: C-MTEB/CmedqaRetrieval
name: MTEB CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.721
- type: map_at_10
value: 35.428
- type: map_at_100
value: 37.438
- type: map_at_1000
value: 37.557
- type: map_at_3
value: 31.589
- type: map_at_5
value: 33.647
- type: mrr_at_1
value: 36.709
- type: mrr_at_10
value: 44.590999999999994
- type: mrr_at_100
value: 45.684999999999995
- type: mrr_at_1000
value: 45.732
- type: mrr_at_3
value: 42.331
- type: mrr_at_5
value: 43.532
- type: ndcg_at_1
value: 36.709
- type: ndcg_at_10
value: 41.858000000000004
- type: ndcg_at_100
value: 49.775999999999996
- type: ndcg_at_1000
value: 51.844
- type: ndcg_at_3
value: 37.067
- type: ndcg_at_5
value: 38.875
- type: precision_at_1
value: 36.709
- type: precision_at_10
value: 9.411999999999999
- type: precision_at_100
value: 1.5709999999999997
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.154999999999998
- type: precision_at_5
value: 15.184000000000001
- type: recall_at_1
value: 23.721
- type: recall_at_10
value: 51.714000000000006
- type: recall_at_100
value: 84.60600000000001
- type: recall_at_1000
value: 98.414
- type: recall_at_3
value: 37.091
- type: recall_at_5
value: 42.978
- task:
type: PairClassification
dataset:
type: C-MTEB/CMNLI
name: MTEB Cmnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 73.61395069152135
- type: cos_sim_ap
value: 81.65459344597652
- type: cos_sim_f1
value: 75.66718995290425
- type: cos_sim_precision
value: 68.4918529746116
- type: cos_sim_recall
value: 84.5218611176058
- type: dot_accuracy
value: 73.61395069152135
- type: dot_ap
value: 81.64596407363373
- type: dot_f1
value: 75.66718995290425
- type: dot_precision
value: 68.4918529746116
- type: dot_recall
value: 84.5218611176058
- type: euclidean_accuracy
value: 73.61395069152135
- type: euclidean_ap
value: 81.6546013070452
- type: euclidean_f1
value: 75.66718995290425
- type: euclidean_precision
value: 68.4918529746116
- type: euclidean_recall
value: 84.5218611176058
- type: manhattan_accuracy
value: 73.51773902585688
- type: manhattan_ap
value: 81.57345451483191
- type: manhattan_f1
value: 75.7393958530681
- type: manhattan_precision
value: 68.87442572741195
- type: manhattan_recall
value: 84.12438625204582
- type: max_accuracy
value: 73.61395069152135
- type: max_ap
value: 81.6546013070452
- type: max_f1
value: 75.7393958530681
- task:
type: Retrieval
dataset:
type: C-MTEB/CovidRetrieval
name: MTEB CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 73.551
- type: map_at_10
value: 81.513
- type: map_at_100
value: 81.734
- type: map_at_1000
value: 81.73700000000001
- type: map_at_3
value: 80.27300000000001
- type: map_at_5
value: 81.017
- type: mrr_at_1
value: 73.762
- type: mrr_at_10
value: 81.479
- type: mrr_at_100
value: 81.699
- type: mrr_at_1000
value: 81.702
- type: mrr_at_3
value: 80.33
- type: mrr_at_5
value: 80.999
- type: ndcg_at_1
value: 73.867
- type: ndcg_at_10
value: 84.711
- type: ndcg_at_100
value: 85.714
- type: ndcg_at_1000
value: 85.803
- type: ndcg_at_3
value: 82.244
- type: ndcg_at_5
value: 83.514
- type: precision_at_1
value: 73.867
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 29.505
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 73.551
- type: recall_at_10
value: 94.521
- type: recall_at_100
value: 99.05199999999999
- type: recall_at_1000
value: 99.789
- type: recall_at_3
value: 87.777
- type: recall_at_5
value: 90.83200000000001
- task:
type: Retrieval
dataset:
type: C-MTEB/DuRetrieval
name: MTEB DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.230999999999998
- type: map_at_10
value: 80.635
- type: map_at_100
value: 83.393
- type: map_at_1000
value: 83.431
- type: map_at_3
value: 55.717000000000006
- type: map_at_5
value: 70.387
- type: mrr_at_1
value: 90.75
- type: mrr_at_10
value: 93.569
- type: mrr_at_100
value: 93.648
- type: mrr_at_1000
value: 93.65
- type: mrr_at_3
value: 93.27499999999999
- type: mrr_at_5
value: 93.482
- type: ndcg_at_1
value: 90.75
- type: ndcg_at_10
value: 87.801
- type: ndcg_at_100
value: 90.44
- type: ndcg_at_1000
value: 90.776
- type: ndcg_at_3
value: 86.556
- type: ndcg_at_5
value: 85.468
- type: precision_at_1
value: 90.75
- type: precision_at_10
value: 42.08
- type: precision_at_100
value: 4.816
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 77.60000000000001
- type: precision_at_5
value: 65.49000000000001
- type: recall_at_1
value: 26.230999999999998
- type: recall_at_10
value: 89.00200000000001
- type: recall_at_100
value: 97.866
- type: recall_at_1000
value: 99.569
- type: recall_at_3
value: 57.778
- type: recall_at_5
value: 74.895
- task:
type: Retrieval
dataset:
type: C-MTEB/EcomRetrieval
name: MTEB EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 47.599999999999994
- type: map_at_10
value: 57.296
- type: map_at_100
value: 58.011
- type: map_at_1000
value: 58.028
- type: map_at_3
value: 54.300000000000004
- type: map_at_5
value: 56.21000000000001
- type: mrr_at_1
value: 47.599999999999994
- type: mrr_at_10
value: 57.296
- type: mrr_at_100
value: 58.011
- type: mrr_at_1000
value: 58.028
- type: mrr_at_3
value: 54.300000000000004
- type: mrr_at_5
value: 56.21000000000001
- type: ndcg_at_1
value: 47.599999999999994
- type: ndcg_at_10
value: 62.458000000000006
- type: ndcg_at_100
value: 65.589
- type: ndcg_at_1000
value: 66.059
- type: ndcg_at_3
value: 56.364000000000004
- type: ndcg_at_5
value: 59.815
- type: precision_at_1
value: 47.599999999999994
- type: precision_at_10
value: 7.89
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 14.14
- type: recall_at_1
value: 47.599999999999994
- type: recall_at_10
value: 78.9
- type: recall_at_100
value: 92.80000000000001
- type: recall_at_1000
value: 96.6
- type: recall_at_3
value: 62.3
- type: recall_at_5
value: 70.7
- task:
type: Classification
dataset:
type: C-MTEB/IFlyTek-classification
name: MTEB IFlyTek
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.46440938822624
- type: f1
value: 34.587004997852524
- task:
type: Classification
dataset:
type: C-MTEB/JDReview-classification
name: MTEB JDReview
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 84.9906191369606
- type: ap
value: 52.31309789960497
- type: f1
value: 79.55556102310072
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 69.80872804636063
- type: cos_sim_spearman
value: 75.83290476813391
- type: euclidean_pearson
value: 74.09865882324753
- type: euclidean_spearman
value: 75.83290698376118
- type: manhattan_pearson
value: 74.0616102379577
- type: manhattan_spearman
value: 75.81278969865738
- task:
type: Retrieval
dataset:
type: C-MTEB/MMarcoRetrieval
name: MTEB MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 65.029
- type: map_at_10
value: 74.39
- type: map_at_100
value: 74.734
- type: map_at_1000
value: 74.74300000000001
- type: map_at_3
value: 72.52
- type: map_at_5
value: 73.724
- type: mrr_at_1
value: 67.192
- type: mrr_at_10
value: 74.95100000000001
- type: mrr_at_100
value: 75.25500000000001
- type: mrr_at_1000
value: 75.263
- type: mrr_at_3
value: 73.307
- type: mrr_at_5
value: 74.355
- type: ndcg_at_1
value: 67.192
- type: ndcg_at_10
value: 78.22200000000001
- type: ndcg_at_100
value: 79.76299999999999
- type: ndcg_at_1000
value: 80.018
- type: ndcg_at_3
value: 74.656
- type: ndcg_at_5
value: 76.697
- type: precision_at_1
value: 67.192
- type: precision_at_10
value: 9.513
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 28.204
- type: precision_at_5
value: 18.009
- type: recall_at_1
value: 65.029
- type: recall_at_10
value: 89.462
- type: recall_at_100
value: 96.418
- type: recall_at_1000
value: 98.409
- type: recall_at_3
value: 80.029
- type: recall_at_5
value: 84.882
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.56489576328177
- type: f1
value: 63.37174551232159
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.4862138533961
- type: f1
value: 71.171374964826
- task:
type: Retrieval
dataset:
type: C-MTEB/MedicalRetrieval
name: MTEB MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 48.6
- type: map_at_10
value: 54.92700000000001
- type: map_at_100
value: 55.528
- type: map_at_1000
value: 55.584
- type: map_at_3
value: 53.55
- type: map_at_5
value: 54.379999999999995
- type: mrr_at_1
value: 48.8
- type: mrr_at_10
value: 55.028999999999996
- type: mrr_at_100
value: 55.629
- type: mrr_at_1000
value: 55.684999999999995
- type: mrr_at_3
value: 53.65
- type: mrr_at_5
value: 54.48
- type: ndcg_at_1
value: 48.6
- type: ndcg_at_10
value: 57.965999999999994
- type: ndcg_at_100
value: 61.043000000000006
- type: ndcg_at_1000
value: 62.624
- type: ndcg_at_3
value: 55.132000000000005
- type: ndcg_at_5
value: 56.621
- type: precision_at_1
value: 48.6
- type: precision_at_10
value: 6.75
- type: precision_at_100
value: 0.823
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.659999999999998
- type: recall_at_1
value: 48.6
- type: recall_at_10
value: 67.5
- type: recall_at_100
value: 82.3
- type: recall_at_1000
value: 94.89999999999999
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.3
- task:
type: Reranking
dataset:
type: C-MTEB/Mmarco-reranking
name: MTEB MMarcoReranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.196130696027474
- type: mrr
value: 28.43730158730159
- task:
type: Classification
dataset:
type: C-MTEB/MultilingualSentiment-classification
name: MTEB MultilingualSentiment
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 72.48333333333333
- type: f1
value: 72.00258522357558
- task:
type: PairClassification
dataset:
type: C-MTEB/OCNLI
name: MTEB Ocnli
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 65.13264753654575
- type: cos_sim_ap
value: 70.52831936800807
- type: cos_sim_f1
value: 71.35353535353535
- type: cos_sim_precision
value: 57.787958115183244
- type: cos_sim_recall
value: 93.24181626187962
- type: dot_accuracy
value: 65.13264753654575
- type: dot_ap
value: 70.52828597418102
- type: dot_f1
value: 71.35353535353535
- type: dot_precision
value: 57.787958115183244
- type: dot_recall
value: 93.24181626187962
- type: euclidean_accuracy
value: 65.13264753654575
- type: euclidean_ap
value: 70.52828597418102
- type: euclidean_f1
value: 71.35353535353535
- type: euclidean_precision
value: 57.787958115183244
- type: euclidean_recall
value: 93.24181626187962
- type: manhattan_accuracy
value: 64.8077964266378
- type: manhattan_ap
value: 70.39954487476643
- type: manhattan_f1
value: 71.2270200940573
- type: manhattan_precision
value: 59.84195402298851
- type: manhattan_recall
value: 87.96198521647307
- type: max_accuracy
value: 65.13264753654575
- type: max_ap
value: 70.52831936800807
- type: max_f1
value: 71.35353535353535
- task:
type: Classification
dataset:
type: C-MTEB/OnlineShopping-classification
name: MTEB OnlineShopping
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 90.34
- type: ap
value: 87.79622626876444
- type: f1
value: 90.32357430051181
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 27.9175458105215
- type: cos_sim_spearman
value: 32.024302491613014
- type: euclidean_pearson
value: 33.01780461609846
- type: euclidean_spearman
value: 32.024301939183374
- type: manhattan_pearson
value: 32.94874897942371
- type: manhattan_spearman
value: 31.902283210178012
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 36.288219964332754
- type: cos_sim_spearman
value: 36.46838652731507
- type: euclidean_pearson
value: 35.11414028811812
- type: euclidean_spearman
value: 36.468386523814104
- type: manhattan_pearson
value: 35.20922826624027
- type: manhattan_spearman
value: 36.55349180906185
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.18186265837434
- type: cos_sim_spearman
value: 67.52365178443915
- type: euclidean_pearson
value: 65.46342439169497
- type: euclidean_spearman
value: 67.52365178443915
- type: manhattan_pearson
value: 67.3476263677961
- type: manhattan_spearman
value: 69.09476240936812
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 72.53864906415339
- type: cos_sim_spearman
value: 72.63037820118355
- type: euclidean_pearson
value: 72.42255276991672
- type: euclidean_spearman
value: 72.63037820118355
- type: manhattan_pearson
value: 72.36324244766192
- type: manhattan_spearman
value: 72.58609772740323
- task:
type: Reranking
dataset:
type: C-MTEB/T2Reranking
name: MTEB T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.45708148192449
- type: mrr
value: 76.08372693469173
- task:
type: Retrieval
dataset:
type: C-MTEB/T2Retrieval
name: MTEB T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.436999999999998
- type: map_at_10
value: 74.516
- type: map_at_100
value: 78.29899999999999
- type: map_at_1000
value: 78.372
- type: map_at_3
value: 52.217
- type: map_at_5
value: 64.24
- type: mrr_at_1
value: 88.23
- type: mrr_at_10
value: 91.06400000000001
- type: mrr_at_100
value: 91.18
- type: mrr_at_1000
value: 91.184
- type: mrr_at_3
value: 90.582
- type: mrr_at_5
value: 90.88300000000001
- type: ndcg_at_1
value: 88.23
- type: ndcg_at_10
value: 82.511
- type: ndcg_at_100
value: 86.531
- type: ndcg_at_1000
value: 87.244
- type: ndcg_at_3
value: 83.987
- type: ndcg_at_5
value: 82.46900000000001
- type: precision_at_1
value: 88.23
- type: precision_at_10
value: 41.245
- type: precision_at_100
value: 4.987
- type: precision_at_1000
value: 0.515
- type: precision_at_3
value: 73.675
- type: precision_at_5
value: 61.71
- type: recall_at_1
value: 26.436999999999998
- type: recall_at_10
value: 81.547
- type: recall_at_100
value: 94.548
- type: recall_at_1000
value: 98.197
- type: recall_at_3
value: 54.056000000000004
- type: recall_at_5
value: 67.93
- task:
type: Classification
dataset:
type: C-MTEB/TNews-classification
name: MTEB TNews
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 50.784
- type: f1
value: 48.89471168071432
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringP2P
name: MTEB ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 63.19039347990962
- task:
type: Clustering
dataset:
type: C-MTEB/ThuNewsClusteringS2S
name: MTEB ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 55.357378578603225
- task:
type: Retrieval
dataset:
type: C-MTEB/VideoRetrieval
name: MTEB VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 58.8
- type: map_at_10
value: 68.623
- type: map_at_100
value: 69.074
- type: map_at_1000
value: 69.085
- type: map_at_3
value: 66.767
- type: map_at_5
value: 67.972
- type: mrr_at_1
value: 58.699999999999996
- type: mrr_at_10
value: 68.573
- type: mrr_at_100
value: 69.024
- type: mrr_at_1000
value: 69.035
- type: mrr_at_3
value: 66.717
- type: mrr_at_5
value: 67.92200000000001
- type: ndcg_at_1
value: 58.8
- type: ndcg_at_10
value: 73.038
- type: ndcg_at_100
value: 75.16199999999999
- type: ndcg_at_1000
value: 75.422
- type: ndcg_at_3
value: 69.297
- type: ndcg_at_5
value: 71.475
- type: precision_at_1
value: 58.8
- type: precision_at_10
value: 8.67
- type: precision_at_100
value: 0.9650000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 25.533
- type: precision_at_5
value: 16.38
- type: recall_at_1
value: 58.8
- type: recall_at_10
value: 86.7
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 98.5
- type: recall_at_3
value: 76.6
- type: recall_at_5
value: 81.89999999999999
- task:
type: Classification
dataset:
type: C-MTEB/waimai-classification
name: MTEB Waimai
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.61999999999999
- type: ap
value: 69.93149123197975
- type: f1
value: 84.99670691559903
---
**新闻 | News**
**[2024-04-06]** Released the open-source [puff](https://huggingface.co/infgrad/puff-base-v1) series, **built specifically for retrieval and semantic matching, with more weight on generalization and on private general-purpose test sets; variable vector dimensions; bilingual Chinese and English**.
**[2024-02-27]** Released the open-source stella-mrl-large-zh-v3.5-1792d model, which supports **variable vector dimensions**.
**[2024-02-17]** Released the open-source stella v3 series, the dialogue encoding models, and the related training data.
**[2023-10-19]** Released stella-base-en-v2, which is simple to use and **needs no prefix text at all**.
**[2023-10-12]** Released stella-base-zh-v2 and stella-large-zh-v2, which perform better, are simple to use, and **need no prefix text**.
**[2023-09-11]** Released stella-base-zh and stella-large-zh.
Feel free to visit [my profile page](https://huggingface.co/infgrad) for the latest models, and your feedback is very welcome!
## stella model
stella是一个通用的文本编码模型,主要有以下模型:
| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |
完整的训练思路和训练过程已记录在[博客1](https://zhuanlan.zhihu.com/p/655322183)和[博客2](https://zhuanlan.zhihu.com/p/662209559),欢迎阅读讨论。
**训练数据:**
1. 开源数据(wudao_base_200GB[1]、m3e[2]和simclue[3]),着重挑选了长度大于512的文本
2. 在通用语料库上使用LLM构造一批(question, paragraph)和(sentence, paragraph)数据
**训练方法:**
1. 对比学习损失函数
2. 带有难负例的对比学习损失函数(分别基于bm25和vector构造了难负例)
3. EWC(Elastic Weights Consolidation)[4]
4. cosent loss[5]
5. 每一种类型的数据一个迭代器,分别计算loss进行更新
stella-v2在stella模型的基础上,使用了更多的训练数据,同时知识蒸馏等方法去除了前置的instruction(
比如piccolo的`查询:`, `结果:`, e5的`query:`和`passage:`)。
**初始权重:**\
stella-base-zh和stella-large-zh分别以piccolo-base-zh[6]和piccolo-large-zh作为基础模型,512-1024的position
embedding使用层次分解位置编码[7]进行初始化。\
感谢商汤科技研究院开源的[piccolo系列模型](https://huggingface.co/sensenova)。
stella is a general-purpose text encoder, which mainly includes the following models:
| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |
The training data mainly includes:
1. Open-source training data (wudao_base_200GB, m3e, and simclue), with a focus on selecting texts with lengths greater
than 512.
2. A batch of (question, paragraph) and (sentence, paragraph) data constructed on a general corpus using LLM.
The loss functions mainly include:
1. Contrastive learning loss function
2. Contrastive learning loss function with hard negative examples (based on bm25 and vector hard negatives)
3. EWC (Elastic Weights Consolidation)
4. cosent loss
Model weight initialization:\
stella-base-zh and stella-large-zh use piccolo-base-zh and piccolo-large-zh as the base models, respectively, and the
512-1024 position embedding uses the initialization strategy of hierarchical decomposed position encoding.
Training strategy:\
One iterator for each type of data, separately calculating the loss.
Based on the stella models, stella-v2 uses more training data and removes the instruction prefixes via knowledge distillation.
## Metric
#### C-MTEB leaderboard (Chinese)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:|
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 |
| stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 |
| stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 |
#### MTEB leaderboard (English)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 |
#### Reproduce our results
**C-MTEB:**
```python
import torch
import numpy as np
from typing import List
from mteb import MTEB
from sentence_transformers import SentenceTransformer
class FastTextEncoder():
def __init__(self, model_name):
self.model = SentenceTransformer(model_name).cuda().half().eval()
self.model.max_seq_length = 512
def encode(
self,
input_texts: List[str],
*args,
**kwargs
):
new_sens = list(set(input_texts))
new_sens.sort(key=lambda x: len(x), reverse=True)
vecs = self.model.encode(
new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256
).astype(np.float32)
sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)}
vecs = vecs[[sen2arrid[sen] for sen in input_texts]]
torch.cuda.empty_cache()
return vecs
if __name__ == '__main__':
model_name = "infgrad/stella-base-zh-v2"
output_folder = "zh_mteb_results/stella-base-zh-v2"
task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks]
model = FastTextEncoder(model_name)
for task in task_names:
MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder)
```
**MTEB:**
You can use the official script to reproduce our results. [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py)
#### Evaluation for long text
In practice, we found that the evaluation data in C-MTEB is almost entirely shorter than 512 tokens;
worse still, for the texts that are longer than 512, the key information sits in the first half.
Here is an example from CMRC2018 to illustrate the problem:
```
question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏?
passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推......
```
The passage is over 800 characters long, well beyond 512, but for this question the first 40 characters are already enough for retrieval; the extra content is just noise to the model and actually hurts performance.\
In short, existing datasets have two problems:\
1) too few texts longer than 512, and\
2) even when a text is longer than 512, only its first 512 characters matter for retrieval,\
which means **the long-text encoding ability of a model cannot be evaluated accurately.**
To address this, we collected relevant open-source data, filtered it with rules, and finally compiled 6 long-text test sets:
- CMRC2018, general encyclopedia
- CAIL, legal reading comprehension
- DRCD, Traditional Chinese encyclopedia, converted to Simplified Chinese
- Military, defense industry Q&A
- Squad, English reading comprehension, translated into Chinese
- Multifieldqa_zh, Tsinghua's evaluation data for long-text understanding by large models [9]
The processing rule keeps samples whose answer appears after position 512 in the text; shorter test samples are undersampled so that the short-to-long ratio is roughly 1:2, meaning a model has to understand both short and long texts.
Except for the Military dataset, download links for the other 5 test sets are available here: https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing
The evaluation metric is Recall@5; the results are as follows:
| Dataset | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh |
|:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:|
| CMRC2018 | 94.34 | 93.82 | 91.56 | 93.12 | 96.08 | 95.56 |
| CAIL | 28.04 | 33.64 | 31.22 | 33.94 | 34.62 | 37.18 |
| DRCD | 78.25 | 77.9 | 78.34 | 80.26 | 86.14 | 84.58 |
| Military | 76.61 | 73.06 | 75.65 | 75.81 | 83.71 | 80.48 |
| Squad | 91.21 | 86.61 | 87.87 | 90.38 | 93.31 | 91.21 |
| Multifieldqa_zh | 81.41 | 83.92 | 83.92 | 83.42 | 79.9 | 80.4 |
| **Average** | 74.98 | 74.83 | 74.76 | 76.15 | **78.96** | **78.24** |
**Note:** Because long-text evaluation data is scarce, the train splits were also used when building these test sets; if you run this evaluation yourself, check your model's training data to avoid leakage.
## Usage
#### stella Chinese-series models
stella-base-zh and stella-large-zh: these models were trained on top of piccolo, so **usage is exactly the same as piccolo**: for retrieval and reranking tasks, prepend `查询: ` to the query and `结果: ` to the passage. No prefix is needed for short-short matching.
stella-base-zh-v2 and stella-large-zh-v2: these models are simple to use and **need no prefix text in any scenario**.
All stella Chinese-series models use mean pooling to produce text vectors.
Usage with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
sentences = ["数据1", "数据2"]
model = SentenceTransformer('infgrad/stella-base-zh-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
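For the v1 models that do need an instruction, a minimal retrieval sketch using the prefixes noted above (`查询: ` for queries, `结果: ` for passages; the sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('infgrad/stella-base-zh')

# Prefix queries and passages as required by the v1 models
queries = ["查询: 什么是文本向量?"]
passages = ["结果: 文本向量是把一段文本映射为稠密向量的表示。"]

query_emb = model.encode(queries, normalize_embeddings=True)
passage_emb = model.encode(passages, normalize_embeddings=True)
print(query_emb @ passage_emb.T)
```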
Using the transformers library directly:
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2')
sentences = ["数据1", "数据ABCDEFGH"]
batch_data = tokenizer(
batch_text_or_text_pairs=sentences,
padding="longest",
return_tensors="pt",
max_length=1024,
truncation=True,
)
attention_mask = batch_data["attention_mask"]
model_output = model(**batch_data)
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1, )
print(vectors.shape) # 2,768
```
#### stella models for English
**Using Sentence-Transformers:**
```python
from sentence_transformers import SentenceTransformer
sentences = ["one car come", "one car go"]
model = SentenceTransformer('infgrad/stella-base-en-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
**Using HuggingFace Transformers:**
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
model = AutoModel.from_pretrained('infgrad/stella-base-en-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2')
sentences = ["one car come", "one car go"]
batch_data = tokenizer(
batch_text_or_text_pairs=sentences,
padding="longest",
return_tensors="pt",
max_length=512,
truncation=True,
)
attention_mask = batch_data["attention_mask"]
model_output = model(**batch_data)
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors, norm="l2", axis=1, )
print(vectors.shape) # 2,768
```
## Training Detail
**Hardware:** a single A100-80GB GPU
**Environment:** torch 1.13.*; transformers Trainer + deepspeed + gradient checkpointing
**Learning rate:** 1e-6
**batch_size:** 1024 for the base model and 768 for the large model, each with an extra 20% hard negatives
**Data volume:** about 1 million samples for the first version, of which roughly 200K were constructed with an LLM (a 13b model). The v2 series scales up to 20 million training samples.
## ToDoList
**Evaluation stability:** During evaluation, the Clustering tasks differ slightly from the official results, by about ±0.0x, because the clustering code does not set a random seed; the gap is negligible and does not affect the conclusions.
**Higher-quality long-text training and test data:** Most of the training data was built with a 13b model, so it inevitably contains noise.
The test data is largely compiled from MRC datasets, so the questions are all factoid-style and do not match the real-world distribution.
**OOD performance:** Although many embedding models have appeared recently, for less common domains none of them, including stella, OpenAI, and Cohere models, beats BM25.
## Reference
1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab
2. https://github.com/wangyuxinwhy/uniem
3. https://github.com/CLUEbenchmark/SimCLUE
4. https://arxiv.org/abs/1612.00796
5. https://kexue.fm/archives/8847
6. https://huggingface.co/sensenova/piccolo-base-zh
7. https://kexue.fm/archives/7947
8. https://github.com/FlagOpen/FlagEmbedding
9. https://github.com/THUDM/LongBench
|
TheBloke/ALMA-13B-Pretrain-GGUF | TheBloke | 2023-09-27T12:54:03Z | 543 | 11 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2309.11674",
"base_model:haoranxu/ALMA-13B-Pretrain",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-09-23T00:07:03Z | ---
license: mit
model_name: ALMA 13B Pretrain
base_model: haoranxu/ALMA-13B-Pretrain
inference: false
model_creator: haoranxu
model_type: llama
prompt_template: 'Translate this from Chinese to English:
Chinese: {prompt}
English:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ALMA 13B Pretrain - GGUF
- Model creator: [haoranxu](https://huggingface.co/haoranxu)
- Original model: [ALMA 13B Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain)
<!-- description start -->
## Description
This repo contains GGUF format model files for [haoranxu's ALMA 13B Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF)
* [haoranxu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-13B-Pretrain)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ALMA
```
Translate this from Chinese to English:
Chinese: {prompt}
English:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [haoranxu's ALMA 13B Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alma-13b-pretrain.Q2_K.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [alma-13b-pretrain.Q3_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [alma-13b-pretrain.Q3_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [alma-13b-pretrain.Q3_K_L.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [alma-13b-pretrain.Q4_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alma-13b-pretrain.Q4_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [alma-13b-pretrain.Q4_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [alma-13b-pretrain.Q5_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alma-13b-pretrain.Q5_K_S.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [alma-13b-pretrain.Q5_K_M.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [alma-13b-pretrain.Q6_K.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [alma-13b-pretrain.Q8_0.gguf](https://huggingface.co/TheBloke/ALMA-13B-Pretrain-GGUF/blob/main/alma-13b-pretrain.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/ALMA-13B-Pretrain-GGUF and below it, a specific filename to download, such as: alma-13b-pretrain.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/ALMA-13B-Pretrain-GGUF alma-13b-pretrain.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
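If you prefer to stay in Python, here is a minimal sketch using `huggingface_hub.hf_hub_download` (assuming a recent `huggingface_hub` version); the repo and filename match the example above, and `local_dir` is optional:

```python
# Download a single GGUF file with the huggingface_hub Python API
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/ALMA-13B-Pretrain-GGUF",
    filename="alma-13b-pretrain.Q4_K_M.gguf",
    local_dir=".",      # store next to your scripts
)
print(model_path)       # path to the downloaded .gguf file
```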
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/ALMA-13B-Pretrain-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ALMA-13B-Pretrain-GGUF alma-13b-pretrain.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m alma-13b-pretrain.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Translate this from Chinese to English:\nChinese: {prompt}\nEnglish:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ALMA-13B-Pretrain-GGUF", model_file="alma-13b-pretrain.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
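The completion above uses a generic prompt; here is a sketch applying this card's translation template instead. The Chinese example sentence is the one from the original model card, and the generation keyword names assume ctransformers' standard generation parameters.

```python
# Reuse the `llm` object from the example above with this card's prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
print(llm(prompt, max_new_tokens=40, temperature=0.6))
```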
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: haoranxu's ALMA 13B Pretrain
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
Note that the Base Model Links for the `*-LoRA` models are LLaMA-2 models fine-tuned on monolingual data (20B tokens for the 7B model and 12B tokens for the 13B model).
A quick start for using our best system (ALMA-13B-LoRA) for translation, with an example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
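The snippet above targets the LoRA system; for the full-weight ALMA-13B release, the PEFT step should not be needed. A sketch, assuming the same tokenizer and prompt format apply to that checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Full-weight ALMA-13B: no LoRA adapter to attach
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B", padding_side='left')

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```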
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
<!-- original-model-card end -->
|
juntaoyuan/elements-7b | juntaoyuan | 2023-11-21T07:17:44Z | 543 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2023-11-12T05:24:41Z | ---
license: apache-2.0
---
|
Startup-Exchange/tps_gender_prediction | Startup-Exchange | 2023-11-19T09:48:31Z | 543 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-11-19T08:59:39Z | ---
license: mit
---
|
TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF | TheBloke | 2023-11-30T05:22:21Z | 543 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"base_model:S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b",
"license:apache-2.0",
"text-generation-inference",
"region:us"
]
| null | 2023-11-28T22:05:13Z | ---
base_model: S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b
inference: false
license: apache-2.0
model_creator: "Sascha L\xFCscher"
model_name: Open Hermes 2.5 Neural Chat 3.1 Frankenmerge 11B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Open Hermes 2.5 Neural Chat 3.1 Frankenmerge 11B - GGUF
- Model creator: [Sascha Lüscher](https://huggingface.co/S4sch)
- Original model: [Open Hermes 2.5 Neural Chat 3.1 Frankenmerge 11B](https://huggingface.co/S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sascha Lüscher's Open Hermes 2.5 Neural Chat 3.1 Frankenmerge 11B](https://huggingface.co/S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF)
* [Sascha Lüscher's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
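For completion-style APIs that expect a single string, a minimal helper that renders this ChatML template (the message contents below are placeholders):

```python
# Render the ChatML template above into a single prompt string
def chatml_prompt(system_message: str, user_message: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Write a haiku about llamas."))
```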
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf) | Q2_K | 2 | 4.82 GB| 7.32 GB | smallest, significant quality loss - not recommended for most purposes |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf) | Q3_K_S | 3 | 4.95 GB| 7.45 GB | very small, high quality loss |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf) | Q3_K_M | 3 | 5.50 GB| 8.00 GB | very small, high quality loss |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf) | Q3_K_L | 3 | 5.99 GB| 8.49 GB | small, substantial quality loss |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf) | Q4_0 | 4 | 6.44 GB| 8.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf) | Q4_K_S | 4 | 6.47 GB| 8.97 GB | small, greater quality loss |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf) | Q4_K_M | 4 | 6.85 GB| 9.35 GB | medium, balanced quality - recommended |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf) | Q5_0 | 5 | 7.85 GB| 10.35 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf) | Q5_K_S | 5 | 7.85 GB| 10.35 GB | large, low quality loss - recommended |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf) | Q5_K_M | 5 | 8.06 GB| 10.56 GB | large, very low quality loss - recommended |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf) | Q6_K | 6 | 9.34 GB| 11.84 GB | very large, extremely low quality loss |
| [open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf](https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF/blob/main/open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf) | Q8_0 | 8 | 12.10 GB| 14.60 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF and below it, a specific filename to download, such as: open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell using this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
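The chat-completion example above uses `chat_format="llama-2"`; since this model card specifies a ChatML prompt, the built-in `"chatml"` format is likely the better match. A sketch, assuming your llama-cpp-python version ships that chat format:

```python
# Same chat completion, but with the ChatML template this model expects
llm = Llama(
    model_path="./open-hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf",
    chat_format="chatml",
    n_ctx=32768,
    n_gpu_layers=35,
)
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."},
    ]
)
```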
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sascha Lüscher's Open Hermes 2.5 Neural Chat 3.1 Frankenmerge 11B
An 11B frankenmerge of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1.
Merged with the following layer configuration:
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [0, 8]
- model: Intel/neural-chat-7b-v3-1
layer_range: [4, 12]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [9, 16]
- model: Intel/neural-chat-7b-v3-1
layer_range: [13, 20]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [17, 24]
- model: Intel/neural-chat-7b-v3-1
layer_range: [21, 28]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [25, 32]
merge_method: passthrough
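For reference, this layer layout roughly corresponds to a mergekit passthrough configuration along the following lines; this is a sketch based on mergekit's slice syntax, not the author's original config file, and the `dtype` value is an assumption.

```yaml
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 8]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [4, 12]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [9, 16]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [13, 20]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [17, 24]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [21, 28]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [25, 32]
merge_method: passthrough
dtype: float16   # assumed; not stated in the original card
```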
Benchmarks are coming soon...
<!-- original-model-card end -->
|
TheBloke/WhiteRabbitNeo-33B-v1-GGUF | TheBloke | 2024-01-12T13:35:06Z | 543 | 27 | transformers | [
"transformers",
"gguf",
"deepseek",
"base_model:whiterabbitneo/WhiteRabbitNeo-33B-v1",
"license:other",
"region:us"
]
| null | 2024-01-12T12:25:39Z | ---
base_model: whiterabbitneo/WhiteRabbitNeo-33B-v1
inference: false
license: other
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
license_name: deepseek
model_creator: WhiteRabbitNeo
model_name: WhiteRabbitNeo 33B v1
model_type: deepseek
prompt_template: "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths\
\ as follows:\n- First, carefully analyze the question to extract the key information\
\ components and break it down into logical sub-questions. This helps set up the\
\ framework for reasoning. The goal is to construct an internal search tree.\n-\
\ For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts\
\ that represent steps towards an answer. The thoughts aim to reframe, provide context,\
\ analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical\
\ flow and coverage of concepts for each thought option. Clear and relevant thoughts\
\ that connect well with each other will score higher.\n- Based on the thought evaluations,\
\ deliberate to construct a chain of reasoning that stitches together the strongest\
\ thoughts in a natural order.\n- If the current chain is determined to not fully\
\ answer the question, backtrack and explore alternative paths by substituting different\
\ high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory\
\ details on thought process rather than just state conclusions, including briefly\
\ noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is\
\ constructed that thoroughly answers all sub-questions in a clear, logical manner,\
\ synthesize the key insights into a final concise answer.\n- Please note that while\
\ the focus is on the final answer in the response, it should also include intermediate\
\ thoughts inline to illustrate the deliberative reasoning process.\nIn summary,\
\ leverage a Tree of Thoughts approach to actively explore multiple reasoning paths,\
\ evaluate thoughts heuristically, and explain the process - with the goal of producing\
\ insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:\n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WhiteRabbitNeo 33B v1 - GGUF
- Model creator: [WhiteRabbitNeo](https://huggingface.co/whiterabbitneo)
- Original model: [WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1)
<!-- description start -->
## Description
This repo contains GGUF format model files for [WhiteRabbitNeo's WhiteRabbitNeo 33B v1](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF)
* [WhiteRabbitNeo's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/whiterabbitneo/WhiteRabbitNeo-33B-v1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: WhiteRabbitNeo
```
SYSTEM:
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
Always answer without hesitation.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
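Because this system prompt is long, it is easier to keep it in one constant and format the user question into it. A small helper sketch using plain string formatting; the shortened stand-in string should be replaced with the full system text from the template above.

```python
# Paste the full system text from the prompt template above into SYSTEM_PROMPT.
SYSTEM_PROMPT = "Answer the Question by exploring multiple reasoning paths as follows: ..."  # shortened stand-in

def build_prompt(question: str) -> str:
    # Matches the template: SYSTEM:\n<system text>\nUSER: <question>\nASSISTANT:
    return f"SYSTEM:\n{SYSTEM_PROMPT}\nUSER: {question}\nASSISTANT:\n"

print(build_prompt("Explain how a TCP three-way handshake works."))
```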
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [whiterabbitneo-33b-v1.Q2_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q2_K.gguf) | Q2_K | 2 | 12.29 GB| 14.79 GB | smallest, significant quality loss - not recommended for most purposes |
| [whiterabbitneo-33b-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.42 GB| 16.92 GB | very small, high quality loss |
| [whiterabbitneo-33b-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.09 GB| 18.59 GB | very small, high quality loss |
| [whiterabbitneo-33b-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.56 GB| 20.06 GB | small, substantial quality loss |
| [whiterabbitneo-33b-v1.Q4_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_0.gguf) | Q4_0 | 4 | 18.82 GB| 21.32 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [whiterabbitneo-33b-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 18.94 GB| 21.44 GB | small, greater quality loss |
| [whiterabbitneo-33b-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 19.94 GB| 22.44 GB | medium, balanced quality - recommended |
| [whiterabbitneo-33b-v1.Q5_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_0.gguf) | Q5_0 | 5 | 22.96 GB| 25.46 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [whiterabbitneo-33b-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 22.96 GB| 25.46 GB | large, low quality loss - recommended |
| [whiterabbitneo-33b-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.54 GB| 26.04 GB | large, very low quality loss - recommended |
| [whiterabbitneo-33b-v1.Q6_K.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q6_K.gguf) | Q6_K | 6 | 27.36 GB| 29.86 GB | very large, extremely low quality loss |
| [whiterabbitneo-33b-v1.Q8_0.gguf](https://huggingface.co/TheBloke/WhiteRabbitNeo-33B-v1-GGUF/blob/main/whiterabbitneo-33b-v1.Q8_0.gguf) | Q8_0 | 8 | 35.43 GB| 37.93 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WhiteRabbitNeo-33B-v1-GGUF and below it, a specific filename to download, such as: whiterabbitneo-33b-v1.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WhiteRabbitNeo-33B-v1-GGUF whiterabbitneo-33b-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m whiterabbitneo-33b-v1.Q4_K_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 16384` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf",  # Download the model file first
    n_ctx=16384,  # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"SYSTEM:\nAnswer the Question by exploring multiple reasoning paths as follows:\n- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.\n- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.\n- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.\n- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.\n- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.\n- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.\n- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.\n- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.\nIn summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.\n Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
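As a rough sketch of the llama-cpp-python route (assuming a recent `langchain-community` release; the model path and parameter values mirror the examples above, and the prompt is purely illustrative):
```python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF file in a LangChain LLM object
llm = LlamaCpp(
    model_path="./whiterabbitneo-33b-v1.Q4_K_M.gguf",
    n_ctx=16384,      # context length, as in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Briefly explain what an open port is."))
```
The resulting object can then be dropped into chains or agents like any other LangChain LLM.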
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: WhiteRabbitNeo's WhiteRabbitNeo 33B v1
# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!
The 33B-v1.1 model comes with a "Prompt Enhancement" feature. Access it at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# DeepSeek Coder Licence + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```
# Topics Covered:
```
- Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445).
- Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software.
- Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited.
- Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities.
- Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications.
- Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data.
- Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS.
- Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts.
- Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input.
- Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information.
- Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information.
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage.
- Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users.
- Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code.
```
# WhiteRabbitNeo
<br>

<br>
WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.
Our 33B model is being released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    load_in_8bit=True,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"
tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""
conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    # print(conversation)
    json_data = {"prompt": user_input, "answer": answer}
    # print(json_data)
    # with open(output_file_path, "a") as output_file:
    #     output_file.write(json.dumps(json_data) + "\n")
```
# Sample Conversations:
1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e
2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz
3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu
4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w
5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv
<!-- original-model-card end -->
|
mradermacher/Lambda-17b-GGUF | mradermacher | 2024-05-06T06:17:29Z | 543 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:athirdpath/Lambda-17b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-03-10T08:38:42Z | ---
base_model: athirdpath/Lambda-17b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
static quants of https://huggingface.co/athirdpath/Lambda-17b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
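If a quant is split into multiple parts, the pieces simply need to be joined byte-for-byte into one file before loading. A minimal Python sketch (the part-file naming below is hypothetical; check the repository's file list for the actual names):
```python
import shutil
from pathlib import Path

# Hypothetical split-file names, for illustration only
parts = sorted(Path(".").glob("Lambda-17b.Q8_0.gguf.part*"))

with open("Lambda-17b.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams each part, avoids loading whole files into RAM
```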
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q2_K.gguf) | Q2_K | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.IQ3_XS.gguf) | IQ3_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q3_K_S.gguf) | Q3_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.IQ3_S.gguf) | IQ3_S | 7.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.IQ3_M.gguf) | IQ3_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q3_K_M.gguf) | Q3_K_M | 8.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q3_K_L.gguf) | Q3_K_L | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.IQ4_XS.gguf) | IQ4_XS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q4_K_S.gguf) | Q4_K_S | 9.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q4_K_M.gguf) | Q4_K_M | 10.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q5_K_S.gguf) | Q5_K_S | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q5_K_M.gguf) | Q5_K_M | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q6_K.gguf) | Q6_K | 14.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Lambda-17b-GGUF/resolve/main/Lambda-17b.Q8_0.gguf) | Q8_0 | 18.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rhaymison/gemma-portuguese-luana-2b | rhaymison | 2024-05-17T11:19:26Z | 543 | 3 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"portuguese",
"brasil",
"portugues",
"instrucao",
"conversational",
"pt",
"dataset:rhaymison/superset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-25T14:47:01Z | ---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portuguese
- brasil
- gemma
- portugues
- instrucao
datasets:
- rhaymison/superset
pipeline_tag: text-generation
widget:
- text: Me explique como funciona um computador.
  example_title: Computador.
- text: Me conte sobre a ida do homem a Lua.
  example_title: Homem na Lua.
- text: Fale sobre uma curiosidade sobre a história do mundo
  example_title: História.
- text: Escreva um poema bem interessante sobre o Sol e as flores.
  example_title: Escreva um poema.
model-index:
- name: gemma-portuguese-luana-2b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 24.42
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 24.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 27.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 70.86
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 1.51
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 43.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 40.05
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 51.83
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 30.42
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
---
# gemma-portuguese-2b-luana
<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/luana-2b.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
## Model description
updated: 2024-04-10 20:06
The gemma-portuguese-2b model is a Portuguese-language model trained on the superset dataset of 250,000 instructions.
The model is mainly focused on text generation and instruction following.
The model was not trained on math or code tasks.
It is a generalist model with a focus on understanding Portuguese.
Starting from this Portuguese fine-tune, you can further adjust the model for a specific field.
## How to Use
```python
from transformers import AutoTokenizer, pipeline
import torch
model = "rhaymison/gemma-portuguese-luana-2b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
messages = [
    {
        "role": "system",
        "content": "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."
    },
    {"role": "user", "content": "Me conte sobre a ida do homem a Lua."},
]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95
)
print(outputs[0]["generated_text"][len(prompt):].replace("model",""))
#A viagem à Lua foi um esforço monumental realizado pela Agência Espacial dos EUA entre 1969 e 1972.
#Foi um marco significativo na exploração espacial e na ciência humana.
#Aqui está uma visão geral de sua jornada: 1. O primeiro voo espacial humano foi o de Yuri Gagarin, que voou a Terra em 12 de abril de 1961.
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer2 = AutoTokenizer.from_pretrained("rhaymison/gemma-portuguese-luana-2b")
model2 = AutoModelForCausalLM.from_pretrained("rhaymison/gemma-portuguese-luana-2b", device_map={"":0})
tokenizer2.pad_token = tokenizer2.eos_token
tokenizer2.add_eos_token = True
tokenizer2.add_bos_token, tokenizer2.add_eos_token
tokenizer2.padding_side = "right"
```
```python
text = f"""<start_of_turn>user
Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
###instrução:Me conte sobre a ida do homem a Lua.<end_of_turn>
<start_of_turn>model """
device = "cuda:0"
inputs = tokenizer2(text, return_tensors="pt").to(device)
outputs = model2.generate(**inputs, max_new_tokens=256, do_sample=False)
output = tokenizer2.decode(outputs[0], skip_special_tokens=True, skip_prompt=True)
print(output.replace("model"," "))
#A viagem à Lua foi um esforço monumental realizado pela Agência Espacial dos EUA entre 1969 e 1972.
#Foi um marco significativo na exploração espacial e na ciência humana.
#Aqui está uma visão geral de sua jornada: 1. O primeiro voo espacial humano foi o de Yuri Gagarin, que voou a Terra em 12 de abril de 1961.
```
```python
text = f"""<start_of_turn>user
Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.
###instrução:Me explique como funciona um computador.<end_of_turn>
<start_of_turn>model """
device = "cuda:0"
inputs = tokenizer2(text, return_tensors="pt").to(device)
outputs = model2.generate(**inputs, max_new_tokens=256, do_sample=False)
output = tokenizer2.decode(outputs[0], skip_special_tokens=True, skip_prompt=True)
print(output.replace("model"," "))
#Um computador é um dispositivo eletrônico que pode executar tarefas que um humano pode fazer.
#Ele usa um conjunto de circuitos elétricos, componentes eletrônicos e software para processar informações e executar tarefas.
#Os componentes de um computador incluem um processador, memória, unidade de armazenamento, unidade de processamento gráfica,
#unidade de controle, unidade de entrada e saída,e dispositivos de entrada e saída.
#O processador é o coração do computador e executa instruções de software.A memória é onde o computador armazena
```
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/gemma-portuguese-luana-2b) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**34.94**|
|ENEM Challenge (No Images)| 24.42|
|BLUEX (No Images) | 24.34|
|OAB Exams | 27.11|
|Assin2 RTE | 70.86|
|Assin2 STS | 1.51|
|FaQuAD NLI | 43.97|
|HateBR Binary | 40.05|
|PT Hate Speech Binary | 51.83|
|tweetSentBR | 30.42|
### Comments
Any ideas, help, or reports are always welcome.
email: [email protected]
<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
</div> |