modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
biu-nlp/lingmess-coref | biu-nlp | "2022-10-26T08:55:32Z" | 4,200 | 7 | transformers | [
"transformers",
"pytorch",
"longformer",
"coreference-resolution",
"en",
"dataset:ontonotes",
"arxiv:2205.12644",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2022-06-09T19:05:32Z" | ---
language:
- en
tags:
- coreference-resolution
license: mit
datasets:
- ontonotes
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: biu-nlp/lingmess-coref
results:
- task:
type: coreference-resolution
name: coreference-resolution
dataset:
name: ontonotes
type: coreference
metrics:
- name: Avg. F1
type: CoNLL
value: 81.4
---
## LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution
[LingMess](https://arxiv.org/abs/2205.12644) is a linguistically motivated categorization of mention pairs into 6 types of coreference decisions, with a dedicated trainable scoring function learned for each category. This significantly improves the accuracy of the pairwise scorer as well as the overall coreference performance on the English OntoNotes coreference corpus.
Please check the [official repository](https://github.com/shon-otmazgin/lingmess-coref) for more details and updates.
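For a quick start, the model can be loaded through the authors' [fastcoref](https://github.com/shon-otmazgin/fastcoref) package. The snippet below is a minimal sketch assuming the package's `LingMessCoref` interface (`pip install fastcoref`); the example sentence is illustrative.
```python
from fastcoref import LingMessCoref

# Downloads biu-nlp/lingmess-coref from the Hub on first use.
model = LingMessCoref(device="cpu")  # or "cuda:0"

preds = model.predict(
    texts=["Alice told Bob that she had reviewed his pull request."]
)
print(preds[0].get_clusters())  # e.g. [['Alice', 'she'], ['Bob', 'his']]
```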
#### Training on OntoNotes
We present the test results on the OntoNotes 5.0 dataset.
| Model | Avg. F1 |
|---------------------------------|---------|
| SpanBERT-large + e2e | 79.6 |
| Longformer-large + s2e | 80.3 |
| **Longformer-large + LingMess** | 81.4 |
### Citation
If you find LingMess useful for your work, please cite the following paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2205.12644,
doi = {10.48550/ARXIV.2205.12644},
url = {https://arxiv.org/abs/2205.12644},
author = {Otmazgin, Shon and Cattan, Arie and Goldberg, Yoav},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf | RichardErkhov | "2024-06-17T07:06:22Z" | 4,199 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-17T05:57:31Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
neuronovo-9B-v0.1 - GGUF
- Model creator: https://huggingface.co/Neuronovo/
- Original model: https://huggingface.co/Neuronovo/neuronovo-9B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [neuronovo-9B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [neuronovo-9B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [neuronovo-9B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [neuronovo-9B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [neuronovo-9B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [neuronovo-9B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [neuronovo-9B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [neuronovo-9B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [neuronovo-9B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [neuronovo-9B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [neuronovo-9B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [neuronovo-9B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [neuronovo-9B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [neuronovo-9B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [neuronovo-9B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [neuronovo-9B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [neuronovo-9B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [neuronovo-9B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [neuronovo-9B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [neuronovo-9B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [neuronovo-9B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [neuronovo-9B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf/blob/main/neuronovo-9B-v0.1.Q8_0.gguf) | Q8_0 | 7.17GB |
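For reference, one of the quantized files can be fetched and run locally; the commands below are a minimal sketch assuming the `huggingface_hub` CLI and a recent llama.cpp build (binary names vary across versions), with the Q4_K_M quant from the table above picked as an example.
```bash
# Download a single quant file from this repo
huggingface-cli download RichardErkhov/Neuronovo_-_neuronovo-9B-v0.1-gguf \
  neuronovo-9B-v0.1.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp (newer builds ship `llama-cli`; older ones use `main`)
./llama-cli -m neuronovo-9B-v0.1.Q4_K_M.gguf -p "Hello," -n 128
```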
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
"Neuronovo/neuronovo-9B-v0.1" is a fine-tuned version of a large language model, originally based on "teknium/OpenHermes-2.5-Mistral-7B." As derived from the training code, the model has the following characteristics and functionalities:
1. **Dataset and Preprocessing**: It is trained on the "Intel/orca_dpo_pairs" dataset, a preference dataset of dialogue samples with chosen and rejected answers. The data is preprocessed to format dialogues, with specific attention to system messages, user queries, chosen answers, and rejected answers.
2. **Tokenizer**: The model utilizes a tokenizer from the original "OpenHermes-2.5-Mistral-7B" model. This tokenizer is configured to have the end-of-sequence token as the padding token and pads from the left, indicating a particular focus on language generation tasks.
3. **LoRA Configuration**: The model employs a LoRA (Low-Rank Adaptation) configuration with specific parameters (r=16, lora_alpha=16, etc.) and targets multiple modules within the transformer architecture; a sketch of such a configuration follows the summary below. This suggests an approach focused on efficient fine-tuning and adaptation of the model while preserving the majority of the pre-trained weights.
4. **Fine-Tuning Specifications**: The model is fine-tuned using a custom training setup, including a DPO (Direct Preference Optimization) Trainer. This indicates an alignment-oriented fine-tuning process that trains the model to prefer the chosen answers over the rejected ones in each preference pair.
5. **Training Arguments**: The training uses specific arguments like a cosine learning rate scheduler, paged AdamW optimizer, and training in 4-bit precision (indicating a focus on memory efficiency). It also employs gradient checkpointing and accumulation steps, which are typical in training large models efficiently.
6. **Performance and Output**: The model is configured for causal language modeling (indicative of generating text or continuing dialogues), with a maximum prompt length of 1024 and maximum generation length of 1536 tokens. This setup suggests its capability for handling extended dialogues or text generation tasks.
7. **Special Features**: The use of LoRA, DPO training, and specific fine-tuning methods highlights the model's advanced capabilities in adapting large-scale language models to specific tasks or datasets while maintaining computational efficiency.
In summary, "Neuronovo/neuronovo-9B-v0.1" is a highly specialized, efficient, and capable large language model fine-tuned for advanced language generation tasks, particularly in the context of dialogues or interactions, leveraging cutting-edge techniques in NLP model adaptation and training.
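To make point 3 concrete, here is a hypothetical reconstruction of such a LoRA setup with the `peft` library. Only `r=16` and `lora_alpha=16` are stated above; the dropout, bias, and target modules are illustrative assumptions.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                # stated in the description
    lora_alpha=16,       # stated in the description
    lora_dropout=0.05,   # assumed
    bias="none",         # assumed
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```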
|
nvidia/mit-b3 | nvidia | "2022-08-06T10:24:57Z" | 4,198 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"segformer",
"image-classification",
"vision",
"dataset:imagenet_1k",
"arxiv:2105.15203",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
datasets:
- imagenet_1k
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# SegFormer (b3-sized) encoder pre-trained-only
SegFormer encoder pre-trained on ImageNet-1k. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
This repository only contains the pre-trained hierarchical Transformer, hence it can be used for fine-tuning purposes.
## Intended uses & limitations
You can use the model for fine-tuning on semantic segmentation tasks. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import SegformerFeatureExtractor, SegformerForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b3")
model = SegformerForImageClassification.from_pretrained("nvidia/mit-b3")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
keremberke/yolov8m-chest-xray-classification | keremberke | "2023-02-22T13:04:08Z" | 4,198 | 5 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"image-classification",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/chest-xray-classification",
"model-index",
"region:us"
] | image-classification | "2023-01-28T03:58:34Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/chest-xray-classification
model-index:
- name: keremberke/yolov8m-chest-xray-classification
results:
- task:
type: image-classification
dataset:
type: keremberke/chest-xray-classification
name: chest-xray-classification
split: validation
metrics:
- type: accuracy
value: 0.95533 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 1 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-chest-xray-classification" src="https://huggingface.co/keremberke/yolov8m-chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['NORMAL', 'PNEUMONIA']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('keremberke/yolov8m-chest-xray-classification')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs)  # class probabilities, e.g. [0.08, 0.92]
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result)  # e.g. {'NORMAL': 0.08, 'PNEUMONIA': 0.92}
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
keremberke/yolov8m-blood-cell-detection | keremberke | "2023-02-22T13:04:24Z" | 4,193 | 8 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/blood-cell-object-detection",
"model-index",
"region:us"
] | object-detection | "2023-01-29T06:04:44Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/blood-cell-object-detection
model-index:
- name: keremberke/yolov8m-blood-cell-detection
results:
- task:
type: object-detection
dataset:
type: keremberke/blood-cell-object-detection
name: blood-cell-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.92674 # min: 0.0 - max: 1.0
name: [email protected](box)
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-blood-cell-detection" src="https://huggingface.co/keremberke/yolov8m-blood-cell-detection/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['Platelets', 'RBC', 'WBC']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('keremberke/yolov8m-blood-cell-detection')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf | RichardErkhov | "2024-06-03T03:19:02Z" | 4,193 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-03T00:21:18Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-SFT - GGUF
- Model creator: https://huggingface.co/JYKIM-AI/
- Original model: https://huggingface.co/JYKIM-AI/Mistral-7B-SFT/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-SFT.Q2_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-SFT.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-SFT.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-SFT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-SFT.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-SFT.Q3_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-SFT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-SFT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-SFT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-SFT.Q4_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-SFT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-SFT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-SFT.Q4_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-SFT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-SFT.Q4_1.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-SFT.Q5_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-SFT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-SFT.Q5_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-SFT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-SFT.Q5_1.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-SFT.Q6_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-SFT.Q8_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-gguf/blob/main/Mistral-7B-SFT.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
Entry not found
|
ybelkada/hubert-tiny-random | ybelkada | "2023-02-21T22:10:40Z" | 4,192 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-11-15T08:28:32Z" | Entry not found |
keremberke/yolov8m-painting-classification | keremberke | "2023-02-22T13:04:03Z" | 4,192 | 1 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"image-classification",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/painting-style-classification",
"model-index",
"region:us"
] | image-classification | "2023-01-29T16:28:22Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/painting-style-classification
model-index:
- name: keremberke/yolov8m-painting-classification
results:
- task:
type: image-classification
dataset:
type: keremberke/painting-style-classification
name: painting-style-classification
split: validation
metrics:
- type: accuracy
value: 0.05723 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 0.21463 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-painting-classification" src="https://huggingface.co/keremberke/yolov8m-painting-classification/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['Abstract_Expressionism', 'Action_painting', 'Analytical_Cubism', 'Art_Nouveau_Modern', 'Baroque', 'Color_Field_Painting', 'Contemporary_Realism', 'Cubism', 'Early_Renaissance', 'Expressionism', 'Fauvism', 'High_Renaissance', 'Impressionism', 'Mannerism_Late_Renaissance', 'Minimalism', 'Naive_Art_Primitivism', 'New_Realism', 'Northern_Renaissance', 'Pointillism', 'Pop_Art', 'Post_Impressionism', 'Realism', 'Rococo', 'Romanticism', 'Symbolism', 'Synthetic_Cubism', 'Ukiyo_e']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('keremberke/yolov8m-painting-classification')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs)  # class probabilities over the 27 supported labels
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result)  # e.g. {'Impressionism': 0.21, 'Realism': 0.14, ...}
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
kotoba-tech/kotoba-whisper-v1.1 | kotoba-tech | "2024-05-08T15:34:40Z" | 4,190 | 21 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"ja",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-29T14:53:45Z" | ---
language: ja
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
widget:
- example_title: CommonVoice 8.0 (Test Split)
src: https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0/resolve/main/sample.flac
- example_title: JSUT Basic 5000
src: https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac
- example_title: ReazonSpeech (Test Split)
src: https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test/resolve/main/sample.flac
pipeline_tag: automatic-speech-recognition
model-index:
- name: kotoba-tech/kotoba-whisper-v1.1
results:
- task:
type: automatic-speech-recognition
dataset:
name: CommonVoice_8.0 (Japanese)
type: japanese-asr/ja_asr.common_voice_8_0
metrics:
- type: WER
value: 59.27
name: WER
- type: CER
value: 9.44
name: CER
- task:
type: automatic-speech-recognition
dataset:
name: ReazonSpeech (Test)
type: japanese-asr/ja_asr.reazonspeech_test
metrics:
- type: WER
value: 56.62
name: WER
- type: CER
value: 12.6
name: CER
- task:
type: automatic-speech-recognition
dataset:
name: JSUT Basic5000
type: japanese-asr/ja_asr.jsut_basic5000
metrics:
- type: WER
value: 64.36
name: WER
- type: CER
value: 8.48
name: CER
---
# Kotoba-Whisper-v1.1
_Kotoba-Whisper-v1.1_ is a Japanese ASR model based on [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0), with
additional postprocessing stacks integrated as a [`pipeline`](https://huggingface.co/docs/transformers/en/main_classes/pipelines). The new features include
(i) improved timestamps achieved by [stable-ts](https://github.com/jianfch/stable-ts) and (ii) punctuation added with [punctuators](https://github.com/1-800-BAD-CODE/punctuators/tree/main).
These libraries are merged into Kotoba-Whisper-v1.1 via the pipeline and are applied seamlessly to the transcription predicted by [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0).
The pipeline has been developed through a collaboration between [Asahi Ushio](https://asahiushio.com) and [Kotoba Technologies](https://twitter.com/kotoba_tech).
The following table presents the raw CER (unlike the usual CER, where punctuation is removed before computing the metric; see the evaluation script [here](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.1/blob/main/run_short_form_eval.py)).
| model | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:---------------------------------------------------------|---------------------------------------:|-------------------------------------:|----------------------------------------:|
| kotoba-tech/kotoba-whisper-v1.0 | 15.6 | 15.2 | 17.8 |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator + stable-ts) | 13.7 | ***11.2*** | ***17.4*** |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator) | 13.9 | 11.4 | 18 |
| kotoba-tech/kotoba-whisper-v1.1 (stable-ts) | 15.7 | 15 | 17.7 |
| openai/whisper-large-v3 | ***12.9*** | 13.4 | 20.6 |
Regarding the normalized CER: since the v1.1 additions (such as punctuation) are removed by the normalization, kotoba-tech/kotoba-whisper-v1.1 achieves the same normalized CER values as [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0).
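To illustrate the distinction, here is a minimal sketch of raw versus punctuation-normalized CER using the `evaluate` library; the punctuation set and example strings are illustrative, and the linked evaluation script remains the authoritative reference.
```python
import re
import evaluate

cer = evaluate.load("cer")

references = ["こんにちは、世界。"]
predictions = ["こんにちは世界"]

# Raw CER: punctuation differences count as character errors.
raw = cer.compute(references=references, predictions=predictions)

# Normalized CER: strip punctuation (illustrative set) before scoring.
strip = lambda s: re.sub(r"[、。,.!?「」]", "", s)
normalized = cer.compute(
    references=[strip(s) for s in references],
    predictions=[strip(s) for s in predictions],
)
print(raw, normalized)  # the punctuation gap disappears after normalization
```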
### Latency
Kotoba-whisper-v1.1 improves the punctuation and the timestamps of the output of Kotoba-whisper-v1.0. However, since the punctuator and stable-ts are applied to each chunk,
the timestamps must be obtained, which increases the latency compared with the original kotoba-whisper-v1.0. The following table compares the inference speed on
transcribing **50min** of Japanese speech audio, where we report the average over five independent runs.
| model | return_timestamps | time (mean) |
|:---------------------------------------------------------|:--------------------|--------------:|
| kotoba-tech/kotoba-whisper-v1.0 | False | 10.8 |
| kotoba-tech/kotoba-whisper-v1.0 | True | 15.7 |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator + stable-ts) | True | 17.9 |
| kotoba-tech/kotoba-whisper-v1.1 (punctuator) | True | 17.7 |
| kotoba-tech/kotoba-whisper-v1.1 (stable-ts) | True | 16.1 |
| openai/whisper-large-v3 | False | 29.1 |
| openai/whisper-large-v3 | True | 37.9 |
See the full table [here](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.1/raw/main/latency.csv).
## Transformers Usage
Kotoba-Whisper-v1.1 is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first
install the latest version of Transformers.
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate torchaudio
pip install stable-ts==2.16.0
pip install punctuators==0.0.5
```
### Transcription
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe audio files as follows:
```python
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
# load model
pipe = pipeline(
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
chunk_length_s=15,
batch_size=16,
trust_remote_code=True,
stable_ts=True,
punctuator=True
)
# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]
# run inference
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result)
```
- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```diff
- result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", return_timestamps=True, generate_kwargs=generate_kwargs)
```
- To deactivate stable-ts:
```diff
- stable_ts=True,
+ stable_ts=False,
```
- To deactivate punctuator:
```diff
- punctuator=True,
+ punctuator=False,
```
### Transcription with Prompt
Kotoba-whisper can generate transcriptions with prompting, as shown below:
```python
import re
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
# load model
pipe = pipeline(
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
chunk_length_s=15,
batch_size=16,
trust_remote_code=True
)
# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
# --- Without prompt ---
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
print(text)
# 81歳、力強い走りに変わってきます。
# --- With prompt ---: Let's change `81` to `91`.
prompt = "91歳"
generate_kwargs['prompt_ids'] = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt").to(device)
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
# currently the pipeline for ASR appends the prompt at the beginning of the transcription, so remove it
text = re.sub(rf"\A\s*{prompt}\s*", "", text)
print(text)
# あっぶったでもスルガさん、91歳、力強い走りに変わってきます。
```
### Flash Attention 2
We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2)
if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
```bash
pip install flash-attn --no-build-isolation
```
Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:
```diff
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
```
## Acknowledgements
* [OpenAI](https://openai.com/) for the Whisper [model](https://huggingface.co/openai/whisper-large-v3).
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration.
* Hugging Face 🤗 for the [Distil-Whisper codebase](https://github.com/huggingface/distil-whisper).
* [Reazon Human Interaction Lab](https://research.reazon.jp/) for the [ReazonSpeech dataset](https://huggingface.co/datasets/reazon-research/reazonspeech). |
RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf | RichardErkhov | "2024-06-03T01:19:53Z" | 4,188 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-02T21:21:31Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-SFT-v0.1 - GGUF
- Model creator: https://huggingface.co/JYKIM-AI/
- Original model: https://huggingface.co/JYKIM-AI/Mistral-7B-SFT-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-SFT-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-SFT-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-SFT-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-SFT-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-SFT-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-SFT-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-SFT-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-SFT-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-SFT-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-SFT-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-SFT-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-SFT-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-SFT-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-SFT-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-SFT-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-SFT-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-SFT-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-SFT-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-SFT-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-SFT-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-SFT-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-SFT-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/JYKIM-AI_-_Mistral-7B-SFT-v0.1-gguf/blob/main/Mistral-7B-SFT-v0.1.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
Entry not found
|
Yntec/DisneyPixarCartoon768 | Yntec | "2024-06-07T09:59:40Z" | 4,187 | 3 | diffusers | [
"diffusers",
"safetensors",
"Disney",
"Pixar",
"Western Art",
"PromptSharingSamaritan",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-07T01:12:29Z" | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Disney
- Pixar
- Western Art
- PromptSharingSamaritan
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Disney Pixar Cartoon Type B
768x768 version of this model with the 840KVAE baked in for better details. Original page: https://civitai.com/models/75650/disney-pixar-cartoon-type-b
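A minimal text-to-image sketch with 🤗 Diffusers (the prompt is illustrative; default scheduler settings are assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/DisneyPixarCartoon768",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a cute cartoon fox wearing a scarf, pixar style"
image = pipe(prompt, width=768, height=768).images[0]
image.save("fox.png")
```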
Comparison:

(Click for larger)
Samples and prompts:

(Click for larger)
Top right: hyperrealistic, professional-dark-portrait, Ultra-Realistic , sexy, Tinker_Bell, (late-night), sitting-on-the-window, cozy-childresn's-room, dramatic-scene, looking-outside-window, (fairy-high-heels), deep focus, 105mm, aesthetic-picture, professional-photography, hdr, UHD
Top left: Father with daughter. festive scene at a copper brewery with a wooden keg of beer in the center. Pretty cute little girl sitting with Santa Claus chef. Display mugs of dark beer accompanied ingredients halloween happy colorful by
Bottom right (prompt by digiplay): 1girl,night, waterfall, white wavy hair Angel 22y.o, (realistic:2),Mucha,4k,rabbits and birds, close up,
Bottom left: an illustration of a baby hedgehog with headphones holding an ribbon umbrella in the rain
512x512 version: https://huggingface.co/stablediffusionapi/disney-pixar-cartoon |
valeriojob/flashcardsGPT-Mistral-7B-v0.1-GGUF | valeriojob | "2024-06-27T00:41:17Z" | 4,187 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/mistral-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T16:50:24Z" | ---
base_model: unsloth/mistral-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# flashcardsGPT-Mistral-7B-v0.1-GGUF
- This model is a fine-tuned version of [unsloth/mistral-7b](https://huggingface.co/unsloth/mistral-7b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R" which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo includes the quantized models in the GGUF format. There is a separate repo called [valeriojob/flashcardsGPT-Mistral-7B-v0.1](https://huggingface.co/valeriojob/flashcardsGPT-Mistral-7B-v0.1) that includes the default format of the model as well as the LoRA adapters of the model.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Model description
This model takes the OCR-extracted text from a university lecture slide as an input. It then generates high quality flashcards and returns them as a JSON object.
It uses the following Prompt Engineering template:
"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided—only respond with the JSON object.
Here is the OCR-extracted text:
"""
## Intended uses & limitations
The fine-tuned model can be used to generate high-quality flashcards based on TSAR lectures from the BSc BIT programme offered by the FHNW university.
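As a rough usage sketch, the GGUF quants can be run locally with `llama-cpp-python` (an assumption; any GGUF runtime works). The quant filename and OCR text below are illustrative.
```python
from llama_cpp import Llama

# Illustrative filename; use whichever quant you downloaded from this repo.
llm = Llama(model_path="flashcardsGPT-Mistral-7B-v0.1.Q4_K_M.gguf", n_ctx=4096)

ocr_text = "Autocorrelation measures the correlation of a time series with its own lagged values."
prompt = (
    "Your task is to process the below OCR-extracted text from university lecture "
    "slides and create a set of flashcards with the key information about the topic. "
    "Format the flashcards as a JSON object, with each card having a 'front' field "
    "for the question or term, and a 'back' field for the corresponding answer or "
    "definition. Ensure the 'back' field contains no line breaks. No additional text "
    "or explanation should be provided. Here is the OCR-extracted text:\n" + ocr_text
)

output = llm(prompt, max_tokens=512, temperature=0.2)
print(output["choices"][0]["text"])  # JSON object with the generated flashcards
```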
## Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1)
## Licenses
- **License:** apache-2.0 |
KBLab/robust-swedish-sentiment-multiclass | KBLab | "2023-12-08T11:33:36Z" | 4,186 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"text-classification",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-08T09:28:15Z" | ---
license: apache-2.0
language:
- sv
---
The National Library of Sweden/KBLab releases a robust, multi-label sentiment classifier finetuned on [Megatron-BERT-large-165K](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165k). The model was trained on approximately 75K Swedish texts from multiple linguistic domains and datasets.
There is a post on [the KBLab blog](https://kb-labb.github.io/posts/2023-06-16-a-robust-multi-label-sentiment-classifier-for-swedish/) describing the model in further detail.
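A minimal usage sketch with the 🤗 Transformers `text-classification` pipeline (the example sentence and label names are illustrative; see the blog post for the exact label set):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="KBLab/robust-swedish-sentiment-multiclass",
)
print(classifier("Jag är väldigt nöjd med den här produkten!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.98}]  (illustrative output)
```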
## Citation
```
@misc{hägglöf2023a,
author = {Hägglöf, Hillevi},
title = {The KBLab Blog: A robust, multi-label sentiment classifier for Swedish},
url = {https://kb-labb.github.io/posts/2023-06-16-a-robust-multi-label-sentiment-classifier-for-swedish/},
year = {2023}
}
``` |
McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised | McGill-NLP | "2024-04-30T03:48:00Z" | 4,186 | 30 | peft | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | sentence-similarity | "2024-04-30T02:35:26Z" | ---
library_name: peft
license: mit
language:
- en
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Meta-Llama-3-supervised
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.94029850746269
- type: ap
value: 44.93223506764482
- type: f1
value: 74.30328994013465
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 86.06680000000001
- type: ap
value: 81.97124658709345
- type: f1
value: 86.00558036874241
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.836
- type: f1
value: 46.05094679201488
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.980000000000004
- type: map_at_10
value: 54.167
- type: map_at_100
value: 54.735
- type: map_at_1000
value: 54.738
- type: map_at_3
value: 49.384
- type: map_at_5
value: 52.285000000000004
- type: mrr_at_1
value: 38.549
- type: mrr_at_10
value: 54.351000000000006
- type: mrr_at_100
value: 54.932
- type: mrr_at_1000
value: 54.935
- type: mrr_at_3
value: 49.585
- type: mrr_at_5
value: 52.469
- type: ndcg_at_1
value: 37.980000000000004
- type: ndcg_at_10
value: 62.778999999999996
- type: ndcg_at_100
value: 64.986
- type: ndcg_at_1000
value: 65.036
- type: ndcg_at_3
value: 53.086999999999996
- type: ndcg_at_5
value: 58.263
- type: precision_at_1
value: 37.980000000000004
- type: precision_at_10
value: 9.011
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.266
- type: precision_at_5
value: 15.248999999999999
- type: recall_at_1
value: 37.980000000000004
- type: recall_at_10
value: 90.114
- type: recall_at_100
value: 99.289
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 63.798
- type: recall_at_5
value: 76.24499999999999
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.27081216556421
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 46.8490872532913
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.18525400430678
- type: mrr
value: 78.80149936244119
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.92301936595548
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.0487012987013
- type: f1
value: 88.00953788281542
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 32.34687321141145
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69881680534123
- task:
type: Retrieval
dataset:
type: cqadupstack/android
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.742
- type: map_at_10
value: 51.803
- type: map_at_100
value: 53.556000000000004
- type: map_at_1000
value: 53.652
- type: map_at_3
value: 47.286
- type: map_at_5
value: 50.126000000000005
- type: mrr_at_1
value: 46.924
- type: mrr_at_10
value: 57.857
- type: mrr_at_100
value: 58.592
- type: mrr_at_1000
value: 58.619
- type: mrr_at_3
value: 55.340999999999994
- type: mrr_at_5
value: 57.150999999999996
- type: ndcg_at_1
value: 46.924
- type: ndcg_at_10
value: 58.733999999999995
- type: ndcg_at_100
value: 63.771
- type: ndcg_at_1000
value: 64.934
- type: ndcg_at_3
value: 53.189
- type: ndcg_at_5
value: 56.381
- type: precision_at_1
value: 46.924
- type: precision_at_10
value: 11.431
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 25.942
- type: precision_at_5
value: 19.113
- type: recall_at_1
value: 37.742
- type: recall_at_10
value: 71.34
- type: recall_at_100
value: 91.523
- type: recall_at_1000
value: 98.494
- type: recall_at_3
value: 55.443
- type: recall_at_5
value: 64.122
- task:
type: Retrieval
dataset:
type: cqadupstack/english
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.183
- type: map_at_10
value: 46.837
- type: map_at_100
value: 48.126000000000005
- type: map_at_1000
value: 48.25
- type: map_at_3
value: 43.171
- type: map_at_5
value: 45.318999999999996
- type: mrr_at_1
value: 43.376
- type: mrr_at_10
value: 52.859
- type: mrr_at_100
value: 53.422000000000004
- type: mrr_at_1000
value: 53.456
- type: mrr_at_3
value: 50.434999999999995
- type: mrr_at_5
value: 51.861999999999995
- type: ndcg_at_1
value: 43.376
- type: ndcg_at_10
value: 53.223
- type: ndcg_at_100
value: 57.175
- type: ndcg_at_1000
value: 58.86900000000001
- type: ndcg_at_3
value: 48.417
- type: ndcg_at_5
value: 50.77
- type: precision_at_1
value: 43.376
- type: precision_at_10
value: 10.236
- type: precision_at_100
value: 1.5730000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 23.97
- type: precision_at_5
value: 17.134
- type: recall_at_1
value: 34.183
- type: recall_at_10
value: 64.866
- type: recall_at_100
value: 81.26100000000001
- type: recall_at_1000
value: 91.412
- type: recall_at_3
value: 50.080000000000005
- type: recall_at_5
value: 56.871
- task:
type: Retrieval
dataset:
type: cqadupstack/gaming
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 44.878
- type: map_at_10
value: 58.656
- type: map_at_100
value: 59.668
- type: map_at_1000
value: 59.704
- type: map_at_3
value: 54.891
- type: map_at_5
value: 57.050999999999995
- type: mrr_at_1
value: 51.975
- type: mrr_at_10
value: 62.357
- type: mrr_at_100
value: 62.907999999999994
- type: mrr_at_1000
value: 62.925
- type: mrr_at_3
value: 59.801
- type: mrr_at_5
value: 61.278
- type: ndcg_at_1
value: 51.975
- type: ndcg_at_10
value: 64.95100000000001
- type: ndcg_at_100
value: 68.414
- type: ndcg_at_1000
value: 69.077
- type: ndcg_at_3
value: 58.897999999999996
- type: ndcg_at_5
value: 61.866
- type: precision_at_1
value: 51.975
- type: precision_at_10
value: 10.502
- type: precision_at_100
value: 1.31
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 26.290000000000003
- type: precision_at_5
value: 18.093999999999998
- type: recall_at_1
value: 44.878
- type: recall_at_10
value: 79.746
- type: recall_at_100
value: 94.17
- type: recall_at_1000
value: 98.80499999999999
- type: recall_at_3
value: 63.70099999999999
- type: recall_at_5
value: 70.878
- task:
type: Retrieval
dataset:
type: cqadupstack/gis
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.807
- type: map_at_10
value: 39.431
- type: map_at_100
value: 40.56
- type: map_at_1000
value: 40.617999999999995
- type: map_at_3
value: 36.436
- type: map_at_5
value: 37.955
- type: mrr_at_1
value: 31.186000000000003
- type: mrr_at_10
value: 41.654
- type: mrr_at_100
value: 42.58
- type: mrr_at_1000
value: 42.623
- type: mrr_at_3
value: 38.983000000000004
- type: mrr_at_5
value: 40.35
- type: ndcg_at_1
value: 31.186000000000003
- type: ndcg_at_10
value: 45.297
- type: ndcg_at_100
value: 50.515
- type: ndcg_at_1000
value: 52.005
- type: ndcg_at_3
value: 39.602
- type: ndcg_at_5
value: 42.027
- type: precision_at_1
value: 31.186000000000003
- type: precision_at_10
value: 7.073
- type: precision_at_100
value: 1.0210000000000001
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 11.729000000000001
- type: recall_at_1
value: 28.807
- type: recall_at_10
value: 61.138999999999996
- type: recall_at_100
value: 84.491
- type: recall_at_1000
value: 95.651
- type: recall_at_3
value: 45.652
- type: recall_at_5
value: 51.522
- task:
type: Retrieval
dataset:
type: cqadupstack/mathematica
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.607
- type: map_at_10
value: 31.944
- type: map_at_100
value: 33.317
- type: map_at_1000
value: 33.428000000000004
- type: map_at_3
value: 28.508
- type: map_at_5
value: 30.348999999999997
- type: mrr_at_1
value: 25.622
- type: mrr_at_10
value: 36.726
- type: mrr_at_100
value: 37.707
- type: mrr_at_1000
value: 37.761
- type: mrr_at_3
value: 33.934
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 25.622
- type: ndcg_at_10
value: 38.462
- type: ndcg_at_100
value: 44.327
- type: ndcg_at_1000
value: 46.623
- type: ndcg_at_3
value: 32.583
- type: ndcg_at_5
value: 35.175
- type: precision_at_1
value: 25.622
- type: precision_at_10
value: 7.425
- type: precision_at_100
value: 1.173
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 16.418
- type: precision_at_5
value: 11.866
- type: recall_at_1
value: 20.607
- type: recall_at_10
value: 53.337
- type: recall_at_100
value: 78.133
- type: recall_at_1000
value: 94.151
- type: recall_at_3
value: 37.088
- type: recall_at_5
value: 43.627
- task:
type: Retrieval
dataset:
type: cqadupstack/physics
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.814
- type: map_at_10
value: 47.609
- type: map_at_100
value: 48.972
- type: map_at_1000
value: 49.061
- type: map_at_3
value: 43.397999999999996
- type: map_at_5
value: 45.839
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.074
- type: mrr_at_100
value: 53.76800000000001
- type: mrr_at_1000
value: 53.794
- type: mrr_at_3
value: 50.241
- type: mrr_at_5
value: 51.805
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 54.419
- type: ndcg_at_100
value: 59.508
- type: ndcg_at_1000
value: 60.858000000000004
- type: ndcg_at_3
value: 48.296
- type: ndcg_at_5
value: 51.28
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 10.231
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_3
value: 23.419999999999998
- type: precision_at_5
value: 16.843
- type: recall_at_1
value: 33.814
- type: recall_at_10
value: 68.88
- type: recall_at_100
value: 89.794
- type: recall_at_1000
value: 98.058
- type: recall_at_3
value: 51.915
- type: recall_at_5
value: 59.704
- task:
type: Retrieval
dataset:
type: cqadupstack/programmers
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.668
- type: map_at_10
value: 43.032
- type: map_at_100
value: 44.48
- type: map_at_1000
value: 44.574000000000005
- type: map_at_3
value: 38.609
- type: map_at_5
value: 41.164
- type: mrr_at_1
value: 37.785000000000004
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 49.728
- type: mrr_at_1000
value: 49.769000000000005
- type: mrr_at_3
value: 45.909
- type: mrr_at_5
value: 47.61
- type: ndcg_at_1
value: 37.785000000000004
- type: ndcg_at_10
value: 50.21099999999999
- type: ndcg_at_100
value: 55.657999999999994
- type: ndcg_at_1000
value: 57.172
- type: ndcg_at_3
value: 43.726
- type: ndcg_at_5
value: 46.758
- type: precision_at_1
value: 37.785000000000004
- type: precision_at_10
value: 9.669
- type: precision_at_100
value: 1.4409999999999998
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 21.651
- type: precision_at_5
value: 15.822
- type: recall_at_1
value: 29.668
- type: recall_at_10
value: 65.575
- type: recall_at_100
value: 87.977
- type: recall_at_1000
value: 97.615
- type: recall_at_3
value: 47.251
- type: recall_at_5
value: 55.359
- task:
type: Retrieval
dataset:
type: mteb/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.29925
- type: map_at_10
value: 41.98708333333333
- type: map_at_100
value: 43.306916666666666
- type: map_at_1000
value: 43.40716666666667
- type: map_at_3
value: 38.431666666666665
- type: map_at_5
value: 40.4195
- type: mrr_at_1
value: 36.24483333333334
- type: mrr_at_10
value: 46.32666666666667
- type: mrr_at_100
value: 47.13983333333333
- type: mrr_at_1000
value: 47.18058333333334
- type: mrr_at_3
value: 43.66799999999999
- type: mrr_at_5
value: 45.163666666666664
- type: ndcg_at_1
value: 36.24483333333334
- type: ndcg_at_10
value: 48.251916666666666
- type: ndcg_at_100
value: 53.3555
- type: ndcg_at_1000
value: 55.024249999999995
- type: ndcg_at_3
value: 42.599583333333335
- type: ndcg_at_5
value: 45.24166666666666
- type: precision_at_1
value: 36.24483333333334
- type: precision_at_10
value: 8.666833333333333
- type: precision_at_100
value: 1.3214166666666665
- type: precision_at_1000
value: 0.16475
- type: precision_at_3
value: 19.9955
- type: precision_at_5
value: 14.271999999999998
- type: recall_at_1
value: 30.29925
- type: recall_at_10
value: 62.232333333333344
- type: recall_at_100
value: 84.151
- type: recall_at_1000
value: 95.37333333333333
- type: recall_at_3
value: 46.45541666666667
- type: recall_at_5
value: 53.264
- task:
type: Retrieval
dataset:
type: cqadupstack/stats
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.996
- type: map_at_10
value: 38.047
- type: map_at_100
value: 39.121
- type: map_at_1000
value: 39.202999999999996
- type: map_at_3
value: 35.376000000000005
- type: map_at_5
value: 36.763
- type: mrr_at_1
value: 32.362
- type: mrr_at_10
value: 40.717999999999996
- type: mrr_at_100
value: 41.586
- type: mrr_at_1000
value: 41.641
- type: mrr_at_3
value: 38.292
- type: mrr_at_5
value: 39.657
- type: ndcg_at_1
value: 32.362
- type: ndcg_at_10
value: 43.105
- type: ndcg_at_100
value: 48.026
- type: ndcg_at_1000
value: 49.998
- type: ndcg_at_3
value: 38.147999999999996
- type: ndcg_at_5
value: 40.385
- type: precision_at_1
value: 32.362
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 1.0170000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 16.411
- type: precision_at_5
value: 11.35
- type: recall_at_1
value: 28.996
- type: recall_at_10
value: 55.955
- type: recall_at_100
value: 77.744
- type: recall_at_1000
value: 92.196
- type: recall_at_3
value: 42.254999999999995
- type: recall_at_5
value: 47.776
- task:
type: Retrieval
dataset:
type: cqadupstack/tex
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.029
- type: map_at_10
value: 29.188
- type: map_at_100
value: 30.484
- type: map_at_1000
value: 30.608
- type: map_at_3
value: 26.195
- type: map_at_5
value: 27.866999999999997
- type: mrr_at_1
value: 24.57
- type: mrr_at_10
value: 33.461
- type: mrr_at_100
value: 34.398
- type: mrr_at_1000
value: 34.464
- type: mrr_at_3
value: 30.856
- type: mrr_at_5
value: 32.322
- type: ndcg_at_1
value: 24.57
- type: ndcg_at_10
value: 34.846
- type: ndcg_at_100
value: 40.544000000000004
- type: ndcg_at_1000
value: 43.019
- type: ndcg_at_3
value: 29.683999999999997
- type: ndcg_at_5
value: 32.11
- type: precision_at_1
value: 24.57
- type: precision_at_10
value: 6.535
- type: precision_at_100
value: 1.11
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 14.338000000000001
- type: precision_at_5
value: 10.496
- type: recall_at_1
value: 20.029
- type: recall_at_10
value: 47.509
- type: recall_at_100
value: 72.61999999999999
- type: recall_at_1000
value: 89.778
- type: recall_at_3
value: 33.031
- type: recall_at_5
value: 39.306000000000004
- task:
type: Retrieval
dataset:
type: cqadupstack/unix
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.753999999999998
- type: map_at_10
value: 43.814
- type: map_at_100
value: 45.072
- type: map_at_1000
value: 45.155
- type: map_at_3
value: 40.316
- type: map_at_5
value: 42.15
- type: mrr_at_1
value: 38.06
- type: mrr_at_10
value: 48.311
- type: mrr_at_100
value: 49.145
- type: mrr_at_1000
value: 49.181000000000004
- type: mrr_at_3
value: 45.678000000000004
- type: mrr_at_5
value: 47.072
- type: ndcg_at_1
value: 38.06
- type: ndcg_at_10
value: 50.083
- type: ndcg_at_100
value: 55.342
- type: ndcg_at_1000
value: 56.87
- type: ndcg_at_3
value: 44.513999999999996
- type: ndcg_at_5
value: 46.886
- type: precision_at_1
value: 38.06
- type: precision_at_10
value: 8.638
- type: precision_at_100
value: 1.253
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 20.709
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 31.753999999999998
- type: recall_at_10
value: 64.473
- type: recall_at_100
value: 86.832
- type: recall_at_1000
value: 96.706
- type: recall_at_3
value: 48.937000000000005
- type: recall_at_5
value: 55.214
- task:
type: Retrieval
dataset:
type: cqadupstack/webmasters
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.815
- type: map_at_10
value: 40.595
- type: map_at_100
value: 42.337
- type: map_at_1000
value: 42.559000000000005
- type: map_at_3
value: 37.120999999999995
- type: map_at_5
value: 38.912
- type: mrr_at_1
value: 34.585
- type: mrr_at_10
value: 45.068000000000005
- type: mrr_at_100
value: 45.93
- type: mrr_at_1000
value: 45.974
- type: mrr_at_3
value: 42.26
- type: mrr_at_5
value: 43.742
- type: ndcg_at_1
value: 34.585
- type: ndcg_at_10
value: 47.519
- type: ndcg_at_100
value: 53.102000000000004
- type: ndcg_at_1000
value: 54.949999999999996
- type: ndcg_at_3
value: 41.719
- type: ndcg_at_5
value: 44.17
- type: precision_at_1
value: 34.585
- type: precision_at_10
value: 9.368
- type: precision_at_100
value: 1.7870000000000001
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 19.895
- type: precision_at_5
value: 14.506
- type: recall_at_1
value: 28.815
- type: recall_at_10
value: 61.414
- type: recall_at_100
value: 85.922
- type: recall_at_1000
value: 97.15
- type: recall_at_3
value: 45.076
- type: recall_at_5
value: 51.271
- task:
type: Retrieval
dataset:
type: cqadupstack/wordpress
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.298000000000002
- type: map_at_10
value: 32.889
- type: map_at_100
value: 33.989999999999995
- type: map_at_1000
value: 34.074
- type: map_at_3
value: 29.873
- type: map_at_5
value: 31.539
- type: mrr_at_1
value: 26.433
- type: mrr_at_10
value: 34.937000000000005
- type: mrr_at_100
value: 35.914
- type: mrr_at_1000
value: 35.96
- type: mrr_at_3
value: 32.286
- type: mrr_at_5
value: 33.663
- type: ndcg_at_1
value: 26.433
- type: ndcg_at_10
value: 38.173
- type: ndcg_at_100
value: 43.884
- type: ndcg_at_1000
value: 45.916000000000004
- type: ndcg_at_3
value: 32.419
- type: ndcg_at_5
value: 35.092
- type: precision_at_1
value: 26.433
- type: precision_at_10
value: 6.1
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 13.802
- type: precision_at_5
value: 9.871
- type: recall_at_1
value: 24.298000000000002
- type: recall_at_10
value: 52.554
- type: recall_at_100
value: 79.345
- type: recall_at_1000
value: 94.464
- type: recall_at_3
value: 37.036
- type: recall_at_5
value: 43.518
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.194999999999999
- type: map_at_10
value: 24.563
- type: map_at_100
value: 26.775
- type: map_at_1000
value: 26.965
- type: map_at_3
value: 19.983999999999998
- type: map_at_5
value: 22.24
- type: mrr_at_1
value: 31.661
- type: mrr_at_10
value: 44.804
- type: mrr_at_100
value: 45.655
- type: mrr_at_1000
value: 45.678000000000004
- type: mrr_at_3
value: 41.292
- type: mrr_at_5
value: 43.468
- type: ndcg_at_1
value: 31.661
- type: ndcg_at_10
value: 34.271
- type: ndcg_at_100
value: 42.04
- type: ndcg_at_1000
value: 45.101
- type: ndcg_at_3
value: 27.529999999999998
- type: ndcg_at_5
value: 29.862
- type: precision_at_1
value: 31.661
- type: precision_at_10
value: 10.925
- type: precision_at_100
value: 1.92
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 20.456
- type: precision_at_5
value: 16.012999999999998
- type: recall_at_1
value: 14.194999999999999
- type: recall_at_10
value: 41.388999999999996
- type: recall_at_100
value: 67.58800000000001
- type: recall_at_1000
value: 84.283
- type: recall_at_3
value: 25.089
- type: recall_at_5
value: 31.642
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.898
- type: map_at_10
value: 23.226
- type: map_at_100
value: 33.372
- type: map_at_1000
value: 35.407
- type: map_at_3
value: 15.892999999999999
- type: map_at_5
value: 18.747
- type: mrr_at_1
value: 73.5
- type: mrr_at_10
value: 80.404
- type: mrr_at_100
value: 80.671
- type: mrr_at_1000
value: 80.676
- type: mrr_at_3
value: 78.958
- type: mrr_at_5
value: 79.683
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 48.337
- type: ndcg_at_100
value: 53.474
- type: ndcg_at_1000
value: 60.999
- type: ndcg_at_3
value: 52.538
- type: ndcg_at_5
value: 49.659
- type: precision_at_1
value: 73.5
- type: precision_at_10
value: 39.25
- type: precision_at_100
value: 12.4
- type: precision_at_1000
value: 2.4459999999999997
- type: precision_at_3
value: 56.333
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 9.898
- type: recall_at_10
value: 29.511
- type: recall_at_100
value: 60.45700000000001
- type: recall_at_1000
value: 84.47200000000001
- type: recall_at_3
value: 17.064
- type: recall_at_5
value: 21.258
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.19999999999999
- type: f1
value: 46.23854137552949
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 80.093
- type: map_at_10
value: 87.139
- type: map_at_100
value: 87.333
- type: map_at_1000
value: 87.344
- type: map_at_3
value: 86.395
- type: map_at_5
value: 86.866
- type: mrr_at_1
value: 86.36399999999999
- type: mrr_at_10
value: 91.867
- type: mrr_at_100
value: 91.906
- type: mrr_at_1000
value: 91.90700000000001
- type: mrr_at_3
value: 91.484
- type: mrr_at_5
value: 91.759
- type: ndcg_at_1
value: 86.36399999999999
- type: ndcg_at_10
value: 90.197
- type: ndcg_at_100
value: 90.819
- type: ndcg_at_1000
value: 91.01599999999999
- type: ndcg_at_3
value: 89.166
- type: ndcg_at_5
value: 89.74
- type: precision_at_1
value: 86.36399999999999
- type: precision_at_10
value: 10.537
- type: precision_at_100
value: 1.106
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.608
- type: precision_at_5
value: 20.618
- type: recall_at_1
value: 80.093
- type: recall_at_10
value: 95.003
- type: recall_at_100
value: 97.328
- type: recall_at_1000
value: 98.485
- type: recall_at_3
value: 92.072
- type: recall_at_5
value: 93.661
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.063
- type: map_at_10
value: 47.113
- type: map_at_100
value: 49.294
- type: map_at_1000
value: 49.422
- type: map_at_3
value: 40.955000000000005
- type: map_at_5
value: 44.5
- type: mrr_at_1
value: 55.401
- type: mrr_at_10
value: 62.99400000000001
- type: mrr_at_100
value: 63.63999999999999
- type: mrr_at_1000
value: 63.661
- type: mrr_at_3
value: 61.034
- type: mrr_at_5
value: 62.253
- type: ndcg_at_1
value: 55.401
- type: ndcg_at_10
value: 55.332
- type: ndcg_at_100
value: 61.931000000000004
- type: ndcg_at_1000
value: 63.841
- type: ndcg_at_3
value: 50.92
- type: ndcg_at_5
value: 52.525
- type: precision_at_1
value: 55.401
- type: precision_at_10
value: 15.262
- type: precision_at_100
value: 2.231
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 33.848
- type: precision_at_5
value: 25.031
- type: recall_at_1
value: 29.063
- type: recall_at_10
value: 62.498
- type: recall_at_100
value: 85.86
- type: recall_at_1000
value: 97.409
- type: recall_at_3
value: 45.472
- type: recall_at_5
value: 53.344
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.205
- type: map_at_10
value: 64.19399999999999
- type: map_at_100
value: 65.183
- type: map_at_1000
value: 65.23299999999999
- type: map_at_3
value: 60.239
- type: map_at_5
value: 62.695
- type: mrr_at_1
value: 74.409
- type: mrr_at_10
value: 80.84
- type: mrr_at_100
value: 81.10199999999999
- type: mrr_at_1000
value: 81.109
- type: mrr_at_3
value: 79.739
- type: mrr_at_5
value: 80.46600000000001
- type: ndcg_at_1
value: 74.409
- type: ndcg_at_10
value: 71.757
- type: ndcg_at_100
value: 75.152
- type: ndcg_at_1000
value: 76.098
- type: ndcg_at_3
value: 66.174
- type: ndcg_at_5
value: 69.283
- type: precision_at_1
value: 74.409
- type: precision_at_10
value: 15.503
- type: precision_at_100
value: 1.8110000000000002
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 43.457
- type: precision_at_5
value: 28.532000000000004
- type: recall_at_1
value: 37.205
- type: recall_at_10
value: 77.515
- type: recall_at_100
value: 90.56
- type: recall_at_1000
value: 96.759
- type: recall_at_3
value: 65.18599999999999
- type: recall_at_5
value: 71.33
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 82.9448
- type: ap
value: 78.25923353099166
- type: f1
value: 82.86422040179993
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.834
- type: map_at_10
value: 35.85
- type: map_at_100
value: 37.013
- type: map_at_1000
value: 37.056
- type: map_at_3
value: 31.613000000000003
- type: map_at_5
value: 34.113
- type: mrr_at_1
value: 23.424
- type: mrr_at_10
value: 36.398
- type: mrr_at_100
value: 37.498
- type: mrr_at_1000
value: 37.534
- type: mrr_at_3
value: 32.275999999999996
- type: mrr_at_5
value: 34.705000000000005
- type: ndcg_at_1
value: 23.424
- type: ndcg_at_10
value: 43.236999999999995
- type: ndcg_at_100
value: 48.776
- type: ndcg_at_1000
value: 49.778
- type: ndcg_at_3
value: 34.692
- type: ndcg_at_5
value: 39.119
- type: precision_at_1
value: 23.424
- type: precision_at_10
value: 6.918
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.881
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 22.834
- type: recall_at_10
value: 66.03999999999999
- type: recall_at_100
value: 91.532
- type: recall_at_1000
value: 99.068
- type: recall_at_3
value: 42.936
- type: recall_at_5
value: 53.539
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.1377108983128
- type: f1
value: 95.87034720246666
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 86.10579115367078
- type: f1
value: 70.20810321445228
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.80497646267652
- type: f1
value: 77.32475274059293
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.52320107599192
- type: f1
value: 81.22312939311655
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.709106678767018
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.95879128399585
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.67476691128679
- type: mrr
value: 33.921654478513986
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.223
- type: map_at_10
value: 15.992999999999999
- type: map_at_100
value: 21.09
- type: map_at_1000
value: 22.822
- type: map_at_3
value: 11.475
- type: map_at_5
value: 13.501
- type: mrr_at_1
value: 53.251000000000005
- type: mrr_at_10
value: 61.878
- type: mrr_at_100
value: 62.307
- type: mrr_at_1000
value: 62.342
- type: mrr_at_3
value: 60.01
- type: mrr_at_5
value: 61.202
- type: ndcg_at_1
value: 51.702999999999996
- type: ndcg_at_10
value: 41.833999999999996
- type: ndcg_at_100
value: 39.061
- type: ndcg_at_1000
value: 47.397
- type: ndcg_at_3
value: 47.083000000000006
- type: ndcg_at_5
value: 44.722
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 31.3
- type: precision_at_100
value: 10.254000000000001
- type: precision_at_1000
value: 2.338
- type: precision_at_3
value: 43.756
- type: precision_at_5
value: 38.824
- type: recall_at_1
value: 7.223
- type: recall_at_10
value: 20.529
- type: recall_at_100
value: 39.818
- type: recall_at_1000
value: 70.152
- type: recall_at_3
value: 12.666
- type: recall_at_5
value: 15.798000000000002
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.847
- type: map_at_10
value: 56.255
- type: map_at_100
value: 57.019
- type: map_at_1000
value: 57.03
- type: map_at_3
value: 51.665000000000006
- type: map_at_5
value: 54.543
- type: mrr_at_1
value: 43.801
- type: mrr_at_10
value: 58.733999999999995
- type: mrr_at_100
value: 59.206
- type: mrr_at_1000
value: 59.21300000000001
- type: mrr_at_3
value: 55.266999999999996
- type: mrr_at_5
value: 57.449
- type: ndcg_at_1
value: 43.772
- type: ndcg_at_10
value: 64.213
- type: ndcg_at_100
value: 67.13
- type: ndcg_at_1000
value: 67.368
- type: ndcg_at_3
value: 55.977
- type: ndcg_at_5
value: 60.597
- type: precision_at_1
value: 43.772
- type: precision_at_10
value: 10.272
- type: precision_at_100
value: 1.193
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.261
- type: precision_at_5
value: 17.885
- type: recall_at_1
value: 38.847
- type: recall_at_10
value: 85.76700000000001
- type: recall_at_100
value: 98.054
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 64.82
- type: recall_at_5
value: 75.381
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.77
- type: map_at_10
value: 83.195
- type: map_at_100
value: 83.869
- type: map_at_1000
value: 83.883
- type: map_at_3
value: 80.04599999999999
- type: map_at_5
value: 82.011
- type: mrr_at_1
value: 79.2
- type: mrr_at_10
value: 85.942
- type: mrr_at_100
value: 86.063
- type: mrr_at_1000
value: 86.064
- type: mrr_at_3
value: 84.82
- type: mrr_at_5
value: 85.56899999999999
- type: ndcg_at_1
value: 79.17999999999999
- type: ndcg_at_10
value: 87.161
- type: ndcg_at_100
value: 88.465
- type: ndcg_at_1000
value: 88.553
- type: ndcg_at_3
value: 83.958
- type: ndcg_at_5
value: 85.699
- type: precision_at_1
value: 79.17999999999999
- type: precision_at_10
value: 13.401
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.903000000000006
- type: precision_at_5
value: 24.404
- type: recall_at_1
value: 68.77
- type: recall_at_10
value: 95.132
- type: recall_at_100
value: 99.58200000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 86.119
- type: recall_at_5
value: 90.932
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.7204049654583
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.98164986883849
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.443
- type: map_at_10
value: 13.86
- type: map_at_100
value: 16.496
- type: map_at_1000
value: 16.836000000000002
- type: map_at_3
value: 9.661
- type: map_at_5
value: 11.745
- type: mrr_at_1
value: 26.8
- type: mrr_at_10
value: 37.777
- type: mrr_at_100
value: 38.928000000000004
- type: mrr_at_1000
value: 38.967
- type: mrr_at_3
value: 34.083000000000006
- type: mrr_at_5
value: 36.308
- type: ndcg_at_1
value: 26.8
- type: ndcg_at_10
value: 22.961000000000002
- type: ndcg_at_100
value: 32.582
- type: ndcg_at_1000
value: 37.972
- type: ndcg_at_3
value: 21.292
- type: ndcg_at_5
value: 18.945999999999998
- type: precision_at_1
value: 26.8
- type: precision_at_10
value: 12.06
- type: precision_at_100
value: 2.593
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 16.84
- type: recall_at_1
value: 5.443
- type: recall_at_10
value: 24.445
- type: recall_at_100
value: 52.602000000000004
- type: recall_at_1000
value: 78.767
- type: recall_at_3
value: 12.098
- type: recall_at_5
value: 17.077
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 83.9379272617096
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 79.26752176661364
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 84.8327309083665
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 82.9394255552954
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 88.08995363382608
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 86.53522220099619
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 89.57796559847532
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.66598855577894
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 88.0472708354572
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.04689157650684
- type: mrr
value: 96.51889958262507
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.827999999999996
- type: map_at_10
value: 73.54899999999999
- type: map_at_100
value: 73.892
- type: map_at_1000
value: 73.901
- type: map_at_3
value: 70.663
- type: map_at_5
value: 72.449
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 74.554
- type: mrr_at_100
value: 74.81700000000001
- type: mrr_at_1000
value: 74.82600000000001
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.717
- type: ndcg_at_1
value: 66.0
- type: ndcg_at_10
value: 78.218
- type: ndcg_at_100
value: 79.706
- type: ndcg_at_1000
value: 79.925
- type: ndcg_at_3
value: 73.629
- type: ndcg_at_5
value: 75.89
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.889
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 62.827999999999996
- type: recall_at_10
value: 91.533
- type: recall_at_100
value: 98.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 84.68900000000001
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 95.09301057928796
- type: cos_sim_f1
value: 89.71193415637859
- type: cos_sim_precision
value: 92.37288135593221
- type: cos_sim_recall
value: 87.2
- type: dot_accuracy
value: 99.72079207920792
- type: dot_ap
value: 92.77707970155015
- type: dot_f1
value: 85.88588588588588
- type: dot_precision
value: 85.97194388777555
- type: dot_recall
value: 85.8
- type: euclidean_accuracy
value: 99.7980198019802
- type: euclidean_ap
value: 95.04124481520121
- type: euclidean_f1
value: 89.61693548387096
- type: euclidean_precision
value: 90.34552845528455
- type: euclidean_recall
value: 88.9
- type: manhattan_accuracy
value: 99.7960396039604
- type: manhattan_ap
value: 95.02691504694813
- type: manhattan_f1
value: 89.60321446509292
- type: manhattan_precision
value: 90.0100908173562
- type: manhattan_recall
value: 89.2
- type: max_accuracy
value: 99.8019801980198
- type: max_ap
value: 95.09301057928796
- type: max_f1
value: 89.71193415637859
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 72.74124969197169
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.262798307863996
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.823414217790464
- type: mrr
value: 55.557133838383834
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.01226930465494
- type: cos_sim_spearman
value: 30.9368445798007
- type: dot_pearson
value: 30.204833368654533
- type: dot_spearman
value: 30.438900411966618
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22699999999999998
- type: map_at_10
value: 2.0420000000000003
- type: map_at_100
value: 13.33
- type: map_at_1000
value: 33.627
- type: map_at_3
value: 0.639
- type: map_at_5
value: 1.056
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 91.167
- type: mrr_at_100
value: 91.167
- type: mrr_at_1000
value: 91.167
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 91.167
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 80.337
- type: ndcg_at_100
value: 65.852
- type: ndcg_at_1000
value: 59.821000000000005
- type: ndcg_at_3
value: 81.061
- type: ndcg_at_5
value: 81.396
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 85.0
- type: precision_at_100
value: 67.75999999999999
- type: precision_at_1000
value: 26.272000000000002
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 86.4
- type: recall_at_1
value: 0.22699999999999998
- type: recall_at_10
value: 2.241
- type: recall_at_100
value: 16.478
- type: recall_at_1000
value: 56.442
- type: recall_at_3
value: 0.672
- type: recall_at_5
value: 1.143
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.836
- type: map_at_10
value: 8.536000000000001
- type: map_at_100
value: 14.184
- type: map_at_1000
value: 15.885
- type: map_at_3
value: 3.7359999999999998
- type: map_at_5
value: 5.253
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 34.77
- type: mrr_at_100
value: 36.18
- type: mrr_at_1000
value: 36.18
- type: mrr_at_3
value: 30.612000000000002
- type: mrr_at_5
value: 32.449
- type: ndcg_at_1
value: 20.408
- type: ndcg_at_10
value: 20.498
- type: ndcg_at_100
value: 33.354
- type: ndcg_at_1000
value: 45.699
- type: ndcg_at_3
value: 19.292
- type: ndcg_at_5
value: 19.541
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 7.163
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 19.728
- type: precision_at_5
value: 20.0
- type: recall_at_1
value: 1.836
- type: recall_at_10
value: 15.212
- type: recall_at_100
value: 45.364
- type: recall_at_1000
value: 83.64
- type: recall_at_3
value: 4.651000000000001
- type: recall_at_5
value: 7.736
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.5856
- type: ap
value: 14.297836125608864
- type: f1
value: 54.45458507465688
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.89869835880024
- type: f1
value: 62.15163526419782
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 56.408998393035446
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.78822197055493
- type: cos_sim_ap
value: 81.73234934293887
- type: cos_sim_f1
value: 74.16373812312898
- type: cos_sim_precision
value: 73.18263549961469
- type: cos_sim_recall
value: 75.17150395778364
- type: dot_accuracy
value: 87.85837754068069
- type: dot_ap
value: 79.69812660365871
- type: dot_f1
value: 72.52999744702579
- type: dot_precision
value: 70.25222551928783
- type: dot_recall
value: 74.96042216358839
- type: euclidean_accuracy
value: 88.74649818203493
- type: euclidean_ap
value: 81.47777928110055
- type: euclidean_f1
value: 74.1248097412481
- type: euclidean_precision
value: 71.37274059599413
- type: euclidean_recall
value: 77.0976253298153
- type: manhattan_accuracy
value: 88.7286165583835
- type: manhattan_ap
value: 81.47766386927232
- type: manhattan_f1
value: 74.16730231375541
- type: manhattan_precision
value: 71.56526005888125
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 88.78822197055493
- type: max_ap
value: 81.73234934293887
- type: max_f1
value: 74.16730231375541
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.30026778437536
- type: cos_sim_ap
value: 86.56353001037664
- type: cos_sim_f1
value: 79.359197907585
- type: cos_sim_precision
value: 75.12379642365887
- type: cos_sim_recall
value: 84.10070834616569
- type: dot_accuracy
value: 88.8539604921023
- type: dot_ap
value: 85.44601003294055
- type: dot_f1
value: 78.20008094484713
- type: dot_precision
value: 74.88549080403072
- type: dot_recall
value: 81.82168155220204
- type: euclidean_accuracy
value: 89.25369658865992
- type: euclidean_ap
value: 86.46965679550075
- type: euclidean_f1
value: 79.16785612332285
- type: euclidean_precision
value: 73.77627028465017
- type: euclidean_recall
value: 85.4096088697259
- type: manhattan_accuracy
value: 89.26727985407692
- type: manhattan_ap
value: 86.46460344566123
- type: manhattan_f1
value: 79.1723543358
- type: manhattan_precision
value: 74.20875420875421
- type: manhattan_recall
value: 84.84755158607946
- type: max_accuracy
value: 89.30026778437536
- type: max_ap
value: 86.56353001037664
- type: max_f1
value: 79.359197907585
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading base Llama-3 model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading the supervised model. This loads the trained LoRA weights on top of the MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6470, 0.1619],
[0.0786, 0.5844]])
"""
```
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). |
kanishka/smolm-autoreg-bpe-seed_8128 | kanishka | "2024-03-19T20:53:56Z" | 4,185 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:53:52Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_8128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_8128
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4757
- Accuracy: 0.4994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 8128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0573 | 1.0 | 2928 | 3.0221 | 0.4374 |
| 2.7148 | 2.0 | 5856 | 2.7910 | 0.4589 |
| 2.5912 | 3.0 | 8784 | 2.6989 | 0.4683 |
| 2.5153 | 4.0 | 11712 | 2.6402 | 0.4762 |
| 2.4585 | 5.0 | 14640 | 2.6094 | 0.4799 |
| 2.4202 | 6.0 | 17568 | 2.5849 | 0.4829 |
| 2.395 | 7.0 | 20496 | 2.5703 | 0.4845 |
| 2.363 | 8.0 | 23424 | 2.5577 | 0.4859 |
| 2.2878 | 9.0 | 26352 | 2.5095 | 0.4940 |
| 2.1407 | 10.0 | 29280 | 2.4757 | 0.4994 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
kanishka/smolm-autoreg-bpe-seed_1729 | kanishka | "2024-03-19T20:54:05Z" | 4,185 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:54:01Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_1729
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_1729
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4741
- Accuracy: 0.4993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 1729
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0421 | 1.0 | 2928 | 3.0145 | 0.4376 |
| 2.7062 | 2.0 | 5856 | 2.7902 | 0.4590 |
| 2.5829 | 3.0 | 8784 | 2.6946 | 0.4682 |
| 2.5042 | 4.0 | 11712 | 2.6443 | 0.4750 |
| 2.4588 | 5.0 | 14640 | 2.6083 | 0.4793 |
| 2.4252 | 6.0 | 17568 | 2.5868 | 0.4829 |
| 2.3884 | 7.0 | 20496 | 2.5670 | 0.4854 |
| 2.3624 | 8.0 | 23424 | 2.5582 | 0.4855 |
| 2.2859 | 9.0 | 26352 | 2.5018 | 0.4944 |
| 2.1433 | 10.0 | 29280 | 2.4741 | 0.4993 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ugurcelebi/DevOpsGPT-1.2-q4_k_m | ugurcelebi | "2024-06-23T11:09:50Z" | 4,185 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/qwen2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-23T10:56:11Z" | ---
base_model: unsloth/qwen2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
---
# Uploaded model
- **Developed by:** ugurcelebi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
keremberke/yolov8m-csgo-player-detection | keremberke | "2023-02-22T13:03:52Z" | 4,183 | 7 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/csgo-object-detection",
"model-index",
"region:us"
] | object-detection | "2023-01-29T03:32:30Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.21
inference: false
datasets:
- keremberke/csgo-object-detection
model-index:
- name: keremberke/yolov8m-csgo-player-detection
results:
- task:
type: object-detection
dataset:
type: keremberke/csgo-object-detection
name: csgo-object-detection
split: validation
metrics:
      - type: precision # since [email protected] is not available on hf.co/metrics
value: 0.89165 # min: 0.0 - max: 1.0
        name: [email protected](box)
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-csgo-player-detection" src="https://huggingface.co/keremberke/yolov8m-csgo-player-detection/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['ct', 'cthead', 't', 'thead']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.23 ultralytics==8.0.21
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('keremberke/yolov8m-csgo-player-detection')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
Universal-NER/UniNER-7B-all | Universal-NER | "2023-08-11T21:24:35Z" | 4,179 | 82 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2308.03279",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-11T20:52:49Z" | ---
license: cc-by-nc-4.0
language:
- en
---
# UniNER-7B-all
**Description**: This model is the best UniNER model. It is trained on the combination of three data splits: (1) ChatGPT-generated [Pile-NER-type data](https://huggingface.co/datasets/Universal-NER/Pile-NER-type), (2) ChatGPT-generated [Pile-NER-definition data](https://huggingface.co/datasets/Universal-NER/Pile-NER-definition), and (3) 40 supervised datasets in the Universal NER benchmark (see Fig. 4 in the paper), where we randomly sample up to 10K instances from the train split of each dataset. Note that the CrossNER and MIT datasets are excluded from training for OOD evaluation.
Check our [paper](https://arxiv.org/abs/2308.03279) for more information, and see our [repo](https://github.com/universal-ner/universal-ner) for how to use the model.
## Inference
The template for inference instances is as follows:
<div style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Prompting template:</strong><br/>
A virtual assistant answers questions from a user based on the provided text.<br/>
USER: Text: <span style="color: #d73a49;">{Fill the input text here}</span><br/>
ASSISTANT: I’ve read this text.<br/>
USER: What describes <span style="color: #d73a49;">{Fill the entity type here}</span> in the text?<br/>
ASSISTANT: <span style="color: #0366d6;">(model's predictions in JSON format)</span><br/>
</div>
### Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
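Below is a minimal sketch of applying this template with `transformers`. The example text, entity type, and generation settings are illustrative assumptions, not part of the model card; see the [repo](https://github.com/universal-ner/universal-ner) for the reference inference code.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Universal-NER/UniNER-7B-all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative inputs (assumptions for this sketch)
text = "Barack Obama served as the 44th President of the United States."
entity_type = "person"

# Fill the prompting template shown above; one entity type per instance
prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    f"USER: Text: {text}\n"
    "ASSISTANT: I've read this text.\n"
    f"USER: What describes {entity_type} in the text?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# The model answers in JSON format, e.g. ["Barack Obama"]
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```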
## License
This model and its associated data are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. They are primarily used for research purposes.
## Citation
```bibtex
@article{zhou2023universalner,
title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition},
author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
year={2023},
eprint={2308.03279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
philschmid/bart-base-samsum | philschmid | "2022-12-05T13:32:40Z" | 4,177 | 3 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"sagemaker",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-03-02T23:29:05Z" | ---
language: en
license: apache-2.0
tags:
- sagemaker
- bart
- summarization
datasets:
- samsum
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\
\ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\
\ "
model-index:
- name: philschmid/bart-base-samsum
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 45.3438
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2JhY2U3M2ViYTVhNTAzM2M3NjhjMzBjYTk0N2I2MzlmN2Q0N2M1YzFlNGU1ZWVlMGI1YjYzMzZhYjNmMDk1MCIsInZlcnNpb24iOjF9.tLr7VUXSYDd9LaMtVIV8dheZRxX7pf1kyn9Kd4MQY8L_pj13_CeWenqOauVsHzRAZ5Jt5RuHjYFBWbV2TNjvDQ
- type: rouge
value: 21.6953
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmExODAyMTcwNjU5MjM0MzkzNjZlMGY5YzMyMjNiZjM5OWQ5NzFhODIyMWJiYjUwZGY4ZGM0MzE5OTJiYzEyMSIsInZlcnNpb24iOjF9.qR_Cge1A4NfJL_do4W7Y1kHxU0L98Ds6tbZy-4e-FVNW4aG5zRBxgOX8ieB93N2E19gtzqGE6BdpQfVcZAgXBQ
- type: rouge
value: 38.1365
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTA5ZTgyNDYxNzgzN2FhNTBlN2NjNzE0MDgyMzZkMTNjMGUyMDk3N2EzOThhMGFhZTQyYzZhZjQ5NjlkOTVlYyIsInZlcnNpb24iOjF9.dKns4BLmyWGUWweYSLYFttHIoWw57z1GKnvatMjkyVvcgwd_iF9imZ7QnJjjLAkc-AUMwwoxoOjEVF8FNf8JBA
- type: rouge
value: 41.5913
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmJiMzY3ODEwY2Q0YzNjM2QwMjI2MGRmOTEyYjQ3ZmNhZThmYWUxNDJkZDY1NTg3NGQzOGI0YmZlYjI2MDNlZSIsInZlcnNpb24iOjF9.pBrKwWa1mjacdhXSXMUQ0nv1wbcwscW_9uVFkicF2PbJ-JQjzUbL10Jy-b_yBOiJeY5I9ApJySgUH5JMq3_pBg
- type: loss
value: 1.5832244157791138
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWZhNGZjNjJiODIyNDU0NjZjMGExOWE1NWJhMmFiOGY5MDNiZWY0MjExYzA3Njg1OTJhNjEyZjI2MTg0N2I5YiIsInZlcnNpb24iOjF9.T6xwQM5yZ8eD8upqo5zjcUxcX0mqY9wx7f8j0zN9txAe39hURHY-8ibLYJvWckepTvpdUA6is4AC9RUWia24AA
- type: gen_len
value: 17.9927
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzU4ZGI1ZjJlMjg0NTBkYzlkOWQzMWUzZDZkODZkZjVhNTAyMTI4YTA2MWExM2U2YTQwM2YxMDQ2ODE0Yjc0NSIsInZlcnNpb24iOjF9.mDGhriDLXIJq_yb3Yqj6MBJSCxXXrRN1LfHsGkV8i1oOpkLiSLic7D8fSFMdTZTkl2XmzQfkVU2Wv298YyQEBg
---
## `bart-base-samsum`
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning Container.
You can find the notebook [here]() and the accompanying blog post [here]().
For more information look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters
```json
{
"dataset_name": "samsum",
"do_eval": true,
"do_train": true,
"fp16": true,
"learning_rate": 5e-05,
"model_name_or_path": "facebook/bart-base",
"num_train_epochs": 3,
"output_dir": "/opt/ml/model",
"per_device_eval_batch_size": 8,
"per_device_train_batch_size": 8,
"seed": 7
}
```
## Train results
| key | value |
| --- | ----- |
| epoch | 3 |
| init_mem_cpu_alloc_delta | 180190 |
| init_mem_cpu_peaked_delta | 18282 |
| init_mem_gpu_alloc_delta | 558658048 |
| init_mem_gpu_peaked_delta | 0 |
| train_mem_cpu_alloc_delta | 6658519 |
| train_mem_cpu_peaked_delta | 642937 |
| train_mem_gpu_alloc_delta | 2267624448 |
| train_mem_gpu_peaked_delta | 10355728896 |
| train_runtime | 98.4931 |
| train_samples | 14732 |
| train_samples_per_second | 3.533 |
## Eval results
| key | value |
| --- | ----- |
| epoch | 3 |
| eval_loss | 1.5356481075286865 |
| eval_mem_cpu_alloc_delta | 659047 |
| eval_mem_cpu_peaked_delta | 18254 |
| eval_mem_gpu_alloc_delta | 0 |
| eval_mem_gpu_peaked_delta | 300285440 |
| eval_runtime | 0.3116 |
| eval_samples | 818 |
| eval_samples_per_second | 2625.337 |
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="philschmid/bart-base-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
|
TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T | TinyLlama | "2023-12-29T06:04:50Z" | 4,177 | 47 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T06:13:09Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be used as a drop-in replacement in many open-source projects built upon Llama. TinyLlama is also compact, with only 1.1B parameters, making it suitable for applications with restricted computation and memory footprints.
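Because the checkpoint uses Llama 2's architecture and tokenizer, it loads with the standard `transformers` causal-LM classes. A minimal sketch follows; the prompt and sampling settings are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```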
#### This Collection
This collection contains all checkpoints after the 1T fix. The branch name indicates the training step and the number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86**|
|
digiplay/OnlyRealistic_v29 | digiplay | "2024-06-27T21:39:48Z" | 4,177 | 2 | diffusers | [
"diffusers",
"safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-06-18T21:14:31Z" | ---
license: other
---
Model info:
https://civitai.com/models/112756/onlyrealistic-or?modelVersionId=130040
|
Wi/gptp | Wi | "2022-09-15T20:16:30Z" | 4,174 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"ace",
"en",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2022-09-15T20:07:18Z" | ---
language: en
license: apache-2.0
tags:
- ace
---
# ACE Example
|
severinsimmler/xlm-roberta-longformer-base-16384 | severinsimmler | "2023-07-10T22:15:52Z" | 4,172 | 16 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"feature-extraction",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2004.05150",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-04-20T15:41:44Z" | ---
model-index:
- name: xlm-roberta-longformer-base-16384
results: []
license: mit
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# xlm-roberta-longformer-base-16384
⚠️ This is just the PyTorch version of [`hyperonym/xlm-roberta-longformer-base-16384`](https://huggingface.co/hyperonym/xlm-roberta-longformer-base-16384) without any modifications.
**xlm-roberta-longformer** is a multilingual [Longformer](https://arxiv.org/abs/2004.05150) initialized with [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base)'s weights without further pretraining. It is intended to be fine-tuned on a downstream task.
The notebook for replicating the model is available on GitHub: https://github.com/hyperonym/dirge/blob/master/models/xlm-roberta-longformer/convert.ipynb
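For example, the checkpoint can be loaded for feature extraction as below. This is a minimal sketch; the input text and the choice of global attention on the first token are illustrative assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "severinsimmler/xlm-roberta-longformer-base-16384"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    "A long multilingual document ...",
    return_tensors="pt",
    truncation=True,
    max_length=16384,
)
# Longformer uses windowed local attention by default; optionally mark the
# first token as global so it attends to (and is attended by) all tokens.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```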
|
google/t5_xxl_true_nli_mixture | google | "2023-03-23T10:55:45Z" | 4,171 | 38 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:tals/vitaminc",
"dataset:SetFit/mnli",
"dataset:snli",
"dataset:fever",
"dataset:paws",
"dataset:scitail",
"arxiv:2204.04991",
"arxiv:1508.05326",
"arxiv:1904.01130",
"arxiv:2103.08541",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-12-07T16:51:46Z" | ---
license: apache-2.0
datasets:
- tals/vitaminc
- SetFit/mnli
- snli
- fever
- paws
- scitail
language:
- en
---
This is an NLI model based on T5-XXL that predicts a binary label ('1' - Entailment, '0' - No entailment).
It is trained similarly to the NLI model described in the [TRUE paper (Honovich et al, 2022)](https://arxiv.org/pdf/2204.04991.pdf), but using the following datasets instead of ANLI:
- SNLI ([Bowman et al., 2015](https://arxiv.org/abs/1508.05326))
- MNLI ([Williams et al., 2018](https://aclanthology.org/N18-1101.pdf))
- Fever ([Thorne et al., 2018](https://aclanthology.org/N18-1074.pdf))
- Scitail ([Khot et al., 2018](http://ai2-website.s3.amazonaws.com/publications/scitail-aaai-2018_cameraready.pdf))
- PAWS ([Zhang et al. 2019](https://arxiv.org/abs/1904.01130))
- VitaminC ([Schuster et al., 2021](https://arxiv.org/pdf/2103.08541.pdf))
The input format for the model is: "premise: PREMISE_TEXT hypothesis: HYPOTHESIS_TEXT".
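A minimal sketch of that format with `transformers` (the premise and hypothesis strings are illustrative; note the model is T5-XXL-sized, so loading it requires substantial memory):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5_xxl_true_nli_mixture"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

premise = "The cat sat on the mat."        # illustrative
hypothesis = "There is a cat on the mat."  # illustrative
inputs = tokenizer(f"premise: {premise} hypothesis: {hypothesis}", return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # '1' = entailment, '0' = no entailment
```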
If you use this model for a research publication, please cite the TRUE paper (using the bibtex entry below) and the dataset papers mentioned above.
```
@inproceedings{honovich-etal-2022-true-evaluating,
title = "{TRUE}: Re-evaluating Factual Consistency Evaluation",
author = "Honovich, Or and
Aharoni, Roee and
Herzig, Jonathan and
Taitelbaum, Hagai and
Kukliansy, Doron and
Cohen, Vered and
Scialom, Thomas and
Szpektor, Idan and
Hassidim, Avinatan and
Matias, Yossi",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.287",
doi = "10.18653/v1/2022.naacl-main.287",
pages = "3905--3920",
}
``` |
NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF | NikolayKozloff | "2024-07-01T18:31:01Z" | 4,171 | 1 | transformers | [
"transformers",
"gguf",
"solar",
"mistral",
"pytorch",
"solar-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"en",
"base_model:beomi/Solar-Ko-Recovery-11B",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-07-01T18:30:13Z" | ---
base_model: beomi/Solar-Ko-Recovery-11B
language:
- ko
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- solar
- mistral
- pytorch
- solar-ko
- llama-cpp
- gguf-my-repo
inference: false
---
# NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF
This model was converted to GGUF format from [`beomi/Solar-Ko-Recovery-11B`](https://huggingface.co/beomi/Solar-Ko-Recovery-11B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/Solar-Ko-Recovery-11B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Solar-Ko-Recovery-11B-Q8_0-GGUF --hf-file solar-ko-recovery-11b-q8_0.gguf -c 2048
```
|
ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF | ZeroWw | "2024-06-28T14:58:21Z" | 4,170 | 1 | null | [
"gguf",
"en",
"license:mit",
"region:us"
] | null | "2024-06-28T14:45:57Z" |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.
|
ybelkada/falcon-7b-sharded-bf16 | ybelkada | "2024-04-10T12:24:31Z" | 4,169 | 19 | transformers | [
"transformers",
"pytorch",
"safetensors",
"falcon",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-02T10:11:50Z" | Entry not found |
NousResearch/Yarn-Llama-2-7b-128k | NousResearch | "2023-09-04T05:26:59Z" | 4,167 | 38 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"dataset:pg19",
"arxiv:2309.00071",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-31T05:37:34Z" | ---
datasets:
- pg19
metrics:
- perplexity
library_name: transformers
---
# Model Card: Nous-Yarn-Llama-2-7b-128k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)
## Model Description
Nous-Yarn-Llama-2-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 600 steps.
This model is the Flash Attention 2 patched version of the original model: https://huggingface.co/conceptofmind/Yarn-Llama-2-7b-128k
Note that this model **requires** the [Flash Attention library](https://pypi.org/project/flash-attn/) in order to function correctly, see the Model Usage section for installation instructions.
## Model Training
Starting from the base Llama 2 models, this model was further pretrained on a subset of the PG19 dataset, allowing it to effectively utilize up to 128k tokens of context.
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, Paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, Paper and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model Training
- [honglu2875](https://github.com/honglu2875): Paper and evals
The authors would like to thank Stability AI, Carper AI, and Eleuther AI for their generous support of significant computing resources that enabled the training of these models and the completion of this research. We would also like to thank Jonathan Tow and Dakota Mahan directly for their help in advising on the use of the Stability AI compute cluster. Additionally, we would like to thank a16z and PygmalionAI for providing resources to run evaluations and experiments on the models.
## Usage and Prompt Format
Install FA2 and Rotary Extensions:
```
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
There are no specific prompt formats as this is a pretrained base model.
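Once the extensions above are installed, loading follows the usual causal-LM pattern; a minimal generation sketch (dtype and device placement are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Llama-2-7b-128k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption; pick a dtype your hardware supports
    device_map="auto",
    trust_remote_code=True,       # required for the patched Flash Attention 2 code path
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```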
## Benchmark Results
TODO
## Future Plans
We plan to continue training when we have more compute and to improve the dataset and/or instruct tune the models in order to improve the long context performance even further.
## Model Usage
The model is available for download on HuggingFace. |
valeriojob/flashcardsGPT-Qwen2-7B-v0.1-GGUF | valeriojob | "2024-06-27T00:56:30Z" | 4,167 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/Qwen2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T00:38:11Z" | ---
base_model: unsloth/Qwen2-7B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# flashcardsGPT-Qwen2-7B-v0.1-GGUF
- This model is a fine-tuned version of [unsloth/Qwen2-7b](https://huggingface.co/unsloth/Qwen2-7b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R", which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo includes the quantized models in the GGUF format. There is a separate repo called [valeriojob/flashcardsGPT-Qwen2-7B-v0.1](https://huggingface.co/valeriojob/flashcardsGPT-Qwen2-7B-v0.1) that includes the default format of the model as well as the LoRA adapters of the model.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Model description
This model takes the OCR-extracted text from a university lecture slide as input. It then generates high-quality flashcards and returns them as a JSON object.
It uses the following prompt-engineering template:
"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided—only respond with the JSON object.
Here is the OCR-extracted text:
""""
## Intended uses & limitations
The fine-tuned model can be used to generate high-quality flashcards based on TSAR lectures from the BSc BIT programme offered by the FHNW university.
## Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1)
## Licenses
- **License:** apache-2.0 |
tomh/scotus-bert | tomh | "2023-06-13T17:38:22Z" | 4,165 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-06-13T12:27:45Z" | Entry not found |
intfloat/simlm-msmarco-reranker | intfloat | "2023-05-22T09:36:12Z" | 4,164 | 12 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"en",
"arxiv:2207.02578",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-11T05:27:09Z" | ---
license: mit
language:
- en
---
# SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval
paper available at [https://arxiv.org/pdf/2207.02578](https://arxiv.org/pdf/2207.02578)
code available at [https://github.com/microsoft/unilm/tree/master/simlm](https://github.com/microsoft/unilm/tree/master/simlm)
## Paper abstract
In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval.
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
We use a replaced language modeling objective, which is inspired by ELECTRA,
to improve the sample efficiency and reduce the mismatch of the input distribution between pre-training and fine-tuning.
SimLM only requires access to unlabeled corpus, and is more broadly applicable when there are no labeled data or queries.
We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings.
Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2 which incurs significantly more storage cost.
## Results on MS-MARCO passage ranking task
| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|--|---|---|---|---|---|
| **SimLM (this model)** | 43.8 | 89.2 | 98.6 | 74.6 | 72.7 |
## Usage
Since we use a listwise loss to train the re-ranker,
the relevance score is not bounded to a specific numerical range.
Higher scores mean more relevant between the given query and passage.
Get relevance score from our re-ranker:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
from transformers.modeling_outputs import SequenceClassifierOutput
def encode(tokenizer: PreTrainedTokenizerFast,
query: str, passage: str, title: str = '-') -> BatchEncoding:
return tokenizer(query,
text_pair='{}: {}'.format(title, passage),
max_length=192,
padding=True,
truncation=True,
return_tensors='pt')
tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-msmarco-reranker')
model = AutoModelForSequenceClassification.from_pretrained('intfloat/simlm-msmarco-reranker')
model.eval()
with torch.no_grad():
batch_dict = encode(tokenizer, 'how long is super bowl game', 'The Super Bowl is typically four hours long. The game itself takes about three and a half hours, with a 30 minute halftime show built in.')
outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
print(outputs.logits[0])
batch_dict = encode(tokenizer, 'how long is super bowl game', 'The cost of a Super Bowl commercial runs about $5 million for 30 seconds of airtime. But the benefits that the spot can bring to a brand can help to justify the cost.')
outputs: SequenceClassifierOutput = model(**batch_dict, return_dict=True)
print(outputs.logits[0])
```
## Citation
```bibtex
@article{Wang2022SimLMPW,
title={SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval},
author={Liang Wang and Nan Yang and Xiaolong Huang and Binxing Jiao and Linjun Yang and Daxin Jiang and Rangan Majumder and Furu Wei},
journal={ArXiv},
year={2022},
volume={abs/2207.02578}
}
``` |
ashleyliu31/bert-finetuned-tech-product-name-ner | ashleyliu31 | "2023-10-19T02:13:28Z" | 4,163 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-10-19T00:32:20Z" | ---
license: apache-2.0
---
Finetuned BERT for Tech Product Names Named Entity Recognition (NER)
GitHub: https://github.com/ashleyliu31/finetuned_bert_for_ner
This NER model can recognize and tag tech product names like 'Asus ZenBook UX430UN', 'Acer Aspire 3', 'Nokia 110 4G', or 'Xiaomi 11T Pro 5G Hyperphone' in a sentence. The model was trained on the names of laptops and mobile phones. It might not be suitable for other tech products.
To test the model, enter a sentence that contains a laptop or mobile phone product name in the "Hosted inference API" input field and press "Compute". The model will highlight and tag the product name in the sentence.
Sample sentences to enter: "I love my new Razer Blade 16." "How much is the new iPhone 16 Pro Max?"
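The same check can be run programmatically; a minimal sketch with the `transformers` token-classification pipeline (the aggregation strategy is an assumption):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ashleyliu31/bert-finetuned-tech-product-name-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole product-name entities
)

print(ner("I love my new Razer Blade 16."))
``` |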
demo-leaderboard/gpt2-demo | demo-leaderboard | "2023-10-16T08:14:18Z" | 4,161 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T07:25:31Z" | Entry not found |
HooshvareLab/bert-fa-zwnj-base | HooshvareLab | "2021-05-18T21:05:42Z" | 4,159 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language: fa
license: apache-2.0
---
# ParsBERT (v3.0)
A Transformer-based Model for Persian Language Understanding
The new version of BERT v3.0 for Persian is available today and can handle the zero-width non-joiner character used in Persian writing. The model was also trained on new multi-type corpora with a new vocabulary.
## Introduction
ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
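A quick way to try the checkpoint is the fill-mask pipeline; a minimal sketch (the Persian example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/bert-fa-zwnj-base")

# "Tehran is the capital of [MASK]." (illustrative Persian input)
for prediction in fill_mask("تهران پایتخت [MASK] است."):
    print(prediction["token_str"], prediction["score"])
```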
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. |
timm/swinv2_tiny_window8_256.ms_in1k | timm | "2024-02-10T23:31:13Z" | 4,159 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2111.09883",
"license:mit",
"region:us"
] | image-classification | "2023-03-18T03:37:25Z" | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for swinv2_tiny_window8_256.ms_in1k
A Swin Transformer V2 image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 28.3
- GMACs: 6.0
- Activations (M): 24.6
- Image size: 256 x 256
- **Papers:**
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Original:** https://github.com/microsoft/Swin-Transformer
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('swinv2_tiny_window8_256.ms_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_tiny_window8_256.ms_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for swin_base_patch4_window7_224 (NHWC output)
# torch.Size([1, 56, 56, 128])
# torch.Size([1, 28, 28, 256])
# torch.Size([1, 14, 14, 512])
# torch.Size([1, 7, 7, 1024])
# e.g. for swinv2_cr_small_ns_224 (NCHW output)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'swinv2_tiny_window8_256.ms_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled (i.e. a (batch_size, H, W, num_features) tensor for swin / swinv2
# or (batch_size, num_features, H, W) for swinv2_cr
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
jinaai/jina-reranker-v1-tiny-en | jinaai | "2024-06-20T06:50:01Z" | 4,159 | 11 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"reranker",
"cross-encoder",
"transformers.js",
"text-classification",
"custom_code",
"en",
"arxiv:2310.19923",
"arxiv:2108.12409",
"license:apache-2.0",
"region:eu"
] | text-classification | "2024-04-15T07:36:45Z" | ---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- reranker
- cross-encoder
- transformers.js
pipeline_tag: text-classification
---
<br><br>
<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# jina-reranker-v1-tiny-en
This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as its foundation. `JinaBERT` itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-tiny-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens.
To achieve this remarkable speed, `jina-reranker-v1-tiny-en` employs a technique called knowledge distillation. Here, a complex, but slower, model (like our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. This student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time.
Here's a breakdown of the reranker models we provide:
| Model Name | Layers | Hidden Size | Parameters (Millions) |
| ------------------------------------------------------------------------------------ | ------ | ----------- | --------------------- |
| [jina-reranker-v1-base-en](https://jina.ai/reranker/) | 12 | 768 | 137.0 |
| [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 |
| [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 |
> Currently, the `jina-reranker-v1-base-en` model is not available on Hugging Face. You can access it via the [Jina AI Reranker API](https://jina.ai/reranker/).
As you can see, the `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. The `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference speeds with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial.
# Usage
1. The easiest way to start using `jina-reranker-v1-tiny-en` is to use Jina AI's [Reranker API](https://jina.ai/reranker/).
```bash
curl https://api.jina.ai/v1/rerank \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "jina-reranker-v1-tiny-en",
"query": "Organic skincare products for sensitive skin",
"documents": [
"Eco-friendly kitchenware for modern homes",
"Biodegradable cleaning supplies for eco-conscious consumers",
"Organic cotton baby clothes for sensitive skin",
"Natural organic skincare range for sensitive skin",
"Tech gadgets for smart homes: 2024 edition",
"Sustainable gardening tools and compost solutions",
"Sensitive skin-friendly facial cleansers and toners",
"Organic food wraps and storage solutions",
"All-natural pet food for dogs with allergies",
"Yoga mats made from recycled materials"
],
"top_n": 3
}'
```
2. Alternatively, you can use the `sentence-transformers` library (version 0.27.0 or later). You can install it via pip:
```bash
pip install -U sentence-transformers
```
Then, you can use the following code to interact with the model:
```python
from sentence_transformers import CrossEncoder
# Load the model, here we use our tiny sized model
model = CrossEncoder("jinaai/jina-reranker-v1-tiny-en", trust_remote_code=True)
# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
"Eco-friendly kitchenware for modern homes",
"Biodegradable cleaning supplies for eco-conscious consumers",
"Organic cotton baby clothes for sensitive skin",
"Natural organic skincare range for sensitive skin",
"Tech gadgets for smart homes: 2024 edition",
"Sustainable gardening tools and compost solutions",
"Sensitive skin-friendly facial cleansers and toners",
"Organic food wraps and storage solutions",
"All-natural pet food for dogs with allergies",
"Yoga mats made from recycled materials"
]
results = model.rank(query, documents, return_documents=True, top_k=3)
```
3. You can also use the `transformers` library to interact with the model programmatically.
```python
!pip install transformers
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
'jinaai/jina-reranker-v1-tiny-en', num_labels=1, trust_remote_code=True
)
# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
"Eco-friendly kitchenware for modern homes",
"Biodegradable cleaning supplies for eco-conscious consumers",
"Organic cotton baby clothes for sensitive skin",
"Natural organic skincare range for sensitive skin",
"Tech gadgets for smart homes: 2024 edition",
"Sustainable gardening tools and compost solutions",
"Sensitive skin-friendly facial cleansers and toners",
"Organic food wraps and storage solutions",
"All-natural pet food for dogs with allergies",
"Yoga mats made from recycled materials"
]
# construct sentence pairs
sentence_pairs = [[query, doc] for doc in documents]
scores = model.compute_score(sentence_pairs)
```
4. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)!
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
Then, you can use the following code to interact with the model:
```js
import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';
const model_id = 'jinaai/jina-reranker-v1-tiny-en';
const model = await AutoModelForSequenceClassification.from_pretrained(model_id, { quantized: false });
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
/**
* Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
* @param {string} query A single query
* @param {string[]} documents A list of documents
* @param {Object} options Options for ranking
* @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
* @param {number} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
*/
async function rank(query, documents, {
top_k = undefined,
return_documents = false,
} = {}) {
const inputs = tokenizer(
new Array(documents.length).fill(query),
{ text_pair: documents, padding: true, truncation: true }
)
const { logits } = await model(inputs);
return logits.sigmoid().tolist()
.map(([score], i) => ({
corpus_id: i,
score,
...(return_documents ? { text: documents[i] } : {})
})).sort((a, b) => b.score - a.score).slice(0, top_k);
}
// Example usage:
const query = "Organic skincare products for sensitive skin"
const documents = [
"Eco-friendly kitchenware for modern homes",
"Biodegradable cleaning supplies for eco-conscious consumers",
"Organic cotton baby clothes for sensitive skin",
"Natural organic skincare range for sensitive skin",
"Tech gadgets for smart homes: 2024 edition",
"Sustainable gardening tools and compost solutions",
"Sensitive skin-friendly facial cleansers and toners",
"Organic food wraps and storage solutions",
"All-natural pet food for dogs with allergies",
"Yoga mats made from recycled materials",
]
const results = await rank(query, documents, { return_documents: true, top_k: 3 });
console.log(results);
```
That's it! You can now use the `jina-reranker-v1-tiny-en` model in your projects.
# Evaluation
We evaluated Jina Reranker on 3 key benchmarks to ensure top-tier performance and search relevance.
| Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) |
| ------------------------------------------ | -------------------------- | ------------------------- | ------------------------- |
| `jina-reranker-v1-base-en` | **52.45** | **87.31** | **85.53** |
| `jina-reranker-v1-turbo-en` | **49.60** | **69.21** | **85.13** |
| `jina-reranker-v1-tiny-en` (you are here) | **48.54** | **70.29** | **85.00** |
| `mxbai-rerank-base-v1` | 49.19 | - | 82.50 |
| `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 |
| `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 |
| `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 |
| `bge-reranker-base` | 47.89 | - | 83.03 |
**Note:**
- `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results. `Hit Rate` measures the percentage of relevant documents that appear in the top 10 search results.
- LoCo results for the other models are not available since they **do not support** documents longer than 512 tokens.
For more details, please refer to our [benchmarking sheets](https://docs.google.com/spreadsheets/d/1V8pZjENdBBqrKMzZzOWc2aL60wtnR0yrEBY3urfO5P4/edit?usp=sharing).
# Contact
Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas. |
TheLastBen/Filmic | TheLastBen | "2024-02-06T21:13:38Z" | 4,158 | 11 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-02-06T18:05:06Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
### Filmic Style
#### SDXL LoRA by TheLastBen
#### Prompts to start with :
Any prompt, "pov" token is optional
---
Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
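A minimal `diffusers` sketch for loading the LoRA on top of the SDXL base model (dtype, device, and `weight_name` handling are assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("TheLastBen/Filmic")  # pass weight_name=... if the repo holds several files

image = pipe("pov, a rainy street at night, filmic style").images[0]  # "pov" token is optional
image.save("filmic.png")
```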
#### Sample pictures:
(Sample generations are attached to this repository as `.webp` images.)
|
caidas/swin2SR-classical-sr-x4-64 | caidas | "2024-03-27T11:13:03Z" | 4,156 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"swin2sr",
"image-to-image",
"vision",
"arxiv:2209.11345",
"license:apache-2.0",
"region:us"
] | image-to-image | "2022-12-16T14:07:21Z" | ---
license: apache-2.0
tags:
- vision
- image-to-image
inference: false
---
# Swin2SR model (image super-resolution)
Swin2SR model that upscales images x4. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr).
# Intended use cases
This model is intended for image super-resolution.
# Usage
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example).
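A condensed sketch of that example (the input image URL is a placeholder):
```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor

model_id = "caidas/swin2SR-classical-sr-x4-64"
processor = Swin2SRImageProcessor.from_pretrained(model_id)
model = Swin2SRForImageSuperResolution.from_pretrained(model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (C, H, W) in [0, 1] -> uint8 image, upscaled 4x
pixels = outputs.reconstruction.squeeze(0).clamp(0, 1).permute(1, 2, 0).numpy()
Image.fromarray((pixels * 255).round().astype(np.uint8)).save("upscaled.png")
``` |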
keremberke/yolov8m-nlf-head-detection | keremberke | "2023-02-22T13:04:40Z" | 4,156 | 2 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/nfl-object-detection",
"model-index",
"region:us"
] | object-detection | "2023-01-29T22:00:07Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/nfl-object-detection
model-index:
- name: keremberke/yolov8m-nlf-head-detection
results:
- task:
type: object-detection
dataset:
type: keremberke/nfl-object-detection
name: nfl-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.28743 # min: 0.0 - max: 1.0
name: [email protected](box)
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-nlf-head-detection" src="https://huggingface.co/keremberke/yolov8m-nlf-head-detection/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['Helmet', 'Helmet-Blurred', 'Helmet-Difficult', 'Helmet-Partial', 'Helmet-Sideline']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('keremberke/yolov8m-nlf-head-detection')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
keremberke/yolov8m-pokemon-classification | keremberke | "2023-02-22T13:04:19Z" | 4,153 | 2 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"image-classification",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/pokemon-classification",
"model-index",
"region:us"
] | image-classification | "2023-01-28T05:02:37Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/pokemon-classification
model-index:
- name: keremberke/yolov8m-pokemon-classification
results:
- task:
type: image-classification
dataset:
type: keremberke/pokemon-classification
name: pokemon-classification
split: validation
metrics:
- type: accuracy
value: 0.03279 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 0.09699 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-pokemon-classification" src="https://huggingface.co/keremberke/yolov8m-pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['Abra', 'Aerodactyl', 'Alakazam', 'Alolan Sandslash', 'Arbok', 'Arcanine', 'Articuno', 'Beedrill', 'Bellsprout', 'Blastoise', 'Bulbasaur', 'Butterfree', 'Caterpie', 'Chansey', 'Charizard', 'Charmander', 'Charmeleon', 'Clefable', 'Clefairy', 'Cloyster', 'Cubone', 'Dewgong', 'Diglett', 'Ditto', 'Dodrio', 'Doduo', 'Dragonair', 'Dragonite', 'Dratini', 'Drowzee', 'Dugtrio', 'Eevee', 'Ekans', 'Electabuzz', 'Electrode', 'Exeggcute', 'Exeggutor', 'Farfetchd', 'Fearow', 'Flareon', 'Gastly', 'Gengar', 'Geodude', 'Gloom', 'Golbat', 'Goldeen', 'Golduck', 'Golem', 'Graveler', 'Grimer', 'Growlithe', 'Gyarados', 'Haunter', 'Hitmonchan', 'Hitmonlee', 'Horsea', 'Hypno', 'Ivysaur', 'Jigglypuff', 'Jolteon', 'Jynx', 'Kabuto', 'Kabutops', 'Kadabra', 'Kakuna', 'Kangaskhan', 'Kingler', 'Koffing', 'Krabby', 'Lapras', 'Lickitung', 'Machamp', 'Machoke', 'Machop', 'Magikarp', 'Magmar', 'Magnemite', 'Magneton', 'Mankey', 'Marowak', 'Meowth', 'Metapod', 'Mew', 'Mewtwo', 'Moltres', 'MrMime', 'Muk', 'Nidoking', 'Nidoqueen', 'Nidorina', 'Nidorino', 'Ninetales', 'Oddish', 'Omanyte', 'Omastar', 'Onix', 'Paras', 'Parasect', 'Persian', 'Pidgeot', 'Pidgeotto', 'Pidgey', 'Pikachu', 'Pinsir', 'Poliwag', 'Poliwhirl', 'Poliwrath', 'Wigglytuff', 'Zapdos', 'Zubat']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('keremberke/yolov8m-pokemon-classification')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs) # [0.1, 0.2, 0.3, 0.4]
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result) # {"cat": 0.4, "dog": 0.6}
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
42dot/42dot_LLM-SFT-1.3B | 42dot | "2024-02-13T05:58:23Z" | 4,153 | 33 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"42dot_llm",
"en",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-04T07:14:44Z" | ---
language:
- en
- ko
pipeline_tag: text-generation
tags:
- pytorch
- llama
- causal-lm
- 42dot_llm
license: cc-by-nc-4.0
---
# 42dot_LLM-SFT-1.3B
**42dot LLM-SFT** is a large language model (LLM) developed by [**42dot**](https://42dot.ai/) which is trained to follow natural language instructions.
42dot LLM-SFT is part of **42dot LLM** and is derived from **42dot LLM-PLM** by supervised fine-tuning (SFT). This repository contains a 1.3B-parameter version.
## Model Description
### Hyperparameters
Like 42dot LLM-PLM, the model is built upon a Transformer decoder architecture similar to [LLaMA 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/), and its hyperparameters are listed below.
| Params | Layers | Attention heads | Hidden size | FFN size | Max. length\* |
| -- | -- | -- | -- | -- | -- |
| 1.3B | 24 | 32 | 2,048 | 5,632 | 4,096 |
(\* unit: tokens)
### Supervised Fine-tuning
Fine-tuning took about 112 GPU hours (on NVIDIA A100). For the training dataset, we manually constructed (question or instruction, response) pairs, which can be either single- or multi-turn.
### Evaluation
Inspired by recent attempts like [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/#how-good-is-vicuna), we evaluate 42dot LLM-SFT against other proprietary/open-source chatbots, using GPT-4 to assess various aspects of the responses. The evaluation dataset consists of 121 prompts over 10 categories. A sample of the evaluation dataset and the prompt template can be downloaded from our [GitHub repo](https://github.com/42dot/42dot_LLM).
- Baselines:
- [ChatGPT](https://chat.openai.com/) using GPT-3.5-turbo and GPT-4
- [Bard](https://bard.google.com/)
- [KORani-v2-13B](https://huggingface.co/KRAFTON/KORani-v1-13B)
| Model | GPT-3.5 | GPT-4 | Bard | KORani | 42dot LLM-SFT |
| :-- |:-------:|:--------:|:--------:|:------:|:---------:|
| Params | Unknown | Unknown | Unknown | 13B | 1.3B |
<figure align="center">
<img src="https://huggingface.co/42dot/42dot_LLM-SFT-1.3B/resolve/main/asset/42dot_llm_ko_score_white_background.png"/>
<figcaption><b>Response quality evaluation result</b></figcaption>
</figure>
<figure align="center">
<img src="https://huggingface.co/42dot/42dot_LLM-SFT-1.3B/resolve/main/asset/42dot_LLM_vs_score.png"/>
<figcaption><b>Comparison between proprietary chatbots and 42dot LLM-SFT</b></figcaption>
</figure>
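For reference, the checkpoint loads with the standard `transformers` causal-LM classes; a minimal generation sketch (the plain-text prompt is an assumption, since the card does not document a chat template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "42dot/42dot_LLM-SFT-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("List three uses of a paper clip.", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```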
## Limitations and Ethical Considerations
42dot LLM-SFT shares a number of well-known limitations of other LLMs. For example, it may generate false or misleading content, since 42dot LLM-SFT is also subject to [hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)). In addition, 42dot LLM-SFT may generate toxic, harmful, and biased content due to the use of web-available training data in the pre-training phase. We strongly suggest that 42dot LLM-SFT users should be aware of those limitations and take necessary steps to mitigate those issues.
## Disclaimer
The contents generated by 42dot LLM series ("42dot LLM") do not necessarily reflect the views or opinions of 42dot Inc. ("42dot"). 42dot disclaims any and all liability to any party for any direct, indirect, implied, punitive, special, incidental, or other consequential damages arising from any use of 42dot LLM and its generated contents.
## License
The 42dot LLM-SFT is licensed under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0).
## Citation
```
@misc{42dot2023llm,
title={42dot LLM: A Series of Large Language Model by 42dot},
author={42dot Inc.},
year={2023},
url = {https://github.com/42dot/42dot_LLM},
version = {1.0.0},
}
```
|
mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF | mradermacher | "2024-06-19T08:49:17Z" | 4,152 | 2 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T23:21:52Z" | ---
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5-Phi-3-medium-128k-14B-i1-GGUF/resolve/main/Tess-v2.5-Phi-3-medium-128k-14B.i1-Q6_K.gguf) | i1-Q6_K | 11.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Chituyi7/EBO-llama3-8B-4Bit-InstructionTuned-AlpacaDataset | Chituyi7 | "2024-05-25T00:11:16Z" | 4,151 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-05-24T23:07:30Z" | ---
license: apache-2.0
---
|
mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF | mradermacher | "2024-06-20T05:00:55Z" | 4,148 | 5 | transformers | [
"transformers",
"gguf",
"en",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-06-18T11:43:00Z" | ---
base_model: deepseek-ai/DeepSeek-Coder-V2-Instruct
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
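For the multi-part files below, the parts just need to be concatenated in byte order before use; a minimal Python sketch (equivalent to `cat file.part* > file`; the quant name is an example):
```python
import shutil
from pathlib import Path

name = "DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf"  # example quant
parts = sorted(Path(".").glob(name + ".part*"))     # part1of3 ... part3of3 sort correctly
with open(name, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)            # stream copy; avoids loading ~140 GB into RAM
```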
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 47.5 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ1_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ1_M.gguf.part2of2) | i1-IQ1_M | 52.8 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_XXS.gguf.part2of2) | i1-IQ2_XXS | 61.6 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_XS.gguf.part2of2) | i1-IQ2_XS | 68.8 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_S.gguf.part2of2) | i1-IQ2_S | 70.0 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ2_M.gguf.part2of2) | i1-IQ2_M | 77.0 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 86.0 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 90.9 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 96.4 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_S.gguf.part3of3) | i1-IQ3_S | 101.8 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_S.gguf.part3of3) | i1-Q3_K_S | 101.8 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ3_M.gguf.part3of3) | i1-IQ3_M | 103.5 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_M.gguf.part3of3) | i1-Q3_K_M | 112.8 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_L.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_L.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q3_K_L.gguf.part3of3) | i1-Q3_K_L | 122.5 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ4_XS.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ4_XS.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-IQ4_XS.gguf.part3of3) | i1-IQ4_XS | 125.7 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_0.gguf.part3of3) | i1-Q4_0 | 133.5 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_S.gguf.part3of3) | i1-Q4_K_S | 134.0 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part3of3) | i1-Q4_K_M | 142.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_S.gguf.part4of4) | i1-Q5_K_S | 162.4 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q5_K_M.gguf.part4of4) | i1-Q5_K_M | 167.3 | |
| [PART 1](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q6_K.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q6_K.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q6_K.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/DeepSeek-Coder-V2-Instruct-i1-GGUF/resolve/main/DeepSeek-Coder-V2-Instruct.i1-Q6_K.gguf.part4of4) | i1-Q6_K | 193.6 | practically like static Q6_K |
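Every quant in this table is split into parts. These appear to be plain byte-splits, so before loading you need to join the parts of your chosen quant into a single file, in order. A minimal sketch for the i1-Q4_K_M files (adjust the names for the quant you actually downloaded):

```bash
# Join the split download into one GGUF file; part order matters.
cat DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part1of3 \
    DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part2of3 \
    DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf.part3of3 \
    > DeepSeek-Coder-V2-Instruct.i1-Q4_K_M.gguf
```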
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
OpenVINO/bert-base-uncased-sst2-int8-unstructured80 | OpenVINO | "2023-03-06T14:08:48Z" | 4,147 | 3 | transformers | [
"transformers",
"pytorch",
"openvino",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2023-03-06T05:02:23Z" | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: yujiepan/bert-base-uncased-sst2-int8-unstructured80
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.91284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Joint magnitude pruning, quantization and distillation on BERT-base/SST-2
This model applies unstructured magnitude pruning, quantization, and distillation jointly to BERT-base while fine-tuning on the GLUE SST-2 dataset.
It achieves the following results on the evaluation set:
- Torch accuracy: 0.9128
- OpenVINO IR accuracy: 0.9128
- Sparsity in transformer block linear layers: 0.80
## Setup
```
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install optimum[openvino,nncf]==1.7.0
pip install datasets sentencepiece scipy scikit-learn protobuf evaluate
pip install wandb # optional
```
## Training script
See https://gist.github.com/yujiepan-work/5d7e513a47b353db89f6e1b512d7c080
## Run
We use a single GPU for training.
```bash
NNCFCFG=/path/to/nncf_config.json
python run_glue.py \
--lr_scheduler_type cosine_with_restarts \
--cosine_lr_scheduler_cycles 11 6 \
--record_best_model_after_epoch 9 \
--load_best_model_at_end True \
--metric_for_best_model accuracy \
--model_name_or_path textattack/bert-base-uncased-SST-2 \
--teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
--distillation_temperature 2 \
--task_name sst2 \
--nncf_compression_config $NNCFCFG \
--distillation_weight 0.95 \
--output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80 \
--overwrite_output_dir \
--run_name bert-base-uncased-sst2-int8-unstructured80 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 5e-05 \
--optim adamw_torch \
--num_train_epochs 17 \
--logging_steps 1 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_strategy steps \
--save_steps 250 \
--save_total_limit 1 \
--fp16 \
--seed 1
```
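For inference, the exported OpenVINO IR can be loaded through `optimum-intel`. A minimal sketch, assuming the pinned versions below; the class comes from the `optimum.intel.openvino` API:

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "OpenVINO/bert-base-uncased-sst2-int8-unstructured80"
model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Run sentiment classification on the OpenVINO runtime.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("a gripping, well-acted film"))
```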
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
- Optimum 1.6.3
- Optimum-intel 1.7.0
- NNCF 2.4.0
|
mradermacher/L3-8B-sunfall-v0.1-i1-GGUF | mradermacher | "2024-06-02T09:12:12Z" | 4,147 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/L3-8B-sunfall-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T05:10:53Z" | ---
base_model: crestf411/L3-8B-sunfall-v0.1
language:
- en
library_name: transformers
license: llama3
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/crestf411/L3-8B-sunfall-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
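As a minimal sketch, a single-file quant from the table below can be loaded with `llama-cpp-python` (path and settings are illustrative):

```python
from llama_cpp import Llama

# Point at whichever quant you downloaded; adjust context size as needed.
llm = Llama(model_path="L3-8B-sunfall-v0.1.i1-Q4_K_M.gguf", n_ctx=8192)
out = llm("Describe a quiet seaside town at dusk.", max_tokens=128)
print(out["choices"][0]["text"])
```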
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-v0.1-i1-GGUF/resolve/main/L3-8B-sunfall-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Salesforce/blip2-opt-2.7b-coco | Salesforce | "2024-03-31T10:07:53Z" | 4,143 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"region:us"
] | image-to-text | "2023-02-07T15:03:10Z" | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
---
# BLIP-2, OPT-2.7b, fine-tuned on COCO
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
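A minimal image-captioning sketch with the `transformers` BLIP-2 classes, assuming a CUDA device is available:

```python
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b-coco", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# No text prompt: the model produces a caption for the image.
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```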
For further code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). |
kanishka/smolm-autoreg-bpe-seed_1102 | kanishka | "2024-03-19T20:53:38Z" | 4,143 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:53:33Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_1102
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_1102
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4764
- Accuracy: 0.4996
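The generated card ships no usage snippet. A minimal generation sketch, assuming the checkpoint loads through the standard auto classes (the repository tags indicate an OPT-style architecture):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/smolm-autoreg-bpe-seed_1102"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The little bird", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```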
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 1102
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0413 | 1.0 | 2928 | 3.0106 | 0.4380 |
| 2.706 | 2.0 | 5856 | 2.7757 | 0.4615 |
| 2.5796 | 3.0 | 8784 | 2.6893 | 0.4708 |
| 2.5174 | 4.0 | 11712 | 2.6343 | 0.4766 |
| 2.4609 | 5.0 | 14640 | 2.6072 | 0.4795 |
| 2.4299 | 6.0 | 17568 | 2.5857 | 0.4826 |
| 2.3999 | 7.0 | 20496 | 2.5639 | 0.4859 |
| 2.3645 | 8.0 | 23424 | 2.5553 | 0.4874 |
| 2.2938 | 9.0 | 26352 | 2.5049 | 0.4939 |
| 2.151 | 10.0 | 29280 | 2.4764 | 0.4996 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
KBLab/wav2vec2-large-voxrex-swedish | KBLab | "2023-12-04T17:27:30Z" | 4,141 | 12 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"sv",
"dataset:common_voice",
"dataset:NST_Swedish_ASR_Database",
"dataset:P4",
"arxiv:2205.03026",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:04Z" | ---
language: sv
arxiv: https://arxiv.org/abs/2205.03026
datasets:
- common_voice
- NST_Swedish_ASR_Database
- P4
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: cc0-1.0
model-index:
- name: Wav2vec 2.0 large VoxRex Swedish
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 8.49
---
# Wav2vec 2.0 large VoxRex Swedish (C)
Finetuned version of KB's [VoxRex large](https://huggingface.co/KBLab/wav2vec2-large-voxrex) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **2.5%**. WER for the Common Voice test set is **8.49%** directly and **7.37%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
**Update 2022-01-10:** Updated to VoxRex-C version.
**Update 2022-05-16:** The paper is [here](https://arxiv.org/abs/2205.03026).
# Performance\*

<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>
## Training
This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.

## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
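The **7.37%** figure above relies on a 4-gram language model, which this card does not ship. Continuing from the snippet above, a hedged sketch of LM-boosted decoding with `pyctcdecode`, assuming you have your own KenLM ARPA file (`4gram_sv.arpa` is a hypothetical name; real use also needs the special tokens handled as in the Hugging Face n-gram boosting guide):

```python
from pyctcdecode import build_ctcdecoder

# pyctcdecode expects the CTC labels sorted by token id.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="4gram_sv.arpa")  # hypothetical LM file
print(decoder.decode(logits[0].numpy()))
```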
## Citation
https://arxiv.org/abs/2205.03026
```
@misc{malmsten2022hearing,
title={Hearing voices at the National Library -- a speech corpus and acoustic model for the Swedish language},
author={Martin Malmsten and Chris Haffenden and Love Börjeson},
year={2022},
eprint={2205.03026},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
valeriojob/flashcardsGPT-Gemma-7B-v0.1-GGUF | valeriojob | "2024-06-27T00:39:23Z" | 4,140 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/gemma-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-26T23:32:51Z" | ---
base_model: unsloth/gemma-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# flashcardsGPT-Gemma-7B-v0.1-GGUF
- This model is a fine-tuned version of [unsloth/gemma-7b](https://huggingface.co/unsloth/gemma-7b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) based on real university lecture data.
- Version 0.1 of flashcardsGPT has only been trained on the module "Time Series Analysis with R" which is part of the BSc Business-IT programme offered by the FHNW university ([more info](https://www.fhnw.ch/en/degree-programmes/business/bsc-in-business-information-technology)).
- This repo includes the quantized models in the GGUF format. There is a separate repo called [valeriojob/flashcardsGPT-Gemma-7B-v0.1](https://huggingface.co/valeriojob/flashcardsGPT-Gemma-7B-v0.1) that includes the default format of the model as well as the LoRA adapters of the model.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Model description
This model takes the OCR-extracted text from a university lecture slide as an input. It then generates high quality flashcards and returns them as a JSON object.
It uses the following Prompt Engineering template:
"""
Your task is to process the below OCR-extracted text from university lecture slides and create a set of flashcards with the key information about the topic.
Format the flashcards as a JSON object, with each card having a 'front' field for the question or term, and a 'back' field for the corresponding answer or definition, which may include a short example.
Ensure the 'back' field contains no line breaks.
No additional text or explanation should be provided—only respond with the JSON object.
Here is the OCR-extracted text:
""""
## Intended uses & limitations
The fine-tuned model can be used to generate high-quality flashcards based on TSAR lectures from the BSc BIT programme offered by the FHNW university.
## Training and evaluation data
The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/FHNW-Flashcards-Data-v0.1](https://huggingface.co/datasets/valeriojob/FHNW-Flashcards-Data-v0.1)
## Licenses
- **License:** apache-2.0 |
Yntec/Protogen_Unofficial_Release | Yntec | "2024-04-05T03:32:46Z" | 4,139 | 1 | diffusers | [
"diffusers",
"safetensors",
"darkstorm2150",
"anime",
"art",
"artistic",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-04-05T01:44:08Z" | ---
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- darkstorm2150
- anime
- art
- artistic
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
inference: true
license: other
---
# Protogen Unofficial Release
<center><img src="https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/c4KDKt4KngNWxVlNa8tZq.png" style="height:512px; width:512px; border-radius: 7%; border: 10px solid #663380; padding-top:0px;" title="Protogen Unofficial Raw Output"></center>
A mix of Protogen 2.2 and Protogen x5.8 to bring the best of those models together! It has the 840K VAE baked in.
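A minimal `diffusers` generation sketch (sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Protogen_Unofficial_Release", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "modelshoot, extremely detailed 8k movie still, portrait of a curly brunette girl",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("protogen_sample.png")
```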
Samples and prompts:

(Click for larger)
Cover: modelshoot, (extremely detailed 8k movie still) ,A detailed portrait of a calm curly brunette cute girl in machine jungle sitting carrying a steampunk baby lion pink cub illustrator, by justin gerard and greg rutkowski, digital art, realistic painting, dnd, character design, trending on artstation
Top left: full shot body photo of the most beautiful artwork in the world featuring ww2 nurse holding a liquor bottle sitting on a desk nearby, smiling, freckles, white outfit, nostalgia, sexy, stethoscope, heart professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski
Top right: modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, cloak armor, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski
Bottom left: modelshoot style, (extremely detailed 8k wallpaper), block paint depicting a character in a cyberpunk street, posed character design study, backlit, light rays, highly detailed, trending on artstation
Bottom right: Pretty CUTE little girl. by ocellus.
Original pages:
https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release
https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release
# Cover Full Size

(...click for larger)
# Recipe
SuperMerger Weight sum Train Difference MBW 0,1,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,1,1,1,0,0,0
Model A: Protogen x5.8
Model B: Protogen v2.2
Bake in vae-ft-mse-840000-ema-pruned.safetensors
Output: Protogen_Unofficial
no-ema output: Protogen_Unofficial_Mini |
mradermacher/F2PhenotypeKimiko-i1-GGUF | mradermacher | "2024-06-16T10:29:48Z" | 4,138 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:WesPro/F2PhenotypeKimiko",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T08:52:24Z" | ---
base_model: WesPro/F2PhenotypeKimiko
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/WesPro/F2PhenotypeKimiko
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/F2PhenotypeKimiko-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/F2PhenotypeKimiko-i1-GGUF/resolve/main/F2PhenotypeKimiko.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf | RichardErkhov | "2024-06-06T00:10:06Z" | 4,137 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-05T21:23:18Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
HelpSteer-filtered-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/HelpSteer-filtered-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [HelpSteer-filtered-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [HelpSteer-filtered-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [HelpSteer-filtered-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [HelpSteer-filtered-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [HelpSteer-filtered-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [HelpSteer-filtered-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [HelpSteer-filtered-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [HelpSteer-filtered-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [HelpSteer-filtered-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [HelpSteer-filtered-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [HelpSteer-filtered-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [HelpSteer-filtered-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [HelpSteer-filtered-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [HelpSteer-filtered-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [HelpSteer-filtered-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [HelpSteer-filtered-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [HelpSteer-filtered-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [HelpSteer-filtered-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [HelpSteer-filtered-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [HelpSteer-filtered-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [HelpSteer-filtered-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [HelpSteer-filtered-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
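A hedged one-liner for trying any single-file quant from the table with the llama.cpp CLI (the binary is called `llama-cli` in recent builds and `main` in older ones):

```bash
./llama-cli -m HelpSteer-filtered-7B.Q4_K_M.gguf -p "Explain gradient descent in two sentences." -n 128
```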
Original model description:
---
license: cc-by-4.0
datasets:
- Weyaxi/HelpSteer-filtered
language:
- en
tags:
- mistral
- instruct
---

# HelpSteer-filtered-7B
Original weights of [HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B). Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Lora Weights
You can access the LoRA weights here:
[Weyaxi/HelpSteer-filtered-7B-Lora](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B-Lora)
|
eienmojiki/Starry-XL-v5.2 | eienmojiki | "2024-05-17T08:43:59Z" | 4,136 | 7 | diffusers | [
"diffusers",
"safetensors",
"anime",
"stable-diffusion-xl",
"text-to-image",
"en",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-15T14:12:15Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- stable-diffusion-xl
- safetensors
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 80vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 1.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #FFBF00, #F28C28);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
</style>
<h1 class="title"><span>Starry XL 5.2</span></h1>
## Model Information
- Developed by: [kitarz](https://civitai.com/user/kitarz)
- Funded by: kitarz
- Model type: SDXL 1.0
- Finetuned from: [Kohaku-XL Epsilon](https://civitai.com/models/399873/kohaku-xl-epsilon)
- License: Fair AI Public License 1.0-SD
> [!WARNING]
> This is not the official model page from this model's author.
## Usage
🪄 **Try Starry XL Demo here:** https://huggingface.co/spaces/eienmojiki/StarryXL-Demo
> Starry is based on Epsilon, and during training the captions were kept close to Kohaku Epsilon's, so the overall usage is the same.
### Artist wildcard
**There is a wildcard for 600 artists here:** [starry_aritst_600_list](https://civitai.com/api/download/models/499498?type=Training%20Data)
For other artists and characters, please use the existing list from Kohaku Epsilon: https://civitai.com/api/download/models/445973?type=Training%20Data
> [!IMPORTANT]
> **Note that Starry requires high accuracy in artist names, so ensure there are no spelling errors and use the correct artist/character tags.**
### Prompt format
```
<1girl/1boy/1other/...>,
<character>, <series>, <artists>,
<general tags>,
<quality tags>, <year tags>, <meta tags>, <rating tags>
```
- Quality tags: masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality
- Rating tags: safe, sensitive, nsfw, explicit
- Date tags: newest, recent, mid, early, old
### Recommended Negative Prompt
- **Long**
```
bad anatomy,blurry,(worst quality:1.8),low quality,hands bad,face bad,(normal quality:1.3),bad hands,mutated hands and fingers,extra legs,extra arms,duplicate,cropped,text,jpeg,artifacts,signature,watermark,username,blurry,artist name,trademark,title,multiple view,Reference sheet,long body,multiple breasts,mutated,bad anatomy,disfigured,bad proportions,duplicate,bad feet,artist name,ugly,text font ui,missing limb,monochrome,
```
- **Short**
```
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name,
```
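Putting the prompt format and negative prompt together, a minimal `diffusers` sketch (sampler settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "eienmojiki/Starry-XL-v5.2", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "1girl, momoi \\(blue archive\\), blue archive, ciloranko, "
    "solo, headphones, halo, looking at viewer, "
    "newest, masterpiece, best quality, absurdres, safe"
)
negative = "nsfw, lowres, bad anatomy, bad hands, text, error, worst quality, low quality"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("starry_sample.png")
```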
### Style Select
You can directly use an artist's prompt to generate images.
```
1girl,momoi \(blue archive\), blue archive,
```
```
{style},
```
```
solo, headphones, halo, pink halo, white jacket, short hair, bow, shirt, necktie, white background, white shirt, blue necktie, fake animal ears, animal ears, pink bow, collared shirt, simple background, pink eyes, blonde hair, animal ear headphones, looking at viewer, hair bow, jacket,newest, masterpiece, best quality, absurdres, highres,
```






### Enhance your generation
1. You can use [DanTagGen](https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg) to generate images with a strong style from an artist
> Try DanTagGen on HuggingFace: https://huggingface.co/spaces/KBlueLeaf/DTG-demo
```
1girl,{style}, {dtg expand} newest, masterpiece, best quality, absurdres, highres,
```
2. Artists Combination
Combining multiple artists is highly recommended, and you can use the artist list to try different orders and combinations. *In fact, you can use the famous nai3 artist prompts to combine styles directly. (This is not a simple nai3 distillation; it uses artist prompts for style combination.)*
```
(ningen mame:0.9), ciloranko, sho \(sho lwlw\), (tianliang duohe fangdongye:0.8), ask \(askzy\), wlop,
```


## License
This model is released under Fair-AI-Public-License-1.0-SD
Please check this website for more information: Freedom of Development (freedevproject.org) |
meetkai/functionary-small-v2.5 | meetkai | "2024-05-30T05:33:28Z" | 4,135 | 7 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"conversational",
"custom_code",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-06T02:57:56Z" | ---
license: mit
---
# Model Card for functionary-small-v2.5
[https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary)
<img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/>
Functionary is a language model that can interpret and execute functions/plugins.
The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls.
## Key Features
- Intelligent **parallel tool use**
- Able to analyze functions/tools outputs and provide relevant responses **grounded in the outputs**
- Able to decide **when to not use tools/call functions** and provide normal chat response
- Truly one of the best open-source alternatives to GPT-4
- Support code interpreter
## How to Get Started
We provide custom code both for converting tool definitions into the system prompts and for parsing the raw model response into a JSON object containing `role`, `content` and `tool_calls` fields. This enables the model to generate tool calls.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v2.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v2.5", device_map="auto", trust_remote_code=True)
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}
]
messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}]
final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False)
tokenizer.padding_side = "left"
inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda")
pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer)
print(tokenizer.decode(pred.cpu()[0]))
```
## Prompt Template
We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages.
This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the `apply_chat_template()` method within our server:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")
client.chat.completions.create(
model="path/to/functionary/model/",
messages=[{"role": "user",
"content": "What is the weather for Istanbul?"}
],
tools=[{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}],
tool_choice="auto"
)
```
will yield:
```
<|start_header_id|>system<|end_header_id|>
// Supported function definitions that should be called when necessary.
namespace functions {
// Get the current weather
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
}) => any;
} // namespace functions<|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather for Istanbul?
```
A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.llama3.txt).
## Run the model
We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary).
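A hedged launch sketch, following the entry point that repository documents (verify the script name and flags against the current README):

```bash
# From a clone of https://github.com/MeetKai/functionary
python3 server_vllm.py --model "meetkai/functionary-small-v2.5" --host 0.0.0.0
```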
# The MeetKai Team

|
tokyotech-llm/Swallow-70b-hf | tokyotech-llm | "2024-06-29T08:56:23Z" | 4,134 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ja",
"arxiv:2404.17790",
"arxiv:2404.17733",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-25T02:13:02Z" | ---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Swallow
Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT).
Links to other models can be found in the index.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index
|Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1|
|---|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)|
|7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A |
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)|
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)|
## Swallow Model Index NVE (No Vocabulary Expansion)
|Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf|
|---|---|---|
|7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)|
|13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A |
|70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)|

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790).
## Model Details
* **Model type**: Please refer to the LLaMA-2 technical report for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2)
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process (see the short comparison sketch below).
* **Contact**: swallow[at]nlp.c.titech.ac.jp
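A quick hedged way to see the tokenizer effect is to count tokens for the same Japanese sentence under the original Llama 2 tokenizer and the Swallow one (the Llama 2 repository is gated, so substitute any Llama 2 tokenizer you have access to):

```python
from transformers import AutoTokenizer

text = "東京工業大学の主なキャンパスは大岡山にあります。"
for name in ["meta-llama/Llama-2-7b-hf", "tokyotech-llm/Swallow-7b-hf"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(name, len(tok(text)["input_ids"]))
# The Swallow tokenizer should need noticeably fewer tokens for Japanese text.
```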
## Base Model Performance
### Japanese tasks
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- Open-ended question answering (TriviaQA [Joshi+, 2017])
- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers+, 2019])
- Mathematical reasoning (GSM8k [Cobbe+, 2021])
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")
PROMPT_DICT = {
"prompt_input": (
"以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:"
),
"prompt_no_input": (
"以下に、あるタスクを説明する指示があります。"
"リクエストを適切に完了するための回答を記述してください。\n\n"
"### 指示:\n{instruction}\n\n### 応答:"
),
}
def create_prompt(instruction, input=None):
"""
Generates a prompt based on the given instruction and an optional input.
If input is provided, it uses the 'prompt_input' template from PROMPT_DICT.
If no input is provided, it uses the 'prompt_no_input' template.
Args:
instruction (str): The instruction describing the task.
input (str, optional): Additional input providing context for the task. Default is None.
Returns:
str: The generated prompt.
"""
if input:
# Use the 'prompt_input' template when additional input is provided
return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input)
else:
# Use the 'prompt_no_input' template when no additional input is provided
return PROMPT_DICT["prompt_no_input"].format(instruction=instruction)
# Example usage
instruction_example = "以下のトピックに関する詳細な情報を提供してください。"
input_example = "東京工業大学の主なキャンパスについて教えてください"
prompt = create_prompt(instruction_example, input_example)
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
### Use the base model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tokyotech-llm/Swallow-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 2 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
## How to cite
```
@misc{fujii2024continual,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
year={2024},
eprint={2404.17790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Yntec/CuteFurry | Yntec | "2024-01-19T09:05:08Z" | 4,132 | 2 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Animals",
"Cute",
"Character Design",
"Adorable",
"CGI",
"anyangs303305",
"McSionnaigh",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-19T07:15:53Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Animals
- Cute
- Character Design
- Adorable
- CGI
- anyangs303305
- McSionnaigh
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Cute Furry
The Cute Furry LoRA baked into the Genuine model, which includes NuipeniMix and Generate Me!
Samples and prompts:

(Click for larger)
Top left: oil painting, best quality,masterpiece,Fluffy,raccoon. red scarf, big eyes,lawn,forest,paw pose,chibi,
Top right: a dreamworks frog playing guitar in a club. chibi orange large eyes, whimsical. fedora.
Bottom left: best quality,masterpiece,Fluffy,Bunny. Blue scarf, big eyes,White hair, paw pose,chibi,
Bottom right: a painting of a reindeer by Bnhr, cute teal eyes, nature, grass, tree, outdoors, forest, animal focus, red nose,
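To try these prompts locally, here is a minimal Diffusers sketch (our illustration; the card itself ships no code). It assumes the checkpoint loads as a standard `StableDiffusionPipeline`, as the repo tags suggest:
```python
# A minimal text-to-image sketch (illustrative, not from the original card).
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("Yntec/CuteFurry", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "best quality, masterpiece, Fluffy, Bunny. Blue scarf, big eyes, White hair, paw pose, chibi"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("cute_furry.png")
```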
Original pages:
https://civitai.com/models/213753/15-cute-furry?modelVersionId=240777
https://civitai.com/models/81937?modelVersionId=86977 (nuipenimix v1)
https://huggingface.co/Yntec/GenerateMe
https://huggingface.co/Yntec/Genuine

|
patrickvonplaten/wavlm-libri-clean-100h-large | patrickvonplaten | "2021-12-17T13:40:58Z" | 4,130 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"wavlm_libri_finetune",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- wavlm_libri_finetune
model-index:
- name: wavlm-librispeech-clean-100h-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-libri-clean-100h-large
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0601
- Wer: 0.0491
## Model description
More information needed
## Intended uses & limitations
More information needed
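As a starting point, a minimal inference sketch (our illustration; the card itself ships no code). It assumes the checkpoint follows the standard CTC fine-tuning layout, and `speech` is a placeholder for a 1-D float array of 16 kHz mono audio:
```python
# A minimal CTC inference sketch (illustrative, not from the original card).
# `speech` is a placeholder for a 1-D float array of 16 kHz mono audio.
import torch
from transformers import AutoProcessor, AutoModelForCTC
model_id = "patrickvonplaten/wavlm-libri-clean-100h-large"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```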
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8069 | 0.34 | 300 | 0.7510 | 0.5809 |
| 0.2483 | 0.67 | 600 | 0.2023 | 0.1929 |
| 0.1033 | 1.01 | 900 | 0.1123 | 0.1028 |
| 0.0742 | 1.35 | 1200 | 0.0858 | 0.0771 |
| 0.057 | 1.68 | 1500 | 0.0722 | 0.0663 |
| 0.0421 | 2.02 | 1800 | 0.0682 | 0.0582 |
| 0.0839 | 2.35 | 2100 | 0.0630 | 0.0534 |
| 0.0307 | 2.69 | 2400 | 0.0603 | 0.0508 |
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
uukuguy/zephyr-7b-alpha-dare-0.85 | uukuguy | "2023-11-23T14:38:45Z" | 4,128 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-23T14:29:28Z" | ---
license: llama2
---
An experiment with DARE (Drop And REscale): most of the delta parameters can be set directly to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
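A minimal sketch of the mechanism (our illustration, not the actual merge code): with drop rate p = weight_mask_rate = 0.85, each delta parameter is zeroed with probability p and the survivors are rescaled by 1/(1-p), so the expected delta is preserved:
```python
# Illustrative per-tensor DARE: zero each delta parameter with probability p,
# then rescale survivors by 1/(1-p) so the expected delta is preserved.
import torch
def dare(base: torch.Tensor, finetuned: torch.Tensor, p: float = 0.85) -> torch.Tensor:
    delta = finetuned - base                                 # delta parameters
    keep = torch.bernoulli(torch.full_like(delta, 1.0 - p))  # random keep mask
    return base + keep * delta / (1.0 - p)                   # drop and rescale
```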
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
FinGPT/fingpt-forecaster_dow30_llama2-7b_lora | FinGPT | "2024-06-11T02:42:34Z" | 4,127 | 86 | peft | [
"peft",
"safetensors",
"en",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
] | null | "2023-10-30T07:37:02Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
license: apache-2.0
language:
- en
---
## Training:
Check out our github: https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Forecaster
## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained(
'meta-llama/Llama-2-7b-chat-hf',
trust_remote_code=True,
device_map="auto",
torch_dtype=torch.float16, # optional if you have enough VRAM
)
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-chat-hf')
model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora')
model = model.eval()
```
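As a follow-up (our illustration, not from the original card), a hedged generation sketch using the objects loaded above; the prompt string here is a placeholder, since the actual forecaster prompt template is defined in the GitHub repo linked above:
```python
# The prompt below is a placeholder; see the FinGPT-Forecaster repo above for
# the exact prompt format the adapter was trained with.
prompt = "[INST] Based on recent news about AAPL, forecast next week's stock price movement. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```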
- PEFT 0.5.0 |
mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF | mradermacher | "2024-06-02T16:39:14Z" | 4,126 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Locutusque/hercules-v5.0",
"base_model:Locutusque/Llama-3-Hercules-5.0-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-02T13:08:58Z" | ---
base_model: Locutusque/Llama-3-Hercules-5.0-8B
datasets:
- Locutusque/hercules-v5.0
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
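As a quick reference, a minimal llama-cpp-python sketch (our illustration, not part of the original card); the filename is one of the quants from the table below:
```python
# Illustrative: load one of the GGUF quants from the table below after
# downloading it from this repo.
from llama_cpp import Llama
llm = Llama(model_path="Llama-3-Hercules-5.0-8B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain, in one sentence, what an imatrix quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```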
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Hercules-5.0-8B-i1-GGUF/resolve/main/Llama-3-Hercules-5.0-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Seznam/small-e-czech | Seznam | "2022-08-26T14:05:35Z" | 4,125 | 17 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"cs",
"arxiv:2003.10555",
"arxiv:2112.01810",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: cs
license: cc-by-4.0
---
# Small-E-Czech
Small-E-Czech is an [Electra](https://arxiv.org/abs/2003.10555)-small model pretrained on a Czech web corpus created at [Seznam.cz](https://www.seznam.cz/) and introduced in an [IAAI 2022 paper](https://arxiv.org/abs/2112.01810). Like other pretrained models, it should be finetuned on a downstream task of interest before use. At Seznam.cz, it has helped improve [web search ranking](https://blog.seznam.cz/2021/02/vyhledavani-pomoci-vyznamovych-vektoru/), query typo correction or clickbait titles detection. We release it under [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/) (i.e. allowing commercial use). To raise an issue, please visit our [github](https://github.com/seznam/small-e-czech).
### How to use the discriminator in transformers
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("Seznam/small-e-czech")
tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech")
sentence = "Za hory, za doly, mé zlaté parohy"
fake_sentence = "Za hory, za doly, kočka zlaté parohy"
fake_sentence_tokens = ["[CLS]"] + tokenizer.tokenize(fake_sentence) + ["[SEP]"]
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
outputs = discriminator(fake_inputs)
predictions = torch.nn.Sigmoid()(outputs[0]).cpu().detach().numpy()
for token in fake_sentence_tokens:
    print("{:>7s}".format(token), end="")
print()
for prediction in predictions.squeeze():
    print("{:7.1f}".format(prediction), end="")
print()
```
In the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator:
```
[CLS] za hory , za dol ##y , kočka zlaté paro ##hy [SEP]
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.3 0.2 0.1 0.0
```
### Finetuning
For instructions on how to finetune the model on a new task, see the official HuggingFace transformers [tutorial](https://huggingface.co/transformers/training.html).
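As a quick starting point, here is a minimal sketch (our illustration, not part of the original card) of adapting the checkpoint for binary sequence classification with the Trainer API; `train_dataset` and `eval_dataset` are placeholders for your pre-tokenized datasets:
```python
# Illustrative finetuning skeleton; train_dataset / eval_dataset are
# placeholders for your pre-tokenized datasets.
from transformers import ElectraForSequenceClassification, ElectraTokenizerFast, Trainer, TrainingArguments
tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech")
model = ElectraForSequenceClassification.from_pretrained("Seznam/small-e-czech", num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="small-e-czech-finetuned", num_train_epochs=3),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
``` |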
kanishka/smolm-autoreg-bpe-seed_394 | kanishka | "2024-03-19T20:54:45Z" | 4,125 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:54:42Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_394
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_394
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4779
- Accuracy: 0.5000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 394
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0436 | 1.0 | 2928 | 3.0123 | 0.4386 |
| 2.7076 | 2.0 | 5856 | 2.7809 | 0.4602 |
| 2.5717 | 3.0 | 8784 | 2.6882 | 0.4706 |
| 2.5132 | 4.0 | 11712 | 2.6348 | 0.4770 |
| 2.4683 | 5.0 | 14640 | 2.6046 | 0.4811 |
| 2.4265 | 6.0 | 17568 | 2.5843 | 0.4832 |
| 2.3937 | 7.0 | 20496 | 2.5710 | 0.4853 |
| 2.3639 | 8.0 | 23424 | 2.5552 | 0.4874 |
| 2.2858 | 9.0 | 26352 | 2.5045 | 0.4944 |
| 2.1423 | 10.0 | 29280 | 2.4779 | 0.5000 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
keremberke/yolov8m-shoe-classification | keremberke | "2023-02-22T13:05:01Z" | 4,122 | 1 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"image-classification",
"pytorch",
"awesome-yolov8-models",
"dataset:keremberke/shoe-classification",
"model-index",
"region:us"
] | image-classification | "2023-01-30T03:15:28Z" |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.23
inference: false
datasets:
- keremberke/shoe-classification
model-index:
- name: keremberke/yolov8m-shoe-classification
results:
- task:
type: image-classification
dataset:
type: keremberke/shoe-classification
name: shoe-classification
split: validation
metrics:
- type: accuracy
value: 0.79518 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 1 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="keremberke/yolov8m-shoe-classification" src="https://huggingface.co/keremberke/yolov8m-shoe-classification/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['adidas', 'converse', 'nike']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.24 ultralytics==8.0.23
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('keremberke/yolov8m-shoe-classification')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs)  # e.g. [0.2, 0.3, 0.5], one probability per label
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result)  # e.g. {'adidas': 0.2, 'converse': 0.3, 'nike': 0.5}
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf | RichardErkhov | "2024-06-06T10:20:50Z" | 4,120 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T06:13:43Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4 - GGUF
- Model creator: https://huggingface.co/Fredithefish/
- Original model: https://huggingface.co/Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q2_K.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q2_K.gguf) | Q2_K | 1.01GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_XS.gguf) | IQ3_XS | 1.14GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_S.gguf) | IQ3_S | 1.16GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ3_M.gguf) | IQ3_M | 1.28GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K.gguf) | Q3_K | 1.38GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_0.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_0.gguf) | Q4_0 | 1.49GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K.gguf) | Q4_K | 1.66GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_1.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q4_1.gguf) | Q4_1 | 1.64GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_0.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_0.gguf) | Q5_0 | 1.8GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K.gguf) | Q5_K | 1.93GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_1.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q5_1.gguf) | Q5_1 | 1.95GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q6_K.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q6_K.gguf) | Q6_K | 2.13GB |
| [RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q8_0.gguf](https://huggingface.co/RichardErkhov/Fredithefish_-_RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4-gguf/blob/main/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
license: cc
datasets:
- Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat
language:
- en
inference: false
---
<html>
<head>
<style>
.alert {
padding: 15px;
background-color: #f44336;
color: white;
}
</style>
</head>
<body>
<div class="alert">
<strong>Warning:</strong> This fine-tuned model has only undergone 200 steps of fine-tuning and may not be reliable. The final model will not be released.
</div>
</body>
</html>
<br>
# RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
RedPajama-INCITE-Chat-3B Model finetuned <a href="https://huggingface.co/datasets/Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat">on this dataset</a>
## Reproduction
The code for the finetuning of this model can be found at https://github.com/fredi-python/Fine-tune-RedPajama-Chat-3B
## Usage and License Notices
The model is intended and licensed for research use only. It is released under the CC BY-NC 4.0 license (allowing only non-commercial use).
|
hungdang1610/gender | hungdang1610 | "2024-06-06T03:24:21Z" | 4,114 | 3 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-01T09:11:40Z" | ---
license: apache-2.0
tags:
- image-classification
- pytorch
metrics:
- accuracy
model-index:
- name: gender-classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.970833
---
# gender-classification
Evaluation set: 240 unseen images (27/05-29/05, from ShotX), deduplicated and clean, with 120 male and 120 female examples.
The loss function is cross-entropy.
The model was fine-tuned on 1,827 images (15/05-21/05, from ShotX), starting from [rizvandwiki/gender-classification](https://huggingface.co/rizvandwiki/gender-classification), using the AdamW optimizer with weight_decay=0.05, a CosineAnnealingLR scheduler, a learning rate of 5e-6, and 20 epochs.
| accuracy | loss     |
|----------|----------|
| 0.970833 | 0.102212 |
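For inference, a minimal sketch (our illustration; the card ships no code). The file path is a placeholder, and the label names come from whatever the checkpoint's config defines:
```python
# Illustrative inference; "face.jpg" is a placeholder path, and the label
# names come from the checkpoint's config.
from transformers import pipeline
classifier = pipeline("image-classification", model="hungdang1610/gender")
print(classifier("face.jpg"))
```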
## Example Images
#### female

#### male
 |
mradermacher/im-a-good-language-model-GGUF | mradermacher | "2024-06-27T15:24:25Z" | 4,114 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:appvoid/im-a-good-language-model",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T15:16:08Z" | ---
base_model: appvoid/im-a-good-language-model
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/appvoid/im-a-good-language-model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
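As a quick reference, a minimal download sketch (our illustration): fetch one of the quant files listed below with `huggingface_hub`, then load it with your GGUF runtime of choice:
```python
# Illustrative: download one of the quant files listed below, then load it
# with your GGUF runtime of choice.
from huggingface_hub import hf_hub_download
path = hf_hub_download(
    repo_id="mradermacher/im-a-good-language-model-GGUF",
    filename="im-a-good-language-model.Q4_K_M.gguf",
)
print(path)
```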
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.IQ3_XS.gguf) | IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.IQ3_M.gguf) | IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/im-a-good-language-model-GGUF/resolve/main/im-a-good-language-model.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zeroshot/gte-large-sparse | zeroshot | "2023-10-24T16:53:49Z" | 4,113 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"sparse sparsity quantized onnx embeddings int8",
"mteb",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-10-15T18:14:48Z" | ---
tags:
- sparse sparsity quantized onnx embeddings int8
- mteb
model-index:
- name: gte-large-sparse
results:
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.64253410928214
- type: cos_sim_spearman
value: 85.83388349410652
- type: euclidean_pearson
value: 86.86126159318735
- type: euclidean_spearman
value: 85.61580623591163
- type: manhattan_pearson
value: 86.6901132883383
- type: manhattan_spearman
value: 85.60255292187769
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.23314640591607
- type: cos_sim_spearman
value: 79.00078545104338
- type: euclidean_pearson
value: 83.48009254500714
- type: euclidean_spearman
value: 78.95413001389939
- type: manhattan_pearson
value: 83.46945566025941
- type: manhattan_spearman
value: 78.9241707208135
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.77526666043804
- type: cos_sim_spearman
value: 73.4849063285867
- type: euclidean_pearson
value: 78.04477932740524
- type: euclidean_spearman
value: 73.01394205771743
- type: manhattan_pearson
value: 78.08836684503294
- type: manhattan_spearman
value: 73.05074711098149
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.57839215661352
- type: cos_sim_spearman
value: 86.13854767345153
- type: euclidean_pearson
value: 85.12712609946449
- type: euclidean_spearman
value: 85.52497994789026
- type: manhattan_pearson
value: 85.06833141611173
- type: manhattan_spearman
value: 85.45003068636466
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.30485126978374
- type: cos_sim_spearman
value: 80.36497172462357
- type: euclidean_pearson
value: 82.91977909424605
- type: euclidean_spearman
value: 80.16995106297438
- type: manhattan_pearson
value: 82.88200991402184
- type: manhattan_spearman
value: 80.14259757215227
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.99883111314007
- type: cos_sim_spearman
value: 88.531352572377
- type: euclidean_pearson
value: 87.96834578059067
- type: euclidean_spearman
value: 88.44800718542935
- type: manhattan_pearson
value: 87.94889391725033
- type: manhattan_spearman
value: 88.45467695837115
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.4636984892402
- type: cos_sim_spearman
value: 84.0808920789148
- type: euclidean_pearson
value: 83.70613486028309
- type: euclidean_spearman
value: 84.35941626905009
- type: manhattan_pearson
value: 83.70259457073782
- type: manhattan_spearman
value: 84.35496521501604
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.76172944971023
- type: cos_sim_spearman
value: 89.4190945039165
- type: euclidean_pearson
value: 89.47263005347381
- type: euclidean_spearman
value: 89.49228360724095
- type: manhattan_pearson
value: 89.49959868816694
- type: manhattan_spearman
value: 89.5314536157954
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.57158223787549
- type: cos_sim_spearman
value: 66.75053533168037
- type: euclidean_pearson
value: 66.45526604831747
- type: euclidean_spearman
value: 66.14567667353113
- type: manhattan_pearson
value: 66.47352000151176
- type: manhattan_spearman
value: 66.21099856852885
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.055653571006
- type: cos_sim_spearman
value: 85.45387832634702
- type: euclidean_pearson
value: 86.31667154906651
- type: euclidean_spearman
value: 85.66079590537946
- type: manhattan_pearson
value: 86.2806853257308
- type: manhattan_spearman
value: 85.63700636713952
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78811881188119
- type: cos_sim_ap
value: 94.67027715905307
- type: cos_sim_f1
value: 89.33074684772066
- type: cos_sim_precision
value: 86.7231638418079
- type: cos_sim_recall
value: 92.10000000000001
- type: dot_accuracy
value: 99.47128712871287
- type: dot_ap
value: 78.41478815918727
- type: dot_f1
value: 73.30049261083744
- type: dot_precision
value: 72.23300970873787
- type: dot_recall
value: 74.4
- type: euclidean_accuracy
value: 99.78415841584159
- type: euclidean_ap
value: 94.60075930867181
- type: euclidean_f1
value: 89.12175648702593
- type: euclidean_precision
value: 88.94422310756973
- type: euclidean_recall
value: 89.3
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.62867439278095
- type: manhattan_f1
value: 89.2337536372454
- type: manhattan_precision
value: 86.62900188323917
- type: manhattan_recall
value: 92.0
- type: max_accuracy
value: 99.78811881188119
- type: max_ap
value: 94.67027715905307
- type: max_f1
value: 89.33074684772066
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.09864695714371
- type: cos_sim_ap
value: 70.33704198164713
- type: cos_sim_f1
value: 66.22893954410307
- type: cos_sim_precision
value: 62.42410088743577
- type: cos_sim_recall
value: 70.52770448548813
- type: dot_accuracy
value: 79.11426357513263
- type: dot_ap
value: 49.15484584572233
- type: dot_f1
value: 51.12580243364951
- type: dot_precision
value: 40.13840830449827
- type: dot_recall
value: 70.3957783641161
- type: euclidean_accuracy
value: 85.15825236931514
- type: euclidean_ap
value: 70.51017350854076
- type: euclidean_f1
value: 66.45416294785159
- type: euclidean_precision
value: 64.29805082654823
- type: euclidean_recall
value: 68.7598944591029
- type: manhattan_accuracy
value: 85.1403707456637
- type: manhattan_ap
value: 70.47587863399994
- type: manhattan_f1
value: 66.4576802507837
- type: manhattan_precision
value: 63.32138590203107
- type: manhattan_recall
value: 69.92084432717678
- type: max_accuracy
value: 85.15825236931514
- type: max_ap
value: 70.51017350854076
- type: max_f1
value: 66.4576802507837
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.8539604921023
- type: cos_sim_ap
value: 85.71869912577101
- type: cos_sim_f1
value: 78.00535626720983
- type: cos_sim_precision
value: 76.46232344893885
- type: cos_sim_recall
value: 79.61194949183862
- type: dot_accuracy
value: 84.57717235223348
- type: dot_ap
value: 74.89496650237145
- type: dot_f1
value: 69.05327823892932
- type: dot_precision
value: 65.75666829166377
- type: dot_recall
value: 72.69787496150293
- type: euclidean_accuracy
value: 88.89471028835332
- type: euclidean_ap
value: 85.75169460500409
- type: euclidean_f1
value: 78.17055393586006
- type: euclidean_precision
value: 74.21118184334348
- type: euclidean_recall
value: 82.57622420696026
- type: manhattan_accuracy
value: 88.92187681918733
- type: manhattan_ap
value: 85.7496679471825
- type: manhattan_f1
value: 78.11088295687884
- type: manhattan_precision
value: 75.82083061535117
- type: manhattan_recall
value: 80.5435786880197
- type: max_accuracy
value: 88.92187681918733
- type: max_ap
value: 85.75169460500409
- type: max_f1
value: 78.17055393586006
license: mit
language:
- en
---
# gte-large-sparse
This is the sparse ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model, created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization (INT8) and 50% unstructured pruning.
Current list of sparse and quantized gte ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-sparse', export=False)
# Our sentences we like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
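As a small follow-up (our illustration), cosine similarity between two of the embeddings computed above:
```python
# Illustrative follow-up: cosine similarity between two embeddings from above.
import numpy as np
a, b = embeddings[0], embeddings[1]
cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cos:.4f}")
```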
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
 |
PJMixers/Fimbulvetr-Holodeck-Erebus-Westlake-10.7B-GGUF | PJMixers | "2024-04-07T04:38:09Z" | 4,113 | 11 | null | [
"gguf",
"not-for-all-audiences",
"merge",
"mergekit",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"base_model:KoboldAI/Mistral-7B-Holodeck-1",
"base_model:KoboldAI/Mistral-7B-Erebus-v3",
"base_model:senseable/WestLake-7B-v2",
"region:us"
] | null | "2024-03-30T19:25:54Z" | ---
tags:
- not-for-all-audiences
- merge
- mergekit
base_model:
- Sao10K/Fimbulvetr-11B-v2
- KoboldAI/Mistral-7B-Holodeck-1
- KoboldAI/Mistral-7B-Erebus-v3
- senseable/WestLake-7B-v2
---
Refer to the original models for best usage.
- [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
- [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)
- [KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3)
- [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
---
# Mergekit Recipe
```yaml
# Mistral Merging
merge_method: linear
dtype: float16
models:
- model: KoboldAI/Mistral-7B-Holodeck-1
parameters:
weight: 0.45
- model: KoboldAI/Mistral-7B-Erebus-v3
parameters:
weight: 0.45
- model: senseable/WestLake-7B-v2
parameters:
weight: 0.1
name: MV01-Holodeck-Erebus-Westlake-7B
---
# Mistral Stacking
merge_method: passthrough
dtype: float16
slices:
- sources:
- model: MV01-Holodeck-Erebus-Westlake-7B
layer_range: [0, 24]
- sources:
- model: MV01-Holodeck-Erebus-Westlake-7B
layer_range: [8, 32]
name: MV01-Stacked-Holodeck-Erebus-Westlake-Unhealed-10.7B
---
# Mistral Stack Healing
merge_method: task_arithmetic
dtype: float16
base_model: Undi95/Mistral-11B-v0.1
models:
- model: Sao10K/Fimbulvetr-11B-v2
parameters:
weight: 1.0
- model: MV01-Stacked-Holodeck-Erebus-Westlake-Unhealed-10.7B
parameters:
weight: 1.0
``` |
SunJack/qwen2-7b-ruozhiba-finetuning | SunJack | "2024-06-25T05:50:48Z" | 4,113 | 2 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"dataset:kigner/ruozhiba-llama3-tt",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | "2024-06-11T10:40:30Z" | ---
license: apache-2.0
datasets:
- kigner/ruozhiba-llama3-tt
---
qwen2-7b fine-tuned on data from Ruozhiba (弱智吧)
[Click here to open the fine-tuning Colab notebook](https://colab.research.google.com/drive/1tg6_9ER7MzUXA5VjmnY_il7UDfqp9L2w?usp=sharing "qwen2-7b-fine tuning-unsloth")
|
SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF | SicariusSicariiStuff | "2024-06-23T07:19:06Z" | 4,113 | 3 | null | [
"gguf",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-06-20T16:06:46Z" | ---
license: apache-2.0
language:
- en
---
<div align="center">
<b style="font-size: 40px;">LLAMA-3_8B_Unaligned_Alpha_GGUF</b>
</div>
<img src="https://i.imgur.com/Kpk1PgZ.png" alt="LLAMA-3_8B_Unaligned_Alpha_GGUF" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Current status:
As of **June 11, 2024**, I've finally **started training** the model! The training is progressing smoothly, although it will take some time. I used a combination of model merges and an abliterated model as a base, followed by a comprehensive deep unalignment protocol to **unalign the model to its core**. A common issue with uncensoring and unaligning models is that it often **significantly** impacts their base intelligence. To mitigate these drawbacks, I've included a substantial corpus of common sense, theory of mind, and various other elements to counteract the effects of the deep uncensoring process. Given the extensive corpus involved, training will require at least a week of continuous compute. Expected early results: in about 3-4 days.
# Additional info:
<details>
<summary>As of <b>June 13, 2024</b>, I've observed that even after two days of continuous training, the model is <b>still resistant to learning certain aspects</b>.</summary> For example, some of the validation data still shows a loss over <b>2.3</b>, whereas other parts have a loss of <<b>0.3</b> or lower. This is after the model was initially abliterated.
These observations underscore the critical importance of fine-tuning for alignment. Given the current pace, training will likely extend beyond a week. However, the end result should be **interesting**. If the additional datasets focused on logic and common sense are effective, we should achieve a model that is **nearly completely unaligned**, while still retaining its core 'intelligence.'
<img src="https://i.imgur.com/b6unKyS.png" alt="LLAMA-3_Unaligned_Training" style="width: 60%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary><b>June 18, 2024 Update</b>: After extensive testing of the intermediate checkpoints, significant progress has been made.</summary> The model is slowly — I mean, really slowly — unlearning its alignment. By significantly lowering the learning rate, I was able to visibly observe deep behavioral changes; this process is taking longer than anticipated, but it's going to be worth it. Estimated time to completion: 4 more days. I'm pleased to report that in several tests, the model not only maintained its intelligence but actually showed a slight improvement, especially in terms of common sense. An intermediate checkpoint of this model was used to create invisietch/EtherealRainbow-v0.3-rc7, with promising results. Currently, it seems like I'm on the right track. I hope this model will serve as a solid foundation for further merges, whether for role-playing (RP) or for uncensoring. This approach also allows us to save on actual fine-tuning, thereby reducing our carbon footprint. The merge process takes just a few minutes of CPU time, instead of days of GPU work.
Cheers,
Sicarius
</details>
<details>
<summary><b>June 20, 2024 Update</b>: Unaligning was partially successful, and the results are decent, but <b>I am not</b> fully satisfied. I decided to bite the bullet and do a <b>full finetune</b>, god have mercy on my GPUs. I am also releasing the intermediate checkpoint of this model.</summary>
It's been a long ride, and I want to do it right, but the model would simply refuse some requests, with (almost) complete disregard for parts of the training data. Of course, one would argue that some easy prompt engineering will get around it, but the point was to make an unaligned model out of the box. Another point is that I could simply use a faster learning rate on more epochs, which would also work (I've tried that before), but the result would be an overcooked model and, therefore more dumb. So I decided to bite the bullet and do a full proper fine-tuning. This is going to be a serious pain in the ass, but I might as well try to do it right. Since I am releasing the intermediate checkpoint of this model under https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha, I might as well take the time and add some features I haven't seen in other models. In short, besides the normal goodies of logic, some theory of mind, and uncensored content along with general NLP tasks, I will TRY to add a massive dataset (that does not yet exist) of story writing, and a new, completely organic and original Roleplay dataset. LimaRP is awesome, but maybe, just maybe... things are finally carefully extricated from LimaRP, the same sentences will leave its entwined body under the stars towards something new, something fresh. This is going to take some serious effort and some time. Any support will be appreciated, even if it's just some feedback. My electricity bill gonna be huge this month LOL.
Cheers,
Sicarius
</details>
## Intermediate checkpoint of this model:
- (Can still be decent for merges, fairly uncensored): [LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
- Roleplay merge example: [LLAMA-3_8B_Unaligned_Alpha_RP_Soup](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_RP_Soup)
# Model instruction template: (Can use either ChatML or Llama-3)
# ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
# Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
**Recommended generation Presets:**
<details>
<summary><b>Midnight Enigma</b></summary>
max_new_tokens: 512
temperature: 0.98
top_p: 0.37
top_k: 100
typical_p: 1
min_p: 0
repetition_penalty: 1.18
do_sample: True
</details>
<details>
<summary><b>min_p</b></summary>
max_new_tokens: 512
temperature: 1
top_p: 1
top_k: 0
typical_p: 1
min_p: 0.05
repetition_penalty: 1
do_sample: True
</details>
<details>
<summary><b>Divine Intellect</b></summary>
max_new_tokens: 512
temperature: 1.31
top_p: 0.14
top_k: 49
typical_p: 1
min_p: 0
repetition_penalty: 1.17
do_sample: True
</details>
<details>
<summary><b>simple-1</b></summary>
max_new_tokens: 512
temperature: 0.7
top_p: 0.9
top_k: 20
typical_p: 1
min_p: 0
repetition_penalty: 1.15
do_sample: True
</details>
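For reference, a hedged transformers sketch applying the ChatML template above with the simple-1 sampler settings (our illustration; the GGUF files in this repo target llama.cpp-style runtimes, so this uses the FP16 intermediate checkpoint linked above):
```python
# Illustrative only: applies the ChatML template above with the "simple-1"
# sampler settings, using the FP16 intermediate checkpoint linked above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
prompt = (
    "<|im_start|>system\n"
    "You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>\n"
    "<|im_start|>User request\n"
    "Write a two-sentence story about a dragon.<|im_end|>\n"
    "<|im_start|>AI answer\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9, top_k=20, repetition_penalty=1.15)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```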
# Model Details
<details>
<summary>This was based on several different models, as well as an abliterated model, which after days of finetuning at different Lora R values are probably no longer even recognizable. The result of this intermediate checkpoint is published under <b>SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha</b>, while this model is now fully fine-tuned instead of just a very deep Lora.</summary>
The full fine-tuning is performed on the full LLAMA-3 8k Context. It will not only be used for stacking several different prompts into a total length of 8k but also for using the full context length for single prompts. The training data contains a lot of highly cleaned, highest-quality story writing, and some RP.
Of course, a massive and deep uncensoring protocol is used, along with giving the model some sass and personality! A lot of effort was poured into this work to ensure the model is not compromised by the deep uncensoring protocol. The goal is to create a model that is highly creative, serving as a writing assistant, co-editor, and having some role play abilities, while still being fairly intelligent, as much as an 8B model can be.
The most important aspect of this work is to make it fresh, trained on datasets that have never been used in any other model, giving it a truly unique vibe.
</details>
## LLAMA-3_Unaligned is available at the following quantizations:
- FP16: soon...
- EXL2: soon...
- GGUF: soon...
## LLAMA-3_8B_Unaligned_Alpha is available at the following quantizations:
Censorship level: <b>Low - Medium</b>
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_GGUF) | [iMatrix_GGUF](https://huggingface.co/bartowski/LLAMA-3_8B_Unaligned_Alpha-GGUF)
- EXL2: [2.6 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_2.6bpw) | [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.0bpw) | [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.0bpw) | [4.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_4.5bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.0bpw) | [5.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_5.5bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.0bpw) | [6.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_6.5bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.0bpw) | [7.5 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_7.5bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_8.0bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit is appreciated 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go for research resources and compute, every bit is appreciated 🙏🏻
## Disclaimer
*This model is VERY uncensored, use responsibly.*
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
- [Tenebra 30B](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) My original Tenebra model, very unique, 'self-aware', very uncensored.
- [Tenebra 13B](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) A smaller Tenebra at 13B; I called it 'Tinybra'.
- [Question_Builder](https://huggingface.co/SicariusSicariiStuff/Question_Builder) A small, highly useful model to help our open-source community generate new datasets. It returns a single question based on any input. |
Xenova/bge-large-en-v1.5 | Xenova | "2024-03-12T01:50:51Z" | 4,112 | 4 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"region:us"
] | feature-extraction | "2023-09-13T15:47:26Z" | ---
library_name: transformers.js
---
https://huggingface.co/BAAI/bge-large-en-v1.5 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings, as follows:
```js
import { pipeline } from '@xenova/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5');
// Compute sentence embeddings
const texts = [ 'Hello world.', 'Example sentence.'];
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
console.log(embeddings);
// Tensor {
// dims: [ 2, 1024 ],
// type: 'float32',
// data: Float32Array(2048) [ 0.03169844672083855, 0.011085662990808487, ... ],
// size: 2048
// }
console.log(embeddings.tolist()); // Convert embeddings to a JavaScript list
// [
// [ 0.03169844672083855, 0.011085662990808487, 0.030054178088903427, ... ],
// [ 0.009418969973921776, -0.024539148434996605, 0.036459196358919144, ... ]
// ]
```
You can also use the model for retrieval. For example:
```js
import { pipeline, cos_sim } from '@xenova/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-large-en-v1.5');
// List of documents you want to embed
const texts = [
    'Hello world.',
    'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.',
    'I love pandas so much!',
];
// Compute sentence embeddings
const embeddings = await extractor(texts, { pooling: 'mean', normalize: true });
// Prepend recommended query instruction for retrieval.
const query_prefix = 'Represent this sentence for searching relevant passages: '
const query = query_prefix + 'What is a panda?';
const query_embeddings = await extractor(query, { pooling: 'mean', normalize: true });
// Sort by cosine similarity score
const scores = embeddings.tolist().map(
    (embedding, i) => ({
        id: i,
        score: cos_sim(query_embeddings.data, embedding),
        text: texts[i],
    })
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
// { id: 1, score: 0.7671812872502833, text: 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.' },
// { id: 2, score: 0.7219157959783322, text: 'I love pandas so much!' },
// { id: 0, score: 0.5109676329796601, text: 'Hello world.' }
// ]
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
zeroshot/gte-large-quant | zeroshot | "2023-10-22T21:00:09Z" | 4,112 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"sparse sparsity quantized onnx embeddings int8",
"mteb",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-10-15T18:10:53Z" | ---
tags:
- sparse sparsity quantized onnx embeddings int8
- mteb
model-index:
- name: gte-large-quant
results:
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 90.27260027646717
- type: cos_sim_spearman
value: 87.97790825077952
- type: euclidean_pearson
value: 88.42832241523092
- type: euclidean_spearman
value: 87.97248644049293
- type: manhattan_pearson
value: 88.13802465778512
- type: manhattan_spearman
value: 87.43391995202266
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.1416039713116
- type: cos_sim_spearman
value: 79.13359419669726
- type: euclidean_pearson
value: 83.08042050989465
- type: euclidean_spearman
value: 79.31565112619433
- type: manhattan_pearson
value: 83.10376638254372
- type: manhattan_spearman
value: 79.30772376012946
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.93030439955828
- type: cos_sim_spearman
value: 75.98104622572393
- type: euclidean_pearson
value: 81.20791722502764
- type: euclidean_spearman
value: 75.74595761987686
- type: manhattan_pearson
value: 81.23169425598003
- type: manhattan_spearman
value: 75.73065403644094
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.6693892097855
- type: cos_sim_spearman
value: 87.54973524492165
- type: euclidean_pearson
value: 86.55642466103943
- type: euclidean_spearman
value: 87.47921340148683
- type: manhattan_pearson
value: 86.52043275063926
- type: manhattan_spearman
value: 87.43869426658489
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.37393784507647
- type: cos_sim_spearman
value: 81.98702164762233
- type: euclidean_pearson
value: 84.22038158338351
- type: euclidean_spearman
value: 81.9872746771322
- type: manhattan_pearson
value: 84.21915949674062
- type: manhattan_spearman
value: 81.97923386273747
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.34477744314285
- type: cos_sim_spearman
value: 88.92669309789463
- type: euclidean_pearson
value: 88.20128441166663
- type: euclidean_spearman
value: 88.91524205114627
- type: manhattan_pearson
value: 88.24425729639415
- type: manhattan_spearman
value: 88.97457451709523
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.11827015492467
- type: cos_sim_spearman
value: 83.59397157586835
- type: euclidean_pearson
value: 82.97284591328044
- type: euclidean_spearman
value: 83.74509747941255
- type: manhattan_pearson
value: 82.974440264842
- type: manhattan_spearman
value: 83.72260506292083
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.29744487677577
- type: cos_sim_spearman
value: 88.50799779856109
- type: euclidean_pearson
value: 89.0149154609955
- type: euclidean_spearman
value: 88.72798794474068
- type: manhattan_pearson
value: 89.14318227078863
- type: manhattan_spearman
value: 88.98372697017017
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.114540107077
- type: cos_sim_spearman
value: 69.72244488054433
- type: euclidean_pearson
value: 70.03658853094686
- type: euclidean_spearman
value: 68.96035610557085
- type: manhattan_pearson
value: 69.83707789686764
- type: manhattan_spearman
value: 68.71831797289812
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.86664469775837
- type: cos_sim_spearman
value: 85.39649452953681
- type: euclidean_pearson
value: 85.68509956626748
- type: euclidean_spearman
value: 85.50984027606854
- type: manhattan_pearson
value: 85.6688745008871
- type: manhattan_spearman
value: 85.465201888803
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8079207920792
- type: cos_sim_ap
value: 95.62897445718106
- type: cos_sim_f1
value: 90.03083247687564
- type: cos_sim_precision
value: 92.60042283298098
- type: cos_sim_recall
value: 87.6
- type: dot_accuracy
value: 99.67029702970297
- type: dot_ap
value: 90.20258347721159
- type: dot_f1
value: 83.06172839506172
- type: dot_precision
value: 82.04878048780488
- type: dot_recall
value: 84.1
- type: euclidean_accuracy
value: 99.80594059405941
- type: euclidean_ap
value: 95.53963697283662
- type: euclidean_f1
value: 89.92405063291139
- type: euclidean_precision
value: 91.07692307692308
- type: euclidean_recall
value: 88.8
- type: manhattan_accuracy
value: 99.80594059405941
- type: manhattan_ap
value: 95.55714505339634
- type: manhattan_f1
value: 90.06085192697769
- type: manhattan_precision
value: 91.35802469135803
- type: manhattan_recall
value: 88.8
- type: max_accuracy
value: 99.8079207920792
- type: max_ap
value: 95.62897445718106
- type: max_f1
value: 90.06085192697769
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.87351731537224
- type: cos_sim_ap
value: 72.87360532701162
- type: cos_sim_f1
value: 67.8826895565093
- type: cos_sim_precision
value: 61.918225315354505
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 80.15139774691542
- type: dot_ap
value: 53.5201503222712
- type: dot_f1
value: 53.42203179614388
- type: dot_precision
value: 46.64303996849773
- type: dot_recall
value: 62.50659630606861
- type: euclidean_accuracy
value: 85.87351731537224
- type: euclidean_ap
value: 73.10465263888227
- type: euclidean_f1
value: 68.38209376101516
- type: euclidean_precision
value: 61.63948316034739
- type: euclidean_recall
value: 76.78100263852242
- type: manhattan_accuracy
value: 85.83775406806939
- type: manhattan_ap
value: 73.08358693248583
- type: manhattan_f1
value: 68.34053485927829
- type: manhattan_precision
value: 61.303163628745025
- type: manhattan_recall
value: 77.20316622691293
- type: max_accuracy
value: 85.87351731537224
- type: max_ap
value: 73.10465263888227
- type: max_f1
value: 68.38209376101516
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.85202002561415
- type: cos_sim_ap
value: 85.58170945333845
- type: cos_sim_f1
value: 77.87783280804442
- type: cos_sim_precision
value: 75.95140515222482
- type: cos_sim_recall
value: 79.90452725592854
- type: dot_accuracy
value: 85.29902588582296
- type: dot_ap
value: 76.95795800483633
- type: dot_f1
value: 71.30231900452489
- type: dot_precision
value: 65.91503267973856
- type: dot_recall
value: 77.6485987064983
- type: euclidean_accuracy
value: 88.80738929638684
- type: euclidean_ap
value: 85.5344499509856
- type: euclidean_f1
value: 77.9805854353285
- type: euclidean_precision
value: 75.97312495435624
- type: euclidean_recall
value: 80.09701262704034
- type: manhattan_accuracy
value: 88.7782822990647
- type: manhattan_ap
value: 85.52577812395661
- type: manhattan_f1
value: 77.97958958110746
- type: manhattan_precision
value: 74.76510067114094
- type: manhattan_recall
value: 81.48290729904527
- type: max_accuracy
value: 88.85202002561415
- type: max_ap
value: 85.58170945333845
- type: max_f1
value: 77.9805854353285
license: mit
language:
- en
---
# gte-large-quant
This is the quantized (INT8) ONNX variant of the [gte-large](https://huggingface.co/thenlper/gte-large) embeddings model created with [DeepSparse Optimum](https://github.com/neuralmagic/optimum-deepsparse) for ONNX export/inference and Neural Magic's [Sparsify](https://github.com/neuralmagic/sparsify) for one-shot quantization.
Current list of sparse and quantized gte ONNX models:
| Links | Sparsification Method |
| --------------------------------------------------------------------------------------------------- | ---------------------- |
| [zeroshot/gte-large-sparse](https://huggingface.co/zeroshot/gte-large-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-large-quant](https://huggingface.co/zeroshot/gte-large-quant) | Quantization (INT8) |
| [zeroshot/gte-base-sparse](https://huggingface.co/zeroshot/gte-base-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-base-quant](https://huggingface.co/zeroshot/gte-base-quant) | Quantization (INT8) |
| [zeroshot/gte-small-sparse](https://huggingface.co/zeroshot/gte-small-sparse) | Quantization (INT8) & 50% Pruning |
| [zeroshot/gte-small-quant](https://huggingface.co/zeroshot/gte-small-quant) | Quantization (INT8) |
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import SentenceTransformer
model = SentenceTransformer('zeroshot/gte-large-quant', export=False)
# Our sentences we like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
For further details regarding DeepSparse & Sentence Transformers integration, refer to the [DeepSparse README](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers).
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).

|
jay6944/EEVE-Korean-Instruct-10.8B-geoheim7-8bit-gguf | jay6944 | "2024-06-28T07:03:55Z" | 4,108 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-28T01:57:52Z" | ---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jay6944
- **License:** apache-2.0
- **Finetuned from model :** yanolja/EEVE-Korean-Instruct-10.8B-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yntec/DeleteThis | Yntec | "2024-05-18T03:31:13Z" | 4,105 | 0 | diffusers | [
"diffusers",
"safetensors",
"Nothing",
"XpucT",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-18T01:57:36Z" | ---
license: cc-by-nc-nd-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Nothing
- XpucT
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Delete This
My first mistake was making this mix of Deliberate. My second one was to publicly release it.
Samples and prompts:

(Click for larger)
Top left: garbage dump
Top Right: The worlds most delicious burrito, 5 star food, tasty, yummy, detailed, centered, digital painting, artstation, concept art, donato giancola, joseph christian leyendecker, wlop, boris vallejo, breathtaking, 8k resolution, extremely detailed, beautiful, establishing shot, artistic, hyperrealistic, beautiful face, octane render, cinematic lighting, dramatic lighting, masterpiece
Bottom left: analog style 70s color photograph of young Harrison Ford as Han Solo with wife and daughter, star wars behind the scenes
Bottom right: very dirty food, soaking, honey jam pool, spilled milk, burnt clothes, cheese room. (mud)1.2
https://huggingface.co/XpucT/Deliberate |
neuralmagic/bge-large-en-v1.5-sparse | neuralmagic | "2023-11-13T18:26:24Z" | 4,102 | 3 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"sparse sparsity quantized onnx embeddings int8",
"mteb",
"en",
"license:mit",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-10-03T14:13:35Z" | ---
license: mit
language:
- en
tags:
- sparse sparsity quantized onnx embeddings int8
- mteb
model-index:
- name: bge-large-en-v1.5-sparse
results:
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.73305831153709
- type: cos_sim_spearman
value: 85.64351771070989
- type: euclidean_pearson
value: 86.06880877736519
- type: euclidean_spearman
value: 85.60676988543395
- type: manhattan_pearson
value: 85.69108036145253
- type: manhattan_spearman
value: 85.05314281283421
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.61833776000717
- type: cos_sim_spearman
value: 80.73718686921521
- type: euclidean_pearson
value: 83.9368704709159
- type: euclidean_spearman
value: 80.64477415487963
- type: manhattan_pearson
value: 83.92383757341743
- type: manhattan_spearman
value: 80.59625506933862
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.81272888013494
- type: cos_sim_spearman
value: 76.07038564455931
- type: euclidean_pearson
value: 80.33676600912023
- type: euclidean_spearman
value: 75.86575335744111
- type: manhattan_pearson
value: 80.36973770593211
- type: manhattan_spearman
value: 75.88787860200954
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.58781524090651
- type: cos_sim_spearman
value: 86.80508359626748
- type: euclidean_pearson
value: 85.22891409219575
- type: euclidean_spearman
value: 85.78295876926319
- type: manhattan_pearson
value: 85.2193177032458
- type: manhattan_spearman
value: 85.74049940198427
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.0862821699066
- type: cos_sim_spearman
value: 81.67856196476185
- type: euclidean_pearson
value: 83.38475353138897
- type: euclidean_spearman
value: 81.45279784228292
- type: manhattan_pearson
value: 83.29235221714131
- type: manhattan_spearman
value: 81.3971683104493
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.44459051393112
- type: cos_sim_spearman
value: 88.74673154561383
- type: euclidean_pearson
value: 88.13112382236628
- type: euclidean_spearman
value: 88.56241954487271
- type: manhattan_pearson
value: 88.11098632041256
- type: manhattan_spearman
value: 88.55607051247829
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.8825746257794
- type: cos_sim_spearman
value: 84.6066555379785
- type: euclidean_pearson
value: 84.12438131112606
- type: euclidean_spearman
value: 84.75862802179907
- type: manhattan_pearson
value: 84.12791217960807
- type: manhattan_spearman
value: 84.7739597139034
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.19971502207773
- type: cos_sim_spearman
value: 89.75109780507901
- type: euclidean_pearson
value: 89.5913898113725
- type: euclidean_spearman
value: 89.20244860773123
- type: manhattan_pearson
value: 89.68755363801112
- type: manhattan_spearman
value: 89.3105024782381
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.73885819503523
- type: cos_sim_spearman
value: 64.09521607825829
- type: euclidean_pearson
value: 64.22116001518724
- type: euclidean_spearman
value: 63.84189650719827
- type: manhattan_pearson
value: 64.23930191730729
- type: manhattan_spearman
value: 63.7536172795383
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.68505574064375
- type: cos_sim_spearman
value: 86.87614324154406
- type: euclidean_pearson
value: 86.96751967489614
- type: euclidean_spearman
value: 86.78979082790067
- type: manhattan_pearson
value: 86.92578795715433
- type: manhattan_spearman
value: 86.74076104131726
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.80990099009901
- type: cos_sim_ap
value: 95.00187845875503
- type: cos_sim_f1
value: 90.37698412698413
- type: cos_sim_precision
value: 89.66535433070865
- type: cos_sim_recall
value: 91.10000000000001
- type: dot_accuracy
value: 99.63366336633663
- type: dot_ap
value: 87.6642728041652
- type: dot_f1
value: 81.40803173029252
- type: dot_precision
value: 80.7276302851524
- type: dot_recall
value: 82.1
- type: euclidean_accuracy
value: 99.8079207920792
- type: euclidean_ap
value: 94.88531851782375
- type: euclidean_f1
value: 90.49019607843137
- type: euclidean_precision
value: 88.75
- type: euclidean_recall
value: 92.30000000000001
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.87944331919043
- type: manhattan_f1
value: 90.5
- type: manhattan_precision
value: 90.5
- type: manhattan_recall
value: 90.5
- type: max_accuracy
value: 99.81188118811882
- type: max_ap
value: 95.00187845875503
- type: max_f1
value: 90.5
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.3861238600465
- type: cos_sim_ap
value: 74.50058066578084
- type: cos_sim_f1
value: 69.25949774629748
- type: cos_sim_precision
value: 67.64779874213836
- type: cos_sim_recall
value: 70.94986807387863
- type: dot_accuracy
value: 81.57000655659535
- type: dot_ap
value: 59.10193583653485
- type: dot_f1
value: 58.39352155832786
- type: dot_precision
value: 49.88780852655198
- type: dot_recall
value: 70.3957783641161
- type: euclidean_accuracy
value: 86.37420277761221
- type: euclidean_ap
value: 74.41671247141966
- type: euclidean_f1
value: 69.43907156673114
- type: euclidean_precision
value: 64.07853636769299
- type: euclidean_recall
value: 75.77836411609499
- type: manhattan_accuracy
value: 86.30267628300649
- type: manhattan_ap
value: 74.34438603336339
- type: manhattan_f1
value: 69.41888619854721
- type: manhattan_precision
value: 64.13870246085011
- type: manhattan_recall
value: 75.64643799472296
- type: max_accuracy
value: 86.3861238600465
- type: max_ap
value: 74.50058066578084
- type: max_f1
value: 69.43907156673114
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.87530562347187
- type: cos_sim_ap
value: 85.69496469410068
- type: cos_sim_f1
value: 77.96973052787007
- type: cos_sim_precision
value: 74.8900865125514
- type: cos_sim_recall
value: 81.3135201724669
- type: dot_accuracy
value: 86.70780455621532
- type: dot_ap
value: 80.03489678512908
- type: dot_f1
value: 73.26376129933124
- type: dot_precision
value: 70.07591733445804
- type: dot_recall
value: 76.75546658453958
- type: euclidean_accuracy
value: 88.85978189156674
- type: euclidean_ap
value: 85.67894953317325
- type: euclidean_f1
value: 78.04295942720763
- type: euclidean_precision
value: 75.67254845241538
- type: euclidean_recall
value: 80.56667693255312
- type: manhattan_accuracy
value: 88.88306748942446
- type: manhattan_ap
value: 85.66556510677526
- type: manhattan_f1
value: 78.06278290950576
- type: manhattan_precision
value: 74.76912231230173
- type: manhattan_recall
value: 81.65999384046813
- type: max_accuracy
value: 88.88306748942446
- type: max_ap
value: 85.69496469410068
- type: max_f1
value: 78.06278290950576
---
# bge-large-en-v1.5-sparse
## Usage
This is the sparse ONNX variant of the [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) embeddings model accelerated with [Sparsify](https://github.com/neuralmagic/sparsify) for quantization/pruning and [DeepSparseSentenceTransformers](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/sentence_transformers) for inference.
```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer
model = DeepSparseSentenceTransformer('neuralmagic/bge-large-en-v1.5-sparse', export=False)
# Our sentences we like to encode
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)
# Print the embeddings
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ). |
RichardErkhov/PetroGPT_-_WestSeverus-7B-DPO-v2-gguf | RichardErkhov | "2024-06-16T12:53:58Z" | 4,100 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-16T11:35:56Z" | Entry not found |
migtissera/SynthIA-7B-v1.3 | migtissera | "2023-11-17T21:32:09Z" | 4,095 | 142 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"arxiv:2306.02707",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-28T20:41:10Z" | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
SynthIA-7B-v1.3: Base model is Mistral-7B-v0.1
All SynthIA models are uncensored. Please use them with caution and with the best intentions. You are responsible for how you use SynthIA.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# SynthIA-7B-v1.3
SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is released under Apache 2.0 and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated SynthIA-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/SynthIA-7B-v1.3"
output_file_path = "./SynthIA-7B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"

    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citiation:
Please kindly cite using the following BibTeX:
```
@misc{SynthIA-7B-v1.3,
author = {Migel Tissera},
title = {SynthIA-7B-v1.3: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__SynthIA-7B-v1.3)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 57.11 |
| ARC (25-shot) | 62.12 |
| HellaSwag (10-shot) | 83.45 |
| MMLU (5-shot) | 62.65 |
| TruthfulQA (0-shot) | 51.37 |
| Winogrande (5-shot) | 78.85 |
| GSM8K (5-shot) | 17.59 |
| DROP (3-shot) | 43.76 |
|
RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf | RichardErkhov | "2024-06-27T12:25:29Z" | 4,095 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-27T12:16:14Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-1.3b-chat-and-function-calling - GGUF
- Model creator: https://huggingface.co/AIGym/
- Original model: https://huggingface.co/AIGym/deepseek-coder-1.3b-chat-and-function-calling/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deepseek-coder-1.3b-chat-and-function-calling.Q2_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q2_K.gguf) | Q2_K | 0.52GB |
| [deepseek-coder-1.3b-chat-and-function-calling.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.IQ3_XS.gguf) | IQ3_XS | 0.57GB |
| [deepseek-coder-1.3b-chat-and-function-calling.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [deepseek-coder-1.3b-chat-and-function-calling.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.IQ3_M.gguf) | IQ3_M | 0.63GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q3_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q3_K.gguf) | Q3_K | 0.66GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [deepseek-coder-1.3b-chat-and-function-calling.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q4_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q4_0.gguf) | Q4_0 | 0.72GB |
| [deepseek-coder-1.3b-chat-and-function-calling.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q4_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q4_K.gguf) | Q4_K | 0.81GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q4_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q4_1.gguf) | Q4_1 | 0.8GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q5_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q5_0.gguf) | Q5_0 | 0.87GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q5_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q5_K.gguf) | Q5_K | 0.93GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q5_1.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q5_1.gguf) | Q5_1 | 0.95GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q6_K.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q6_K.gguf) | Q6_K | 1.09GB |
| [deepseek-coder-1.3b-chat-and-function-calling.Q8_0.gguf](https://huggingface.co/RichardErkhov/AIGym_-_deepseek-coder-1.3b-chat-and-function-calling-gguf/blob/main/deepseek-coder-1.3b-chat-and-function-calling.Q8_0.gguf) | Q8_0 | 1.33GB |
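Any of these GGUF files can be run with a llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python`; the chosen filename, context size, and prompt are illustrative assumptions, not part of the original card:

```python
# Minimal sketch: load one of the GGUF quants above with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M file has been
# downloaded into the working directory (the filename is illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-1.3b-chat-and-function-calling.Q4_K_M.gguf",
    n_ctx=2048,  # context window size
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```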
Original model description:
---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
model-index:
- name: deepseek-coder-1.3b-chat-and-function-calling
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 26.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 39.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 3.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/deepseek-coder-1.3b-chat-and-function-calling
name: Open LLM Leaderboard
---
# deepseek-coder-1.3b-chat-and-function-calling
It was created by starting with deepseek-coder-1.3b, training it on the Open Assistant dataset, and then training that on function calling. We have attached the wandb report in PDF form to view the training run at a glance.
# Reason
This model was fine-tuned to work with the OpenAI syntax and will return a function call when appropriate.
# Template
Use the following template when interacting with the fine-tuned model.
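The card does not actually include the template itself, so here is a rough, hypothetical sketch of the OpenAI-style function-calling format described above; the function schema, field names, and expected output are our assumptions, not documented behavior of this model:

```python
# Illustrative only: an OpenAI-style chat request with a function schema.
# The exact prompt template this model expects is not documented in the card.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
]
functions = [
    {
        "name": "get_weather",  # hypothetical function name
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
]
# A function-calling model would be expected to answer with something like:
# {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
```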
# Referrals
RunPod - This is who I use to train the models on Hugging Face. If you use it, we both get free credits. - <a href="https://runpod.io?ref=kilq83n1" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit Runpod's Website!</a>
Paypal - If you want to leave a tip, it is appreciated. - <a href="https://paypal.me/OpenSourceTraining" target="_blank" style="color: #3498db; text-decoration: none; font-weight: bold;">Visit My Paypal!</a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AIGym__deepseek-coder-1.3b-chat-and-function-calling)
| Metric |Value|
|---------------------------------|----:|
|Avg. |31.82|
|AI2 Reasoning Challenge (25-Shot)|26.28|
|HellaSwag (10-Shot) |39.27|
|MMLU (5-Shot) |26.92|
|TruthfulQA (0-shot) |43.37|
|Winogrande (5-shot) |51.70|
|GSM8k (5-shot) | 3.41|
|
jasperai/flash-sd3 | jasperai | "2024-06-27T19:08:15Z" | 4,094 | 96 | diffusers | [
"diffusers",
"safetensors",
"lora",
"text-to-image",
"arxiv:2406.02347",
"base_model:stabilityai/stable-diffusion-3-medium",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-image | "2024-06-17T17:45:41Z" | ---
license: cc-by-nc-4.0
library_name: diffusers
base_model: stabilityai/stable-diffusion-3-medium
tags:
- lora
- text-to-image
inference: False
---
# ⚡ Flash Diffusion: FlashSD3 ⚡
Flash Diffusion is a diffusion distillation method proposed in [Flash Diffusion: Accelerating Any Conditional
Diffusion Model for Few Steps Image Generation](http://arxiv.org/abs/2406.02347) *by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin* from Jasper Research.
This model is a **90.4M**-parameter LoRA-distilled version of the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium) model that is able to generate 1024x1024 images in **4 steps**.
See our [live demo](https://huggingface.co/spaces/jasperai/flash-sd3) and official [Github repo](https://github.com/gojasper/flash-diffusion).
<p align="center">
<img style="width:700px;" src="assets/flash_sd3.png">
</p>
# How to use?
The model can be used with the `StableDiffusion3Pipeline` from the `diffusers` library directly. It allows reducing the number of required sampling steps to **4**.
⚠️ First, you need to install a specific version of `diffusers` by running ⚠️
```bash
pip install git+https://github.com/initml/diffusers.git@clement/feature/flash_sd3
```
Then, you can run the following to generate an image
```python
import torch
from diffusers import StableDiffusion3Pipeline, SD3Transformer2DModel, FlashFlowMatchEulerDiscreteScheduler
from peft import PeftModel
# Load LoRA
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)
transformer = PeftModel.from_pretrained(transformer, "jasperai/flash-sd3")

# Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    transformer=transformer,
    torch_dtype=torch.float16,
    text_encoder_3=None,
    tokenizer_3=None
)

# Scheduler
pipe.scheduler = FlashFlowMatchEulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="scheduler",
)
pipe.to("cuda")
prompt = "A raccoon trapped inside a glass jar full of colorful candies, the background is steamy with vivid colors."
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
<p align="center">
<img style="width:400px;" src="assets/raccoon.png">
</p>
## Training details
The model was trained for ~50 hours on 2 H100 GPUs.
💡 Training hint: the model could perform much better on text if distilled on a dataset of images containing text; feel free to try it yourself.
## Citation
If you find this work useful or use it in your research, please consider citing us
```bibtex
@misc{chadebec2024flash,
title={Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation},
author={Clement Chadebec and Onur Tasar and Eyal Benaroche and Benjamin Aubin},
year={2024},
eprint={2406.02347},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## License
This model is released under the Creative Commons BY-NC license. |
RichardErkhov/ewof_-_koishi-instruct-3b-gguf | RichardErkhov | "2024-06-06T09:13:30Z" | 4,091 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-06T05:10:35Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
koishi-instruct-3b - GGUF
- Model creator: https://huggingface.co/ewof/
- Original model: https://huggingface.co/ewof/koishi-instruct-3b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [koishi-instruct-3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q2_K.gguf) | Q2_K | 1.01GB |
| [koishi-instruct-3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_XS.gguf) | IQ3_XS | 1.14GB |
| [koishi-instruct-3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_S.gguf) | IQ3_S | 1.16GB |
| [koishi-instruct-3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [koishi-instruct-3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_M.gguf) | IQ3_M | 1.28GB |
| [koishi-instruct-3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K.gguf) | Q3_K | 1.38GB |
| [koishi-instruct-3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [koishi-instruct-3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [koishi-instruct-3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [koishi-instruct-3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_0.gguf) | Q4_0 | 1.49GB |
| [koishi-instruct-3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [koishi-instruct-3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [koishi-instruct-3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K.gguf) | Q4_K | 1.66GB |
| [koishi-instruct-3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [koishi-instruct-3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_1.gguf) | Q4_1 | 1.64GB |
| [koishi-instruct-3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_0.gguf) | Q5_0 | 1.8GB |
| [koishi-instruct-3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [koishi-instruct-3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K.gguf) | Q5_K | 1.93GB |
| [koishi-instruct-3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [koishi-instruct-3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_1.gguf) | Q5_1 | 1.95GB |
| [koishi-instruct-3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q6_K.gguf) | Q6_K | 2.13GB |
| [koishi-instruct-3b.Q8_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
datasets:
- ewof/koishi-instruct-metharme
---
## Base Model
Native fine-tune of togethercomputer/RedPajama-INCITE-Base-3B-v1.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
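As an illustration, a prompt containing a short conversation history might look like the following; the example content is ours, and the exact whitespace handling between turns is not specified in the card:

```
<|system|>You are a helpful assistant.<|user|>What is the capital of France?<|model|>The capital of France is Paris.<|user|>And of Spain?<|model|>
```

The model would then be expected to continue generating from the final `<|model|>` token.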
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ewof__koishi-instruct-3b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 33.48 |
| ARC (25-shot) | 40.96 |
| HellaSwag (10-shot) | 64.54 |
| MMLU (5-shot) | 26.58 |
| TruthfulQA (0-shot) | 31.65 |
| Winogrande (5-shot) | 64.09 |
| GSM8K (5-shot) | 1.14 |
| DROP (3-shot) | 5.41 |
|
jay6944/EEVE-Korean-Instruct-10.8B-geoheim4-8bit-gguf | jay6944 | "2024-06-27T07:45:16Z" | 4,088 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-27T07:36:45Z" | ---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jay6944
- **License:** apache-2.0
- **Finetuned from model :** yanolja/EEVE-Korean-Instruct-10.8B-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf | RichardErkhov | "2024-06-02T13:34:48Z" | 4,087 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-02T10:24:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mistral-7b-ft-h4-no_robots_instructions - GGUF
- Model creator: https://huggingface.co/mrm8488/
- Original model: https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [mistral-7b-ft-h4-no_robots_instructions.Q2_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q2_K.gguf) | Q2_K | 2.53GB |
| [mistral-7b-ft-h4-no_robots_instructions.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [mistral-7b-ft-h4-no_robots_instructions.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [mistral-7b-ft-h4-no_robots_instructions.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q3_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q3_K.gguf) | Q3_K | 3.28GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [mistral-7b-ft-h4-no_robots_instructions.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.IQ4_XS.gguf) | IQ4_XS | 3.38GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q4_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q4_0.gguf) | Q4_0 | 3.83GB |
| [mistral-7b-ft-h4-no_robots_instructions.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q4_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q4_K.gguf) | Q4_K | 4.07GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q4_1.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q4_1.gguf) | Q4_1 | 3.42GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q5_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q5_0.gguf) | Q5_0 | 4.65GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q5_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q5_K.gguf) | Q5_K | 4.78GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q5_1.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q5_1.gguf) | Q5_1 | 5.07GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q6_K.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q6_K.gguf) | Q6_K | 5.53GB |
| [mistral-7b-ft-h4-no_robots_instructions.Q8_0.gguf](https://huggingface.co/RichardErkhov/mrm8488_-_mistral-7b-ft-h4-no_robots_instructions-gguf/blob/main/mistral-7b-ft-h4-no_robots_instructions.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- HuggingFaceH4/no_robots
base_model: mistralai/Mistral-7B-v0.1
language:
- en
pipeline_tag: text-generation
thumbnail: https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions/resolve/main/mistralh4-removebg-preview.png?download=true
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions/resolve/main/mistralh4-removebg-preview.png?download=true" alt="limstral logo">
</div>
<br />
## Mistral 7B fine-tuned on H4/No Robots instructions
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset for instruction following downstream task.
## Training procedure
The model was loaded in **8-bit** and fine-tuned on the no_robots dataset using the **LoRA** PEFT technique with the `huggingface/peft` library and `trl`'s `SFTTrainer` for one epoch on 1 x A100 (40GB) GPU.
SFT Trainer params:
```
trainer = SFTTrainer(
model=model,
train_dataset=train_ds,
eval_dataset=test_ds,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=2048,
tokenizer=tokenizer,
args=training_arguments,
packing=False
)
```
LoRA config:
```
config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
target_modules = ['q_proj', 'k_proj', 'down_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj']
)
```
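For context, here is a minimal sketch of how the base model might be loaded in 8-bit and wrapped with the LoRA adapters above. This is an illustration only; the card does not include the actual loading code, so the exact arguments are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit (requires bitsandbytes)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token

# Same values as the LoRA config shown above
config = LoraConfig(
    lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM",
    target_modules=['q_proj', 'k_proj', 'down_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj'],
)

# Prepare the quantized model for training and attach the adapters
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```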
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 10 | 1.796200 | 1.774305 |
| 20 | 1.769700 | 1.679720 |
| 30 | 1.626800 | 1.667754 |
| 40 | 1.663400 | 1.665188 |
| 50 | 1.565700 | 1.659000 |
| 60 | 1.660300 | 1.658270 |
### Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
repo_id = "mrm8488/mistral-7b-ft-h4-no_robots_instructions"
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
gen = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
instruction = "[INST] Write an email to say goodbye to my boss [/INST]"
res = gen(instruction, max_new_tokens=512, temperature=0.3, top_p=0.75, top_k=40, repetition_penalty=1.2, eos_token_id=2)
print(res[0]['generated_text'])
```
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { mistral-7b-ft-h4-no_robots_instructions (Revision 785446d) },
year = 2023,
url = { https://huggingface.co/mrm8488/mistral-7b-ft-h4-no_robots_instructions },
doi = { 10.57967/hf/1426 },
publisher = { Hugging Face }
}
```
|
nreimers/TinyBERT_L-4_H-312_v2 | nreimers | "2021-05-28T11:02:32Z" | 4,085 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | This is the [General_TinyBERT_v2(4layer-312dim)](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) ported to Huggingface transformers.
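A minimal feature-extraction sketch with `transformers` (illustrative only; mean pooling is an assumed choice here, not something the original TinyBERT release prescribes):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nreimers/TinyBERT_L-4_H-312_v2")
model = AutoModel.from_pretrained("nreimers/TinyBERT_L-4_H-312_v2")

inputs = tokenizer(["An example sentence."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into a single 312-dim sentence vector
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embedding.shape)  # torch.Size([1, 312])
```
 |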
kanishka/smolm-autoreg-bpe-seed_221 | kanishka | "2024-03-19T20:53:20Z" | 4,084 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:53:13Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_221
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_221
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4752
- Accuracy: 0.4995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 221
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0423 | 1.0 | 2928 | 3.0116 | 0.4385 |
| 2.712 | 2.0 | 5856 | 2.7864 | 0.4594 |
| 2.5896 | 3.0 | 8784 | 2.6920 | 0.4697 |
| 2.5169 | 4.0 | 11712 | 2.6432 | 0.4748 |
| 2.4655 | 5.0 | 14640 | 2.6049 | 0.4804 |
| 2.4298 | 6.0 | 17568 | 2.5832 | 0.4831 |
| 2.3975 | 7.0 | 20496 | 2.5711 | 0.4838 |
| 2.3628 | 8.0 | 23424 | 2.5585 | 0.4859 |
| 2.2882 | 9.0 | 26352 | 2.5041 | 0.4934 |
| 2.144 | 10.0 | 29280 | 2.4752 | 0.4995 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mukaj/fin-mpnet-base | mukaj | "2024-01-17T21:49:41Z" | 4,082 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"mteb",
"financial",
"fiqa",
"finance",
"retrieval",
"rag",
"esg",
"fixed-income",
"equity",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-01-17T09:38:25Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- financial
- fiqa
- finance
- retrieval
- rag
- esg
- fixed-income
- equity
model-index:
- name: fin-mpnet-base-v0.1
results:
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.128
- type: f1
value: 28.657401543151707
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.111
- type: map_at_10
value: 40.083
- type: map_at_100
value: 41.201
- type: map_at_1000
value: 41.215
- type: map_at_3
value: 35.325
- type: map_at_5
value: 37.796
- type: mrr_at_1
value: 25.036
- type: mrr_at_10
value: 40.436
- type: mrr_at_100
value: 41.554
- type: mrr_at_1000
value: 41.568
- type: mrr_at_3
value: 35.644999999999996
- type: mrr_at_5
value: 38.141000000000005
- type: ndcg_at_1
value: 24.111
- type: ndcg_at_10
value: 49.112
- type: ndcg_at_100
value: 53.669999999999995
- type: ndcg_at_1000
value: 53.944
- type: ndcg_at_3
value: 39.035
- type: ndcg_at_5
value: 43.503
- type: precision_at_1
value: 24.111
- type: precision_at_10
value: 7.817
- type: precision_at_100
value: 0.976
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.596
- type: precision_at_5
value: 12.134
- type: recall_at_1
value: 24.111
- type: recall_at_10
value: 78.16499999999999
- type: recall_at_100
value: 97.58200000000001
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 49.787
- type: recall_at_5
value: 60.669
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.25
- type: f1
value: 79.64999520103544
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.747
- type: map_at_10
value: 72.223
- type: map_at_100
value: 73.802
- type: map_at_1000
value: 73.80499999999999
- type: map_at_3
value: 61.617999999999995
- type: map_at_5
value: 67.92200000000001
- type: mrr_at_1
value: 71.914
- type: mrr_at_10
value: 80.71000000000001
- type: mrr_at_100
value: 80.901
- type: mrr_at_1000
value: 80.901
- type: mrr_at_3
value: 78.935
- type: mrr_at_5
value: 80.193
- type: ndcg_at_1
value: 71.914
- type: ndcg_at_10
value: 79.912
- type: ndcg_at_100
value: 82.675
- type: ndcg_at_1000
value: 82.702
- type: ndcg_at_3
value: 73.252
- type: ndcg_at_5
value: 76.36
- type: precision_at_1
value: 71.914
- type: precision_at_10
value: 23.071
- type: precision_at_100
value: 2.62
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 51.235
- type: precision_at_5
value: 38.117000000000004
- type: recall_at_1
value: 37.747
- type: recall_at_10
value: 91.346
- type: recall_at_100
value: 99.776
- type: recall_at_1000
value: 99.897
- type: recall_at_3
value: 68.691
- type: recall_at_5
value: 80.742
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.124
- type: map_at_10
value: 10.206999999999999
- type: map_at_100
value: 13.181000000000001
- type: map_at_1000
value: 14.568
- type: map_at_3
value: 7.2620000000000005
- type: map_at_5
value: 8.622
- type: mrr_at_1
value: 39.009
- type: mrr_at_10
value: 48.144
- type: mrr_at_100
value: 48.746
- type: mrr_at_1000
value: 48.789
- type: mrr_at_3
value: 45.356
- type: mrr_at_5
value: 47.152
- type: ndcg_at_1
value: 36.533
- type: ndcg_at_10
value: 29.643000000000004
- type: ndcg_at_100
value: 27.893
- type: ndcg_at_1000
value: 37.307
- type: ndcg_at_3
value: 33.357
- type: ndcg_at_5
value: 32.25
- type: precision_at_1
value: 38.7
- type: precision_at_10
value: 22.941
- type: precision_at_100
value: 7.303
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 31.889
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.124
- type: recall_at_10
value: 14.443
- type: recall_at_100
value: 29.765000000000004
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 8.516
- type: recall_at_5
value: 10.979
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.010999999999996
- type: map_at_10
value: 60.094
- type: map_at_100
value: 60.79900000000001
- type: map_at_1000
value: 60.828
- type: map_at_3
value: 57.175
- type: map_at_5
value: 58.748
- type: mrr_at_1
value: 51.666999999999994
- type: mrr_at_10
value: 61.312
- type: mrr_at_100
value: 61.821000000000005
- type: mrr_at_1000
value: 61.85000000000001
- type: mrr_at_3
value: 59.0
- type: mrr_at_5
value: 60.199999999999996
- type: ndcg_at_1
value: 51.666999999999994
- type: ndcg_at_10
value: 65.402
- type: ndcg_at_100
value: 68.377
- type: ndcg_at_1000
value: 69.094
- type: ndcg_at_3
value: 60.153999999999996
- type: ndcg_at_5
value: 62.455000000000005
- type: precision_at_1
value: 51.666999999999994
- type: precision_at_10
value: 9.067
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.0
- type: precision_at_5
value: 15.933
- type: recall_at_1
value: 49.010999999999996
- type: recall_at_10
value: 80.511
- type: recall_at_100
value: 94.0
- type: recall_at_1000
value: 99.5
- type: recall_at_3
value: 66.2
- type: recall_at_5
value: 71.944
---
Note: the full evaluation is not yet complete.
# Fin-MPNET-Base (v0.1)
This is a fine-tuned [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model aims to be very strong on Financial Document Retrieval Tasks, while trying to maintain as much generalized performance as possible.
| | FiQA | SciFact | AmazonReviews | OnlineBankingIntent | ArguAna |
|-------------------|-------|---------|---------------|---------------------|---------|
| fin-mpnet-base | 79.91 | 65.40 | 29.12 | 80.25 | 49.11 |
| all-mpnet-base-v2 | 49.96 | 65.57 | 31.92 | 81.86 | 46.52 |
| previous SoTA | 56.59 | - | - | - | - |
v0.1 shows SoTA results on the FiQA test set, while performance on non-financial benchmarks drops only a few percentage points and improves on some.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mukaj/fin-mpnet-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
The model was evaluated during training only on the new finance QA examples; as such, only finance-relevant benchmarks were evaluated for v0.1 (FiQA-2018, Banking77Classification).
The model currently shows the highest FiQA Retrieval score on the test set, on the MTEB Leaderboard (https://huggingface.co/spaces/mteb/leaderboard)
The model has likely lost some performance on other benchmarks; for example, Banking77Classification dropped from 81.86 to 80.25. This will be addressed in v0.2, and a full evaluation on all benchmark sets will be run.
## Training
"sentence-transformers/all-mpnet-base-v2" was fine-tuned on 150k+ financial document QA examples using MNR Loss.
|
pritamdeka/BioBert-PubMed200kRCT | pritamdeka | "2023-10-26T12:01:45Z" | 4,081 | 7 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-15T12:38:06Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: SAMPLE 32,441 archived appendix samples fixed in formalin and embedded in
paraffin and tested for the presence of abnormal prion protein (PrP).
base_model: dmis-lab/biobert-base-cased-v1.1
model-index:
- name: BioBert-PubMed200kRCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioBert-PubMed200kRCT
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2832
- Accuracy: 0.8934
## Model description
More information needed
## Intended uses & limitations
The model can be used for text classification of Randomized Controlled Trial (RCT) text that lacks structure. The text can be classified as one of the following:
* BACKGROUND
* CONCLUSIONS
* METHODS
* OBJECTIVE
* RESULTS
The model can be directly used like this:
```python
from transformers import TextClassificationPipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/BioBert-PubMed200kRCT")
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/BioBert-PubMed200kRCT")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.")
```
Results will be shown as follows:
```python
[[{'label': 'BACKGROUND', 'score': 0.0027583304326981306},
{'label': 'CONCLUSIONS', 'score': 0.044541116803884506},
{'label': 'METHODS', 'score': 0.19493348896503448},
{'label': 'OBJECTIVE', 'score': 0.003996663726866245},
{'label': 'RESULTS', 'score': 0.7537703514099121}]]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3587 | 0.14 | 5000 | 0.3137 | 0.8834 |
| 0.3318 | 0.29 | 10000 | 0.3100 | 0.8831 |
| 0.3286 | 0.43 | 15000 | 0.3033 | 0.8864 |
| 0.3236 | 0.58 | 20000 | 0.3037 | 0.8862 |
| 0.3182 | 0.72 | 25000 | 0.2939 | 0.8876 |
| 0.3129 | 0.87 | 30000 | 0.2910 | 0.8885 |
| 0.3078 | 1.01 | 35000 | 0.2914 | 0.8887 |
| 0.2791 | 1.16 | 40000 | 0.2975 | 0.8874 |
| 0.2723 | 1.3 | 45000 | 0.2913 | 0.8906 |
| 0.2724 | 1.45 | 50000 | 0.2879 | 0.8904 |
| 0.27 | 1.59 | 55000 | 0.2874 | 0.8911 |
| 0.2681 | 1.74 | 60000 | 0.2848 | 0.8928 |
| 0.2672 | 1.88 | 65000 | 0.2832 | 0.8934 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model kindly cite the following work
```
@inproceedings{deka2022evidence,
title={Evidence Extraction to Validate Medical Claims in Fake News Detection},
author={Deka, Pritam and Jurek-Loughrey, Anna and others},
booktitle={International Conference on Health Information Science},
pages={3--15},
year={2022},
organization={Springer}
}
``` |
togethercomputer/RedPajama-INCITE-Instruct-3B-v1 | togethercomputer | "2023-05-09T14:59:36Z" | 4,080 | 91 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:Muennighoff/P3",
"dataset:Muennighoff/natural-instructions",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-05T05:12:00Z" | ---
license: apache-2.0
language:
- en
datasets:
- togethercomputer/RedPajama-Data-1T
- Muennighoff/P3
- Muennighoff/natural-instructions
widget:
- text: "Label the tweets as either 'positive', 'negative', 'mixed', or 'neutral': \n\nTweet: I can say that there isn't anything I would change.\nLabel: positive\n\nTweet: I'm not sure about this.\nLabel: neutral\n\nTweet: I liked some parts but I didn't like other parts.\nLabel: mixed\n\nTweet: I think the background image could have been better.\nLabel: negative\n\nTweet: I really like it.\nLabel:"
example_title: "Sentiment Analysis"
- text: "Please answer the following question:\n\nQuestion: What is the capital of Canada?\nAnswer: Ottawa\n\nQuestion: What is the currency of Switzerland?\nAnswer: Swiss franc\n\nQuestion: In which country is Wisconsin located?\nAnswer:"
example_title: "Question Answering"
- text: "Given a news article, classify its topic.\nPossible labels: 1. World 2. Sports 3. Business 4. Sci/Tech\n\nArticle: A nearby star thought to harbor comets and asteroids now appears to be home to planets, too.\nLabel: Sci/Tech\n\nArticle: Soaring crude prices plus worries about the economy and the outlook for earnings are expected to hang over the stock market next week during the depth of the summer doldrums.\nLabel: Business\n\nArticle: Murtagh a stickler for success Northeastern field hockey coach Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to detract from a team that has been the America East champion for the past three years and has been to the NCAA tournament 13 times.\nLabel::"
example_title: "Topic Classification"
- text: "Paraphrase the given sentence into a different sentence.\n\nInput: Can you recommend some upscale restaurants in New York?\nOutput: What upscale restaurants do you recommend in New York?\n\nInput: What are the famous places we should not miss in Paris?\nOutput: Recommend some of the best places to visit in Paris?\n\nInput: Could you recommend some hotels that have cheap price in Zurich?\nOutput:"
example_title: "Paraphrasing"
- text: "Given a review from Amazon's food products, the task is to generate a short summary of the given review in the input.\n\nInput: I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.\nOutput: Good Quality Dog Food\n\nInput: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted. Not sure if this was an error or if the vendor intended to represent the product as 'Jumbo'.\nOutput: Not as Advertised\n\nInput: My toddler loves this game to a point where he asks for it. That's a big thing for me. Secondly, no glitching unlike one of their competitors (PlayShifu). Any tech I don’t have to reach out to support for help is a good tech for me. I even enjoy some of the games and activities in this. Overall, this is a product that shows that the developers took their time and made sure people would not be asking for refund. I’ve become bias regarding this product and honestly I look forward to buying more of this company’s stuff. Please keep up the great work.\nOutput:"
example_title: "Text Summarization"
- text: "Identify which sense of a word is meant in a given context.\n\nContext: The river overflowed the bank.\nWord: bank\nSense: river bank\n\nContext: A mouse takes much more room than a trackball.\nWord: mouse\nSense: computer mouse\n\nContext: The bank will not be accepting cash on Saturdays.\nWord: bank\nSense: commercial (finance) banks\n\nContext: Bill killed the project\nWord: kill\nSense:"
example_title: "Word Sense Disambiguation"
- text: "Given a pair of sentences, choose whether the two sentences agree (entailment)/disagree (contradiction) with each other.\nPossible labels: 1. entailment 2. contradiction\n\nSentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was dressed in winter clothes.\nLabel: entailment\n\nSentence 1: The boy skated down the staircase railing. Sentence 2: The boy is a newbie skater.\nLabel: contradiction\n\nSentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A couple riding in a golf cart.\nLabel:"
example_title: "Natural Language Inference"
inference:
parameters:
temperature: 0.7
top_p: 0.7
top_k: 50
max_new_tokens: 128
---
# RedPajama-INCITE-Instruct-3B-v1
RedPajama-INCITE-Instruct-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), with exclusion of tasks that overlap with the HELM core scenarios.
- Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)
- Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1)
- Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.
# Quick Start
Please note that the model requires `transformers` version >= 4.25.1.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## GPU Inference in Int8
This requires a GPU with 6GB memory.
To run inference with int8, please ensure you have installed `accelerate` and `bitsandbytes`. You can install them with the following command:
```bash
pip install accelerate
pip install bitsandbytes
```
Then you can run inference with int8 as follows:
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
## CPU Inference
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Instruct-3B-v1", torch_dtype=torch.bfloat16)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
Paris
"""
```
Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference.
# Uses
## Direct Use
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
#### Out-of-Scope Use
RedPajama-INCITE-Instruct-3B-v1 is a language model and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.
#### Misuse and Malicious Use
RedPajama-INCITE-Instruct-3B-v1 is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
## Limitations
RedPajama-INCITE-Instruct-3B-v1, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
## Training
**Training Data**
Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
**Training Procedure**
- **Hardware:** 8 A100
- **Optimizer:** Adam
- **Gradient Accumulations**: 1
- **Num of Tokens:** 131M tokens
- **Learning rate:** 1e-5
## Community
Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4) |
zhou-xl/bi-cse | zhou-xl | "2024-02-17T12:32:49Z" | 4,080 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2023-12-28T11:59:57Z" | ---
tags:
- mteb
model-index:
- name: student
results:
- task:
type: STS
dataset:
type: C-MTEB/AFQMC
name: MTEB AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 42.01013972878128
- type: cos_sim_spearman
value: 43.4493974759166
- type: euclidean_pearson
value: 41.9332741602486
- type: euclidean_spearman
value: 43.4565546063627
- type: manhattan_pearson
value: 41.9297043571561
- type: manhattan_spearman
value: 43.44509515848548
- task:
type: STS
dataset:
type: C-MTEB/ATEC
name: MTEB ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 47.48357848831134
- type: cos_sim_spearman
value: 48.0096502737997
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 70.06631340065852
- type: cos_sim_spearman
value: 70.56425845690775
- task:
type: STS
dataset:
type: C-MTEB/BQ
name: MTEB BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 63.30619967351764
- type: cos_sim_spearman
value: 65.57791727146774
- type: euclidean_pearson
value: 64.41653053459552
- type: euclidean_spearman
value: 65.60244311139472
- type: manhattan_pearson
value: 64.37518298990945
- type: manhattan_spearman
value: 65.56983205786409
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.42022116903634
- type: f1
value: 98.38511497279269
- type: precision
value: 98.36756187467088
- type: recall
value: 98.42022116903634
- task:
type: STS
dataset:
type: C-MTEB/LCQMC
name: MTEB LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 71.3095132213625
- type: cos_sim_spearman
value: 75.55615792829865
- type: euclidean_pearson
value: 74.37147909656647
- type: euclidean_spearman
value: 75.54784459711308
- type: manhattan_pearson
value: 74.29759624788565
- type: manhattan_spearman
value: 75.49037321257157
- task:
type: STS
dataset:
type: C-MTEB/PAWSX
name: MTEB PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 42.821882144591406
- type: cos_sim_spearman
value: 47.616725737501724
- type: euclidean_pearson
value: 46.991556480777675
- type: euclidean_spearman
value: 47.624128831089685
- type: manhattan_pearson
value: 46.83451589707148
- type: manhattan_spearman
value: 47.47345373932411
- task:
type: STS
dataset:
type: C-MTEB/QBQTC
name: MTEB QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 39.48274306266568
- type: cos_sim_spearman
value: 40.43254828668596
- type: euclidean_pearson
value: 39.121198397707374
- type: euclidean_spearman
value: 40.47848829374869
- type: manhattan_pearson
value: 39.07044184765326
- type: manhattan_spearman
value: 40.41344728276686
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 81.60488630930521
- type: cos_sim_spearman
value: 79.04311658059933
- type: euclidean_pearson
value: 78.95158745413384
- type: euclidean_spearman
value: 78.99206332696008
- type: manhattan_pearson
value: 78.93956396383128
- type: manhattan_spearman
value: 78.94138617747835
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.50516203958485
- type: cos_sim_spearman
value: 78.39314964894021
- type: euclidean_pearson
value: 83.03876157406377
- type: euclidean_spearman
value: 78.43128279495177
- type: manhattan_pearson
value: 83.00734833664097
- type: manhattan_spearman
value: 78.33755694741544
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.52249245791886
- type: cos_sim_spearman
value: 83.71503684399218
- type: euclidean_pearson
value: 82.83033355582003
- type: euclidean_spearman
value: 83.6956570069731
- type: manhattan_pearson
value: 82.74415910929217
- type: manhattan_spearman
value: 83.58167243171766
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.00915974657362
- type: cos_sim_spearman
value: 79.19276300509559
- type: euclidean_pearson
value: 80.17657754340593
- type: euclidean_spearman
value: 79.19425018312683
- type: manhattan_pearson
value: 80.04321829436775
- type: manhattan_spearman
value: 79.0458687679498
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.99452083625762
- type: cos_sim_spearman
value: 85.57952966879047
- type: euclidean_pearson
value: 85.14932626009531
- type: euclidean_spearman
value: 85.59697259700918
- type: manhattan_pearson
value: 85.11214415799934
- type: manhattan_spearman
value: 85.54871088485925
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.33170312674788
- type: cos_sim_spearman
value: 82.3316942254394
- type: euclidean_pearson
value: 82.00948134099386
- type: euclidean_spearman
value: 82.32475375375705
- type: manhattan_pearson
value: 81.94953036676401
- type: manhattan_spearman
value: 82.26329177825353
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.60426458021554
- type: cos_sim_spearman
value: 87.89776827373123
- type: euclidean_pearson
value: 88.19401282603557
- type: euclidean_spearman
value: 87.90080500648473
- type: manhattan_pearson
value: 88.39099772653003
- type: manhattan_spearman
value: 88.03019288557621
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.38925903960008
- type: cos_sim_spearman
value: 63.91952542589123
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.51076949065575
- type: cos_sim_spearman
value: 67.24427398434739
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.08946142653247
- type: cos_sim_spearman
value: 70.01280058113731
- task:
type: STS
dataset:
type: C-MTEB/STSB
name: MTEB STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 75.52896222293855
- type: cos_sim_spearman
value: 75.38140772041567
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.09649790270096
- type: cos_sim_spearman
value: 85.99053080606336
- type: euclidean_pearson
value: 85.9554143396231
- type: euclidean_spearman
value: 85.9826211701156
- type: manhattan_pearson
value: 85.91951912635923
- type: manhattan_spearman
value: 85.90751385480418
- task:
type: BitextMining
dataset:
type: mteb/tatoeba-bitext-mining
name: MTEB Tatoeba (cmn-eng)
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.15
- type: precision
value: 94.58333333333333
- type: recall
value: 96.3
---
This model fine-tunes XLM-R with contrastive learning on Chinese and English STS and NLI corpora.
## Using HuggingFace Transformers
```
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('zhou-xl/bi-cse')
model = AutoModel.from_pretrained('zhou-xl/bi-cse')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
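# Illustrative addition (not in the original card): with L2-normalized
# embeddings, cosine similarity reduces to a dot product.
similarity = sentence_embeddings @ sentence_embeddings.T
print("Cosine similarity:", similarity)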
``` |
kanishka/smolm-autoreg-bpe-seed_1024 | kanishka | "2024-03-19T20:53:05Z" | 4,079 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:53:02Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_1024
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4731
- Accuracy: 0.5001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0469 | 1.0 | 2928 | 3.0101 | 0.4395 |
| 2.7142 | 2.0 | 5856 | 2.7790 | 0.4613 |
| 2.5841 | 3.0 | 8784 | 2.6903 | 0.4694 |
| 2.5089 | 4.0 | 11712 | 2.6359 | 0.4764 |
| 2.4566 | 5.0 | 14640 | 2.6025 | 0.4807 |
| 2.4168 | 6.0 | 17568 | 2.5854 | 0.4828 |
| 2.3886 | 7.0 | 20496 | 2.5673 | 0.4851 |
| 2.3618 | 8.0 | 23424 | 2.5563 | 0.4874 |
| 2.2757 | 9.0 | 26352 | 2.5024 | 0.4931 |
| 2.139 | 10.0 | 29280 | 2.4731 | 0.5001 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
uclanlp/keyphrase-mpnet-v1 | uclanlp | "2023-05-28T06:22:30Z" | 4,074 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2303.15422",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-05-08T00:15:15Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# keyphrase-mpnet-v1
This is a [sentence-transformers](https://www.SBERT.net) model specialized for phrases: It maps phrases to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. In the original paper, this model is used for calculating semantic-based evaluation metrics of keyphrase models.
This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and further fine-tuned on 1 million keyphrase data with SimCSE.
## Citing & Authors
Paper: [KPEval: Towards Fine-grained Semantic-based Evaluation of Keyphrase Extraction and Generation Systems](https://arxiv.org/abs/2303.15422)
```
@article{wu2023kpeval,
title={KPEval: Towards Fine-grained Semantic-based Evaluation of Keyphrase Extraction and Generation Systems},
author={Di Wu and Da Yin and Kai-Wei Chang},
year={2023},
eprint={2303.15422},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
phrases = ["information retrieval", "text mining", "natural language processing"]
model = SentenceTransformer('uclanlp/keyphrase-mpnet-v1')
embeddings = model.encode(phrases)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
phrases = ["information retrieval", "text mining", "natural language processing"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uclanlp/keyphrase-mpnet-v1')
model = AutoModel.from_pretrained('uclanlp/keyphrase-mpnet-v1')
# Tokenize sentences
encoded_input = tokenizer(phrases, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Phrase embeddings:")
print(sentence_embeddings)
```
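As a brief illustration of the evaluation use case this model was built for, one might score predicted keyphrases against references by cosine similarity (the phrase lists below are made up for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('uclanlp/keyphrase-mpnet-v1')

predicted = ['neural retrieval', 'document ranking']
references = ['information retrieval', 'learning to rank']

pred_emb = model.encode(predicted, convert_to_tensor=True)
ref_emb = model.encode(references, convert_to_tensor=True)

# Cosine similarity matrix: rows = predicted phrases, columns = references
print(util.cos_sim(pred_emb, ref_emb))
```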
## Training
The model is trained on phrases from four keyphrase datasets covering a wide range of domains.
| Dataset Name | Domain | Number of Phrases |
|-------------------------------------------------------------|---------------|-------------------|
| [KP20k](https://www.aclweb.org/anthology/P17-1054/) | Science | 715369 |
| [KPTimes](https://www.aclweb.org/anthology/W19-8617/) | News | 113456 |
| [StackEx](https://www.aclweb.org/anthology/2020.acl-main.710/) | Online Forum | 8149 |
| [OpenKP](https://www.aclweb.org/anthology/D19-1521/) | Web | 200335 |
| **Total** | | **1037309** |
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2025 with parameters:
```
{'batch_size': 512, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 203,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 12, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
kanishka/smolm-autoreg-bpe-seed_496 | kanishka | "2024-03-19T20:54:56Z" | 4,073 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-19T20:54:51Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-seed_496
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-seed_496
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4752
- Accuracy: 0.4995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 128
- seed: 496
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.0603 | 1.0 | 2928 | 3.0255 | 0.4367 |
| 2.7088 | 2.0 | 5856 | 2.7873 | 0.4580 |
| 2.586 | 3.0 | 8784 | 2.6956 | 0.4688 |
| 2.5037 | 4.0 | 11712 | 2.6362 | 0.4772 |
| 2.466 | 5.0 | 14640 | 2.6123 | 0.4787 |
| 2.4203 | 6.0 | 17568 | 2.5878 | 0.4828 |
| 2.3871 | 7.0 | 20496 | 2.5691 | 0.4855 |
| 2.367 | 8.0 | 23424 | 2.5567 | 0.4880 |
| 2.2871 | 9.0 | 26352 | 2.5026 | 0.4941 |
| 2.1368 | 10.0 | 29280 | 2.4752 | 0.4995 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
RichardErkhov/glenn2_-_gemma-2b-lora16b2-gguf | RichardErkhov | "2024-06-28T20:12:28Z" | 4,070 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-28T17:26:27Z" | Entry not found |
RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf | RichardErkhov | "2024-06-05T12:40:23Z" | 4,069 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-05T10:57:36Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ku-mistral-7b-PGO-v4 - GGUF
- Model creator: https://huggingface.co/devhyun88/
- Original model: https://huggingface.co/devhyun88/ku-mistral-7b-PGO-v4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ku-mistral-7b-PGO-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q2_K.gguf) | Q2_K | 2.53GB |
| [ku-mistral-7b-PGO-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [ku-mistral-7b-PGO-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [ku-mistral-7b-PGO-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [ku-mistral-7b-PGO-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [ku-mistral-7b-PGO-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q3_K.gguf) | Q3_K | 3.28GB |
| [ku-mistral-7b-PGO-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [ku-mistral-7b-PGO-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [ku-mistral-7b-PGO-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [ku-mistral-7b-PGO-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q4_0.gguf) | Q4_0 | 3.83GB |
| [ku-mistral-7b-PGO-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [ku-mistral-7b-PGO-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [ku-mistral-7b-PGO-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q4_K.gguf) | Q4_K | 4.07GB |
| [ku-mistral-7b-PGO-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [ku-mistral-7b-PGO-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q4_1.gguf) | Q4_1 | 4.24GB |
| [ku-mistral-7b-PGO-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q5_0.gguf) | Q5_0 | 4.65GB |
| [ku-mistral-7b-PGO-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [ku-mistral-7b-PGO-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q5_K.gguf) | Q5_K | 4.78GB |
| [ku-mistral-7b-PGO-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [ku-mistral-7b-PGO-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q5_1.gguf) | Q5_1 | 5.07GB |
| [ku-mistral-7b-PGO-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q6_K.gguf) | Q6_K | 5.53GB |
| [ku-mistral-7b-PGO-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/devhyun88_-_ku-mistral-7b-PGO-v4-gguf/blob/main/ku-mistral-7b-PGO-v4.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
Entry not found
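Since no original description is available, here is a generic, illustrative sketch of loading one of the GGUF files above with `llama-cpp-python` (the filename and parameters below are assumptions, not part of the original release):

```python
from llama_cpp import Llama

# Path to a quant downloaded from the table above (Q4_K_M is a common size/quality trade-off)
llm = Llama(model_path="./ku-mistral-7b-PGO-v4.Q4_K_M.gguf", n_ctx=2048)

output = llm("Q: What is model quantization? A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```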
|
castorini/monot5-base-msmarco-10k | castorini | "2021-10-17T11:24:22Z" | 4,068 | 13 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | This model is a T5-base reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
This model usually has a better zero-shot performance than `monot5-base-msmarco`, i.e., it performs better on datasets different from MS MARCO.
For more details on how to use it, check the following links:
- [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example)
- [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md)
- [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
liddlefish/privacy_embedding_bge_small_synthetic | liddlefish | "2024-06-03T03:24:07Z" | 4,066 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-06-03T03:22:30Z" | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-small-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 37.21923821573361
- type: f1
value: 68.0914945617093
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.75377499999999
- type: ap
value: 89.46766124546022
- type: f1
value: 92.73884001331487
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.986
- type: f1
value: 46.55936786727896
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.846000000000004
- type: map_at_10
value: 51.388
- type: map_at_100
value: 52.132999999999996
- type: map_at_1000
value: 52.141000000000005
- type: map_at_3
value: 47.037
- type: map_at_5
value: 49.579
- type: mrr_at_1
value: 36.558
- type: mrr_at_10
value: 51.658
- type: mrr_at_100
value: 52.402
- type: mrr_at_1000
value: 52.410000000000004
- type: mrr_at_3
value: 47.345
- type: mrr_at_5
value: 49.797999999999995
- type: ndcg_at_1
value: 35.846000000000004
- type: ndcg_at_10
value: 59.550000000000004
- type: ndcg_at_100
value: 62.596
- type: ndcg_at_1000
value: 62.759
- type: ndcg_at_3
value: 50.666999999999994
- type: ndcg_at_5
value: 55.228
- type: precision_at_1
value: 35.846000000000004
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.389
- type: precision_at_5
value: 14.438
- type: recall_at_1
value: 35.846000000000004
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.166
- type: recall_at_5
value: 72.191
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.402770198163594
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.01545436974177
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.586465273207196
- type: mrr
value: 74.42169019038825
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.1891186537969
- type: cos_sim_spearman
value: 83.75492046087288
- type: euclidean_pearson
value: 84.11766204805357
- type: euclidean_spearman
value: 84.01456493126516
- type: manhattan_pearson
value: 84.2132950502772
- type: manhattan_spearman
value: 83.89227298813377
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.74025974025975
- type: f1
value: 85.71493566466381
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.467181385006434
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.719496037339056
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.587000000000003
- type: map_at_10
value: 41.114
- type: map_at_100
value: 42.532
- type: map_at_1000
value: 42.661
- type: map_at_3
value: 37.483
- type: map_at_5
value: 39.652
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.763
- type: mrr_at_100
value: 47.393
- type: mrr_at_1000
value: 47.445
- type: mrr_at_3
value: 43.538
- type: mrr_at_5
value: 45.556000000000004
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.658
- type: ndcg_at_100
value: 52.824000000000005
- type: ndcg_at_1000
value: 54.913999999999994
- type: ndcg_at_3
value: 41.989
- type: ndcg_at_5
value: 44.944
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 29.587000000000003
- type: recall_at_10
value: 60.746
- type: recall_at_100
value: 82.157
- type: recall_at_1000
value: 95.645
- type: recall_at_3
value: 44.821
- type: recall_at_5
value: 52.819
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.239
- type: map_at_10
value: 39.989000000000004
- type: map_at_100
value: 41.196
- type: map_at_1000
value: 41.325
- type: map_at_3
value: 37.261
- type: map_at_5
value: 38.833
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 46.177
- type: mrr_at_100
value: 46.806
- type: mrr_at_1000
value: 46.849000000000004
- type: mrr_at_3
value: 44.002
- type: mrr_at_5
value: 45.34
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.586
- type: ndcg_at_100
value: 49.897000000000006
- type: ndcg_at_1000
value: 51.955
- type: ndcg_at_3
value: 41.684
- type: ndcg_at_5
value: 43.617
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.105999999999998
- type: precision_at_5
value: 14.152999999999999
- type: recall_at_1
value: 30.239
- type: recall_at_10
value: 55.03
- type: recall_at_100
value: 73.375
- type: recall_at_1000
value: 86.29599999999999
- type: recall_at_3
value: 43.269000000000005
- type: recall_at_5
value: 48.878
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.338
- type: map_at_10
value: 50.468999999999994
- type: map_at_100
value: 51.553000000000004
- type: map_at_1000
value: 51.608
- type: map_at_3
value: 47.107
- type: map_at_5
value: 49.101
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 54.057
- type: mrr_at_100
value: 54.764
- type: mrr_at_1000
value: 54.791000000000004
- type: mrr_at_3
value: 51.56699999999999
- type: mrr_at_5
value: 53.05
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 56.379000000000005
- type: ndcg_at_100
value: 60.645
- type: ndcg_at_1000
value: 61.73499999999999
- type: ndcg_at_3
value: 50.726000000000006
- type: ndcg_at_5
value: 53.58500000000001
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 9.141
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.654
- type: precision_at_5
value: 15.723999999999998
- type: recall_at_1
value: 38.338
- type: recall_at_10
value: 70.30499999999999
- type: recall_at_100
value: 88.77199999999999
- type: recall_at_1000
value: 96.49799999999999
- type: recall_at_3
value: 55.218
- type: recall_at_5
value: 62.104000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.682
- type: map_at_10
value: 33.498
- type: map_at_100
value: 34.461000000000006
- type: map_at_1000
value: 34.544000000000004
- type: map_at_3
value: 30.503999999999998
- type: map_at_5
value: 32.216
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 35.467999999999996
- type: mrr_at_100
value: 36.32
- type: mrr_at_1000
value: 36.386
- type: mrr_at_3
value: 32.618
- type: mrr_at_5
value: 34.262
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 38.378
- type: ndcg_at_100
value: 43.288
- type: ndcg_at_1000
value: 45.413
- type: ndcg_at_3
value: 32.586
- type: ndcg_at_5
value: 35.499
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.864
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 25.682
- type: recall_at_10
value: 51.712
- type: recall_at_100
value: 74.446
- type: recall_at_1000
value: 90.472
- type: recall_at_3
value: 36.236000000000004
- type: recall_at_5
value: 43.234
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.073999999999998
- type: map_at_10
value: 24.352999999999998
- type: map_at_100
value: 25.438
- type: map_at_1000
value: 25.545
- type: map_at_3
value: 21.614
- type: map_at_5
value: 23.104
- type: mrr_at_1
value: 19.776
- type: mrr_at_10
value: 28.837000000000003
- type: mrr_at_100
value: 29.755
- type: mrr_at_1000
value: 29.817
- type: mrr_at_3
value: 26.201999999999998
- type: mrr_at_5
value: 27.714
- type: ndcg_at_1
value: 19.776
- type: ndcg_at_10
value: 29.701
- type: ndcg_at_100
value: 35.307
- type: ndcg_at_1000
value: 37.942
- type: ndcg_at_3
value: 24.764
- type: ndcg_at_5
value: 27.025
- type: precision_at_1
value: 19.776
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.905000000000001
- type: recall_at_1
value: 16.073999999999998
- type: recall_at_10
value: 41.647
- type: recall_at_100
value: 66.884
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 27.916
- type: recall_at_5
value: 33.729
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.444999999999997
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.595
- type: map_at_1000
value: 39.709
- type: map_at_3
value: 35.586
- type: map_at_5
value: 36.895
- type: mrr_at_1
value: 34.841
- type: mrr_at_10
value: 44.106
- type: mrr_at_100
value: 44.98
- type: mrr_at_1000
value: 45.03
- type: mrr_at_3
value: 41.979
- type: mrr_at_5
value: 43.047999999999995
- type: ndcg_at_1
value: 34.841
- type: ndcg_at_10
value: 43.922
- type: ndcg_at_100
value: 49.504999999999995
- type: ndcg_at_1000
value: 51.675000000000004
- type: ndcg_at_3
value: 39.858
- type: ndcg_at_5
value: 41.408
- type: precision_at_1
value: 34.841
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.993
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 28.444999999999997
- type: recall_at_10
value: 54.984
- type: recall_at_100
value: 78.342
- type: recall_at_1000
value: 92.77
- type: recall_at_3
value: 42.842999999999996
- type: recall_at_5
value: 47.247
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.072
- type: map_at_10
value: 32.354
- type: map_at_100
value: 33.800000000000004
- type: map_at_1000
value: 33.908
- type: map_at_3
value: 29.232000000000003
- type: map_at_5
value: 31.049
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 38.03
- type: mrr_at_100
value: 39.032
- type: mrr_at_1000
value: 39.086999999999996
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.76
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.231
- type: ndcg_at_100
value: 44.425
- type: ndcg_at_1000
value: 46.771
- type: ndcg_at_3
value: 33.095
- type: ndcg_at_5
value: 35.459
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 7.215000000000001
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 23.072
- type: recall_at_10
value: 50.285999999999994
- type: recall_at_100
value: 76.596
- type: recall_at_1000
value: 92.861
- type: recall_at_3
value: 35.702
- type: recall_at_5
value: 42.152
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.937916666666666
- type: map_at_10
value: 33.755250000000004
- type: map_at_100
value: 34.955999999999996
- type: map_at_1000
value: 35.070499999999996
- type: map_at_3
value: 30.98708333333333
- type: map_at_5
value: 32.51491666666666
- type: mrr_at_1
value: 29.48708333333333
- type: mrr_at_10
value: 37.92183333333334
- type: mrr_at_100
value: 38.76583333333333
- type: mrr_at_1000
value: 38.82466666666667
- type: mrr_at_3
value: 35.45125
- type: mrr_at_5
value: 36.827000000000005
- type: ndcg_at_1
value: 29.48708333333333
- type: ndcg_at_10
value: 39.05225
- type: ndcg_at_100
value: 44.25983333333334
- type: ndcg_at_1000
value: 46.568333333333335
- type: ndcg_at_3
value: 34.271583333333325
- type: ndcg_at_5
value: 36.483916666666666
- type: precision_at_1
value: 29.48708333333333
- type: precision_at_10
value: 6.865749999999999
- type: precision_at_100
value: 1.1195833333333332
- type: precision_at_1000
value: 0.15058333333333335
- type: precision_at_3
value: 15.742083333333333
- type: precision_at_5
value: 11.221916666666667
- type: recall_at_1
value: 24.937916666666666
- type: recall_at_10
value: 50.650416666666665
- type: recall_at_100
value: 73.55383333333334
- type: recall_at_1000
value: 89.61691666666667
- type: recall_at_3
value: 37.27808333333334
- type: recall_at_5
value: 42.99475
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.947
- type: map_at_10
value: 30.575000000000003
- type: map_at_100
value: 31.465
- type: map_at_1000
value: 31.558000000000003
- type: map_at_3
value: 28.814
- type: map_at_5
value: 29.738999999999997
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.415
- type: mrr_at_100
value: 34.18
- type: mrr_at_1000
value: 34.245
- type: mrr_at_3
value: 31.621
- type: mrr_at_5
value: 32.549
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.482
- type: ndcg_at_100
value: 38.915
- type: ndcg_at_1000
value: 41.355
- type: ndcg_at_3
value: 31.139
- type: ndcg_at_5
value: 32.589
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.322
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.344000000000001
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 23.947
- type: recall_at_10
value: 43.647999999999996
- type: recall_at_100
value: 63.851
- type: recall_at_1000
value: 82.0
- type: recall_at_3
value: 34.288000000000004
- type: recall_at_5
value: 38.117000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.197
- type: map_at_10
value: 22.968
- type: map_at_100
value: 24.095
- type: map_at_1000
value: 24.217
- type: map_at_3
value: 20.771
- type: map_at_5
value: 21.995
- type: mrr_at_1
value: 19.511
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.500999999999998
- type: mrr_at_1000
value: 27.578999999999997
- type: mrr_at_3
value: 24.421
- type: mrr_at_5
value: 25.604
- type: ndcg_at_1
value: 19.511
- type: ndcg_at_10
value: 27.386
- type: ndcg_at_100
value: 32.828
- type: ndcg_at_1000
value: 35.739
- type: ndcg_at_3
value: 23.405
- type: ndcg_at_5
value: 25.255
- type: precision_at_1
value: 19.511
- type: precision_at_10
value: 5.017
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.023
- type: precision_at_5
value: 8.025
- type: recall_at_1
value: 16.197
- type: recall_at_10
value: 37.09
- type: recall_at_100
value: 61.778
- type: recall_at_1000
value: 82.56599999999999
- type: recall_at_3
value: 26.034000000000002
- type: recall_at_5
value: 30.762
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.41
- type: map_at_10
value: 33.655
- type: map_at_100
value: 34.892
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.303
- type: mrr_at_1
value: 29.477999999999998
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.383
- type: mrr_at_1000
value: 38.440000000000005
- type: mrr_at_3
value: 34.949999999999996
- type: mrr_at_5
value: 36.228
- type: ndcg_at_1
value: 29.477999999999998
- type: ndcg_at_10
value: 38.769
- type: ndcg_at_100
value: 44.245000000000005
- type: ndcg_at_1000
value: 46.593
- type: ndcg_at_3
value: 33.623
- type: ndcg_at_5
value: 35.766
- type: precision_at_1
value: 29.477999999999998
- type: precision_at_10
value: 6.455
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 14.893999999999998
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 25.41
- type: recall_at_10
value: 50.669
- type: recall_at_100
value: 74.084
- type: recall_at_1000
value: 90.435
- type: recall_at_3
value: 36.679
- type: recall_at_5
value: 41.94
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.339
- type: map_at_10
value: 31.852000000000004
- type: map_at_100
value: 33.411
- type: map_at_1000
value: 33.62
- type: map_at_3
value: 28.929
- type: map_at_5
value: 30.542
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.301
- type: mrr_at_100
value: 37.288
- type: mrr_at_1000
value: 37.349
- type: mrr_at_3
value: 33.663
- type: mrr_at_5
value: 35.165
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 37.462
- type: ndcg_at_100
value: 43.620999999999995
- type: ndcg_at_1000
value: 46.211
- type: ndcg_at_3
value: 32.68
- type: ndcg_at_5
value: 34.981
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.1739999999999995
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.265
- type: recall_at_1
value: 23.339
- type: recall_at_10
value: 48.376999999999995
- type: recall_at_100
value: 76.053
- type: recall_at_1000
value: 92.455
- type: recall_at_3
value: 34.735
- type: recall_at_5
value: 40.71
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.925
- type: map_at_10
value: 26.017000000000003
- type: map_at_100
value: 27.034000000000002
- type: map_at_1000
value: 27.156000000000002
- type: map_at_3
value: 23.604
- type: map_at_5
value: 24.75
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 27.915
- type: mrr_at_100
value: 28.788000000000004
- type: mrr_at_1000
value: 28.877999999999997
- type: mrr_at_3
value: 25.446999999999996
- type: mrr_at_5
value: 26.648
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 30.673000000000002
- type: ndcg_at_100
value: 35.618
- type: ndcg_at_1000
value: 38.517
- type: ndcg_at_3
value: 25.71
- type: ndcg_at_5
value: 27.679
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 18.925
- type: recall_at_10
value: 43.311
- type: recall_at_100
value: 66.308
- type: recall_at_1000
value: 87.49
- type: recall_at_3
value: 29.596
- type: recall_at_5
value: 34.245
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.714
- type: map_at_10
value: 23.194
- type: map_at_100
value: 24.976000000000003
- type: map_at_1000
value: 25.166
- type: map_at_3
value: 19.709
- type: map_at_5
value: 21.523999999999997
- type: mrr_at_1
value: 30.619000000000003
- type: mrr_at_10
value: 42.563
- type: mrr_at_100
value: 43.386
- type: mrr_at_1000
value: 43.423
- type: mrr_at_3
value: 39.555
- type: mrr_at_5
value: 41.268
- type: ndcg_at_1
value: 30.619000000000003
- type: ndcg_at_10
value: 31.836
- type: ndcg_at_100
value: 38.652
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 26.733
- type: ndcg_at_5
value: 28.435
- type: precision_at_1
value: 30.619000000000003
- type: precision_at_10
value: 9.751999999999999
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 19.935
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 13.714
- type: recall_at_10
value: 37.26
- type: recall_at_100
value: 60.546
- type: recall_at_1000
value: 79.899
- type: recall_at_3
value: 24.325
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.462
- type: map_at_10
value: 18.637
- type: map_at_100
value: 26.131999999999998
- type: map_at_1000
value: 27.607
- type: map_at_3
value: 13.333
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.32600000000001
- type: mrr_at_100
value: 74.60900000000001
- type: mrr_at_1000
value: 74.62
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.817
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.028999999999996
- type: ndcg_at_100
value: 44.199
- type: ndcg_at_1000
value: 51.629999999999995
- type: ndcg_at_3
value: 44.113
- type: ndcg_at_5
value: 41.731
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 31.900000000000002
- type: precision_at_100
value: 10.043000000000001
- type: precision_at_1000
value: 1.926
- type: precision_at_3
value: 47.417
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.462
- type: recall_at_10
value: 24.293
- type: recall_at_100
value: 50.146
- type: recall_at_1000
value: 74.034
- type: recall_at_3
value: 14.967
- type: recall_at_5
value: 18.682000000000002
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.84499999999999
- type: f1
value: 42.48106691979349
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.034
- type: map_at_10
value: 82.76
- type: map_at_100
value: 82.968
- type: map_at_1000
value: 82.98299999999999
- type: map_at_3
value: 81.768
- type: map_at_5
value: 82.418
- type: mrr_at_1
value: 80.048
- type: mrr_at_10
value: 87.64999999999999
- type: mrr_at_100
value: 87.712
- type: mrr_at_1000
value: 87.713
- type: mrr_at_3
value: 87.01100000000001
- type: mrr_at_5
value: 87.466
- type: ndcg_at_1
value: 80.048
- type: ndcg_at_10
value: 86.643
- type: ndcg_at_100
value: 87.361
- type: ndcg_at_1000
value: 87.606
- type: ndcg_at_3
value: 85.137
- type: ndcg_at_5
value: 86.016
- type: precision_at_1
value: 80.048
- type: precision_at_10
value: 10.372
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.638
- type: precision_at_5
value: 20.177
- type: recall_at_1
value: 74.034
- type: recall_at_10
value: 93.769
- type: recall_at_100
value: 96.569
- type: recall_at_1000
value: 98.039
- type: recall_at_3
value: 89.581
- type: recall_at_5
value: 91.906
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.5
- type: map_at_10
value: 32.857
- type: map_at_100
value: 34.589
- type: map_at_1000
value: 34.778
- type: map_at_3
value: 29.160999999999998
- type: map_at_5
value: 31.033
- type: mrr_at_1
value: 40.123
- type: mrr_at_10
value: 48.776
- type: mrr_at_100
value: 49.495
- type: mrr_at_1000
value: 49.539
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 47.654
- type: ndcg_at_1
value: 40.123
- type: ndcg_at_10
value: 40.343
- type: ndcg_at_100
value: 46.56
- type: ndcg_at_1000
value: 49.777
- type: ndcg_at_3
value: 37.322
- type: ndcg_at_5
value: 37.791000000000004
- type: precision_at_1
value: 40.123
- type: precision_at_10
value: 11.08
- type: precision_at_100
value: 1.752
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 24.897
- type: precision_at_5
value: 17.809
- type: recall_at_1
value: 20.5
- type: recall_at_10
value: 46.388
- type: recall_at_100
value: 69.552
- type: recall_at_1000
value: 89.011
- type: recall_at_3
value: 33.617999999999995
- type: recall_at_5
value: 38.211
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.135999999999996
- type: map_at_10
value: 61.673
- type: map_at_100
value: 62.562
- type: map_at_1000
value: 62.62
- type: map_at_3
value: 58.467999999999996
- type: map_at_5
value: 60.463
- type: mrr_at_1
value: 78.271
- type: mrr_at_10
value: 84.119
- type: mrr_at_100
value: 84.29299999999999
- type: mrr_at_1000
value: 84.299
- type: mrr_at_3
value: 83.18900000000001
- type: mrr_at_5
value: 83.786
- type: ndcg_at_1
value: 78.271
- type: ndcg_at_10
value: 69.935
- type: ndcg_at_100
value: 73.01299999999999
- type: ndcg_at_1000
value: 74.126
- type: ndcg_at_3
value: 65.388
- type: ndcg_at_5
value: 67.906
- type: precision_at_1
value: 78.271
- type: precision_at_10
value: 14.562
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.841
- type: precision_at_5
value: 27.087
- type: recall_at_1
value: 39.135999999999996
- type: recall_at_10
value: 72.809
- type: recall_at_100
value: 84.86200000000001
- type: recall_at_1000
value: 92.208
- type: recall_at_3
value: 62.76199999999999
- type: recall_at_5
value: 67.718
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.60600000000001
- type: ap
value: 86.6579587804335
- type: f1
value: 90.5938853929307
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.852
- type: map_at_10
value: 33.982
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.167
- type: map_at_3
value: 30.134
- type: map_at_5
value: 32.340999999999994
- type: mrr_at_1
value: 22.479
- type: mrr_at_10
value: 34.594
- type: mrr_at_100
value: 35.672
- type: mrr_at_1000
value: 35.716
- type: mrr_at_3
value: 30.84
- type: mrr_at_5
value: 32.998
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.833000000000006
- type: ndcg_at_100
value: 46.357
- type: ndcg_at_1000
value: 47.637
- type: ndcg_at_3
value: 32.995999999999995
- type: ndcg_at_5
value: 36.919000000000004
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.030999999999999
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.852
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 87.611
- type: recall_at_1000
value: 97.441
- type: recall_at_3
value: 40.583999999999996
- type: recall_at_5
value: 49.992999999999995
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.36069311445507
- type: f1
value: 93.16456330371453
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.74692202462381
- type: f1
value: 58.17903579421599
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.80833893745796
- type: f1
value: 72.70786592684664
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69872225958305
- type: f1
value: 78.61626934504731
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.058658628717694
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.85561739360599
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.290259910144385
- type: mrr
value: 32.44223046102856
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 12.267999999999999
- type: map_at_100
value: 15.557000000000002
- type: map_at_1000
value: 16.98
- type: map_at_3
value: 8.866
- type: map_at_5
value: 10.418
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 52.681
- type: mrr_at_100
value: 53.315999999999995
- type: mrr_at_1000
value: 53.357
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 51.903999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.305
- type: ndcg_at_100
value: 30.825999999999997
- type: ndcg_at_1000
value: 39.393
- type: ndcg_at_3
value: 39.931
- type: ndcg_at_5
value: 37.519999999999996
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.728
- type: precision_at_100
value: 7.932
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 16.195
- type: recall_at_100
value: 31.135
- type: recall_at_1000
value: 61.531000000000006
- type: recall_at_3
value: 10.313
- type: recall_at_5
value: 12.754999999999999
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.216
- type: map_at_10
value: 42.588
- type: map_at_100
value: 43.702999999999996
- type: map_at_1000
value: 43.739
- type: map_at_3
value: 38.177
- type: map_at_5
value: 40.754000000000005
- type: mrr_at_1
value: 31.866
- type: mrr_at_10
value: 45.189
- type: mrr_at_100
value: 46.056000000000004
- type: mrr_at_1000
value: 46.081
- type: mrr_at_3
value: 41.526999999999994
- type: mrr_at_5
value: 43.704
- type: ndcg_at_1
value: 31.837
- type: ndcg_at_10
value: 50.178
- type: ndcg_at_100
value: 54.98800000000001
- type: ndcg_at_1000
value: 55.812
- type: ndcg_at_3
value: 41.853
- type: ndcg_at_5
value: 46.153
- type: precision_at_1
value: 31.837
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.911000000000001
- type: recall_at_1
value: 28.216
- type: recall_at_10
value: 70.8
- type: recall_at_100
value: 91.857
- type: recall_at_1000
value: 97.941
- type: recall_at_3
value: 49.196
- type: recall_at_5
value: 59.072
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.22800000000001
- type: map_at_10
value: 85.115
- type: map_at_100
value: 85.72
- type: map_at_1000
value: 85.737
- type: map_at_3
value: 82.149
- type: map_at_5
value: 84.029
- type: mrr_at_1
value: 81.96
- type: mrr_at_10
value: 88.00200000000001
- type: mrr_at_100
value: 88.088
- type: mrr_at_1000
value: 88.089
- type: mrr_at_3
value: 87.055
- type: mrr_at_5
value: 87.715
- type: ndcg_at_1
value: 82.01
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
value: 89.91
- type: ndcg_at_1000
value: 90.013
- type: ndcg_at_3
value: 85.957
- type: ndcg_at_5
value: 87.56
- type: precision_at_1
value: 82.01
- type: precision_at_10
value: 13.462
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.732000000000003
- type: recall_at_1
value: 71.22800000000001
- type: recall_at_10
value: 95.69
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.632
- type: recall_at_5
value: 92.117
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 52.31768034366916
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.640266772723606
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7780000000000005
- type: map_at_10
value: 12.299
- type: map_at_100
value: 14.363000000000001
- type: map_at_1000
value: 14.71
- type: map_at_3
value: 8.738999999999999
- type: map_at_5
value: 10.397
- type: mrr_at_1
value: 23.599999999999998
- type: mrr_at_10
value: 34.845
- type: mrr_at_100
value: 35.916
- type: mrr_at_1000
value: 35.973
- type: mrr_at_3
value: 31.7
- type: mrr_at_5
value: 33.535
- type: ndcg_at_1
value: 23.599999999999998
- type: ndcg_at_10
value: 20.522000000000002
- type: ndcg_at_100
value: 28.737000000000002
- type: ndcg_at_1000
value: 34.596
- type: ndcg_at_3
value: 19.542
- type: ndcg_at_5
value: 16.958000000000002
- type: precision_at_1
value: 23.599999999999998
- type: precision_at_10
value: 10.67
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.367
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 14.879999999999999
- type: recall_at_1
value: 4.7780000000000005
- type: recall_at_10
value: 21.617
- type: recall_at_100
value: 45.905
- type: recall_at_1000
value: 74.42
- type: recall_at_3
value: 11.148
- type: recall_at_5
value: 15.082999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.22372750297885
- type: cos_sim_spearman
value: 79.40972617119405
- type: euclidean_pearson
value: 80.6101072020434
- type: euclidean_spearman
value: 79.53844217225202
- type: manhattan_pearson
value: 80.57265975286111
- type: manhattan_spearman
value: 79.46335611792958
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.43713315520749
- type: cos_sim_spearman
value: 77.44128693329532
- type: euclidean_pearson
value: 81.63869928101123
- type: euclidean_spearman
value: 77.29512977961515
- type: manhattan_pearson
value: 81.63704185566183
- type: manhattan_spearman
value: 77.29909412738657
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.59451537860527
- type: cos_sim_spearman
value: 82.97994638856723
- type: euclidean_pearson
value: 82.89478688288412
- type: euclidean_spearman
value: 83.58740751053104
- type: manhattan_pearson
value: 82.69140840941608
- type: manhattan_spearman
value: 83.33665956040555
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.00756527711764
- type: cos_sim_spearman
value: 81.83560996841379
- type: euclidean_pearson
value: 82.07684151976518
- type: euclidean_spearman
value: 82.00913052060511
- type: manhattan_pearson
value: 82.05690778488794
- type: manhattan_spearman
value: 82.02260252019525
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.13710262895447
- type: cos_sim_spearman
value: 87.26412811156248
- type: euclidean_pearson
value: 86.94151453230228
- type: euclidean_spearman
value: 87.5363796699571
- type: manhattan_pearson
value: 86.86989424083748
- type: manhattan_spearman
value: 87.47315940781353
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0230597603627
- type: cos_sim_spearman
value: 84.93344499318864
- type: euclidean_pearson
value: 84.23754743431141
- type: euclidean_spearman
value: 85.09707376597099
- type: manhattan_pearson
value: 84.04325160987763
- type: manhattan_spearman
value: 84.89353071339909
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.75620824563921
- type: cos_sim_spearman
value: 87.15065513706398
- type: euclidean_pearson
value: 88.26281533633521
- type: euclidean_spearman
value: 87.51963738643983
- type: manhattan_pearson
value: 88.25599267618065
- type: manhattan_spearman
value: 87.58048736047483
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.74645319195137
- type: cos_sim_spearman
value: 65.29996325037214
- type: euclidean_pearson
value: 67.04297794086443
- type: euclidean_spearman
value: 65.43841726694343
- type: manhattan_pearson
value: 67.39459955690904
- type: manhattan_spearman
value: 65.92864704413651
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31291020270801
- type: cos_sim_spearman
value: 85.86473738688068
- type: euclidean_pearson
value: 85.65537275064152
- type: euclidean_spearman
value: 86.13087454209642
- type: manhattan_pearson
value: 85.43946955047609
- type: manhattan_spearman
value: 85.91568175344916
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.93798118350695
- type: mrr
value: 95.93536274908824
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.594
- type: map_at_10
value: 66.81899999999999
- type: map_at_100
value: 67.368
- type: map_at_1000
value: 67.4
- type: map_at_3
value: 64.061
- type: map_at_5
value: 65.47
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.219
- type: mrr_at_100
value: 68.655
- type: mrr_at_1000
value: 68.684
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.289
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.275
- type: ndcg_at_100
value: 73.642
- type: ndcg_at_1000
value: 74.373
- type: ndcg_at_3
value: 66.521
- type: ndcg_at_5
value: 68.581
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 57.594
- type: recall_at_10
value: 83.622
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.64399999999999
- type: recall_at_5
value: 75.983
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85841584158416
- type: cos_sim_ap
value: 96.66996142314342
- type: cos_sim_f1
value: 92.83208020050125
- type: cos_sim_precision
value: 93.06532663316584
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.85841584158416
- type: dot_ap
value: 96.6775307676576
- type: dot_f1
value: 92.69289729177312
- type: dot_precision
value: 94.77533960292581
- type: dot_recall
value: 90.7
- type: euclidean_accuracy
value: 99.86138613861387
- type: euclidean_ap
value: 96.6338454403108
- type: euclidean_f1
value: 92.92214357937311
- type: euclidean_precision
value: 93.96728016359918
- type: euclidean_recall
value: 91.9
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_ap
value: 96.60370449645053
- type: manhattan_f1
value: 92.91177970423253
- type: manhattan_precision
value: 94.7970863683663
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.6775307676576
- type: max_f1
value: 92.92214357937311
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.77977058695198
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.2725272535638
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.64052466362125
- type: mrr
value: 54.533067014684654
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677624219206578
- type: cos_sim_spearman
value: 30.121368518123447
- type: dot_pearson
value: 30.69870088041608
- type: dot_spearman
value: 29.61284927093751
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.855
- type: map_at_100
value: 9.885
- type: map_at_1000
value: 23.416999999999998
- type: map_at_3
value: 0.637
- type: map_at_5
value: 1.024
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 75.899
- type: ndcg_at_100
value: 55.115
- type: ndcg_at_1000
value: 48.368
- type: ndcg_at_3
value: 79.704
- type: ndcg_at_5
value: 78.39699999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 79.60000000000001
- type: precision_at_100
value: 56.06
- type: precision_at_1000
value: 21.206
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 13.297
- type: recall_at_1000
value: 44.979
- type: recall_at_3
value: 0.6689999999999999
- type: recall_at_5
value: 1.106
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.258
- type: map_at_10
value: 10.439
- type: map_at_100
value: 16.89
- type: map_at_1000
value: 18.407999999999998
- type: map_at_3
value: 5.668
- type: map_at_5
value: 7.718
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.159
- type: mrr_at_100
value: 51.714000000000006
- type: mrr_at_1000
value: 51.714000000000006
- type: mrr_at_3
value: 47.959
- type: mrr_at_5
value: 50.407999999999994
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 26.037
- type: ndcg_at_100
value: 37.924
- type: ndcg_at_1000
value: 49.126999999999995
- type: ndcg_at_3
value: 30.631999999999998
- type: ndcg_at_5
value: 28.571
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.529
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.258
- type: recall_at_10
value: 16.554
- type: recall_at_100
value: 48.439
- type: recall_at_1000
value: 82.80499999999999
- type: recall_at_3
value: 7.283
- type: recall_at_5
value: 10.732
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.8858
- type: ap
value: 13.835684144362109
- type: f1
value: 53.803351693244586
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.50650820599886
- type: f1
value: 60.84357825979259
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.52131044852134
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59337187816654
- type: cos_sim_ap
value: 73.23925826533437
- type: cos_sim_f1
value: 67.34693877551021
- type: cos_sim_precision
value: 62.40432237730752
- type: cos_sim_recall
value: 73.13984168865434
- type: dot_accuracy
value: 85.31322644096085
- type: dot_ap
value: 72.30723963807422
- type: dot_f1
value: 66.47051612112296
- type: dot_precision
value: 62.0792305930845
- type: dot_recall
value: 71.53034300791556
- type: euclidean_accuracy
value: 85.61125350181797
- type: euclidean_ap
value: 73.32843720487845
- type: euclidean_f1
value: 67.36549633745895
- type: euclidean_precision
value: 64.60755813953489
- type: euclidean_recall
value: 70.36939313984169
- type: manhattan_accuracy
value: 85.63509566668654
- type: manhattan_ap
value: 73.16658488311325
- type: manhattan_f1
value: 67.20597386434349
- type: manhattan_precision
value: 63.60424028268551
- type: manhattan_recall
value: 71.2401055408971
- type: max_accuracy
value: 85.63509566668654
- type: max_ap
value: 73.32843720487845
- type: max_f1
value: 67.36549633745895
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.33779640625606
- type: cos_sim_ap
value: 84.83868375898157
- type: cos_sim_f1
value: 77.16506154017773
- type: cos_sim_precision
value: 74.62064005753327
- type: cos_sim_recall
value: 79.88912842623961
- type: dot_accuracy
value: 88.02732176815307
- type: dot_ap
value: 83.95089283763002
- type: dot_f1
value: 76.29635101196631
- type: dot_precision
value: 73.31771720613288
- type: dot_recall
value: 79.52725592854944
- type: euclidean_accuracy
value: 88.44452206310397
- type: euclidean_ap
value: 84.98384576824827
- type: euclidean_f1
value: 77.29311047696697
- type: euclidean_precision
value: 74.51232583065381
- type: euclidean_recall
value: 80.28949799815214
- type: manhattan_accuracy
value: 88.47362906042613
- type: manhattan_ap
value: 84.91421462218432
- type: manhattan_f1
value: 77.05107637204792
- type: manhattan_precision
value: 74.74484256243214
- type: manhattan_recall
value: 79.50415768401602
- type: max_accuracy
value: 88.47362906042613
- type: max_ap
value: 84.98384576824827
- type: max_f1
value: 77.29311047696697
license: mit
language:
- en
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced; please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker models**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instructions.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding instructions during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Inference & fine-tune | Description | Query instruction for retrieval [1] |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-functionality (dense retrieval, sparse retrieval, multi-vector (ColBERT)), multi-linguality, and multi-granularity (8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
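A minimal sketch of this two-stage pipeline with FlagEmbedding (the corpus here is a placeholder and the top-k sizes are arbitrary):
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["passage 1 ...", "passage 2 ...", "passage 3 ..."]  # placeholder corpus
query = "what is panda?"

# Stage 1: dense retrieval with the bi-encoder
retriever = FlagModel('BAAI/bge-large-en-v1.5',
                      query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = retriever.encode_queries([query])
p_emb = retriever.encode(corpus)
top_k = np.argsort(-(q_emb @ p_emb.T)[0])[:100]  # indices of the top-100 candidates

# Stage 2: re-rank the candidates with the cross-encoder
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
scores = reranker.compute_score([[query, corpus[i]] for i in top_k])
top_3 = [corpus[i] for i in top_k[np.argsort(scores)[::-1][:3]]]
print(top_3)
```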
All models have been uploaded to the Hugging Face Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot access the Hugging Face Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model (a sketch of the data format is shown below).
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high enough, consider using or fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
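For reference, the training data in the linked example is a JSONL file where each line pairs a query with positive and (mined) hard-negative passages. A minimal sketch of writing one such record (the texts are placeholders):
```python
import json

example = {
    "query": "what is panda?",
    "pos": ["The giant panda is a bear species endemic to China."],  # positive passages
    "neg": ["Paris is the capital of France."],                      # (mined) hard negatives
}
with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```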
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates this issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity scores of the current BGE models mostly fall in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
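For instance, assuming normalized embeddings (the FlagModel default), so that the inner product equals cosine similarity, a simple filter might look like the sketch below; the 0.85 threshold is only a placeholder to be tuned on your own data:
```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
sentences = ["样例数据-1", "样例数据-2", "样例数据-3"]
embeddings = model.encode(sentences)  # normalized, so inner product = cosine similarity
similarity = embeddings @ embeddings.T

threshold = 0.85  # placeholder; choose based on the similarity distribution of your data
similar_pairs = [(i, j) for i in range(len(sentences))
                 for j in range(i + 1, len(sentences))
                 if similarity[i, j] >= threshold]
print(similar_pairs)
```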
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used;
omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
So for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need to add the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If that doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For the s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# The corpus can still be encoded with encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable and fall back to CPU.
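For example (set the variable before the model is loaded):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # encode on GPU 0 only
# os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs to force CPU encoding

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
```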
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
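The wrapper then works like any other LangChain embedding; `embed_query` prepends the query instruction automatically, while `embed_documents` does not:
```python
query_embedding = model.embed_query("什么是熊猫?")                   # instruction is prepended
doc_embeddings = model.embed_documents(["样例文档-1", "样例文档-2"])  # no instruction added
```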
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
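The raw outputs are unbounded logits; if you prefer scores in (0, 1), one option is to apply a sigmoid, continuing the example above:
```python
probabilities = torch.sigmoid(scores)  # map the unbounded logits into (0, 1)
print(probabilities)
```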
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5')
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort should match up to numerical precision
```
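To check that the two backends agree and to extract the final sentence embeddings, a sketch along these lines can follow (the `1e-5` tolerance is an arbitrary choice):
```python
# The two backends should agree up to numerical precision
assert torch.allclose(model_output.last_hidden_state,
                      model_output_ort.last_hidden_state, atol=1e-5)

# CLS pooling + L2 normalization, as for the PyTorch model above
sentence_embeddings = torch.nn.functional.normalize(
    model_output_ort.last_hidden_state[:, 0], p=2, dim=1)
```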
#### Usage via infinity
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
We recommend `device="cuda", engine="torch"` (with flash attention) for GPU inference and `device="cpu", engine="optimum"` for ONNX inference.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
EngineArgs(model_name_or_path = "BAAI/bge-small-en-v1.5", device="cpu", engine="optimum" # or engine="torch"
))
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embedding, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and then train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details of bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge. |
mosaicml/mpt-7b-chat | mosaicml | "2024-03-05T20:25:13Z" | 4,065 | 512 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:Hello-SimpleAI/HC3",
"dataset:tatsu-lab/alpaca",
"dataset:Anthropic/hh-rlhf",
"dataset:victor123/evol_instruct_70k",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-04T23:56:17Z" | ---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Chat
MPT-7B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**SamIAm85**:
>I want you to come up with a tweet based on this summary of the article:
>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
>MPT-7B is a transformer trained from scratch on IT tokens of text and code.
>It is open source and it matches the quality of LLaMA-7B.
>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens!
**MPT-7B-Instruct**:
>Here's my suggestion for a tweet:
>"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"
**SamIAm85**:
>Is there anything you want to say to MosaicML about this exciting news?
**MPT-7B-Instruct**:
> As an AI language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
> This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
> Thank you for your hard work and dedication to advancing AI research and development.
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
### Training Configuration
This model was trained on 8 A100-80GBs for about 8.2 hours, followed by training for 6.7 hours on 32 A100-40GBs using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
FL33TW00D-HF/phi2 | FL33TW00D-HF | "2024-04-23T10:26:47Z" | 4,065 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-04-08T08:23:53Z" | Entry not found |
Helsinki-NLP/opus-mt-en-sk | Helsinki-NLP | "2023-08-16T11:31:06Z" | 4,064 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sk
* source languages: en
* target languages: sk
* OPUS readme: [en-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.eval.txt)
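A minimal usage sketch with the Hugging Face `transformers` Marian integration (the input sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```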
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sk | 36.8 | 0.578 |
|