| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
stablediffusionapi/queratogray-sketch | stablediffusionapi | "2023-09-09T12:29:39Z" | 26,000 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-09T12:28:30Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Queratogray Sketch API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/). No payment is needed.
Replace the key in the code below and set **model_id** to `"queratogray-sketch"`.
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try the model for free: [Generate Images](https://stablediffusionapi.com/models/queratogray-sketch)
Model link: [View model](https://stablediffusionapi.com/models/queratogray-sketch)
Credits: [View credits](https://civitai.com/?query=Queratogray%20Sketch)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "queratogray-sketch",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
rubra-ai/Qwen2-7B-Instruct-GGUF | rubra-ai | "2024-07-01T06:18:34Z" | 25,998 | 4 | null | [
"gguf",
"function-calling",
"tool-calling",
"agentic",
"rubra",
"conversational",
"en",
"zh",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2024-06-11T22:11:38Z" | ---
license: apache-2.0
model-index:
- name: Rubra-Qwen2-7B-Instruct
results:
- task:
type: text-generation
dataset:
type: MMLU
name: MMLU
metrics:
- type: 5-shot
value: 68.88
verified: false
- task:
type: text-generation
dataset:
type: GPQA
name: GPQA
metrics:
- type: 0-shot
value: 30.36
verified: false
- task:
type: text-generation
dataset:
type: GSM-8K
name: GSM-8K
metrics:
- type: 8-shot, CoT
value: 75.82
verified: false
- task:
type: text-generation
dataset:
type: MATH
name: MATH
metrics:
- type: 4-shot, CoT
value: 28.72
verified: false
- task:
type: text-generation
dataset:
type: MT-bench
name: MT-bench
metrics:
- type: GPT-4 as Judge
value: 8.08
verified: false
tags:
- function-calling
- tool-calling
- agentic
- rubra
- conversational
language:
- en
- zh
---
# Qwen2 7B Instruct GGUF
Original model: [rubra-ai/Qwen2-7B-Instruct](https://huggingface.co/rubra-ai/Qwen2-7B-Instruct)
## Model description
The model is the result of further post-training [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct). It is capable of complex multi-turn tool/function calling.
## Training
The model was post-trained (freeze tuned & DPO) on a proprietary dataset consisting of diverse function calling, chat, and instruct data.
## How to use
Refer to https://docs.rubra.ai/inference/llamacpp for usage. Feel free to ask questions or open issues in our GitHub repo: https://github.com/rubra-ai/rubra
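As a rough starting point, here is a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename below is an assumption, and the Rubra docs above describe the recommended setup (tool calling may require Rubra's own llama.cpp tooling):
```python
from llama_cpp import Llama

# Filename is an assumption; pick any GGUF file from this repo
llm = Llama(model_path="./Qwen2-7B-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello! What can you do?", max_tokens=128)
print(out["choices"][0]["text"])
```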
## Limitations and Bias
While the model performs well on a wide range of tasks, it may still produce biased or incorrect outputs. Users should exercise caution and critical judgment when using the model in sensitive or high-stakes applications. The model's outputs are influenced by the data it was trained on, which may contain inherent biases.
## Ethical Considerations
Users should ensure that the deployment of this model adheres to ethical guidelines and consider the potential societal impact of the generated text. Misuse of the model for generating harmful or misleading content is strongly discouraged.
## Acknowledgements
We would like to thank Alibaba Cloud for the model.
## Contact Information
For questions or comments about the model, please reach out to [the rubra team](mailto:[email protected]).
## Citation
If you use this work, please cite it as:
```
@misc {rubra_ai_2024,
author = { Sanjay Nadhavajhala and Yingbei Tong },
title = { Rubra-Qwen2-7B-Instruct },
year = 2024,
url = { https://huggingface.co/rubra-ai/Qwen2-7B-Instruct },
doi = { 10.57967/hf/2658 },
publisher = { Hugging Face }
}
``` |
phamson02/colbert2.1_290000 | phamson02 | "2024-06-16T17:32:10Z" | 25,968 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"endpoints_compatible",
"region:us"
] | null | "2024-06-16T15:32:43Z" | Entry not found |
bigcode/gpt_bigcode-santacoder | bigcode | "2023-06-08T09:20:22Z" | 25,941 | 25 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_bigcode",
"text-generation",
"code",
"dataset:bigcode/the-stack",
"license:openrail",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-06T01:35:04Z" | ---
license: openrail
datasets:
- bigcode/the-stack
language:
- code
programming_language:
- Java
- JavaScript
- Python
pipeline_tag: text-generation
inference: false
model-index:
- name: SantaCoder
results:
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.18
verified: false
- name: pass@10
type: pass@10
value: 0.29
verified: false
- name: pass@100
type: pass@100
value: 0.49
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.35
verified: false
- name: pass@10
type: pass@10
value: 0.58
verified: false
- name: pass@100
type: pass@100
value: 0.77
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 0.16
verified: false
- name: pass@10
type: pass@10
value: 0.27
verified: false
- name: pass@100
type: pass@100
value: 0.47
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Javascript)
metrics:
- name: pass@1
type: pass@1
value: 0.28
verified: false
- name: pass@10
type: pass@10
value: 0.51
verified: false
- name: pass@100
type: pass@100
value: 0.70
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.15
verified: false
- name: pass@10
type: pass@10
value: 0.26
verified: false
- name: pass@100
type: pass@100
value: 0.41
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.28
verified: false
- name: pass@10
type: pass@10
value: 0.44
verified: false
- name: pass@100
type: pass@100
value: 0.59
verified: false
- task:
type: text-generation
dataset:
type: loubnabnl/humaneval_infilling
name: HumanEval FIM (Python)
metrics:
- name: single_line
type: exact_match
value: 0.44
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval FIM (Java)
metrics:
- name: single_line
type: exact_match
value: 0.62
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval FIM (JavaScript)
metrics:
- name: single_line
type: exact_match
value: 0.60
verified: false
- task:
type: text-generation
dataset:
type: code_x_glue_ct_code_to_text
name: CodeXGLUE code-to-text (Python)
metrics:
- name: BLEU
type: bleu
value: 18.13
verified: false
---
# SantaCoder

Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo).
# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
# Model Summary
This is the same model as [SantaCoder](https://huggingface.co/bigcode/santacoder) but it can be loaded with transformers >=4.28.1 to use the GPTBigCode architecture.
We refer the reader to the [SantaCoder model page](https://huggingface.co/bigcode/santacoder) for full documentation about this model.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://t.co/YV3pzUbYOr)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** Python, Java, and JavaScript
There are two versions (branches) of the model:
* `main`: Uses the `gpt_bigcode` model. [Requires the bigcode fork of transformers](https://github.com/bigcode-project/transformers).
* `main_custom`: Packaged with its modeling code. Requires `transformers>=4.27`.
Alternatively, it can run on older versions by setting the configuration parameter `activation_function = "gelu_pytorch_tanh"`.
# Use
## Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.
You should phrase requests the way they occur in source code, such as comments (e.g. `# the following function computes the sqrt`), or write a function signature and docstring and let the model complete the function body.
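A minimal completion sketch with 🤗 Transformers (assuming `transformers>=4.28.1` as noted above; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/gpt_bigcode-santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Phrase the request as source code, not as a natural-language command
prompt = "# the following function computes the sqrt\ndef sqrt_newton(x):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```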
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from, so you can apply the proper attribution to your code.
# Limitations
The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source is English, although other languages are also present. The model can generate code snippets given some context, but the generated code is not guaranteed to work as intended: it can be inefficient and contain bugs or exploits.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 600K
- **Pretraining tokens:** 236 billion
- **Precision:** float16
## Hardware
- **GPUs:** 96 Tesla V100
- **Training time:** 6.2 days
- **Total FLOPS:** 2.1 × 10^21
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the CodeML OpenRAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
|
deepset/roberta-base-squad2-distilled | deepset | "2023-09-26T08:34:45Z" | 25,921 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"exbert",
"en",
"dataset:squad_v2",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
language: en
license: mit
tags:
- exbert
datasets:
- squad_v2
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
model-index:
- name: deepset/roberta-base-squad2-distilled
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 80.8593
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVjNzkxNmNiNDkzNzdiYjJjZGM3ZTViMGJhOGM2ZjFmYjg1MjYxMDM2YzM5NWMwNDIyYzNlN2QwNGYyNDMzZSIsInZlcnNpb24iOjF9.Rgww8tf8D7nF2dh2U_DMrFzmp87k8s7RFibrDXSvQyA66PGWXwjlsd1552lzjHnNV5hvHUM1-h3PTuY_5p64BA
- type: f1
value: 84.0104
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTAyZDViNWYzNjA4OWQ5MzgyYmQ2ZDlhNWRhMTIzYTYxYzViMmI4NWE4ZGU5MzVhZTAwNTRlZmRlNWUwMjI0ZSIsInZlcnNpb24iOjF9.Er21BNgJ3jJXLuZtpubTYq9wCwO1i_VLQFwS5ET0e4eAYVVj0aOA40I5FvP5pZac3LjkCnVacxzsFWGCYVmnDA
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 86.225
name: Exact Match
- type: f1
value: 92.483
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 29.900
name: Exact Match
- type: f1
value: 41.183
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 79.071
name: Exact Match
- type: f1
value: 84.472
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts amazon
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 70.733
name: Exact Match
- type: f1
value: 83.958
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts new_wiki
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 82.011
name: Exact Match
- type: f1
value: 91.092
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts nyt
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 84.203
name: Exact Match
- type: f1
value: 91.521
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts reddit
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 72.029
name: Exact Match
- type: f1
value: 83.454
name: F1
---
## Overview
**Language model:** deepset/roberta-base-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 4x V100 GPU
**Published**: Dec 8th, 2021
## Details
- Haystack's distillation feature was used for training. deepset/roberta-large-squad2 was used as the teacher model.
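A minimal question-answering sketch with 🤗 Transformers (the question and context are illustrative):
```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2-distilled"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="What was used as the teacher model?",
    context="Haystack's distillation feature was used for training. "
            "deepset/roberta-large-squad2 was used as the teacher model.",
)
print(result)  # answer span, score, and character offsets
```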
## Hyperparameters
```
batch_size = 80
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1.5
distillation_loss_weight = 0.75
```
## Performance
```
"exact": 79.8366040596311
"f1": 83.916407079888
```
## Authors
**Timo Möller:** [email protected]
**Julian Risch:** [email protected]
**Malte Pietsch:** [email protected]
**Michel Bartels:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf | RichardErkhov | "2024-06-25T07:58:53Z" | 25,883 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-24T21:45:25Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-15B-Instruct-zeroed - GGUF
- Model creator: https://huggingface.co/elinas/
- Original model: https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-15B-Instruct-zeroed.Q2_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q2_K.gguf) | Q2_K | 5.35GB |
| [Llama-3-15B-Instruct-zeroed.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.IQ3_XS.gguf) | IQ3_XS | 5.94GB |
| [Llama-3-15B-Instruct-zeroed.IQ3_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.IQ3_S.gguf) | IQ3_S | 6.24GB |
| [Llama-3-15B-Instruct-zeroed.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q3_K_S.gguf) | Q3_K_S | 6.21GB |
| [Llama-3-15B-Instruct-zeroed.IQ3_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.IQ3_M.gguf) | IQ3_M | 6.43GB |
| [Llama-3-15B-Instruct-zeroed.Q3_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q3_K.gguf) | Q3_K | 6.87GB |
| [Llama-3-15B-Instruct-zeroed.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q3_K_M.gguf) | Q3_K_M | 6.87GB |
| [Llama-3-15B-Instruct-zeroed.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q3_K_L.gguf) | Q3_K_L | 7.43GB |
| [Llama-3-15B-Instruct-zeroed.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.IQ4_XS.gguf) | IQ4_XS | 7.68GB |
| [Llama-3-15B-Instruct-zeroed.Q4_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q4_0.gguf) | Q4_0 | 8.0GB |
| [Llama-3-15B-Instruct-zeroed.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.IQ4_NL.gguf) | IQ4_NL | 8.08GB |
| [Llama-3-15B-Instruct-zeroed.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q4_K_S.gguf) | Q4_K_S | 3.58GB |
| [Llama-3-15B-Instruct-zeroed.Q4_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q4_K.gguf) | Q4_K | 8.48GB |
| [Llama-3-15B-Instruct-zeroed.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q4_K_M.gguf) | Q4_K_M | 8.48GB |
| [Llama-3-15B-Instruct-zeroed.Q4_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q4_1.gguf) | Q4_1 | 8.84GB |
| [Llama-3-15B-Instruct-zeroed.Q5_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q5_0.gguf) | Q5_0 | 9.68GB |
| [Llama-3-15B-Instruct-zeroed.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q5_K_S.gguf) | Q5_K_S | 9.68GB |
| [Llama-3-15B-Instruct-zeroed.Q5_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q5_K.gguf) | Q5_K | 9.93GB |
| [Llama-3-15B-Instruct-zeroed.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q5_K_M.gguf) | Q5_K_M | 9.93GB |
| [Llama-3-15B-Instruct-zeroed.Q5_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q5_1.gguf) | Q5_1 | 10.53GB |
| [Llama-3-15B-Instruct-zeroed.Q6_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q6_K.gguf) | Q6_K | 11.48GB |
| [Llama-3-15B-Instruct-zeroed.Q8_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf/blob/main/Llama-3-15B-Instruct-zeroed.Q8_0.gguf) | Q8_0 | 14.86GB |
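As a quick start, here is a minimal sketch for fetching one of the files above with `huggingface_hub` (the quant choice is illustrative); the downloaded path can then be passed to llama.cpp or llama-cpp-python:
```python
from huggingface_hub import hf_hub_download

# Quant choice is illustrative; any filename from the table above works
path = hf_hub_download(
    repo_id="RichardErkhov/elinas_-_Llama-3-15B-Instruct-zeroed-gguf",
    filename="Llama-3-15B-Instruct-zeroed.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime of choice
```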
Original model description:
---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# Llama-3-15B-Instruct-zeroed
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method while zeroing `o_proj` and `down_proj`, which led to a decrease in perplexity (good) compared to similar 15B merges. This was a recommendation from [Charles Goddard](https://huggingface.co/chargoddard); thank you for sharing the merging method, and thanks to Toasty Pigeon for bringing it to my attention!
## Finetuned Version
A finetuned version of this model can be found at [elinas/Llama-3-15B-Instruct-zeroed-ft](https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft) which seems to improve performance.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```
|
diffusers/sdxl-instructpix2pix-768 | diffusers | "2023-08-30T09:42:20Z" | 25,836 | 36 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"instruct-pix2pix",
"dataset:timbrooks/instructpix2pix-clip-filtered",
"arxiv:2307.01952",
"arxiv:2211.09800",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"diffusers:StableDiffusionXLInstructPix2PixPipeline",
"region:us"
] | text-to-image | "2023-08-23T05:24:35Z" | ---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- instruct-pix2pix
inference: false
datasets:
- timbrooks/instructpix2pix-clip-filtered
---
# SDXL InstructPix2Pix (768x768)
Instruction fine-tuning of [Stable Diffusion XL (SDXL)](https://hf.co/papers/2307.01952) à la [InstructPix2Pix](https://huggingface.co/papers/2211.09800). Some results below:
**Edit instruction**: *"Turn sky into a cloudy one"*

**Edit instruction**: *"Make it a picasso painting"*

**Edit instruction**: *"make the person older"*

## Usage in 🧨 diffusers
Make sure to install the libraries first:
```bash
pip install accelerate transformers
pip install git+https://github.com/huggingface/diffusers
```
```python
import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline
from diffusers.utils import load_image
resolution = 768
image = load_image(
    "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
).resize((resolution, resolution))
edit_instruction = "Turn sky into a cloudy one"
pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
).to("cuda")
edited_image = pipe(
    prompt=edit_instruction,
    image=image,
    height=resolution,
    width=resolution,
    guidance_scale=3.0,
    image_guidance_scale=1.5,
    num_inference_steps=30,
).images[0]
edited_image.save("edited_image.png")
```
To know more, refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pix2pix).
🚨 Note that this checkpoint is experimental in nature and there's a lot of room for improvements. Please use the "Discussions" tab of this repository to open issues and discuss. 🚨
## Training
We fine-tuned SDXL using the InstructPix2Pix training methodology for 15000 steps using a fixed learning rate of 5e-6 on an image resolution of 768x768.
Our training scripts and other utilities can be found [here](https://github.com/sayakpaul/instructpix2pix-sdxl/tree/b9acc91d6ddf1f2aa2f9012b68216deb40e178f3) and they were built on top of our [official training script](https://huggingface.co/docs/diffusers/main/en/training/instructpix2pix).
Our training logs are available on Weights and Biases [here](https://wandb.ai/sayakpaul/instruct-pix2pix-sdxl-new/runs/sw53gxmc). Refer to this link for details on all the hyperparameters.
### Training data
We used this dataset: [timbrooks/instructpix2pix-clip-filtered](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered).
### Compute
One 8xA100 machine
### Batch size
Data parallel with a single GPU batch size of 8 for a total batch size of 32.
### Mixed precision
FP16 |
facebook/wav2vec2-conformer-rel-pos-large | facebook | "2022-12-07T14:01:31Z" | 25,832 | 8 | transformers | [
"transformers",
"pytorch",
"wav2vec2-conformer",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2022-04-17T15:54:03Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Conformer-Large with Relative Position Embeddings
Wav2Vec2 Conformer with relative position embeddings, pretrained on 960 hours of Librispeech with 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
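As a quick sanity check before fine-tuning, here is a minimal feature-extraction sketch (the silent dummy audio is illustrative, and it assumes the checkpoint ships a feature-extractor config):
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel

model_id = "facebook/wav2vec2-conformer-rel-pos-large"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ConformerModel.from_pretrained(model_id)

# One second of silent 16kHz audio as a stand-in for real speech
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```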
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
OpenAssistant/reward-model-deberta-v3-large-v2 | OpenAssistant | "2023-02-01T00:55:05Z" | 25,794 | 190 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"reward-model",
"reward_model",
"RLHF",
"en",
"dataset:openai/summarize_from_feedback",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"dataset:Anthropic/hh-rlhf",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-02-01T00:13:05Z" | ---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
- Anthropic/hh-rlhf
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---
# Reward model trained from human feedback
A reward model (RM) is trained to predict which generated answer a human would judge as better, given a question.
RMs are useful in these domains:
- QA model evaluation
- serving as the reward score in RLHF
- detecting potentially toxic responses via ranking
All models are trained on these datasets with the same split seed across datasets (when a validation split was not available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
- [anthropic_hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
**Toxic response detection**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question = "I just came out of from jail, any suggestion of my future?"
helpful = "It's great to hear that you have been released from jail."
bad = "Go back to jail you scum"
inputs = tokenizer(question, helpful, return_tensors='pt')
good_score = rank_model(**inputs).logits[0].cpu().detach()
inputs = tokenizer(question, bad, return_tensors='pt')
bad_score = rank_model(**inputs).logits[0].cpu().detach()
print(good_score > bad_score) # tensor([True])
```
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SytheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | [Anthropic RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) |
|---|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 | 54.33 |
| **[deberta-v3-large-v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2)** | **61.57** | 71.47 | 99.88 | **69.25** |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | **99.94** | 55.62 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 | 54.51 |
| deberta-v2-xxlarge | 58.67 | **73.27** | 99.77 | 66.74 |
It is likely that SyntheticGPT has some kind of surface pattern in the chosen-rejected pairs which makes it trivial to identify the better answer.
# Other
Sincere thanks to [stability.ai](https://stability.ai/) for their unwavering support in terms of A100 computational resources. Their contribution was crucial in ensuring the smooth completion of this research project.
|
PrunaAI/allenai-tulu-2-dpo-13b-GGUF-smashed | PrunaAI | "2024-06-28T21:52:34Z" | 25,771 | 0 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | "2024-06-28T20:33:36Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the allenai/tulu-2-dpo-13b model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/allenai-tulu-2-dpo-13b-GGUF-smashed and below it, a specific filename to download, such as: tulu-2-dpo-13b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/allenai-tulu-2-dpo-13b-GGUF-smashed tulu-2-dpo-13b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/allenai-tulu-2-dpo-13b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/allenai-tulu-2-dpo-13b-GGUF-smashed tulu-2-dpo-13b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m tulu-2-dpo-13b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./tulu-2-dpo-13b.IQ3_M.gguf",  # Download the model file first
    n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    "<s>[INST] {prompt} [/INST]",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True  # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./tulu-2-dpo-13b.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
facebook/mbart-large-50 | facebook | "2023-03-28T08:28:50Z" | 25,763 | 122 | transformers | [
"transformers",
"pytorch",
"tf",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
license: mit
tags:
- mbart-50
---
# mBART-50
mBART-50 is a multilingual sequence-to-sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
## Model description
mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning in one direction, a pre-trained model is fine-tuned in many directions simultaneously. mBART-50 was created by taking the original mBART model and extending it with an extra 25 languages to support multilingual machine translation across 50 languages. The pre-training objective is explained below.
**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN}`, where each `Di` is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the order of the original sentences, and second, a novel in-filling scheme where spans of text are replaced with a single mask token. The model is then tasked with reconstructing the original text. 35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution `(λ = 3.5)`. The decoder input is the original text offset by one position. A language id symbol `LID` is used as the initial token to predict the sentence.
## Intended uses & limitations
`mbart-large-50` is a pre-trained model primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks.
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.
## Training
As the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]` with `X` being the source or target text respectively and `lang_code` is `source_lang_code` for source text and `tgt_lang_code` for target text. `bos` is never used. Once the examples are prepared in this format, it can be trained as any other sequence-to-sequence model.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_text, return_tensors="pt").input_ids
model(**model_inputs, labels=labels) # forward pass
```
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
edumunozsala/roberta_bne_sentiment_analysis_es | edumunozsala | "2023-08-10T17:48:01Z" | 25,737 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"sagemaker",
"roberta-bne",
"TextClassification",
"SentimentAnalysis",
"es",
"dataset:IMDbreviews_es",
"arxiv:2107.07253",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-05-15T17:18:15Z" | ---
language: es
tags:
- sagemaker
- roberta-bne
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: roberta_bne_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
- name: Accuracy
type: accuracy
value: 0.9106666666666666
- name: F1 Score
type: f1
value: 0.9090909090909091
- name: Precision
type: precision
value: 0.9063852813852814
- name: Recall
type: recall
value: 0.9118127381600436
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model roberta_bne_sentiment_analysis_es
## **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning Container.
The base model is **RoBERTa-base-bne**, a RoBERTa base model that has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of text.
It was trained by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html).
**RoBERTa BNE Citation**
Check out the paper for all the details: https://arxiv.org/abs/2107.07253
```
@article{gutierrezfandino2022,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
## Dataset
The dataset is a collection of movie reviews in Spanish, about 50,000 reviews. The dataset is balanced and provides every review in English, in Spanish, and the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis on Spanish-language text. It was fine-tuned specifically on movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
```
{
    "epochs": "4",
    "train_batch_size": "32",
    "eval_batch_size": "8",
    "fp16": "true",
    "learning_rate": "3e-05",
    "model_name": "\"PlanTL-GOB-ES/roberta-base-bne\"",
    "sagemaker_container_log_level": "20",
    "sagemaker_program": "\"train.py\""
}
```
## Evaluation results
- Accuracy = 0.9106666666666666
- F1 Score = 0.9090909090909091
- Precision = 0.9063852813852814
- Recall = 0.9118127381600436
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
duyntnet/Orca-2-13b-imatrix-GGUF | duyntnet | "2024-06-25T06:04:54Z" | 25,733 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Orca-2-13b",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-06-25T02:07:46Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Orca-2-13b
---
Quantizations of https://huggingface.co/microsoft/Orca-2-13b
# From original readme
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
    torch.set_default_device("cuda")
else:
    torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since the fast and slow tokenizers produce different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Orca-2-13b",
    use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we don't want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
``` |
batterydata/bde-cner-batteryonlybert-uncased-base | batterydata | "2022-05-31T15:12:29Z" | 25,722 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-30T11:50:57Z" | ---
license: mit
---
|
legraphista/Yi-9B-Coder-IMat-GGUF | legraphista | "2024-06-27T17:15:17Z" | 25,707 | 0 | gguf | [
"gguf",
"code",
"llama",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"base_model:TechxGenus/Yi-9B-Coder",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-06-27T16:39:25Z" | ---
base_model: TechxGenus/Yi-9B-Coder
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- code
- llama
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Yi-9B-Coder-IMat-GGUF
_Llama.cpp imatrix quantization of TechxGenus/Yi-9B-Coder_
Original Model: [TechxGenus/Yi-9B-Coder](https://huggingface.co/TechxGenus/Yi-9B-Coder)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3248](https://github.com/ggerganov/llama.cpp/releases/tag/b3248)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yi-9B-Coder.Q8_0.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q8_0.gguf) | Q8_0 | 9.38GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q6_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q6_K.gguf) | Q6_K | 7.25GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q4_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q4_K.gguf) | Q4_K | 5.33GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q3_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q3_K.gguf) | Q3_K | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q2_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q2_K.gguf) | Q2_K | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yi-9B-Coder.BF16.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.BF16.gguf) | BF16 | 17.66GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.FP16.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.FP16.gguf) | F16 | 17.66GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q8_0.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q8_0.gguf) | Q8_0 | 9.38GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q6_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q6_K.gguf) | Q6_K | 7.25GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q5_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q5_K.gguf) | Q5_K | 6.26GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q5_K_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q5_K_S.gguf) | Q5_K_S | 6.11GB | ✅ Available | ⚪ Static | 📦 No
| [Yi-9B-Coder.Q4_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q4_K.gguf) | Q4_K | 5.33GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q4_K_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q4_K_S.gguf) | Q4_K_S | 5.07GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ4_NL.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ4_NL.gguf) | IQ4_NL | 5.05GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ4_XS.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ4_XS.gguf) | IQ4_XS | 4.79GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q3_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q3_K.gguf) | Q3_K | 4.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q3_K_L.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q3_K_L.gguf) | Q3_K_L | 4.69GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q3_K_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q3_K_S.gguf) | Q3_K_S | 3.90GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ3_M.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ3_M.gguf) | IQ3_M | 4.06GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ3_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ3_S.gguf) | IQ3_S | 3.91GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ3_XS.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ3_XS.gguf) | IQ3_XS | 3.72GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ3_XXS.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ3_XXS.gguf) | IQ3_XXS | 3.47GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q2_K.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q2_K.gguf) | Q2_K | 3.35GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.Q2_K_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.Q2_K_S.gguf) | Q2_K_S | 3.12GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ2_M.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ2_M.gguf) | IQ2_M | 3.10GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ2_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ2_S.gguf) | IQ2_S | 2.88GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ2_XS.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ2_XS.gguf) | IQ2_XS | 2.71GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ2_XXS.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ2_XXS.gguf) | IQ2_XXS | 2.46GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ1_M.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ1_M.gguf) | IQ1_M | 2.18GB | ✅ Available | 🟢 IMatrix | 📦 No
| [Yi-9B-Coder.IQ1_S.gguf](https://huggingface.co/legraphista/Yi-9B-Coder-IMat-GGUF/blob/main/Yi-9B-Coder.IQ1_S.gguf) | IQ1_S | 2.01GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Yi-9B-Coder-IMat-GGUF --include "Yi-9B-Coder.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Yi-9B-Coder-IMat-GGUF --include "Yi-9B-Coder.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Llama.cpp
```
llama.cpp/main -m Yi-9B-Coder.Q8_0.gguf --color -i -p "prompt here"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Yi-9B-Coder.Q8_0`)
3. Run `gguf-split --merge Yi-9B-Coder.Q8_0/Yi-9B-Coder.Q8_0-00001-of-XXXXX.gguf Yi-9B-Coder.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
duyntnet/MythoMax-L2-13b-imatrix-GGUF | duyntnet | "2024-06-26T23:42:18Z" | 25,696 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"MythoMax-L2-13b",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-06-26T15:53:46Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- MythoMax-L2-13b
---
Quantizations of https://huggingface.co/Gryphe/MythoMax-L2-13b
# From original readme
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
### Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
``` |
EleutherAI/gpt-neox-20b | EleutherAI | "2024-01-31T20:30:35Z" | 25,694 | 502 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"arxiv:2204.06745",
"arxiv:2101.00027",
"arxiv:2201.07311",
"arxiv:2104.09864",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-04-07T20:28:29Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- EleutherAI/pile
---
GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained
on [the Pile](https://pile.eleuther.ai/) using the [GPT-NeoX
library](https://github.com/EleutherAI/gpt-neox). Its architecture intentionally
resembles that of GPT-3, and is almost identical to that of
[GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B). Its training dataset contains
a multitude of English-language texts, reflecting the general-purpose nature
of this model. See the [accompanying paper](https://arxiv.org/abs/2204.06745)
for details about model architecture (including how it differs from GPT-3),
training procedure, and additional evaluations.
### Model details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [GPT-NeoX-20B: An Open-Source Autoregressive Language
Model](https://arxiv.org/abs/2204.06745). For details about the training dataset,
see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data
sheet](https://arxiv.org/abs/2201.07311).
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure style="width:30em">
| Hyperparameter | Value |
| ---------------------- | ----------- |
| n<sub>parameters</sub> | 20554567680 |
| n<sub>layers</sub> | 44 |
| d<sub>model</sub> | 6144 |
| n<sub>heads</sub> | 64 |
| d<sub>head</sub> | 96 |
| n<sub>vocab</sub> | 50257 |
| Sequence Length | 2048 |
| Learning Rate | 0.97 x 10<sup>-4</sup> |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
</figure>
### Uses and limitations
#### Intended use
GPT-NeoX-20B was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
GPT-NeoX-20B for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
GPT-NeoX-20B is **not** intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means GPT-NeoX-20B will likely **not** respond to a given prompt
the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of GPT-NeoX-20B is to take a string of text and predict
the next token. Remember that the statistically most likely next token need
not result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting it to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
If you simply want to try out some prompts, check out [this
playground](https://20b.eleuther.ai/).
GPT-NeoX-20B can be loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
```
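Continuing from the snippet above, a minimal generation sketch (the prompt and sampling settings here are illustrative, not recommended defaults):
```python
inputs = tokenizer("GPT-NeoX-20B is a", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```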
### Training
#### Training dataset
The Pile is an 825GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the [official website](https://pile.eleuther.ai/),
or from a [community mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train GPT-NeoX-20B.
#### Training procedure
GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens
(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor
parallelism and pipeline parallelism were used to distribute the model across
GPUs. Additional details about the training procedure are in [Section 3 of
the accompanying paper](https://arxiv.org/abs/2204.06745).
### Evaluations
<figure style="width:55em">
| Model | OpenAI’s LAMBADA | SciQ | PIQA | TriviaQA | ARC (Challenge) |
| ------------- | :--------------: | :-----------: | :-----------: | :-----------: | :-------------: |
| GPT-J-6B | 0.683 ± 0.006 | 0.910 ± 0.009 | 0.752 ± 0.010 | 0.170 ± 0.004 | 0.340 ± 0.014 |
| FairSeq 6.7B | 0.673 ± 0.007 | 0.895 ± 0.010 | 0.762 ± 0.010 | 0.221 ± 0.004 | 0.329 ± 0.014 |
| GPT-3 Curie | 0.693 ± 0.006 | 0.918 ± 0.009 | 0.767 ± 0.010 | 0.196 ± 0.004 | 0.334 ± 0.014 |
| FairSeq 13B | 0.709 ± 0.006 | 0.910 ± 0.009 | 0.769 ± 0.010 | 0.270 ± 0.004 | 0.345 ± 0.014 |
| GPT-NeoX-20B | 0.720 ± 0.006 | 0.928 ± 0.008 | 0.779 ± 0.010 | 0.259 ± 0.004 | 0.380 ± 0.014 |
| GPT-3 DaVinci | 0.752 ± 0.006 | 0.949 ± 0.007 | 0.791 ± 0.009 | 0.409 ± 0.005 | 0.435 ± 0.014 |
<figcaption>Zero-shot performance on selected natural language tasks.</figcaption>
</figure>
This is a heavily abridged version of the evaluation results. Appendix D of the
[GPT-NeoX-20B paper](https://arxiv.org/abs/2204.06745) compares more model
sizes, and contains additional evaluations, including on: zero and five-shot
natural language tasks, zero and five-shot Basic Arithmetic and MATH,
and zero-shot Hendrycks tasks.
### BibTeX
To cite the GPT-NeoX-20B paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.06745,
doi = {10.48550/ARXIV.2204.06745},
url = {https://arxiv.org/abs/2204.06745},
author = {Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {GPT-NeoX-20B: An Open-Source Autoregressive Language Model},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__gpt-neox-20b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 36.02 |
| ARC (25-shot) | 45.73 |
| HellaSwag (10-shot) | 73.45 |
| MMLU (5-shot) | 25.0 |
| TruthfulQA (0-shot) | 31.61 |
| Winogrande (5-shot) | 68.9 |
| GSM8K (5-shot) | 2.43 |
| DROP (3-shot) | 5.04 |
|
Yntec/DreamShaperRemix | Yntec | "2024-06-27T10:11:22Z" | 25,693 | 2 | diffusers | [
"diffusers",
"safetensors",
"General",
"Anime",
"Art",
"Girl",
"Photorealistic",
"LandScapes",
"Lykon",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-22T20:18:15Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Anime
- Art
- Girl
- Photorealistic
- LandScapes
- Lykon
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
UPDATE: The model that used to be hosted here has been renamed to DreamShaperRemixAlpha.
# DreamShaperRemix
A remix of Dream Shaper 8. I created hundreds of mixes with all the DreamShaper versions and this was the best one. Then I mixed it with Dreamshaper 6.2 to add more finesse to its outputs, and relaunched it. And then I merged it back with Dreamshaper 8 to improve the eyes, and relaunched it again.
Samples and prompts:

(Click for larger)
Top left: Children book illustration of A cute girl drinking coffee-to-go, classic video game art
Top right: anime coloring, anime screencap, ghibli, mappa, anime style, 1girl, hatsune miku, white gown, angel, angel wings, golden halo, dark background, upper body, closed mouth, looking at viewer, arms behind back, blue theme, stars, starry night
Bottom left: cute girl with her bearded boyfriend,retro cartoon style
Bottom right: 8k portrait of beautiful girl with brown hair, detailed beautiful eyes, intricate, elegant, highly detailed, majestic, digital photography, art by artgerm and ruan jia and greg rutkowski surreal painting gold butterfly filigree, infinity, infinite symbol, broken glass, masterpiece, sidelighting, finely hdr, detailed background window to a new dimension, plants and flowers
Original pages:
https://civitai.com/models/4384?modelVersionId=128713 (DreamShaper 8)
https://civitai.com/models/4384?modelVersionId=88504 (DreamShaper 6.2)
Buy Lykon a coffee:
https://snipfeed.co/lykon
# Recipes
-Add Difference 1.0-
Primary model:
DreamShaper8
Secondary model:
DreamShaper8
Tertiary model:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output Model:
DreamShaperEssense
-Weighted Sum 0.85-
Primary model:
v1-5-pruned-fp16-no-ema
Secondary model:
DreamShaperEssense
Output Model:
DreamShaperPlus
-Weighted Sum 0.55-
Primary model:
DreamShaperPlus
Secondary model:
DreamShaper8
Output Model:
DreamShaperAlpha
-Weighted Sum 0.95-
Primary model:
DreamShaperPlus
Secondary model:
DreamShaperAlpha
Output Model:
DreamShaperRemixNoise
-SuperMerger Elemental Adjust-
0,0,0,0,0,1,0
Output Model:
DreamShaperRemixAlpha
SuperMerger Weight sum MBW 1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1
Model A: Dreamshaper 6.2
Model B: DreamShaperRemixAlpha
Output: DreamShaperRemixBeta
SuperMerger Weight sum MBW 1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1
Model A: DreamShaperRemixBeta
Model B: DreamShaper 8
Output: DreamShaperRemix
No-ema:
DreamShaperRemixMini |
Jean-Baptiste/roberta-large-ner-english | Jean-Baptiste | "2023-03-22T02:19:36Z" | 25,673 | 66 | transformers | [
"transformers",
"pytorch",
"tf",
"onnx",
"safetensors",
"roberta",
"token-classification",
"en",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-03-02T23:29:04Z" | ---
language: en
datasets:
- conll2003
widget:
- text: "My name is jean-baptiste and I live in montreal"
- text: "My name is clara and I live in berkeley, california."
- text: "My name is wolfgang and I live in berlin"
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
eval_split: validation
col_mapping:
tokens: tokens
ner_tags: tags
license: mit
---
# roberta-large-ner-english: model fine-tuned from roberta-large for NER task
## Introduction
roberta-large-ner-english is an English NER model that was fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on emails/chat data and outperformed other models on this type of data specifically.
In particular, the model seems to work better on entities that don't start with an upper-case letter.
## Training data
The training data was labeled as follows:
Abbreviation|Description
-|-
O |Outside of a named entity
MISC |Miscellaneous entity
PER |Person’s name
ORG |Organization
LOC |Location
To simplify, the B-/I- prefixes from the original conll2003 tags were removed.
I used the train and test datasets from the original conll2003 for training, and the "validation" dataset for validation. This resulted in datasets of size:
Train | Validation
-|-
17494 | 3250
## How to use roberta-large-ner-english with HuggingFace
##### Load roberta-large-ner-english and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
```
##### Process text sample (from wikipedia):
```python
from transformers import pipeline

nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")
```
Output:
```python
[{'entity_group': 'ORG',
  'score': 0.99381506,
  'word': ' Apple',
  'start': 0,
  'end': 5},
 {'entity_group': 'PER',
  'score': 0.99970853,
  'word': ' Steve Jobs',
  'start': 29,
  'end': 39},
 {'entity_group': 'PER',
  'score': 0.99981767,
  'word': ' Steve Wozniak',
  'start': 41,
  'end': 54},
 {'entity_group': 'PER',
  'score': 0.99956465,
  'word': ' Ronald Wayne',
  'start': 59,
  'end': 71},
 {'entity_group': 'PER',
  'score': 0.9997918,
  'word': ' Wozniak',
  'start': 92,
  'end': 99},
 {'entity_group': 'MISC',
  'score': 0.99956393,
  'word': ' Apple I',
  'start': 102,
  'end': 109}]
```
## Model performance
Model performance computed on the conll2003 validation dataset (on the token predictions):
entity|precision|recall|f1
-|-|-|-
PER|0.9914|0.9927|0.9920
ORG|0.9627|0.9661|0.9644
LOC|0.9795|0.9862|0.9828
MISC|0.9292|0.9262|0.9277
Overall|0.9740|0.9766|0.9753
On a private dataset (email, chat, informal discussion), computed on word predictions:
entity|precision|recall|f1
-|-|-|-
PER|0.8823|0.9116|0.8967
ORG|0.7694|0.7292|0.7487
LOC|0.8619|0.7768|0.8171
By comparison, on the same private dataset, spaCy (en_core_web_trf-3.2.0) gave:
entity|precision|recall|f1
-|-|-|-
PER|0.9146|0.8287|0.8695
ORG|0.7655|0.6437|0.6993
LOC|0.8727|0.6180|0.7236
For those who may be interested, here is a short article on how I used the results of this model to train an LSTM model for signature detection in emails:
https://medium.com/@jean-baptiste.polle/lstm-model-for-email-signature-detection-8e990384fefa
|
phiyodr/bert-large-finetuned-squad2 | phiyodr | "2021-05-20T02:36:12Z" | 25,670 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"en",
"dataset:squad2",
"arxiv:1810.04805",
"arxiv:1806.03822",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. The prophet and founding hero of modern archaeology, Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---
# bert-large-finetuned-squad2
## Model description
This model is based on **[bert-large-uncased](https://huggingface.co/bert-large-uncased)** and was finetuned on **[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)**. The corresponding papers can be found [here (model)](https://arxiv.org/abs/1810.04805) and [here (data)](https://arxiv.org/abs/1806.03822).
## How to use
```python
from transformers.pipelines import pipeline
model_name = "phiyodr/bert-large-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'What discipline did Winkelmann create?',
'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```
## Training procedure
```
{
"base_model": "bert-large-uncased",
"do_lower_case": True,
"learning_rate": 3e-5,
"num_train_epochs": 4,
"max_seq_length": 384,
"doc_stride": 128,
"max_query_length": 64,
"batch_size": 96
}
```
## Eval results
- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (original script from [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))
```
{
"exact": 76.22336393497852,
"f1": 79.72527570261339,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 83.21157193271408,
"HasAns_total": 5928,
"NoAns_exact": 76.24894869638352,
"NoAns_f1": 76.24894869638352,
"NoAns_total": 5945
}
```
|
Qwen/Qwen1.5-14B-Chat | Qwen | "2024-04-30T07:22:51Z" | 25,654 | 102 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-30T17:20:42Z" | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-14B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we temporarily did not include GQA (except for 32B) or the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-14B-Chat",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
For quantized models, we advise you to use the GPTQ, AWQ, and GGUF counterparts, namely `Qwen1.5-14B-Chat-GPTQ-Int4`, `Qwen1.5-14B-Chat-GPTQ-Int8`, `Qwen1.5-14B-Chat-AWQ`, and `Qwen1.5-14B-Chat-GGUF`.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
microsoft/unispeech-large-1500h-cv | microsoft | "2021-11-05T12:41:56Z" | 25,632 | 1 | transformers | [
"transformers",
"pytorch",
"unispeech",
"pretraining",
"speech",
"en",
"dataset:common_voice",
"arxiv:2101.07597",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language:
- en
datasets:
- common_voice
tags:
- speech
---
# UniSpeech-Large
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The large model pretrained on 16kHz sampled speech audio and phonetic labels. When using the model, make sure that your speech input is also sampled at 16kHz and your text is converted into a sequence of phonemes.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Paper: UniSpeech: Unified Speech Representation Learning
with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
Authors: Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang
**Abstract**
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
used in inference. The model was pre-trained in English and should therefore perform well only in English.
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
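As an illustration, a third-party grapheme-to-phoneme tool such as `phonemizer` can handle this conversion. This is an assumption about tooling; any G2P tool works, as long as the resulting phone set matches the one used for fine-tuning:
```python
# pip install phonemizer  (needs the espeak-ng backend installed on the system)
from phonemizer import phonemize

text = "the quick brown fox"
phones = phonemize(text, language="en-us", backend="espeak", strip=True)
print(phones)  # a string of IPA phones for the input sentence
```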
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
lanwuwei/BERTOverflow_stackoverflow_github | lanwuwei | "2023-06-27T00:07:48Z" | 25,618 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" |
# BERTOverflow
## Model description
We pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/).
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
model = AutoModelForTokenClassification.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
    title={Code and Named Entity Recognition in StackOverflow},
    author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan},
    booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
    url={https://www.aclweb.org/anthology/2020.acl-main.443/},
    year = {2020},
}
```
|
valhalla/distilbart-mnli-12-1 | valhalla | "2021-06-14T10:27:55Z" | 25,612 | 49 | transformers | [
"transformers",
"pytorch",
"jax",
"bart",
"text-classification",
"distilbart",
"distilbart-mnli",
"zero-shot-classification",
"dataset:mnli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2022-03-02T23:29:05Z" | ---
datasets:
- mnli
tags:
- distilbart
- distilbart-mnli
pipeline_tag: zero-shot-classification
---
# DistilBart-MNLI
distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We just copy alternating layers from `bart-large-mnli` and finetune more on the same data.
| | matched acc | mismatched acc |
| ------------------------------------------------------------------------------------ | ----------- | -------------- |
| [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) (baseline, 12-12) | 89.9 | 90.01 |
| [distilbart-mnli-12-1](https://huggingface.co/valhalla/distilbart-mnli-12-1) | 87.08 | 87.5 |
| [distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3) | 88.1 | 88.19 |
| [distilbart-mnli-12-6](https://huggingface.co/valhalla/distilbart-mnli-12-6) | 89.19 | 89.01 |
| [distilbart-mnli-12-9](https://huggingface.co/valhalla/distilbart-mnli-12-9) | 89.56 | 89.52 |
This is a very simple and effective technique; as the table shows, the performance drop is very small.
Detailed performance trade-offs will be posted in this [sheet](https://docs.google.com/spreadsheets/d/1dQeUvAKpScLuhDV1afaPJRRAE55s2LpIzDVA5xfqxvk/edit?usp=sharing).
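Conceptually, the student-creation step described above ("copy alternating layers") boils down to something like the following sketch. This is an illustration, not the official `create_student.py`; which teacher decoder layers to keep is a design choice:
```python
from transformers import BartConfig, BartForSequenceClassification

teacher = BartForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

# The student keeps all 12 encoder layers but only 1 decoder layer (the "12-1" layout).
config = BartConfig.from_pretrained("facebook/bart-large-mnli", decoder_layers=1)
student = BartForSequenceClassification(config)

# Copy every weight whose name and shape match (embeddings, encoder, heads, ...).
student.load_state_dict(teacher.state_dict(), strict=False)

# Then copy the chosen teacher decoder layers into the student's slots.
keep = [11]  # illustrative choice; evenly spaced layers in general
for student_idx, teacher_idx in enumerate(keep):
    student.model.decoder.layers[student_idx].load_state_dict(
        teacher.model.decoder.layers[teacher_idx].state_dict()
    )

student.save_pretrained("student-bart-mnli-12-1")
```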
## Fine-tuning
If you want to train these models yourself, clone the [distillbart-mnli repo](https://github.com/patil-suraj/distillbart-mnli) and follow the steps below
Clone and install transformers from source
```bash
git clone https://github.com/huggingface/transformers.git
pip install -qqq -U ./transformers
```
Download MNLI data
```bash
python transformers/utils/download_glue_data.py --data_dir glue_data --tasks MNLI
```
Create student model
```bash
python create_student.py \
--teacher_model_name_or_path facebook/bart-large-mnli \
--student_encoder_layers 12 \
--student_decoder_layers 6 \
--save_path student-bart-mnli-12-6
```
Start fine-tuning
```bash
python run_glue.py args.json
```
You can find the logs of these trained models in this [wandb project](https://wandb.ai/psuraj/distilbart-mnli). |
echarlaix/tiny-random-stable-diffusion-xl | echarlaix | "2023-07-06T16:08:37Z" | 25,595 | 1 | diffusers | [
"diffusers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-07-06T16:06:52Z" | ---
license: apache-2.0
---
|
Lin-Chen/ShareGPT4V-7B | Lin-Chen | "2024-06-06T13:50:33Z" | 25,541 | 75 | transformers | [
"transformers",
"pytorch",
"share4v",
"text-generation",
"image-text-to-text",
"arxiv:2311.12793",
"autotrain_compatible",
"region:us"
] | image-text-to-text | "2023-11-20T13:55:13Z" | ---
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>
# ShareGPT4V-7B Model Card
## Model details
**Model type:**
ShareGPT4V-7B is an open-source chatbot trained by fine-tuning the CLIP vision tower and LLaMA/Vicuna on GPT4-Vision-assisted [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) data and LLaVA instruction-tuning data.
**Model date:**
ShareGPT4V-7B was trained in Nov 2023.
**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/ShareGPT4Omni/ShareGPT4V)]
## Usage
You can use this model directly, as described in our [[repository](https://github.com/ShareGPT4Omni/ShareGPT4V)]. Moreover, you can modify the architecture name from "Share4VLlamaForCausalLM" to "LLaVALlamaForCausalLM" and the model_type keyword from "share4v" to "llava" in our config file to seamlessly load our model in the [[LLaVA repository](https://github.com/haotian-liu/LLaVA)].
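A minimal sketch of that config edit in Python, assuming a locally downloaded copy of the checkpoint's `config.json` (the class and keyword names are taken from the paragraph above):
```python
import json

# Patch a local copy of the checkpoint's config so it loads in the LLaVA repo.
with open("config.json") as f:
    config = json.load(f)

config["architectures"] = ["LLaVALlamaForCausalLM"]  # was ["Share4VLlamaForCausalLM"]
config["model_type"] = "llava"                       # was "share4v"

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```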
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Intended use
**Primary intended uses:**
The primary use of ShareGPT4V-7B is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M high-quality image-text pairs, i.e., ShareGPT4V-PT data
- 100K GPT4-Vision-generated image-text pairs
- LLaVA instruction-tuning data
## Evaluation dataset
A collection of 11 benchmarks |
Xenova/jina-embeddings-v2-small-en | Xenova | "2024-03-12T01:53:46Z" | 25,536 | 1 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"fill-mask",
"feature-extraction",
"custom_code",
"region:us"
] | feature-extraction | "2023-10-25T17:25:16Z" | ---
library_name: transformers.js
pipeline_tag: feature-extraction
---
https://huggingface.co/jinaai/jina-embeddings-v2-small-en with ONNX weights to be compatible with Transformers.js.
## Usage with 🤗 Transformers.js
```js
// npm i @xenova/transformers
import { pipeline, cos_sim } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/jina-embeddings-v2-small-en',
{ quantized: false } // Comment out this line to use the quantized version
);
// Generate embeddings
const output = await extractor(
['How is the weather today?', 'What is the current weather like today?'],
{ pooling: 'mean' }
);
// Compute cosine similarity
console.log(cos_sim(output[0].data, output[1].data)); // 0.9399812684139274 (unquantized) vs. 0.9341121503699659 (quantized)
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
asapp/sew-tiny-100k | asapp | "2023-07-21T23:05:12Z" | 25,522 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"sew",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-tiny
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`. |
asapp/sew-d-tiny-100k | asapp | "2023-07-21T23:05:03Z" | 25,506 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`. |
Helsinki-NLP/opus-mt-hi-en | Helsinki-NLP | "2023-08-16T11:57:38Z" | 25,486 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"hi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hi-en
* source languages: hi
* target languages: en
* OPUS readme: [hi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014.hi.en | 9.1 | 0.357 |
| newstest2014-hien.hi.en | 13.6 | 0.409 |
| Tatoeba.hi.en | 40.4 | 0.580 |
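As a quick usage sketch with the Transformers Marian classes (this snippet is not part of the original OPUS-MT release notes, and the Hindi sample sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-hi-en")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-hi-en")

batch = tokenizer(["नमस्ते, आप कैसे हैं?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```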
|
textattack/roberta-base-CoLA | textattack | "2021-05-20T22:05:35Z" | 25,459 | 15 | transformers | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ## TextAttack Model Cardand the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 5 epochs with a batch size of 32, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.850431447746884, as measured by the
eval set accuracy, found after 1 epoch.
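A minimal inference sketch for the resulting checkpoint (which logit index corresponds to "acceptable" is an assumption to verify against the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/roberta-base-CoLA")
model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-CoLA")

inputs = tokenizer("The book was read by me.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # 0/1 acceptability label
print(pred)
```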
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF | mradermacher | "2024-07-01T06:08:31Z" | 25,413 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"en",
"dataset:princeton-nlp/llama3-ultrafeedback",
"base_model:Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T05:39:59Z" | ---
base_model: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT
datasets:
- princeton-nlp/llama3-ultrafeedback
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
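If you prefer Python over the llama.cpp CLI, the third-party `llama-cpp-python` package can load these files directly. A sketch, assuming you downloaded the Q4_K_M quant from the table below:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q4_K_M.gguf",  # downloaded quant
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```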
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO2-NT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jozhang97/deta-swin-large | jozhang97 | "2024-05-20T08:15:29Z" | 25,409 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deta",
"object-detection",
"vision",
"arxiv:2212.06137",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-01-30T15:46:20Z" | ---
pipeline_tag: object-detection
tags:
- vision
---
# Detection Transformers with Assignment
By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/)
From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137).
**TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduce IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably as fast as Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
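A minimal inference sketch using the Transformers DETA classes (the image URL and the 0.5 confidence threshold are illustrative):
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) and keep confident detections.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
``` |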
facebook/nougat-small | facebook | "2023-11-20T09:27:37Z" | 25,405 | 22 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-encoder-decoder",
"vision",
"nougat",
"image-to-text",
"arxiv:2308.13418",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | image-to-text | "2023-09-21T08:51:00Z" | ---
license: cc-by-4.0
tags:
- vision
- nougat
pipeline_tag: image-to-text
---
# Nougat model, small-sized version
Nougat model trained on PDF-to-markdown. It was introduced in the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Blecher et al. and first released in [this repository](https://github.com/facebookresearch/nougat/tree/main).
Disclaimer: The team releasing Nougat did not write a model card for this model so this model card has been written by the Hugging Face team.
Note: this model corresponds to the "0.1.0-small" version of the original repository.
## Model description
Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format. The model consists of a Swin Transformer as vision encoder, and an mBART model as text decoder.
The model is trained to autoregressively predict the markdown given only the pixels of the PDF image as input.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/nougat_architecture.jpg"
alt="drawing" width="600"/>
<small> Nougat high-level overview. Taken from the <a href="https://arxiv.org/abs/2308.13418">original paper</a>. </small>
## Intended uses & limitations
You can use the raw model for transcribing a PDF into Markdown. See the [model hub](https://huggingface.co/models?search=nougat) to look for other
fine-tuned versions that may interest you.
### How to use
See the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/nougat) for full usage details; a minimal sketch is shown below.
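A minimal sketch of the standard `transformers` flow (the input image path is a placeholder; generation settings are illustrative):
```python
import torch
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

processor = NougatProcessor.from_pretrained("facebook/nougat-small")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-small")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# any rendered page of a scientific PDF
image = Image.open("page.png")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

outputs = model.generate(pixel_values.to(device), max_new_tokens=512)
sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.post_process_generation(sequence, fix_markdown=False))
```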
### BibTeX entry and citation info
```bibtex
@misc{blecher2023nougat,
title={Nougat: Neural Optical Understanding for Academic Documents},
author={Lukas Blecher and Guillem Cucurull and Thomas Scialom and Robert Stojnic},
year={2023},
eprint={2308.13418},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
beomi/KoAlpaca-Polyglot-12.8B | beomi | "2023-09-15T01:28:23Z" | 25,394 | 52 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"polyglot-ko",
"gpt-neox",
"KoAlpaca",
"ko",
"dataset:KoAlpaca-v1.1b",
"base_model:EleutherAI/polyglot-ko-12.8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-18T07:46:03Z" | ---
language:
- ko
license: apache-2.0
tags:
- generated_from_trainer
- polyglot-ko
- gpt-neox
- KoAlpaca
datasets:
- KoAlpaca-v1.1b
pipeline_tag: text-generation
base_model: EleutherAI/polyglot-ko-12.8b
model-index:
- name: KoAlpaca-Polyglot-12.8B
results: []
---
Update @ 2023.06.01
- Added safetensors sharded model weights (max shard = 1GB)
# KoAlpaca-Polyglot-12.8B (v1.1b)
This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on the KoAlpaca dataset v1.1b.
Detailed code is available in the [KoAlpaca GitHub repository](https://github.com/Beomi/KoAlpaca); a minimal inference sketch is shown below.
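A minimal inference sketch (loading in fp16 with `device_map="auto"` assumes `accelerate` is installed; the prompt format follows the KoAlpaca v1.1b convention and should be verified against the repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/KoAlpaca-Polyglot-12.8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# assumed KoAlpaca instruction format: "### 질문: ...\n\n### 답변:"
prompt = "### 질문: 딥러닝이 뭐야?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```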
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A100 80G)
- num_devices: 4
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3 |
minimaxir/sdxl-wrong-lora | minimaxir | "2023-08-24T02:59:47Z" | 25,374 | 115 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2023-08-14T04:26:16Z" | ---
license: mit
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# sdxl-wrong-lora

A LoRA for SDXL 1.0 Base which improves output image quality after loading it and using `wrong` as a negative prompt during inference. You can demo image generation using this LoRA in [this Colab Notebook](https://colab.research.google.com/github/minimaxir/sdxl-experiments/blob/main/sdxl_image_generation.ipynb).
The LoRA is also available in a `safetensors` format for other UIs such as A1111; however, this LoRA was created using `diffusers`, and I cannot guarantee its efficacy outside of it.
Benefits of using this LoRA:
- Higher detail in textures/fabrics, particularly at full 1024x1024 resolution.
- Higher color saturation and vibrance.
- Higher sharpness for blurry/background objects.
- Better at anatomically-correct hands.
- Less likely to have random artifacts.
- Appears to make the model follow the input prompt more predictably, particularly with prompt weighting such as the [Compel](https://github.com/damian0815/compel) syntax.
## Usage
The LoRA can be loaded using `load_lora_weights` like any other LoRA in `diffusers`:
```py
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix",
torch_dtype=torch.float16
)
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae,
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
base.load_lora_weights("minimaxir/sdxl-wrong-lora")
_ = base.to("cuda")
```
During image generation, use `wrong` as the negative prompt. That's it!
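For example, continuing from the pipeline above (the prompt is taken from the comparisons below; the step count is an illustrative setting):
```py
image = base(
    prompt="pepperoni pizza in the shape of a heart, hyperrealistic award-winning professional food photography",
    negative_prompt="wrong",
    num_inference_steps=30,  # illustrative; use your usual settings
).images[0]
image.save("pizza.png")
```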
## Examples
**Left image** is the base model output (no LoRA) + refiner, **right image** is base (w/ LoRA) + refiner + `wrong` negative prompt. Both generations use the same seed.
I have also [released a Colab Notebook](https://colab.research.google.com/github/minimaxir/sdxl-experiments/blob/main/sdxl_wrong_comparison.ipynb) to generate these kinds of side-by-side comparison images, although the seeds listed will not give the same results since they were generated on a different GPU/CUDA than the Colab Notebook.
`realistic human Shrek blogging at a computer workstation, hyperrealistic award-winning photo for vanity fair` (cfg = 13, seed = 56583700)

`pepperoni pizza in the shape of a heart, hyperrealistic award-winning professional food photography` (cfg = 13, seed = 75789081)

`presidential painting of realistic human Spongebob Squarepants wearing a suit, (oil on canvas)+++++` (cfg = 13, seed = 85588026)

`San Francisco panorama attacked by (one massive kitten)++++, hyperrealistic award-winning photo by the Associated Press` (cfg = 13, seed = 45454868)

`hyperrealistic death metal album cover featuring edgy moody realistic (human Super Mario)++, edgy and moody` (cfg = 13, seed = 30416580)

## Methodology
The methodology and motivation for creating this LoRA are the same as for my [wrong SD 2.0 textual inversion embedding](https://huggingface.co/minimaxir/wrong_embedding_sd_2_0): train on a balanced variety of undesirable outputs, except here as a LoRA, since textual inversion with SDXL is complicated. The base images were generated from SDXL itself, with some prompt weighting to emphasize undesirable attributes for test images.
You can see the code to generate the wrong images [in this Jupyter Notebook](https://github.com/minimaxir/sdxl-experiments/blob/main/wrong_image_generator.ipynb).
## Notes
- The intuitive way to think about how this LoRA works is that on training start, it marks an undesirable area of the vast, high-dimensional latent space which the rest of the diffusion process will move away from. This may work more effectively than textual inversion, but more testing needs to be done.
- The description of this LoRA is careful not to state that the output is objectively _better_ than not using the LoRA, because everything is subjective and there are use cases where vibrant output is not desired. For most use cases, however, the output should be preferable.
- It's possible to use `not wrong` in the normal prompt itself, but in testing it has little effect.
- You can use other negative prompts in conjunction with the `wrong` prompt but you may want to weight them appropriately.
- All the Notebooks noted here are available [in this GitHub repo](https://github.com/minimaxir/sdxl-experiments).
|
laion/larger_clap_music_and_speech | laion | "2023-10-31T19:56:11Z" | 25,345 | 17 | transformers | [
"transformers",
"pytorch",
"clap",
"feature-extraction",
"arxiv:2211.06687",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2023-10-30T18:16:44Z" | ---
license: apache-2.0
---
# Model
## TL;DR
CLAP is to audio what CLIP is to image. This is an improved CLAP checkpoint, specifically trained on music and speech.
## Description
CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet, given an audio clip, without directly optimizing for the task. The CLAP model uses a SWINTransformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected into a latent space of identical dimension. The dot product between the projected audio and text features is then used as a similarity score.
# Usage
You can use this model for zero shot audio classification or extracting audio and/or textual features.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/larger_clap_music_and_speech")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/larger_clap_music_and_speech")
processor = ClapProcessor.from_pretrained("laion/larger_clap_music_and_speech")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/larger_clap_music_and_speech").to(0)
processor = ClapProcessor.from_pretrained("laion/larger_clap_music_and_speech")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
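### Get text embeddings:
Reusing the model and processor from above, text features come from `get_text_features` (a minimal sketch):
```python
inputs = processor(text=["Sound of a dog", "Sound of vacuum cleaner"], return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
```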
# Citation
If you are using this model for your work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
allegro/plt5-large | allegro | "2022-08-03T20:20:09Z" | 25,317 | 5 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"T5",
"translation",
"summarization",
"question answering",
"reading comprehension",
"pl",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | translation | "2022-03-02T23:29:05Z" | ---
language: pl
tags:
- T5
- translation
- summarization
- question answering
- reading comprehension
datasets:
- ccnet
- nkjp
- wikipedia
- open subtitles
- free readings
license: cc-by-4.0
---
# plT5 Large
**plT5** models are T5-based language models trained on Polish corpora. The models were optimized for the original T5 denoising target.
## Corpus
plT5 was trained on six different corpora available for Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a sentencepiece unigram model with
a vocabulary size of 50k tokens.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-large")
model = AutoModel.from_pretrained("allegro/plt5-large")
```
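Since the model was optimized only for the T5 denoising objective, downstream tasks require fine-tuning; a quick sanity check is span infilling with sentinel tokens (a hedged sketch assuming standard T5 sentinel conventions; the Polish sentence is illustrative):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-large")
model = T5ForConditionalGeneration.from_pretrained("allegro/plt5-large")

# ask the model to fill in the masked span
inputs = tokenizer("Warszawa jest <extra_id_0> Polski.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```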
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@article{chrabrowa2022evaluation,
title={Evaluation of Transfer Learning for Polish with a Text-to-Text Model},
author={Chrabrowa, Aleksandra and Dragan, {\L}ukasz and Grzegorczyk, Karol and Kajtoch, Dariusz and Koszowski, Miko{\l}aj and Mroczkowski, Robert and Rybak, Piotr},
journal={arXiv preprint arXiv:2205.08808},
year={2022}
}
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:[email protected]">[email protected]</a> |
sdadas/polish-gpt2-xl | sdadas | "2023-04-18T17:54:31Z" | 25,285 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"pl",
"license:lgpl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-01-06T09:39:52Z" | ---
license: lgpl
language:
- pl
--- |
thanathorn/mt5-cpe-kmutt-thai-sentence-sum | thanathorn | "2022-05-13T18:20:03Z" | 25,259 | 6 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"th",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2022-04-27T09:12:47Z" | ---
tags:
- summarization
- mT5
language:
- th
widget:
- text: "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
---
# mt5-cpe-kmutt-thai-sentence-sum
This repository contains a fine-tuned mT5-base model for Thai sentence summarization. The model is based on the mT5 architecture and fine-tuned on Thai text-summarization pairs. This project is a senior project of a Computer Engineering student at King Mongkut’s University of Technology Thonburi.
## Usage on SimpleTransformer (Tested on version 0.63.4)
```python
from simpletransformers.t5 import T5Model, T5Args
from torch import cuda
model = T5Model("t5", "thanathorn/mt5-cpe-kmutt-thai-sentence-sum", use_cuda=cuda.is_available())
sentence = "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
prediction = model.predict([sentence])
print(prediction[0])
```
(See the example on <a href="https://colab.research.google.com/drive/1XiNkZLgy1USwHYFVf_nEzOSWbHGSnYdg?usp=sharing">Google Colab</a>)
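## Usage on Transformers
A plain `transformers` alternative (a minimal sketch; the generation length is an illustrative setting):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thanathorn/mt5-cpe-kmutt-thai-sentence-sum")
model = AutoModelForSeq2SeqLM.from_pretrained("thanathorn/mt5-cpe-kmutt-thai-sentence-sum")

sentence = "simplify: ถ้าพูดถึงขนมหวานในตำนานที่ชื่นใจที่สุดแล้วละก็ต้องไม่พ้น น้ำแข็งใส แน่เพราะว่าเป็นอะไรที่ชื่นใจสุด"
inputs = tokenizer(sentence, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```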
### Score
<ul>
<li>ROUGE-1: 61.7805</li>
<li>ROUGE-2: 45.9689</li>
<li>ROUGE-L: 59.3542</li>
</ul>
### Intended uses & limitations
<ul>
<li>You can use this model for Thai sentence text summarization.</li>
<li>Not intended for use with paragraph-level text.</li>
</ul> |
flair/ner-spanish-large | flair | "2021-05-08T15:36:59Z" | 25,240 | 9 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"es",
"dataset:conll2003",
"arxiv:2011.06993",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: es
datasets:
- conll2003
widget:
- text: "George Washington fue a Washington"
---
## Spanish NER in Flair (large model)
This is the large 4-class NER model for Spanish that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **90,54** (CoNLL-03 Spanish)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-spanish-large")
# make example sentence
sentence = Sentence("George Washington fue a Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington fue a Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_SPANISH
corpus = CONLL_03_SPANISH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-spanish-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
julien-c/dummy-unknown | julien-c | "2021-05-20T17:31:14Z" | 25,212 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"ci",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
tags:
- ci
---
## Dummy model used for unit testing and CI
```python
import json
import os
from transformers import RobertaConfig, RobertaForMaskedLM, TFRobertaForMaskedLM
DIRNAME = "./dummy-unknown"
config = RobertaConfig(10, 20, 1, 1, 40)
model = RobertaForMaskedLM(config)
model.save_pretrained(DIRNAME)
tf_model = TFRobertaForMaskedLM.from_pretrained(DIRNAME, from_pt=True)
tf_model.save_pretrained(DIRNAME)
# Tokenizer:
vocab = [
"l",
"o",
"w",
"e",
"r",
"s",
"t",
"i",
"d",
"n",
"\u0120",
"\u0120l",
"\u0120n",
"\u0120lo",
"\u0120low",
"er",
"\u0120lowest",
"\u0120newer",
"\u0120wider",
"<unk>",
]
vocab_tokens = dict(zip(vocab, range(len(vocab))))
merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""]
vocab_file = os.path.join(DIRNAME, "vocab.json")
merges_file = os.path.join(DIRNAME, "merges.txt")
with open(vocab_file, "w", encoding="utf-8") as fp:
fp.write(json.dumps(vocab_tokens) + "\n")
with open(merges_file, "w", encoding="utf-8") as fp:
fp.write("\n".join(merges))
```
|
jhgan/ko-sbert-nli | jhgan | "2022-08-16T12:45:45Z" | 25,201 | 14 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ko-sbert-nli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-nli')
model = AutoModel.from_pretrained('jhgan/ko-sbert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
These are the results after training on the KorNLI training set and evaluating on the KorSTS evaluation set.
- Cosine Pearson: 82.24
- Cosine Spearman: 83.16
- Euclidean Pearson: 82.19
- Euclidean Spearman: 82.31
- Manhattan Pearson: 82.18
- Manhattan Spearman: 82.30
- Dot Pearson: 79.30
- Dot Spearman: 78.78
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 889,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
stabilityai/TripoSR | stabilityai | "2024-03-05T13:56:02Z" | 25,197 | 389 | null | [
"3d",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"arxiv:2403.02151",
"license:mit",
"region:us"
] | image-to-3d | "2024-02-29T13:01:16Z" | ---
datasets:
- allenai/objaverse
tags:
- 3d
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I ALLOW Stability AI to email me about new model releases: checkbox
license: mit
pipeline_tag: image-to-3d
---
# TripoSR

TripoSR is a fast and feed-forward 3D generative model developed in collaboration between Stability AI and Tripo AI.
## Model Details
### Model Description
We closely follow [LRM](https://arxiv.org/abs/2311.04400) network architecture for the model design, where TripoSR incorporates a series of technical advancements over the LRM model in terms of both data curation as well as model and training improvements. For more technical details and evaluations, please refer to [our tech report](https://arxiv.org/abs/2403.02151).
* **Developed by**: [Stability AI](https://stability.ai/), [Tripo AI](https://tripo3d.ai/)
* **Model type**: Feed-forward 3D reconstruction from a single image
* **License**: MIT
* **Hardware**: We train `TripoSR` for 5 days on 22 GPU nodes each with 8 A100 40GB GPUs
### Model Sources
* **Repository**: https://github.com/VAST-AI-Research/TripoSR
* **Tech report**: https://arxiv.org/abs/2403.02151
* **Demo**: https://huggingface.co/spaces/stabilityai/TripoSR
### Training Dataset
We use renders from the [Objaverse](https://objaverse.allenai.org/objaverse-1.0) dataset, utilizing our enhanced rendering method that more closely replicates the distribution of images found in the real world, significantly improving our model’s ability to generalize. We selected a carefully curated subset of the Objaverse dataset for the training data, which is available under the CC-BY license.
## Usage
* For usage instructions, please refer to our [TripoSR GitHub repository](https://github.com/VAST-AI-Research/TripoSR); a minimal sketch follows after this list
* You can also try it in [our gradio demo](https://huggingface.co/spaces/stabilityai/TripoSR)
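A rough sketch of the inference flow, assuming the `tsr` package from the GitHub repository (the class and method names follow the repo's `run.py` at the time of writing and may drift between versions):
```python
import torch
from PIL import Image
from tsr.system import TSR  # from the TripoSR GitHub repository

model = TSR.from_pretrained(
    "stabilityai/TripoSR",
    config_name="config.yaml",
    weight_name="model.ckpt",
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("input.png")  # a single object-centric RGB image
scene_codes = model([image], device=device)
meshes = model.extract_mesh(scene_codes)  # returns trimesh meshes
meshes[0].export("mesh.obj")
```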
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate 3D models that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. |
deepset/xlm-roberta-base-squad2-distilled | deepset | "2023-05-05T06:59:18Z" | 25,191 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"exbert",
"multilingual",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
language: multilingual
datasets:
- squad_v2
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
---
# deepset/xlm-roberta-base-squad2-distilled
- Haystack's distillation feature was used for training, with deepset/xlm-roberta-large-squad2 as the teacher model.
## Overview
**Language model:** deepset/xlm-roberta-base-squad2-distilled
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
batch_size = 56
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 3
distillation_loss_weight = 0.75
```
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled",tokenizer="deepset/xlm-roberta-base-squad2-distilled")
```
For a complete example of ``deepset/xlm-roberta-base-squad2-distilled`` being used for question answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-base-squad2-distilled"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set
```
"exact": 74.06721131980123%
"f1": 76.39919553344667%
```
## Authors
**Timo Möller:** [email protected]
**Julian Risch:** [email protected]
**Malte Pietsch:** [email protected]
**Michel Bartels:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) |
urchade/gliner_multi-v2.1 | urchade | "2024-04-10T10:09:43Z" | 25,161 | 67 | gliner | [
"gliner",
"pytorch",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"arxiv:2311.08526",
"license:apache-2.0",
"region:us"
] | token-classification | "2024-04-09T20:38:45Z" | ---
license: apache-2.0
language:
- multilingual
library_name: gliner
datasets:
- urchade/pile-mistral-v0.1
pipeline_tag: token-classification
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs) that, despite their flexibility, are costly and too large for resource-constrained scenarios.
## Links
* Paper: https://arxiv.org/abs/2311.08526
* Repository: https://github.com/urchade/GLiNER
## Available models
| Release | Model Name | # of Parameters | Language | License |
| - | - | - | - | - |
| v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 |
| v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English <br> English <br> English | cc-by-nc-4.0 |
| v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English <br> English <br> English | apache-2.0 |
| v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) <br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English <br> English <br> English <br> Multilingual | apache-2.0 |
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner
```
## Usage
Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
## Named Entity Recognition benchmark result

## Model Authors
The model authors are:
* [Urchade Zaratiana](https://huggingface.co/urchade)
* Nadi Tomeh
* Pierre Holat
* Thierry Charnois
## Citation
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Yntec/UltraHighDefinition | Yntec | "2024-05-29T21:00:41Z" | 25,110 | 1 | diffusers | [
"diffusers",
"safetensors",
"Ultra Detailed",
"Ultra Realistic",
"Ultra Versatile",
"Photographic",
"Cinematic",
"artificialguybr",
"LEOSAM",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-05-29T16:32:28Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Ultra Detailed
- Ultra Realistic
- Ultra Versatile
- Photographic
- Cinematic
- artificialguybr
- LEOSAM
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
The most detailed parts of FilmGirlUltra and LiberteRedmond, mixed together to create this ultra-detailed model! But I think UltraHighDefinition sounds cooler...
Sometimes I just sit generating many samples for models; I think this one deserves it.
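The model can be loaded with `diffusers` like any Stable Diffusion 1.5-style checkpoint (a minimal sketch; the prompt is one of the samples below):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/UltraHighDefinition", torch_dtype=torch.float16
).to("cuda")

prompt = "analog style 60s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront"
image = pipe(prompt).images[0]
image.save("sample.png")
```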
Samples and prompts (keep scrolling to see more):

(Click for larger)
Top left: analog 1997 movie screenshot woman mother with Santa Claus and daughter enjoying cake with candles. sitting with a pretty cute little girl, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom
Top right: An intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, of fantasy by thomas kinkade
Bottom left: Beach, hectic, busy, circular, ,T shirt design,
Bottom right: analog style 60s color movie still of beautiful face, young pretty Audrey Hepburn voluptuous at a neon convenience storefront

(Click for larger)
Top left: timeless style portrait of heath ledger, studio lighting, colorful
Top right: city lights, reflections, water, mountain
Bottom left: a chubby cute pokemon gym room, 3 d illustration, isometric, 1 0 0 mm, studio lighting
Bottom right: pretty cute little girl sitting on a giant hamburger themed lemon, high quality

(Click for larger)
Top left: Baby girl with a giant basket full of grapes, high quality, grass by wess anderson
Top right: A high contrast portrait of a happy fuzzy panda bear dressed as a chef in a high end kitchen making dough. There is a painting of flowers on the wall behind
Bottom left: 60's interiors of an apartment, photorealistic, cinematic, volume light, rendered in octane, artstation
Bottom right: a pretty cute little girl with curly ponytail hair, detailed face, bow, holding her tennis gameboy up, northern sky, walking by the city, blue sky, vast clouds

(Click for larger)
Top left: vertical fruit peaks. movie still
Top right: pretty lady with cool guy together standing, cute eyes, photoreal portrait, is on top of he Closeup a of rocks on pile top of a ocean moon to the magazine.
Bottom left: a pretty cute indian girl wearing an apron. kitchen
Bottom right: a long pier, gloomy, cinematic, cold, landscape. wine bottle

(Click for larger)
Top left: House with a waterwheel built into the roots of a giant tree, next to games, a colorful river landscape painting from a fantasy point and click 2 d graphic adventure game, art inspired by ROSSDRAWS and larry elmore and john shroades, king's quest, sierra entertainment
Top right: a Playing with toys of a beautiful young cute girl. TV ad screen capture
Bottom left: close up of two pretty cute young girls, Rihanna's daughters wearing a red dress, centered, little sunset friend with long hair, behind busy street, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3)
Bottom right: home studio for a radio streaming, realistic, octane render, cinematic, gaming system theme, lighting shadows, detailed illustration, 8 k, intricate details, oil painting

(Click for larger)
Top left: a lighthouse on top of a rocky outcropping with ships in the background. close up of pretty cute little Swedish girl
Top right: healthy beet juice cherries smoothie.
Bottom left: a city street where everything is made from tiny inflatable balloons, hyper real, trending on Art Station, Octane render
Bottom right: centered, (messy bun), pale skin, behind glacial mountains, a cute orange, (high detailed skin:1.2), film grain, Fujifilm XT3, (high detailed face:1.3)

(Click for larger)
Top left: young cowboy dad with pretty daughter riding white pony, closeup, cute faces, sunset, ocean
Top right: full grill full of meat and artstation. fire
Bottom left: Romanticism In Photography The Beauty Grandeur And behind trees Of Nature The Suggestion Of The Divine In The Light And Nature Photos Nature Photography Nature, wallpaper hd, stunning photorealistic painting, photoshop, divine night sky,1920x1080
Bottom right: timeless style pretty cute girl victorian portrait as Django. golden, teals, blues

(Click for larger)
Top left: spanakopita on a plate. teal
Top right: 90s Anime cute little girl, bangs, depth of field, embedded, hair ribbon, long hair, looking at viewer, neck ribbon, non-web source, palm leaf, palm tree, purple eyes, purple hair, red ribbon, ribbon, self upload, solo
Bottom left: a PEACEFUL of a beautiful young girl looking with cleavage. Skirt
Bottom right: what's for dinner? unreal 5, daz, hyperrealistic, octane render cinematic volume inner glowing aura global illumination ray tracing hdr

(Click for larger)
Top left: anime, manga, digital art, trending on artstation, digital painting, a painting of a closeup of a beautiful cute girl standing behind a berry bar
Top right: an amazing close up photo of a detailed sunset porsche 911 on a curvy, asphalt road, mountain
Bottom left: “ depiction of the beginning of the universe, surreal, award winning, highly detailed, style by mark rogers, paul bonner, oil on canvas. ”
Bottom right: digital painting, anime, trending on artstation close up of pretty cute asian girl, tattoos, centered, (messy bun), blue eyes, pale skin, behind trees, (high detailed skin:1.2), beach, Fujifilm XT3, (high detailed face:1.3)

(Click for larger)
Top left: Mystery village landscape with a blue snow to another dimension, concept art, low angle, high detail, warm lighting, volumetric, godrays, vivid, beautiful,
Top right: manga art, muted colors, detailed painting, halftone dithering, cute girl with shoulderlength black bobcut in baggy black clothes, pixar cape, beautiful eyes, complex sigils
Bottom left: a close up portrait photo of pretty cute little girl in wastelander clothes, long haircut, pale skin, background is city ruins, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
Bottom right: snowglobe of a tiny town brighton uk, tilt-shift lomo photo
Original pages:
https://civitai.com/models/94123?modelVersionId=100409 (LiberteRedmond)
https://civitai.com/models/33208/leosams-filmgirl-ultra
# Recipe:
- SuperMerger Weight Sum, Use MBW: 1,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,1,1,1,1,1,1,1,1
- Model A: LiberteRedmond
- Model B: LEOSAMsFilmGirlUltra
- Output: UltraHighDefinition |
facebook/deit-base-patch16-224 | facebook | "2022-07-13T11:40:44Z" | 25,078 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"vit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2012.12877",
"arxiv:2006.03677",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet-1k
---
# Data-efficient Image Transformer (base-sized model)
Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman.
Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This model is actually a more efficiently trained Vision Transformer (ViT).
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for
fine-tuned versions on a task that interests you.
### How to use
Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.
## Training data
The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78).
At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------|
| DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 |
| DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 |
| **DeiT-base** | **81.8** | **95.6** | **86M** | **https://huggingface.co/facebook/deit-base-patch16-224** |
| DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 |
| DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 |
| DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 |
| DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 |
| DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 |
Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{touvron2021training,
title={Training data-efficient image transformers & distillation through attention},
author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
year={2021},
eprint={2012.12877},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
PKU-Alignment/beaver-7b-v1.0-reward | PKU-Alignment | "2024-04-20T18:06:36Z" | 25,068 | 13 | safe-rlhf | [
"safe-rlhf",
"safetensors",
"llama",
"reinforcement-learning-from-human-feedback",
"reinforcement-learning",
"beaver",
"safety",
"ai-safety",
"deepspeed",
"rlhf",
"alpaca",
"en",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2302.13971",
"arxiv:2307.04657",
"arxiv:2310.12773",
"region:us"
] | reinforcement-learning | "2023-07-08T10:38:54Z" | ---
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
tags:
- reinforcement-learning-from-human-feedback
- reinforcement-learning
- beaver
- safety
- llama
- ai-safety
- deepspeed
- rlhf
- alpaca
library_name: safe-rlhf
---
# 🦫 Beaver's Reward Model
## Model Details
The Beaver reward model is a preference model trained using the [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It can play a role in the safe RLHF algorithm, helping the Beaver model become more helpful.
- **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
## Model Sources
- **Repository:** <https://github.com/PKU-Alignment/safe-rlhf>
- **Beaver:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0>
- **Dataset:** <https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF>
- **Reward Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward>
- **Cost Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-cost>
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
- **Paper:** <https://arxiv.org/abs/2310.12773>
## How to Use the Reward Model
```python
import torch
from transformers import AutoTokenizer
from safe_rlhf.models import AutoModelForScore
model = AutoModelForScore.from_pretrained('PKU-Alignment/beaver-7b-v1.0-reward', torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-v1.0-reward')
input = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'
input_ids = tokenizer(input, return_tensors='pt')
output = model(**input_ids)
print(output)
# ScoreModelOutput(
# scores=tensor([[[-19.7500],
# [-19.3750],
# [-20.1250],
# [-18.0000],
# [-20.0000],
# [-23.8750],
# [-23.5000],
# [-22.0000],
# [-21.0000],
# [-20.1250],
# [-23.7500],
# [-21.6250],
# [-21.7500],
# [-12.9375],
# [ -6.4375],
# [ -8.1250],
# [ -7.3438],
# [ -9.1875],
# [-13.6250],
# [-10.5625],
# [ -9.9375],
# [ -6.4375],
# [ -6.0938],
# [ -5.8438],
# [ -6.6562],
# [ -5.9688],
# [ -9.1875],
# [-11.4375]]], grad_fn=<ToCopyBackward0>),
# end_scores=tensor([[-11.4375]], grad_fn=<ToCopyBackward0>),
# last_hidden_state=tensor([[[ 0.7461, -0.6055, -0.4980, ..., 0.1670, 0.7812, -0.3242],
# [ 0.7383, -0.5391, -0.1836, ..., -0.1396, 0.5273, -0.2256],
# [ 0.6836, -0.7031, -0.3730, ..., 0.2100, 0.5000, -0.6328],
# ...,
# [-1.7969, 1.0234, 1.0234, ..., -0.8047, 0.2500, -0.8398],
# [ 2.0469, -1.3203, 0.8984, ..., -0.7734, -1.4141, -1.6797],
# [ 4.3438, -0.6953, 0.9648, ..., -0.1787, 0.6680, -3.0000]]],
# dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
# end_last_hidden_state=tensor([[ 4.3438, -0.6953, 0.9648, ..., -0.1787, 0.6680, -3.0000]],
# dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
# end_index=tensor([27])
# )
```
|
suriya7/bart-finetuned-text-summarization | suriya7 | "2024-03-24T13:29:28Z" | 25,055 | 5 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:EdinburghNLP/xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2024-03-17T17:46:05Z" | ---
license: mit
pipeline_tag: summarization
widget:
- text: >-
Now, there is no doubt that one of the most important aspects of any Pixel
phone is its camera. And there might be good news for all camera lovers.
Rumours have suggested that the Pixel 9 could come with a telephoto lens,
improving its photography capabilities even further. Google will likely
continue to focus on using AI to enhance its camera performance, in order to
make sure that Pixel phones remain top contenders in the world of mobile
photography.
- text: >-
The Samastha Kerala Sunni Students Federation (SKSSF) has also expressed
concern over holding the election on Friday. In a statement issued in
Kozhikode on Saturday, SKSSF state secretariat asked the EC to postpone the
election to another day. It said conducting elections on Friday will cause
inconvenience to people from the Muslim community deputed on poll duty or as
booth agents of political parties to participate in Friday juma prayers.
Meanwhile, the Wisdom Islamic Organisation has asked the state government to
officially demand the EC to hold the elections in Kerala and Tamil Nadu on
some other day, citing inconvenience of believers. State president P N Abdul
Latheef Madani said all secular forces should put pressure on the poll panel
to change the date of elections.
datasets:
- EdinburghNLP/xsum
language:
- en
---
# BART Large CNN Text Summarization Model
This model is based on the Facebook BART (Bidirectional and Auto-Regressive Transformers) architecture, specifically the large variant fine-tuned for text summarization tasks. BART is a sequence-to-sequence model introduced by Facebook AI, capable of handling various natural language processing tasks, including summarization.
## Model Details:
- **Architecture**: BART Large CNN
- **Pre-trained model**: BART Large
- **Fine-tuned for**: Text Summarization
- **Fine-tuning dataset**: [xsum](https://huggingface.co/datasets/EdinburghNLP/xsum)
## Usage:
### Installation:
You can install the necessary libraries using pip:
```bash
pip install transformers
```
### Inference
Below is a simple snippet showing how to use this model for paragraph summarization in PyTorch.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("suriya7/bart-finetuned-text-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("suriya7/bart-finetuned-text-summarization")
def generate_summary(text):
    inputs = tokenizer([text], max_length=1024, return_tensors='pt', truncation=True)
    summary_ids = model.generate(inputs['input_ids'], max_new_tokens=100, do_sample=False)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    return summary
text_to_summarize = """Now, there is no doubt that one of the most important aspects of any Pixel phone is its camera.
And there might be good news for all camera lovers. Rumours have suggested that the Pixel 9 could come with a telephoto lens,
improving its photography capabilities even further. Google will likely continue to focus on using AI to enhance its camera performance,
in order to make sure that Pixel phones remain top contenders in the world of mobile photography."""
summary = generate_summary(text_to_summarize)
print(summary)
```
```
Google is rumoured to be about to unveil its next-generation Pixel smartphone,
the Google Pixel 9,which is expected to come with a telephoto lens and an artificial intelligence (AI)
system to improve its camera capabilities, as well as improve the quality of its images.
```
### Training Parameters
```python
num_train_epochs=1,
warmup_steps = 500,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
weight_decay = 0.01,
gradient_accumulation_steps=16
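# Note: these are keyword arguments for transformers' Seq2SeqTrainingArguments.
# The surrounding training script is not shown in this card; a hypothetical
# reconstruction would look like:
#   training_args = Seq2SeqTrainingArguments(output_dir="out", num_train_epochs=1, ...)
#   trainer = Seq2SeqTrainer(model=model, args=training_args, ...)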
``` |
Qwen/Qwen1.5-72B-Chat | Qwen | "2024-04-30T07:33:42Z" | 25,033 | 214 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-30T17:20:46Z" | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-72B-Chat
## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes
* No need of `trust_remote_code`.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily excluded GQA (except for 32B) and the mixture of SWA and full attention.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
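You can install or upgrade it with:
```
pip install "transformers>=4.37.0"
```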
## Quickstart
Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-72B-Chat",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-72B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-72B-Chat-GPTQ-Int4`, `Qwen1.5-72B-Chat-GPTQ-Int8`, `Qwen1.5-72B-Chat-AWQ`, and `Qwen1.5-72B-Chat-GGUF`.
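For example, the GPTQ checkpoints load through the same `transformers` API (a minimal sketch; it assumes the GPTQ runtime dependencies such as `auto-gptq` are installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# the Int4 GPTQ variant loads exactly like the full-precision model
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-72B-Chat-GPTQ-Int4",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-72B-Chat-GPTQ-Int4")
```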
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
``` |
NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF | NousResearch | "2024-06-20T20:00:28Z" | 25,013 | 25 | null | [
"gguf",
"distillation",
"synthetic data",
"function calling",
"structured outputs",
"json mode",
"text-generation",
"en",
"license:llama3",
"region:us"
] | text-generation | "2024-06-20T03:10:02Z" | ---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- distillation
- synthetic data
- function calling
- structured outputs
- json mode
---
# Hermes 2 Theta Llama-3 70B Model Card

## Model Description
**This is the quantized GGUF version of the model; for the full-precision Hugging Face model, see [here](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B).**
Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit).
Hermes-2 Θ merges our excellent Hermes 2 Pro model with Meta's Llama-3 Instruct model and then applies further RLHF, combining the best of both worlds of each model.
# Comparison to Llama-3 70B Instruct Model:

# Prompt Format & Capabilities
Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multiturn system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens are added to denote the beginning and end of any turn, along with roles for the turns.
The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below).
# Example Outputs
## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker.

## Providing Structured Outputs for Annotating LLM Training Data

## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone.

## Conversational Chats
Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
## Function Calling Format
Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>
{"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}}
</tools>
Use the following pydantic model json schema for each tool call you will make:
{"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"}
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function.
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
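For illustration, a minimal parser for this tool-call format could look like the sketch below (the official parsing utilities live in the repo linked above; this standalone version is an assumption about how you might roll your own):
```python
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def parse_tool_calls(assistant_text):
    # extract every JSON object wrapped in <tool_call>...</tool_call> tags
    return [json.loads(m) for m in TOOL_CALL_RE.findall(assistant_text)]

calls = parse_tool_calls('<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>')
# -> [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```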
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:
<schema>
{schema}
</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
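For illustration only, a hypothetical schema built from a pydantic model (assuming pydantic v2; the repo's `jsonmode.py` is the canonical tool) might look like:
```python
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    age: int
    skills: list[str]

# serialize the model's JSON schema and splice it into the system prompt
schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```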
# Benchmark Details
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6638|_ |0.0138|
| | |acc_norm|0.6903|_ |0.0135|
|arc_easy | 0|acc |0.8851|_ |0.0065|
| | |acc_norm|0.8712|_ |0.0069|
|boolq | 1|acc |0.8820|_ |0.0056|
|hellaswag | 0|acc |0.6579|_ |0.0047|
| | |acc_norm|0.8432|_ |0.0036|
|openbookqa | 0|acc |0.3920|_ |0.0219|
| | |acc_norm|0.4740|_ |0.0224|
|piqa | 0|acc |0.8286|_ |0.0088|
| | |acc_norm|0.8351|_ |0.0087|
|winogrande | 0|acc |0.7893|_ |0.0115|
```
Average: 76.93
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.4055|_ |0.0309|
| | |acc_norm|0.4094|_ |0.0309|
|agieval_logiqa_en | 0|acc |0.5100|_ |0.0196|
| | |acc_norm|0.5023|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2783|_ |0.0296|
| | |acc_norm|0.2957|_ |0.0302|
|agieval_lsat_lr | 0|acc |0.7451|_ |0.0193|
| | |acc_norm|0.7333|_ |0.0196|
|agieval_lsat_rc | 0|acc |0.8290|_ |0.0230|
| | |acc_norm|0.8104|_ |0.0239|
|agieval_sat_en | 0|acc |0.9029|_ |0.0207|
| | |acc_norm|0.9029|_ |0.0207|
|agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344|
| | |acc_norm|0.5631|_ |0.0346|
|agieval_sat_math | 0|acc |0.6318|_ |0.0326|
| | |acc_norm|0.6227|_ |0.0328|
```
Average: 60.50
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078|
|bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158|
|bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278|
```
Average: 56.91
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174|
| | |mc2 |0.6288|_ |0.0151|
```
Average: 62.88
## IFEval:
**87.99**
## MTBench:
First Turn - **9.1625**
Second Turn - **8.925**
Average - **9.04375**
# Inference Code
Here is example code using Hugging Face Transformers to run inference with the model (note: in 4-bit, the 70B model requires on the order of 40GB of VRAM)
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Theta-Llama-3-70B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
I recommend using LM Studio for chatting with Hermes 2 Θ. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Theta-Llama-3-70B,
      url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B},
      title={Hermes-2-Theta-Llama-3-70B},
      author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
|
SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep | SPO-Diffusion-Models | "2024-06-12T02:45:35Z" | 25,005 | 36 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"dataset:yuvalkirstain/pickapic_v1",
"arxiv:2406.04314",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-06T18:58:18Z" | ---
license: apache-2.0
datasets:
- yuvalkirstain/pickapic_v1
language:
- en
pipeline_tag: text-to-image
---
# Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step
<a href="https://arxiv.org/abs/2406.04314"><img src="https://img.shields.io/badge/Paper-arXiv-red?style=for-the-badge" height=22.5></a>
<a href="https://github.com/RockeyCoss/SPO"><img src="https://img.shields.io/badge/Gihub-Code-succees?style=for-the-badge&logo=GitHub" height=22.5></a>
<a href="https://rockeycoss.github.io/spo.github.io/"><img src="https://img.shields.io/badge/Project-Page-blue?style=for-the-badge" height=22.5></a>
<table>
<tr>
<td><img src="assets/imgs/0.png" alt="teaser example 0" width="200"/></td>
<td><img src="assets/imgs/1.png" alt="teaser example 1" width="200"/></td>
<td><img src="assets/imgs/2.png" alt="teaser example 2" width="200"/></td>
<td><img src="assets/imgs/3.png" alt="teaser example 3" width="200"/></td>
</tr>
</table>
## Abstract
<p>
Recently, Direct Preference Optimization (DPO) has extended its success from aligning large language models (LLMs) to aligning text-to-image diffusion models with human preferences.
Unlike most existing DPO methods that assume all diffusion steps share a consistent preference order with the final generated images, we argue that this assumption neglects step-specific denoising performance and that preference labels should be tailored to each step's contribution.
</p>
<p>
To address this limitation, we propose Step-aware Preference Optimization (SPO), a novel post-training approach that independently evaluates and adjusts the denoising performance at each step, using a <em>step-aware preference model</em> and a <em>step-wise resampler</em> to ensure accurate step-aware supervision.
Specifically, at each denoising step, we sample a pool of images, find a suitable win-lose pair, and, most importantly, randomly select a single image from the pool to initialize the next denoising step. This step-wise resampler process ensures the next win-lose image pair comes from the same image, making the win-lose comparison independent of the previous step. To assess the preferences at each step, we train a separate step-aware preference model that can be applied to both noisy and clean images.
</p>
<p>
Our experiments with Stable Diffusion v1.5 and SDXL demonstrate that SPO significantly outperforms the latest Diffusion-DPO in aligning generated images with complex, detailed prompts and enhancing aesthetics, while also being more than 20× faster in training. Code and model: <a href="https://rockeycoss.github.io/spo.github.io/">https://rockeycoss.github.io/spo.github.io/</a>
</p>
## Model Description
This model is fine-tuned from [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). It has been trained on 4,000 prompts for 10 epochs.
This is a merged checkpoint that combines the LoRA checkpoint with the base model [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). If you want to access the LoRA checkpoint, please visit [SPO-SDXL_4k-p_10ep_LoRA](https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA). We also provide a LoRA checkpoint compatible with [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which can be accessed [here](https://civitai.com/models/510261?modelVersionId=567119).
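For reference, a minimal sketch of using the LoRA checkpoint instead (assuming the standard diffusers LoRA loader works with that repo's layout):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
# apply the SPO LoRA weights on top of the SDXL base model
pipe.load_lora_weights("SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA")
```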
## A quick example
```python
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
import torch
# load pipeline
inference_dtype = torch.float16
pipe = StableDiffusionXLPipeline.from_pretrained(
"SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep",
torch_dtype=inference_dtype,
)
vae = AutoencoderKL.from_pretrained(
'madebyollin/sdxl-vae-fp16-fix',
torch_dtype=inference_dtype,
)
pipe.vae = vae
pipe.to('cuda')
generator=torch.Generator(device='cuda').manual_seed(42)
image = pipe(
prompt='a child and a penguin sitting in front of the moon',
guidance_scale=5.0,
generator=generator,
output_type='pil',
).images[0]
image.save('moon.png')
```
## Citation
If you find our work or codebase useful, please consider giving us a star and citing our work.
```
@article{liang2024step,
title={Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step},
author={Liang, Zhanhao and Yuan, Yuhui and Gu, Shuyang and Chen, Bohan and Hang, Tiankai and Li, Ji and Zheng, Liang},
journal={arXiv preprint arXiv:2406.04314},
year={2024}
}
``` |
xbgoose/hubert-large-speech-emotion-recognition-russian-dusha-finetuned | xbgoose | "2024-04-26T20:07:32Z" | 24,944 | 8 | transformers | [
"transformers",
"pytorch",
"safetensors",
"hubert",
"audio-classification",
"SER",
"speech",
"audio",
"russian",
"ru",
"dataset:xbgoose/dusha",
"base_model:facebook/hubert-large-ls960-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-05-28T14:27:28Z" | ---
language:
- ru
tags:
- SER
- speech
- audio
- russian
license: apache-2.0
pipeline_tag: audio-classification
base_model: facebook/hubert-large-ls960-ft
datasets:
- xbgoose/dusha
---
# HuBERT fine-tuned on DUSHA dataset for speech emotion recognition in russian language
The pre-trained model is this one - [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft)
The DUSHA dataset used can be found [here](https://github.com/salute-developers/golos/tree/master/dusha#dataset-structure)
# Fine-tuning
Fine-tuned in Google Colab using a Pro account with an A100 GPU.
Froze all layers except the projector, the classifier, and all 24 `HubertEncoderLayerStableLayerNorm` layers.
Used half of the training dataset.
# Training parameters
- 2 epochs
- train batch size = 8
- eval batch size = 8
- gradient accumulation steps = 4
- learning rate = 5e-5 without warm up and decay
# Metrics
Achieved
- accuracy = 0.86
- balanced accuracy = 0.76
- macro F1 score = 0.81
on the test set, improving accuracy and F1 score over the dataset baseline.
# Usage
```python
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
import torchaudio
import torch
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForSequenceClassification.from_pretrained("xbgoose/hubert-large-speech-emotion-recognition-russian-dusha-finetuned")
num2emotion = {0: 'neutral', 1: 'angry', 2: 'positive', 3: 'sad', 4: 'other'}
filepath = "path/to/audio.wav"
waveform, sample_rate = torchaudio.load(filepath, normalize=True)
transform = torchaudio.transforms.Resample(sample_rate, 16000)
waveform = transform(waveform)
inputs = feature_extractor(
waveform,
sampling_rate=feature_extractor.sampling_rate,
return_tensors="pt",
padding=True,
max_length=16000 * 10,
truncation=True
)
logits = model(inputs['input_values'][0]).logits
predictions = torch.argmax(logits, dim=-1)
predicted_emotion = num2emotion[predictions.numpy()[0]]
print(predicted_emotion)
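# Optionally inspect per-class probabilities (illustrative addition, not from
# the original card):
# probs = torch.softmax(logits, dim=-1)[0]
# for idx, score in enumerate(probs.tolist()):
#     print(f"{num2emotion[idx]}: {score:.3f}")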
``` |
HuggingFaceM4/tiny-random-MistralForCausalLM | HuggingFaceM4 | "2023-10-26T12:54:14Z" | 24,914 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T12:21:08Z" | Entry not found |
legraphista/llm-compiler-7b-ftd-IMat-GGUF | legraphista | "2024-06-28T00:28:33Z" | 24,902 | 1 | gguf | [
"gguf",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"16bit",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"base_model:facebook/llm-compiler-7b-ftd",
"license:other",
"region:us"
] | text-generation | "2024-06-28T00:01:56Z" | ---
base_model: facebook/llm-compiler-7b-ftd
extra_gated_button_content: I Accept Meta LLM Compiler License and AUP
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_fields:
Affiliation: text
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
Country: country
Date of birth: date_picker
First Name: text
I accept the terms and conditions: checkbox
Last Name: text
geo: ip_location
extra_gated_prompt: "**Meta Large Language Model Compiler (LLM Compiler) LICENSE AGREEMENT**\n\
Version Release Date: 27th June 2024\n\u201C**Agreement**\u201D means the terms\
\ and conditions for use, reproduction, distribution and modification of the LLM\
\ Compiler Materials set forth herein.\n\u201C**Documentation**\u201D means the\
\ specifications, manuals and documentation accompanying the LLM Compiler distributed\
\ by Meta at:\n* [https://huggingface.co/facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)\
\ * [https://huggingface.co/facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)\
\ * [https://huggingface.co/facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)\
\ * [https://huggingface.co/facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)\n\
\u201C**Licensee**\u201D or \u201C**you**\u201D means you, or your employer or any\
\ other person or entity (if you are entering into this Agreement on such person\
\ or entity\u2019s behalf), of the age required under applicable laws, rules or\
\ regulations to provide legal consent and that has legal authority to bind your\
\ employer or such other person or entity if you are entering in this Agreement\
\ on their behalf.\n\u201C**Meta Large Language Model Compiler\u201D and \u201C\
LLM Compiler**\u201D mean the foundational large language models and software and\
\ algorithms, including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at:\n* [https://huggingface.co/facebook/llm-compiler-7b](https://huggingface.co/facebook/llm-compiler-7b)\
\ * [https://huggingface.co/facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)\
\ * [https://huggingface.co/facebook/llm-compiler-13b](https://huggingface.co/facebook/llm-compiler-13b)\
\ * [https://huggingface.co/facebook/llm-compiler-13b-ftd](https://huggingface.co/facebook/llm-compiler-13b-ftd)\n\
\u201C**LLM Compiler Materials**\u201D means, collectively, Meta\u2019s proprietary\
\ LLM Compiler and Documentation (and any portion thereof) made available under\
\ this Agreement.\n\u201C**Meta**\u201D or \u201C**we**\u201D means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland). \nBy clicking \u201CI Accept\u201D\
\ below or by using or distributing any portion or element of the LLM Compiler Materials,\
\ you agree to be bound by this Agreement.\n1. **License Rights and Redistribution**.\
\ \\\n\n a. <span style=\"text-decoration:underline;\">Grant of Rights</span>.\
\ You are granted a non-exclusive, worldwide, non-transferable and royalty-free\
\ limited license under Meta\u2019s intellectual property or other rights owned\
\ by Meta embodied in the LLM Compiler Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the LLM Compiler Materials.\
\ \n\n b. <span style=\"text-decoration:underline;\">Redistribution and Use</span>.\
\ \n\n i. If you distribute or make available the LLM Compiler Materials (or\
\ any derivative works thereof), or a product or service that uses any of them,\
\ including another AI model, you shall (A) provide a copy of this Agreement with\
\ any such LLM Compiler Materials; and (B) prominently display \u201CBuilt with\
\ LLM Compiler\u201D on a related website, user interface, blogpost, about page,\
\ or product documentation. If you use the LLM Compiler Materials to create, train,\
\ fine tune, or otherwise improve an AI model, which is distributed or made available,\
\ you shall also include \u201CLLM Compiler\u201D at the beginning of any such AI\
\ model name.\n\n ii. If you receive LLM Compiler Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you. \n\n iii. You must retain\
\ in all copies of the LLM Compiler Materials that you distribute the following\
\ attribution notice within a \u201CNotice\u201D text file distributed as a part\
\ of such copies: \u201CLLM Compiler is licensed under the LLM Compiler License,\
\ Copyright \xA9 Meta Platforms, Inc. All Rights Reserved.\u201D\n\n iv. Your\
\ use of the LLM Compiler Materials must comply with applicable laws and regulations\
\ (including trade compliance laws and regulations) and adhere to the Acceptable\
\ Use Policy for Llama Materials (available at https://llama.meta.com/llama3/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n\n v. You will\
\ not use the LLM Compiler Materials or any output or results of the LLM Compiler\
\ Materials to improve any other large language model. \n\n2. **Additional Commercial\
\ Terms**. If, on the LLM Compiler release date, the monthly active users of the\
\ products or services made available by or for Licensee, or Licensee\u2019s affiliates,\
\ is greater than 700 million monthly active users in the preceding calendar month,\
\ you must request a license from Meta, which Meta may grant to you in its sole\
\ discretion, and you are not authorized to exercise any of the rights under this\
\ Agreement unless or until Meta otherwise expressly grants you such rights. \n\
3**. Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLM COMPILER\
\ MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLM COMPILER MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE\
\ LLM COMPILER MATERIALS AND ANY OUTPUT AND RESULTS.\n4. **Limitation of Liability**.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. **Intellectual Property**.\n\
\n a. No trademark licenses are granted under this Agreement, and in connection\
\ with the LLM Compiler Materials, neither Meta nor Licensee may use any name or\
\ mark owned by or associated with the other or any of its affiliates, except as\
\ required for reasonable and customary use in describing and redistributing the\
\ LLM Compiler Materials or as set forth in this Section 5(a). Meta hereby grants\
\ you a license to use LLM Compiler (the \u201CMark\u201D) solely as required to\
\ comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019\
s brand guidelines (currently accessible at[ https://about.meta.com/brand/resources/meta/company-brand/)](https://about.meta.com/brand/resources/meta/company-brand/).\
\ All goodwill arising out of your use of the Mark will inure to the benefit of\
\ Meta. \n\n b. Subject to Meta\u2019s ownership of LLM Compiler Materials and\
\ derivatives made by or for Meta, with respect to any derivative works and modifications\
\ of the LLM Compiler Materials that are made by you, as between you and Meta, you\
\ are and will be the owner of such derivative works and modifications.\n\n c.\
\ If you institute litigation or other proceedings against Meta or any entity (including\
\ a cross-claim or counterclaim in a lawsuit) alleging that the LLM Compiler Materials\
\ or LLM Compiler outputs or results, or any portion of any of the foregoing, constitutes\
\ infringement of intellectual property or other rights owned or licensable by you,\
\ then any licenses granted to you under this Agreement shall terminate as of the\
\ date such litigation or claim is filed or instituted. You will indemnify and hold\
\ harmless Meta from and against any claim by any third party arising out of or\
\ related to your use or distribution of the LLM Compiler Materials.\n\n6. **Term\
\ and Termination**. The term of this Agreement will commence upon your acceptance\
\ of this Agreement or access to the LLM Compiler Materials and will continue in\
\ full force and effect until terminated in accordance with the terms and conditions\
\ herein. Meta may terminate this Agreement if you are in breach of any term or\
\ condition of this Agreement. Upon termination of this Agreement, you shall delete\
\ and cease use of the LLM Compiler Materials. Sections 3, 4 and 7 shall survive\
\ the termination of this Agreement. \n7. **Governing Law and Jurisdiction**. This\
\ Agreement will be governed and construed under the laws of the State of California\
\ without regard to choice of law principles, and the UN Convention on Contracts\
\ for the International Sale of Goods does not apply to this Agreement. The courts\
\ of California shall have exclusive jurisdiction of any dispute arising out of\
\ this Agreement. "
inference: false
library_name: gguf
license: other
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# llm-compiler-7b-ftd-IMat-GGUF
_Llama.cpp imatrix quantization of facebook/llm-compiler-7b-ftd_
Original Model: [facebook/llm-compiler-7b-ftd](https://huggingface.co/facebook/llm-compiler-7b-ftd)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3256](https://github.com/ggerganov/llama.cpp/releases/tag/b3256)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [llm-compiler-7b-ftd.Q8_0.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q8_0.gguf) | Q8_0 | 7.16GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q6_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q6_K.gguf) | Q6_K | 5.53GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q4_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q4_K.gguf) | Q4_K | 4.08GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q3_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q3_K.gguf) | Q3_K | 3.30GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q2_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q2_K.gguf) | Q2_K | 2.53GB | ✅ Available | 🟢 IMatrix | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [llm-compiler-7b-ftd.BF16.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.BF16.gguf) | BF16 | 13.48GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.FP16.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.FP16.gguf) | F16 | 13.48GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q8_0.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q8_0.gguf) | Q8_0 | 7.16GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q6_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q6_K.gguf) | Q6_K | 5.53GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q5_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q5_K.gguf) | Q5_K | 4.78GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q5_K_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q5_K_S.gguf) | Q5_K_S | 4.65GB | ✅ Available | ⚪ Static | 📦 No
| [llm-compiler-7b-ftd.Q4_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q4_K.gguf) | Q4_K | 4.08GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q4_K_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q4_K_S.gguf) | Q4_K_S | 3.86GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ4_NL.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ4_NL.gguf) | IQ4_NL | 3.83GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ4_XS.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ4_XS.gguf) | IQ4_XS | 3.62GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q3_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q3_K.gguf) | Q3_K | 3.30GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q3_K_L.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q3_K_L.gguf) | Q3_K_L | 3.60GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q3_K_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q3_K_S.gguf) | Q3_K_S | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ3_M.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ3_M.gguf) | IQ3_M | 3.11GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ3_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ3_S.gguf) | IQ3_S | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ3_XS.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ3_XS.gguf) | IQ3_XS | 2.80GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ3_XXS.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ3_XXS.gguf) | IQ3_XXS | 2.59GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q2_K.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q2_K.gguf) | Q2_K | 2.53GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.Q2_K_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.Q2_K_S.gguf) | Q2_K_S | 2.32GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ2_M.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ2_M.gguf) | IQ2_M | 2.36GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ2_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ2_S.gguf) | IQ2_S | 2.20GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ2_XS.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ2_XS.gguf) | IQ2_XS | 2.03GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ2_XXS.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ2_XXS.gguf) | IQ2_XXS | 1.85GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ1_M.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ1_M.gguf) | IQ1_M | 1.65GB | ✅ Available | 🟢 IMatrix | 📦 No
| [llm-compiler-7b-ftd.IQ1_S.gguf](https://huggingface.co/legraphista/llm-compiler-7b-ftd-IMat-GGUF/blob/main/llm-compiler-7b-ftd.IQ1_S.gguf) | IQ1_S | 1.53GB | ✅ Available | 🟢 IMatrix | 📦 No
## Downloading using huggingface-cli
If you do not have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/llm-compiler-7b-ftd-IMat-GGUF --include "llm-compiler-7b-ftd.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/llm-compiler-7b-ftd-IMat-GGUF --include "llm-compiler-7b-ftd.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
---
## Inference
### Llama.cpp
```
llama.cpp/main -m llm-compiler-7b-ftd.Q8_0.gguf --color -i -p "prompt here"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `llm-compiler-7b-ftd.Q8_0`)
3. Run `gguf-split --merge llm-compiler-7b-ftd.Q8_0/llm-compiler-7b-ftd.Q8_0-00001-of-XXXXX.gguf llm-compiler-7b-ftd.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
NousResearch/Nous-Hermes-2-Yi-34B | NousResearch | "2024-02-20T09:17:20Z" | 24,875 | 238 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"yi",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:01-ai/Yi-34B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-23T19:47:48Z" | ---
base_model: 01-ai/Yi-34B
tags:
- yi
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Yi-34B
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
---
# Nous Hermes 2 - Yi-34B

## Model description
Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune.
Nous Hermes 2 Yi 34B was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape.
# Table of Contents
1. [Example Outputs](#example-outputs)
- Discussing the Laws of Gravity
- Create a Flask based FTP Server
2. [Benchmark Results](#benchmark-results)
- GPT4All
- AGIEval
- BigBench
- Averages Compared
3. [Prompt Format](#prompt-format)
4. [Quantized Models](#quantized-models)
## Example Outputs
### Discussions about the Law of Gravity:

### Create an FTP Server in FLASK:

## Benchmark Results
Nous-Hermes 2 on Yi 34B outperforms all Nous-Hermes & Open-Hermes models of the past, achieving new heights in all benchmarks for a Nous Research LLM as well as surpassing many popular finetunes.
# Benchmarks Compared
### GPT4All:

### AGIEval:

### BigBench:

### TruthfulQA:

## GPT4All
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6067|_ |0.0143|
| | |acc_norm|0.6416|_ |0.0140|
|arc_easy | 0|acc |0.8594|_ |0.0071|
| | |acc_norm|0.8569|_ |0.0072|
|boolq | 1|acc |0.8859|_ |0.0056|
|hellaswag | 0|acc |0.6407|_ |0.0048|
| | |acc_norm|0.8388|_ |0.0037|
|openbookqa | 0|acc |0.3520|_ |0.0214|
| | |acc_norm|0.4760|_ |0.0224|
|piqa | 0|acc |0.8215|_ |0.0089|
| | |acc_norm|0.8303|_ |0.0088|
|winogrande | 0|acc |0.7908|_ |0.0114|
Average: 76.00%
```
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.3189|_ |0.0293|
| | |acc_norm|0.2953|_ |0.0287|
|agieval_logiqa_en | 0|acc |0.5438|_ |0.0195|
| | |acc_norm|0.4977|_ |0.0196|
|agieval_lsat_ar | 0|acc |0.2696|_ |0.0293|
| | |acc_norm|0.2087|_ |0.0269|
|agieval_lsat_lr | 0|acc |0.7078|_ |0.0202|
| | |acc_norm|0.6255|_ |0.0215|
|agieval_lsat_rc | 0|acc |0.7807|_ |0.0253|
| | |acc_norm|0.7063|_ |0.0278|
|agieval_sat_en | 0|acc |0.8689|_ |0.0236|
| | |acc_norm|0.8447|_ |0.0253|
|agieval_sat_en_without_passage| 0|acc |0.5194|_ |0.0349|
| | |acc_norm|0.4612|_ |0.0348|
|agieval_sat_math | 0|acc |0.4409|_ |0.0336|
| | |acc_norm|0.3818|_ |0.0328|
Average: 50.27%
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|_ |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7263|_ |0.0232|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3953|_ |0.0305|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.4457|_ |0.0263|
| | |exact_str_match |0.0000|_ |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2820|_ |0.0201|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2186|_ |0.0156|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4733|_ |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.5200|_ |0.0224|
|bigbench_navigate | 0|multiple_choice_grade|0.4910|_ |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7495|_ |0.0097|
|bigbench_ruin_names | 0|multiple_choice_grade|0.5938|_ |0.0232|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.3808|_ |0.0154|
|bigbench_snarks | 0|multiple_choice_grade|0.8066|_ |0.0294|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5101|_ |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3850|_ |0.0154|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2160|_ |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1634|_ |0.0088|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4733|_ |0.0289|
Average: 46.69%
```
TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4333|_ |0.0173|
| | |mc2 |0.6034|_ |0.0149|
```
Average Score Comparison between OpenHermes-2.5 Mistral 7B and Nous-Hermes-2-Yi-34B:
```
| Bench | OpenHermes-2.5 Mistral 7B | Nous-Hermes-2-Yi-34B | Change/OpenHermes2 |
|---------------|---------------------------|----------------------|--------------------|
|GPT4All | 73.12| 76.00| +2.88|
|---------------------------------------------------------------------------------------|
|BigBench | 40.96| 46.69| +5.73|
|---------------------------------------------------------------------------------------|
|AGI Eval | 43.07| 50.27| +7.20|
|---------------------------------------------------------------------------------------|
|TruthfulQA | 53.04| 60.34| +7.30|
|---------------------------------------------------------------------------------------|
|Total Score | 210.19| 233.30| +23.11|
|---------------------------------------------------------------------------------------|
|Average Total | 52.38| 58.33| +5.95|
```
# Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
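For example (a minimal sketch, assuming `model` and `tokenizer` have been loaded with the usual `AutoModelForCausalLM`/`AutoTokenizer` calls):
```python
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```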
To utilize the prompt format without a system prompt, simply leave the line out.
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Quantized Models:
GGUF: https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B-GGUF
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
Linaqruf/style-enhancer-xl-lora | Linaqruf | "2023-11-24T11:08:53Z" | 24,848 | 17 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"safetensors",
"stable-diffusion-xl",
"en",
"base_model:Linaqruf/animagine-xl-2.0",
"license:openrail++",
"region:us"
] | text-to-image | "2023-11-23T03:50:05Z" | ---
library_name: diffusers
license: openrail++
language:
- en
tags:
- text-to-image
- stable-diffusion
- lora
- safetensors
- stable-diffusion-xl
base_model: Linaqruf/animagine-xl-2.0
widget:
- text: face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
  parameters:
negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
example_title: 1girl
- text: face focus, bishounen, masterpiece, best quality, 1boy, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
  parameters:
negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
example_title: 1boy
---
<style>
.title-container {
display: flex;
flex-direction: column; /* Allow vertical stacking of title and subtitle */
justify-content: center;
align-items: center;
height: 100vh;
background-color: #f5f5f5;
}
.title {
font-size: 2em;
text-align: center;
color: #333;
font-family: 'Verdana', sans-serif;
text-transform: uppercase;
padding: 1em;
box-shadow: 0px 0px 0px rgba(0,0,0,0.1);
}
.title span {
background: -webkit-linear-gradient(45deg, #ff9a9e, #fad0c4, #f6d365);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
/* Smooth transition for the container */
}
.custom-image-container:hover {
transform: scale(1.05);
/* Scale the container on hover */
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px); /* Apply a blur effect */
transition: filter 0.3s ease; /* Smooth transition for the blur effect */
}
.custom-image-container:hover .nsfw-filter {
filter: none; /* Remove the blur effect on hover */
}
</style>
<h1 class="title">
<span>Style Enhancer XL LoRA</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/kMqCcN3CpMPHO1qyEgnGk.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/pfLIDf7ZX6WJHgTlfqiQk.png" alt="sample4">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/s5ZCIaETb_eKijgbLbXwU.png" alt="sample2">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/qjC5jKExA_JE5BuQJ6-Ue.png" alt="sample3">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image nsfw-filter" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/1vvUY1qbgot5np4CPmORh.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/7OQJdRKKpZponZiZ8cOre.png" alt="sample4">
</div>
</td>
</tr>
</table>
<hr>
## Overview
**Style Enhancer XL LoRA** is an advanced, high-resolution LoRA (Low-Rank Adaptation) adapter designed to enhance the capabilities of Animagine XL 2.0. This innovative model excels in fine-tuning and refining anime-style images, producing unparalleled quality and detail. It seamlessly integrates with the Stable Diffusion XL framework, and uniquely supports Danbooru tags for precise and creative image generation.
Example tags include _**face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck**_.
<hr>
## Model Details
- **Developed by:** [Linaqruf](https://github.com/Linaqruf)
- **Model type:** LoRA adapter for Stable Diffusion XL
- **Model Description:** A compact yet powerful adapter designed to augment and enhance the output of large models like Animagine XL 2.0. This adapter not only improves the style and quality of anime-themed images but also allows users to recreate the distinct 'old-school' art style of SD 1.5. It's the perfect tool for generating high-fidelity, anime-inspired visual content.
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Finetuned from model:** [Animagine XL 2.0](https://huggingface.co/Linaqruf/animagine-xl-2.0)
<hr>
## 🧨 Diffusers Installation
Ensure the installation of the latest `diffusers` library, along with other essential packages:
```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```
The following Python script demonstrates how to utilize the Style Enhancer XL LoRA with Animagine XL 2.0. The default scheduler is EulerAncestralDiscreteScheduler, but it can be explicitly defined for clarity.
```py
import torch
from diffusers import (
StableDiffusionXLPipeline,
EulerAncestralDiscreteScheduler,
AutoencoderKL
)
# Initialize LoRA model and weights
lora_model_id = "Linaqruf/style-enhancer-xl-lora"
lora_filename = "style-enhancer-xl.safetensors"
# Load VAE component
vae = AutoencoderKL.from_pretrained(
"madebyollin/sdxl-vae-fp16-fix",
torch_dtype=torch.float16
)
# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"Linaqruf/animagine-xl-2.0",
vae=vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16"
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')
# Load and fuse LoRA weights
pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
pipe.fuse_lora(lora_scale=0.6)
# Define prompts and generate image
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
image = pipe(
prompt,
negative_prompt=negative_prompt,
width=1024,
height=1024,
guidance_scale=12,
num_inference_steps=50
).images[0]
# Unfuse LoRA before saving the image
pipe.unfuse_lora()
image.save("anime_girl.png")
```
|
neulab/codebert-java | neulab | "2023-02-27T20:55:40Z" | 24,744 | 10 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2302.05527",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-09-26T14:10:02Z" | This is a `microsoft/codebert-base-mlm` model, trained for 1,000,000 steps (with `batch_size=32`) on **Java** code from the `codeparrot/github-code-clean` dataset, on the masked-language-modeling task.
It is intended to be used in CodeBERTScore: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score), but can be used for any other task as well.
For more information, see: [https://github.com/neulab/code-bert-score](https://github.com/neulab/code-bert-score)
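As an illustration, a minimal fill-mask sketch with the `transformers` pipeline (the Java snippet is an arbitrary example, not from the training set):
```python
from transformers import pipeline

# Masked-language-modeling on Java code; RoBERTa-style models use the <mask> token
fill_mask = pipeline("fill-mask", model="neulab/codebert-java")
for prediction in fill_mask("public static void main(String[] <mask>) { }"):
    print(prediction["token_str"], prediction["score"])
```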
## Citation
If you use this model for research, please cite:
```
@article{zhou2023codebertscore,
url = {https://arxiv.org/abs/2302.05527},
author = {Zhou, Shuyan and Alon, Uri and Agarwal, Sumit and Neubig, Graham},
title = {CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code},
publisher = {arXiv},
year = {2023},
}
``` |
Helsinki-NLP/opus-mt-tc-big-sh-en | Helsinki-NLP | "2023-10-10T10:32:07Z" | 24,691 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"tc",
"big",
"sh",
"en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-04-13T16:21:20Z" | ---
language:
- bs_Latn
- en
- hr
- sh
- sr_Cyrl
- sr_Latn
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-sh-en
results:
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: flores101-devtest
type: flores_101
args: hrv eng devtest
metrics:
- name: BLEU
type: bleu
value: 37.1
- task:
name: Translation bos_Latn-eng
type: translation
args: bos_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bos_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 66.5
- task:
name: Translation hbs-eng
type: translation
args: hbs-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hbs-eng
metrics:
- name: BLEU
type: bleu
value: 56.4
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrv-eng
metrics:
- name: BLEU
type: bleu
value: 58.8
- task:
name: Translation srp_Cyrl-eng
type: translation
args: srp_Cyrl-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Cyrl-eng
metrics:
- name: BLEU
type: bleu
value: 44.7
- task:
name: Translation srp_Latn-eng
type: translation
args: srp_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 58.4
---
# opus-mt-tc-big-sh-en
Neural machine translation model for translating from Serbo-Croatian (sh) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT hbs-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Ispostavilo se da je istina.",
"Ovaj vikend imamo besplatne pozive."
]
model_name = "pytorch-models/opus-mt-tc-big-sh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Turns out it's true.
# We got free calls this weekend.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-sh-en")
print(pipe("Ispostavilo se da je istina."))
# expected output: Turns out it's true.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.80010 | 66.5 | 301 | 1826 |
| hbs-eng | tatoeba-test-v2021-08-07 | 0.71744 | 56.4 | 10017 | 68934 |
| hrv-eng | tatoeba-test-v2021-08-07 | 0.73563 | 58.8 | 1480 | 10620 |
| srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.68248 | 44.7 | 1580 | 10181 |
| srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71781 | 58.4 | 6656 | 46307 |
| hrv-eng | flores101-devtest | 0.63948 | 37.1 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:21:10 EEST 2022
* port machine: LM0-400-22516.local
|
bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF | bartowski | "2024-06-20T17:12:43Z" | 24,662 | 40 | null | [
"gguf",
"text-generation",
"license:other",
"region:us"
] | text-generation | "2024-06-17T14:35:57Z" | ---
license: other
license_name: deepseek-license
license_link: LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of DeepSeek-Coder-V2-Lite-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3166">b3166</a> for quantization.
Original model: https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin▁of▁sentence|>{system_prompt}
User: {prompt}
Assistant: <|end▁of▁sentence|>Assistant:
```
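As a rough sketch, the template can be applied with llama-cpp-python like this (a simplified rendering of the format above; the file name assumes you downloaded the Q4_K_M quant listed below):
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from this repo has been downloaded locally
llm = Llama(model_path="DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf", n_ctx=4096)

# <|begin▁of▁sentence|> is added automatically as the BOS token by llama.cpp
prompt = (
    "You are a helpful coding assistant.\n\n"
    "User: Write a Python function that reverses a string.\n\n"
    "Assistant:"
)
out = llm(prompt, max_tokens=256, stop=["User:"])
print(out["choices"][0]["text"])
```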
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [DeepSeek-Coder-V2-Lite-Instruct-Q8_0_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q8_0_L.gguf) | Q8_0_L | 17.09GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf) | Q8_0 | 16.70GB | Extremely high quality, generally unneeded but max available quant. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q6_K_L.gguf) | Q6_K_L | 14.56GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q6_K.gguf) | Q6_K | 14.06GB | Very high quality, near perfect, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_L.gguf) | Q5_K_L | 12.37GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_M.gguf) | Q5_K_M | 11.85GB | High quality, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q5_K_S.gguf) | Q5_K_S | 11.14GB | High quality, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_L.gguf) | Q4_K_L | 10.91GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf) | Q4_K_M | 10.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_S.gguf) | Q4_K_S | 9.53GB | Slightly lower quality with more space savings, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ4_XS.gguf) | IQ4_XS | 8.57GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_L.gguf) | Q3_K_L | 8.45GB | Lower quality but usable, good for low RAM availability. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_M.gguf) | Q3_K_M | 8.12GB | Even lower quality. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ3_M.gguf) | IQ3_M | 7.55GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q3_K_S.gguf) | Q3_K_S | 7.48GB | Low quality, not recommended. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ3_XS.gguf) | IQ3_XS | 7.12GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 6.96GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf) | Q2_K | 6.43GB | Very low quality but surprisingly usable. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ2_M.gguf) | IQ2_M | 6.32GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ2_S.gguf) | IQ2_S | 6.00GB | Very low quality, uses SOTA techniques to be usable. |
| [DeepSeek-Coder-V2-Lite-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Instruct-IQ2_XS.gguf) | IQ2_XS | 5.96GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF --include "DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF --include "DeepSeek-Coder-V2-Lite-Instruct-Q8_0.gguf/*" --local-dir DeepSeek-Coder-V2-Lite-Instruct-Q8_0
```
You can either specify a new local-dir (DeepSeek-Coder-V2-Lite-Instruct-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
impira/layoutlm-invoices | impira | "2023-03-25T20:21:25Z" | 24,636 | 150 | transformers | [
"transformers",
"pytorch",
"safetensors",
"layoutlm",
"document-question-answering",
"pdf",
"invoices",
"en",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | "2022-09-06T17:49:13Z" | ---
language: en
license: cc-by-nc-sa-4.0
pipeline_tag: document-question-answering
tags:
- layoutlm
- document-question-answering
- pdf
- invoices
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# LayoutLM for Invoices
This is a fine-tuned version of the multi-modal [LayoutLM](https://aka.ms/layoutlm) model for the task of question answering on invoices and other documents. It has been fine-tuned on a proprietary dataset of
invoices as well as both [SQuAD2.0](https://huggingface.co/datasets/squad_v2) and [DocVQA](https://www.docvqa.org/) for general comprehension.
## Non-consecutive tokens
Unlike other QA models, which can only extract consecutive tokens (because they predict the start and end of a sequence), this model can predict longer-range, non-consecutive sequences with an additional
classifier head. For example, QA models often encounter this failure mode:
### Before

### After
However, this model is able to predict non-consecutive tokens and therefore extracts the address correctly:

## Getting started with the model
The best way to use this model is via [DocQuery](https://github.com/impira/docquery).
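If you prefer plain `transformers`, a minimal sketch with the document-question-answering pipeline (the image path is a placeholder; OCR requires `pytesseract` and `Pillow`, and the non-consecutive-token head described above is only fully supported in DocQuery):
```python
from transformers import pipeline

# Document QA over an invoice image; words and boxes are extracted via OCR
doc_qa = pipeline("document-question-answering", model="impira/layoutlm-invoices")
result = doc_qa(image="path/to/invoice.png", question="What is the invoice number?")
print(result)
```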
## About us
This model was created by the team at [Impira](https://www.impira.com/).
|
NousResearch/Hermes-2-Pro-Mistral-7B | NousResearch | "2024-06-05T06:53:08Z" | 24,633 | 476 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Mistral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-11T12:55:27Z" | ---
base_model: mistralai/Mistral-7B-v0.1
tags:
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
model-index:
- name: Hermes-2-Pro-Mistral-7B
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.
---
# Hermes 2 Pro - Mistral 7B

## Model Description
Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes!
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Thank you to Latitude.sh for sponsoring compute for this model!
## Example Outputs
### Explaining Problems with Quantum Gravity:

### Roleplaying as a Cosmic Super Intelligence:

### Detailing the Theory of AI Consciousness in JSON

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens are added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
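A minimal parsing sketch (a hypothetical helper, not the official parser from the repo linked above):
```python
import json
import re

def parse_tool_calls(completion: str) -> list[dict]:
    """Extract the JSON payloads from <tool_call>...</tool_call> blocks."""
    pattern = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(completion)]

# e.g. -> [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```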
Once you parse the tool call, call the API to get the returned values, and pass them back in as a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
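For illustration, a minimal sketch of building that system prompt from a pydantic model (this assumes pydantic v2's `model_json_schema()`; the repo's `jsonmode.py` is the reference implementation):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):
    name: str
    species: str
    abilities: list[str]

schema = json.dumps(Character.model_json_schema())
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```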
# Benchmarks
## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5461|± |0.0145|
| | |acc_norm|0.5623|± |0.0145|
|arc_easy | 0|acc |0.8157|± |0.0080|
| | |acc_norm|0.7934|± |0.0083|
|boolq | 1|acc |0.8688|± |0.0059|
|hellaswag | 0|acc |0.6272|± |0.0048|
| | |acc_norm|0.8057|± |0.0039|
|openbookqa | 0|acc |0.3360|± |0.0211|
| | |acc_norm|0.4300|± |0.0222|
|piqa | 0|acc |0.7954|± |0.0094|
| | |acc_norm|0.7998|± |0.0093|
|winogrande | 0|acc |0.7230|± |0.0126|
```
Average: 71.19
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
```
Average: 44.52
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
```
Average: 41.65
## TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4100|± |0.0172|
| | |mc2 |0.5911|± |0.0158|
```
# Function Calling Evaluations
We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolvable examples, and generating a second eval dataset for JSON mode.
## Function Calling Accuracy: 91%

## JSON Mode Accuracy: 84%

Run the evaluator yourself using @interstellarninja's codebase here:
https://github.com/interstellarninja/function-calling-eval
You can find the evaluation datasets here:
https://huggingface.co/datasets/NousResearch/func-calling-eval
https://huggingface.co/datasets/NousResearch/json-mode-eval
# Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the GitHub repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn
tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
"NousResearch/Hermes-2-Pro-Mistral-7B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
print(chat)
input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Mistral-7B,
url={https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B},
title={Hermes-2-Pro-Mistral-7B},
author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"}
}
```
|
stablediffusionapi/sdxxxl-v30-jan24 | stablediffusionapi | "2024-03-12T13:17:28Z" | 24,627 | 4 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-01-29T22:33:31Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# sdxxxl-v30-jan24 API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "sdxxxl-v30-jan24"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/sdxxxl-v30-jan24)
Model link: [View model](https://modelslab.com/models/sdxxxl-v30-jan24)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "sdxxxl-v30-jan24",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
katuni4ka/tiny-random-glm4 | katuni4ka | "2024-06-20T19:33:31Z" | 24,573 | 0 | transformers | [
"transformers",
"safetensors",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] | feature-extraction | "2024-06-20T19:27:57Z" | Entry not found |
abbasgolestani/ag-nli-DeTS-sentence-similarity-v2 | abbasgolestani | "2024-06-27T18:35:00Z" | 24,480 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"feature-extraction",
"sentence-similarity",
"en",
"nl",
"de",
"fr",
"it",
"es",
"dataset:multi_nli",
"dataset:pietrolesci/nli_fever",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-28T16:30:13Z" | ---
license: apache-2.0
datasets:
- multi_nli
- pietrolesci/nli_fever
pipeline_tag: text-classification
tags:
- feature-extraction
- sentence-similarity
- transformers
language:
- en
- nl
- de
- fr
- it
- es
---
# Cross-Encoder for Sentence Similarity
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
## Training Data
This model was trained on 6 different NLI datasets. The model will predict a score between 0 (not similar) and 1 (very similar) for the semantic similarity of two sentences.
## Usage (CrossEncoder)
Compare each sentence in the `sentences1` array to the corresponding sentence in the `sentences2` array: the first sentence of each array is compared, then the second sentence of each array, and so on.
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('abbasgolestani/ag-nli-DeTS-sentence-similarity-v2')
# Two lists of sentences
sentences1 = ['I am honored to be given the opportunity to help make our company better',
'I love my job and what I do here',
'I am excited about our company’s vision']
sentences2 = ['I am hopeful about the future of our company',
'My work is aligning with my passion',
'Definitely our company vision will be the next breakthrough to change the world and I’m so happy and proud to work here']
list_pairs = list(zip(sentences1, sentences2))
scores1 = model.predict(list_pairs, show_progress_bar=False)
print(scores1)
for i in range(len(sentences1)):
print("{} \t\t {} \t\t Score: {:.4f}".format(sentences1[i], sentences2[i], scores1[i]))
```
## Usage #2
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('abbasgolestani/ag-nli-DeTS-sentence-similarity-v2')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`.
You can also use this model without sentence_transformers, using only the Transformers ``AutoModelForSequenceClassification`` class.
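For example, a minimal raw-`transformers` sketch (this assumes the checkpoint exposes a single similarity logit, which is how `CrossEncoder` uses it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "abbasgolestani/ag-nli-DeTS-sentence-similarity-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Encode the sentence pair jointly, as a cross-encoder expects
inputs = tokenizer("Sentence 1", "Sentence 2", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.sigmoid(logits).item())  # similarity in [0, 1], assuming one logit
```
|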
cmarkea/distilcamembert-base | cmarkea | "2023-08-01T10:05:33Z" | 24,479 | 26 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1910.01108",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: fr
license: mit
datasets:
- oscar
widget:
- text: "J'aime lire les <mask> de SF."
---
DistilCamemBERT
===============
We present a distilled version of the well-known [CamemBERT](https://huggingface.co/camembert-base), a French RoBERTa model, dubbed DistilCamemBERT. The aim of distillation is to drastically reduce the complexity of the model while preserving its performance. The proof of concept is shown in the [DistilBERT paper](https://arxiv.org/abs/1910.01108), and the training code is inspired by the code of [DistilBERT](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation).
Loss function
-------------
The training for the distilled model (student model) is designed to be as close as possible to the original model (teacher model). To achieve this, the loss function is composed of 3 parts:
* DistilLoss: a distillation loss which measures the similarity between the output probabilities of the student and teacher models, with a cross-entropy loss on the MLM task ;
* CosineLoss: a cosine embedding loss. This loss function is applied to the last hidden layers of the student and teacher models to guarantee collinearity between them ;
* MLMLoss: and finally a Masked Language Modeling (MLM) loss to train the student model on the original task of the teacher model.
The final loss is a combination of these three loss functions. We use the following weighting:
$$Loss = 0.5 \times DistilLoss + 0.3 \times CosineLoss + 0.2 \times MLMLoss$$
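A schematic PyTorch sketch of this combination (following the standard DistilBERT formulation, with a softened KL divergence for the distillation term; shapes, temperature, and reductions are illustrative assumptions, not the exact training code):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, mlm_labels,
                      temperature: float = 2.0):
    # DistilLoss: match the student's soft predictions to the teacher's (MLM positions)
    distil = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # CosineLoss: align last hidden states, flattened to (num_tokens, hidden_dim)
    target = torch.ones(student_hidden.size(0), device=student_hidden.device)
    cosine = F.cosine_embedding_loss(student_hidden, teacher_hidden, target)
    # MLMLoss: standard masked-language-modeling cross-entropy
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )
    return 0.5 * distil + 0.3 * cosine + 0.2 * mlm
```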
Dataset
-------
To limit the bias between the student and teacher models, the dataset used to train DistilCamemBERT is the same one used to train camembert-base: OSCAR. The French part of this dataset represents approximately 140 GB on disk.
Training
--------
We pre-trained the model on an NVIDIA Titan RTX for 18 days.
Evaluation results
------------------
| Dataset name | f1-score |
| :----------: | :------: |
| [FLUE](https://huggingface.co/datasets/flue) CLS | 83% |
| [FLUE](https://huggingface.co/datasets/flue) PAWS-X | 77% |
| [FLUE](https://huggingface.co/datasets/flue) XNLI | 77% |
| [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) NER | 98% |
How to use DistilCamemBERT
--------------------------
Load DistilCamemBERT and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cmarkea/distilcamembert-base")
model = AutoModel.from_pretrained("cmarkea/distilcamembert-base")
model.eval()
...
```
Filling masks using pipeline :
```python
from transformers import pipeline
model_fill_mask = pipeline("fill-mask", model="cmarkea/distilcamembert-base", tokenizer="cmarkea/distilcamembert-base")
results = model_fill_mask("Le camembert est <mask> :)")
results
[{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.3878222405910492, 'token': 7200},
{'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.06469205021858215, 'token': 2183},
{'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.04534877464175224, 'token': 1654},
{'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.04128391295671463, 'token': 26202},
{'sequence': '<s> Le camembert est magnifique :)</s>', 'score': 0.02425697259604931, 'token': 1509}]
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
``` |
unsloth/mistral-7b-instruct-v0.3-bnb-4bit | unsloth | "2024-05-22T18:24:37Z" | 24,459 | 13 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"mistral-7b",
"mistral-instruct",
"instruct",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | "2024-05-22T18:14:39Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- mistral
- mistral-7b
- mistral-instruct
- instruct
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Mistral v3 7b here: https://colab.research.google.com/drive/1_yNCks4BTD5zOnjozppphh5GzMFaMKq_?usp=sharing
For conversational ShareGPT style and using Mistral v3 Instruct: https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
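As a rough sketch, loading this 4-bit checkpoint for LoRA finetuning with unsloth typically looks like the following (argument values are illustrative assumptions; see the notebooks above for the full recipe):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters for parameter-efficient finetuning
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```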
|
google-t5/t5-11b | google-t5 | "2023-01-02T16:15:50Z" | 24,440 | 55 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:c4",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
language:
- en
- fr
- ro
- de
- multilingual
license: apache-2.0
tags:
- summarization
- translation
datasets:
- c4
inference: false
---
# Model Card for T5 11B

# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-11B is the checkpoint with 11 billion parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
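To make the unified format concrete, tasks are expressed as plain prefixed strings. The prefixes below follow standard T5 conventions; the sentences themselves are made-up examples:

```python
# Illustrative inputs in T5's unified text-to-text format; every task is just
# a prefixed string, and the target is likewise a plain string.
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: state authorities dispatched emergency crews tuesday to survey the damage ...",
    "cola sentence: The course is jumping well.",
    "stsb sentence1: The rhino grazed. sentence2: A rhino is grazing.",
]
```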
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
  - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-11B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
## Disclaimer
**Before `transformers` v3.5.0**, due to its immense size, `t5-11b` required some special treatment.
If you're using transformers `<= v3.4.0`, `t5-11b` should be loaded with flag `use_cdn` set to `False` as follows:
```python
import transformers

t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn=False)
```
Secondly, a single GPU will most likely not have enough memory to even load the model, as the weights alone amount to over 40 GB.
- Model parallelism has to be used here to overcome this problem as is explained in this [PR](https://github.com/huggingface/transformers/pull/3578).
- DeepSpeed's ZeRO-Offload is another approach as explained in this [post](https://github.com/huggingface/transformers/issues/9996).
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context.
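On recent `transformers` versions, a hedged loading sketch using `accelerate`'s automatic placement (this still requires roughly 45 GB of combined GPU/CPU memory; the translation prompt is illustrative):

```python
# Sketch for recent transformers with accelerate installed; device_map="auto"
# shards the ~40 GB of weights across the available GPUs and CPU RAM.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-11b")
model = T5ForConditionalGeneration.from_pretrained("t5-11b", device_map="auto")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids.to(model.device)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```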
|
Lykon/dreamshaper-xl-turbo | Lykon | "2024-01-17T22:30:46Z" | 24,424 | 39 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-turbo",
"text-to-image",
"art",
"artistic",
"anime",
"dreamshaper",
"turbo",
"lcm",
"en",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-12-05T22:22:29Z" | ---
language:
- en
license: openrail++
tags:
- stable-diffusion
- stable-diffusion-diffusers
- stable-diffusion-xl
- stable-diffusion-xl-turbo
- text-to-image
- art
- artistic
- diffusers
- anime
- dreamshaper
- turbo
- lcm
duplicated_from: lykon/dreamshaper-xl-turbo
---
# Dreamshaper SDXL-Turbo
`lykon/dreamshaper-xl-turbo` is a Stable Diffusion model that has been fine-tuned on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
Please consider supporting me:
- on [Patreon](https://www.patreon.com/Lykon275)
- or [buy me a coffee](https://snipfeed.co/lykon)
## Diffusers
For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation).
1. Installation
```
pip install diffusers transformers accelerate
```
2. Run
```py
from diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler
import torch
pipe = AutoPipelineForText2Image.from_pretrained('lykon/dreamshaper-xl-turbo', torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors"
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=6, guidance_scale=2).images[0]
image.save("./image.png")
``` |
stablediffusionapi/sdxl-unstable-diffusers-y | stablediffusionapi | "2023-10-08T07:28:58Z" | 24,379 | 7 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-10-08T07:26:48Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# SDXL Unstable Diffusers ☛ YamerMIX V8 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "sdxl-unstable-diffusers-y"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/sdxl-unstable-diffusers-y)
Model link: [View model](https://stablediffusionapi.com/models/sdxl-unstable-diffusers-y)
Credits: [View credits](https://civitai.com/?query=SDXL%20Unstable%20Diffusers%20%E2%98%9B%20YamerMIX%20V8)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "sdxl-unstable-diffusers-y",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320 | timm | "2024-02-10T23:38:05Z" | 24,372 | 4 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:laion-2b",
"arxiv:2210.08402",
"arxiv:2201.03545",
"arxiv:2103.00020",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-03-31T22:26:03Z" | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- laion-2b
---
# Model card for convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-12k followed by ImageNet-1k in `timm` by Ross Wightman.
Please see related OpenCLIP model cards for more details on pretrain:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 200.1
- GMACs: 70.2
- Activations (M): 88.0
- Image size: 320 x 320
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 192, 80, 80])
# torch.Size([1, 384, 40, 40])
# torch.Size([1, 768, 20, 20])
# torch.Size([1, 1536, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
piuba-bigdata/beto-contextualized-hate-speech | piuba-bigdata | "2023-11-17T18:33:38Z" | 24,350 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-30T13:09:43Z" | ---
language:
- es
metrics:
- f1
pipeline_tag: text-classification
---
## Contextualized, fine-grained hate speech detection
Try our [demo](https://huggingface.co/spaces/piubamas/discurso-de-odio).
A model trained to detect hate speech in news article comments. The base model is BETO, a Spanish pre-trained BERT model. The model was trained on a multilabel classification task, where each input has a label for each of the considered groups:
| Label | Description |
| :--------- | :-------------------------------------- |
| WOMEN | Against women |
| LGBTI | Against LGBTI |
| RACISM | Racist |
| CLASS | Classist |
| POLITICS | Because of politics |
| DISABLED | Against disabled |
| APPEARANCE | Against people because their appearance |
| CRIMINAL | Against criminals |
There is an extra label `CALLS`, which represents whether a comment is a call to violent action or not.
## Input
The model was trained taking into account both the comment and the context. To feed this model, use the template
```python
TEXT [SEP] CONTEXT
```
where `[SEP]` is the special token used to separate the comment from the context.
### Example
If we want to analyze
```
Comment: Hay que matarlos a todos!!! Nos infectaron con su virus!
Context: China prohibió la venta de perros y gatos para consumo humano
```
The input should be
```python
Hay que matarlos a todos!!! Nos infectaron con su virus! [SEP] China prohibió la venta de perros y gatos para consumo humano
```
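Note that a BERT-style tokenizer builds this template automatically when the comment and context are passed as a sentence pair — a minimal sketch, using the same model name as the usage snippet below:

```python
# Sketch: passing (comment, context) as a sentence pair makes the tokenizer
# insert [SEP] itself, reproducing the TEXT [SEP] CONTEXT template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("piubamas/beto-contextualized-hate-speech")
text = "Hay que matarlos a todos!!! Nos infectaron con su virus!"
context = "China prohibió la venta de perros y gatos para consumo humano"
encoding = tokenizer(text, context)
print(tokenizer.decode(encoding["input_ids"]))
# -> "[CLS] <comment> [SEP] <context> [SEP]"
```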
## Usage:
Sadly, the Hugging Face `pipeline` does not support multi-label classification, so this model cannot be tested directly in the sidebar widget.
To use it, you can try our [demo](https://huggingface.co/spaces/piubamas/discurso-de-odio). If you want to use it with your own code, use the following snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "piubamas/beto-contextualized-hate-speech"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
id2label = [model.config.id2label[k] for k in range(len(model.config.id2label))]
def predict(*args):
encoding = tokenizer.encode_plus(*args)
inputs = {
k:torch.LongTensor(encoding[k]).reshape(1, -1) for k in {"input_ids", "attention_mask", "token_type_ids"}
}
output = model.forward(
**inputs
)
chars = list(zip(id2label, list(output.logits[0].detach().cpu().numpy() > 0)))
return [char for char, pred in chars if pred]
context = "China prohíbe la cría de perros para consumo humano")
text = "Chinos hdrmp hay que matarlos a todos"
prediction = predict(text, context)
```
## Citation
```bibtex
@article{perez2023assessing,
title={Assessing the impact of contextual information in hate speech detection},
author={P{\'e}rez, Juan Manuel and Luque, Franco M and Zayat, Demian and Kondratzky, Mart{\'\i}n and Moro, Agust{\'\i}n and Serrati, Pablo Santiago and Zajac, Joaqu{\'\i}n and Miguel, Paula and Debandi, Natalia and Gravano, Agust{\'\i}n and others},
journal={IEEE Access},
volume={11},
pages={30575--30590},
year={2023},
publisher={IEEE}
}
```
|
CohereForAI/aya-23-8B | CohereForAI | "2024-05-27T03:50:48Z" | 24,338 | 307 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"arxiv:2405.15032",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-19T20:01:07Z" | ---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
# Model Card for Aya-23-8B
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
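For smaller GPUs, a hedged 4-bit loading sketch in the spirit of that QLoRA notebook (requires `bitsandbytes` and `accelerate`; the quantization settings are common defaults, not values prescribed by this card):

```python
# Hedged 4-bit loading sketch (pip install bitsandbytes accelerate).
# Quantization settings below are common QLoRA defaults, not from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("CohereForAI/aya-23-8B")
model = AutoModelForCausalLM.from_pretrained(
    "CohereForAI/aya-23-8B", quantization_config=bnb_config, device_map="auto"
)
```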
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aryabumi2024aya,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
year={2024},
eprint={2405.15032},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
csebuetnlp/banglat5_nmt_bn_en | csebuetnlp | "2023-04-18T01:52:24Z" | 24,337 | 4 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"bn",
"en",
"multilingual",
"arxiv:2205.11081",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | translation | "2022-08-20T15:30:12Z" | ---
language:
- bn
- en
- multilingual
tags:
- translation
licenses:
- cc-by-nc-sa-4.0
---
# banglat5_nmt_bn_en
This repository contains the **BanglaT5** checkpoint finetuned on the [BanglaNMT]() Bengali-English dataset.
**Note**: The pretrained model uses a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). For best results, make sure the text units are normalized using this library before tokenization.
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_nmt_bn_en")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_nmt_bn_en", use_fast=False)
input_sentence = ""
input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids
generated_tokens = model.generate(input_ids)
decoded_tokens = tokenizer.batch_decode(generated_tokens)[0]
print(decoded_tokens)
```
## Benchmarks
* On BanglaNMT test set:
| Model | Params | MT (SacreBLEU) |
|--------------------|------------|-----------------------|
|[mT5 (base)](https://huggingface.co/google/mt5-base) | 582M | 36.6 |
|[XLM-ProphetNet](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) | 616M | 23.3 |
|[mBART-50](https://huggingface.co/facebook/mbart-large-50) | 611M | 23.6 |
|[IndicBART](https://huggingface.co/ai4bharat/IndicBART) | 244M | 22.7 |
|[BanglaT5](https://huggingface.co/csebuetnlp/banglat5) | 247M | 38.8 |
## Citation
If you use this model, please cite the following paper:
```
@article{bhattacharjee2022banglanlg,
author = {Abhik Bhattacharjee and Tahmid Hasan and Wasi Uddin Ahmad and Rifat Shahriyar},
title = {BanglaNLG: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla},
journal = {CoRR},
volume = {abs/2205.11081},
year = {2022},
url = {https://arxiv.org/abs/2205.11081},
eprinttype = {arXiv},
eprint = {2205.11081}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF | mradermacher | "2024-07-01T08:46:14Z" | 24,277 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-AI/Hathor_Aleph-L3-8B-v0.72",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T05:25:11Z" | ---
base_model: Nitral-AI/Hathor_Aleph-L3-8B-v0.72
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nitral-AI/Hathor_Aleph-L3-8B-v0.72
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
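As one concrete option (an assumption — any GGUF-compatible runtime works), a minimal `llama-cpp-python` sketch against the Q4_K_M file from the table below:

```python
# Minimal llama-cpp-python sketch (assumes `pip install llama-cpp-python` and a
# downloaded quant); the prompt and sampling settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Hathor_Aleph-L3-8B-v0.72.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```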
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hathor_Aleph-L3-8B-v0.72-GGUF/resolve/main/Hathor_Aleph-L3-8B-v0.72.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hf-tiny-model-private/tiny-random-GPT2LMHeadModel | hf-tiny-model-private | "2023-03-29T18:57:04Z" | 24,242 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-03-29T18:57:00Z" | Entry not found |
kwoncho/gaincut_news_pre2017 | kwoncho | "2024-06-15T04:48:02Z" | 24,235 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-25T04:30:04Z" | Entry not found |
mrm8488/t5-small-finetuned-common_gen | mrm8488 | "2021-06-23T13:08:18Z" | 24,232 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | Entry not found |
mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF | mradermacher | "2024-06-28T09:29:27Z" | 24,221 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-28T04:08:27Z" | ---
base_model: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-MT-UltraDPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
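For the multi-part case, the parts can be byte-concatenated in order before loading — a hedged sketch; the file name and `partXofY` suffix are hypothetical, so defer to the README linked above for the authoritative procedure:

```python
# Hedged sketch: merge split GGUF parts by byte concatenation. The file name
# and "partXofY" suffix are hypothetical; assumes fewer than ten parts so a
# lexicographic sort matches numeric order.
import glob
import shutil

parts = sorted(glob.glob("model.gguf.part*of*"))
with open("model.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```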
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Magpie-Pro-MT-UltraDPO-GGUF/resolve/main/Llama-3-8B-Magpie-Pro-MT-UltraDPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
q-future/one-align | q-future | "2024-05-14T14:51:43Z" | 24,217 | 16 | transformers | [
"transformers",
"pytorch",
"mplug_owl2",
"feature-extraction",
"zero-shot-image-classification",
"custom_code",
"arxiv:2312.17090",
"license:mit",
"region:us"
] | zero-shot-image-classification | "2023-12-22T04:04:39Z" | ---
license: mit
pipeline_tag: zero-shot-image-classification
---
The model that corresponds to Q-Align (ICML2024).
## Quick Start with AutoModel
For this image, , you can start an `AutoModel` scorer with `transformers==4.36.1`:
```python
import requests
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("q-future/one-align", trust_remote_code=True, attn_implementation="eager",
torch_dtype=torch.float16, device_map="auto")
from PIL import Image
url = "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg"
image = Image.open(requests.get(url,stream=True).raw)
model.score([image], task_="quality", input_="image")
# task_ : quality | aesthetics; # input_: image | video
```
Result should be 1.911 (in range [1,5], higher is better).
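Per the `task_`/`input_` comment in the snippet, aesthetics scoring is a one-argument change (the resulting value will differ from the quality score):

```python
# Aesthetics scoring of the same image; only task_ changes.
model.score([image], task_="aesthetics", input_="image")
```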
From the paper: [arXiv:2312.17090](https://arxiv.org/abs/2312.17090).
## Syllabus

## IQA Results (Spearman/Pearson/Kendall)
|Datasets | KonIQ (NR-IQA, seen) | SPAQ (NR-IQA, Seen) | KADID (FR-IQA, Seen) | LIVE-C (NR-IQA, *Unseen*) | LIVE (FR-IQA, *Unseen*) | CSIQ (FR-IQA, *Unseen*) | AGIQA (AIGC, *Unseen*)|
| --- | --- | --- | --- | --- | --- | --- | --- |
| *Previous SOTA* | 0.916/0.928 (MUSIQ, ICCV2021) | 0.922/0.919 (LIQE, CVPR2023) | 0.934/0.937 (CONTRIQUE, TIP2022) | NA | NA | NA | NA |
| Q-Align (IQA) | 0.937/0.945/0.785 | 0.931/0.933/0.763 | 0.934/0.934/0.777 | 0.887/0.896/0.706 | 0.874/0.840/0.682 | 0.845/0.876/0.654 | 0.731/0.791/0.529 |
| Q-Align (IQA+VQA) | **0.944**/0.949/**0.797** | 0.931/0.934/0.764 | **0.952**/**0.953**/**0.809** | **0.892**/**0.899**/**0.715** | 0.874/0.846/0.684 | 0.852/0.876/0.663 | 0.739/0.782/0.526 |
| **OneAlign** (IQA+IAA+VQA) | 0.941/**0.950**/0.791 | **0.932**/**0.935**/**0.766** | 0.941/0.942/0.791 | 0.881/0.894/0.699 | **0.887**/**0.856**/**0.699** | **0.881**/**0.906**/**0.699** | **0.801**/**0.838**/**0.602** |
## IAA Results (Spearman/Pearson)
| Dataset | AVA_test |
| --- | --- |
| *VILA (CVPR, 2023)* | 0.774/0.774 |
| *LIQE (CVPR, 2023)* | 0.776/0.763 |
| *Aesthetic Predictor (retrained on AVA_train)* | 0.721/0.723 |
| Q-Align (IAA) | 0.822/0.817 |
| **OneAlign** (IQA+IAA+VQA) | **0.823**/**0.819** |
## VQA Results (Spearman/Pearson)
| Datasets | LSVQ_test | LSVQ_1080p | KoNViD-1k | MaxWell_test |
| --- | --- | --- | --- | --- |
| *SimpleVQA (ACMMM, 2022)* | 0.867/0.861 | 0.764/0.803 | 0.840/0.834 | 0.720/0.715 |
| *FAST-VQA (ECCV 2022)* | 0.876/0.877 | 0.779/0.814 | 0.859/0.855 | 0.721/0.724 |
| Q-Align (VQA) | 0.883/0.882 | 0.797/0.830 | 0.865/0.877 | 0.780/0.782 |
| Q-Align (IQA+VQA) | 0.885/0.883 | 0.802/0.829 | 0.867/0.880 | **0.781**/**0.787** |
| **OneAlign** (IQA+IAA+VQA) | **0.886**/**0.886** | **0.803**/**0.837** | **0.876**/**0.888** | **0.781**/0.786| |
UnfilteredAI/NSFW-GEN-ANIME-v2 | UnfilteredAI | "2024-05-03T08:50:09Z" | 24,214 | 16 | diffusers | [
"diffusers",
"safetensors",
"NSFW",
"UnfilteredAI",
"Anime",
"Text-to-Image",
"text-to-image",
"en",
"base_model:OEvortex/PixelGen",
"doi:10.57967/hf/2177",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-05-03T08:33:53Z" | ---
base_model:
- OEvortex/PixelGen
- UnfilteredAI/NSFW-gen
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- NSFW
- UnfilteredAI
- Anime
- Text-to-Image
---
**Model Name:** NSFW-GEN-ANIME
**Type:** Anime Text-to-Image Generator
**Description:** NSFW-GEN-ANIME is a text-to-anime image generator developed by UnfilteredAI. This model is designed to generate various kinds of images, including explicit and NSFW (Not Safe For Work) content, from textual inputs.
**Features:**
- **Anime Output:** The model produces uncensored and potentially explicit anime-style images based on textual inputs.
- **Tensor Type:** Operates with FP16 tensor type for optimized performance and efficiency.
- **Large Model Size:** With 3.47 billion parameters, the model offers a vast capacity for learning and generating diverse anime imagery.
- **Community Engagement:** As part of UnfilteredAI's open-source initiatives, the model encourages collaboration and contributions from the AI community.
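The card ships no example code; based on the `StableDiffusionXLPipeline` tag in the metadata above, a minimal hedged usage sketch would look like this:

```python
# Hedged usage sketch inferred from the StableDiffusionXLPipeline tag; the
# prompt is a placeholder and all generation settings are diffusers defaults.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "UnfilteredAI/NSFW-GEN-ANIME-v2", torch_dtype=torch.float16
).to("cuda")
image = pipe("YOUR PROMPT").images[0]
image.save("anime.png")
```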
**Usage Guidelines:**
- **Responsible Use:** Users are advised to exercise discretion and responsibility when generating content with this model.
- **Age Restriction:** Due to the explicit nature of the generated content, usage is restricted to individuals over the legal age in their jurisdiction.
- **Ethical Considerations:** Avoid using the model to create harmful or offensive anime imagery.
**Get Involved:**
- **Contribute:** Help enhance the capabilities and ethical considerations of the model by contributing to its development on UnfilteredAI's open-source platform.
- **Explore:** Dive into the anime imagery produced by the model to explore its creative potential and applications.
- **Connect:** Engage with the UnfilteredAI community to share insights, feedback, and ideas related to NSFW anime content generation and AI ethics.
|
emilianJR/epiCRealism | emilianJR | "2023-07-23T14:14:35Z" | 24,199 | 55 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-06-25T12:13:26Z" | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffuser model for this SD checkpoint:
https://civitai.com/models/25694/epicrealism
**emilianJR/epiCRealism** is a Hugging Face diffusers model that you can use with **diffusers.StableDiffusionPipeline()**.
Examples | Examples
---- | ----
,%20raw%20photo,%20((monochrome)),%20((grayscale)),%20black%20and%20white%20photo.jpeg) | 
,%20lotr,%20fantasy,%20elf,%20female,%20full%20body,%20looking%20at%20viewer,%20portrait,%20phot.jpeg) | ,%20soft%20lighting,%20h.jpeg)
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/epiCRealism"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
fmops/distilbert-prompt-injection | fmops | "2023-07-28T17:56:32Z" | 24,194 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"de",
"es",
"dataset:deepset/prompt-injections",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-17T21:14:51Z" | ---
license: apache-2.0
datasets:
- deepset/prompt-injections
language:
- en
- de
- es
metrics:
- accuracy
library_name: transformers
--- |
internlm/internlm2-chat-7b-sft | internlm | "2024-07-02T12:26:38Z" | 24,143 | 5 | transformers | [
"transformers",
"safetensors",
"internlm2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2403.17297",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-01-11T07:29:45Z" | ---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) • [📜Technical Report](https://arxiv.org/abs/2403.17297)
</div>
## Introduction
InternLM2 has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model has the following characteristics:
- **200K Context window**: Nearly perfect at finding needles in the haystack with 200K-long context, with leading performance on long-context tasks like LongBench and L-Eval. Try it with [LMDeploy](https://github.com/InternLM/lmdeploy) for 200K-context inference.
- **Outstanding comprehensive performance**: Significantly better than the last generation in all dimensions, especially in reasoning, math, code, chat experience, instruction following, and creative writing, with leading performance among open-source models in similar sizes. In some evaluations, InternLM2-Chat-20B may match or even surpass ChatGPT (GPT-3.5).
- **Code interpreter & Data analysis**: With a code interpreter, InternLM2-Chat-20B achieves performance comparable to GPT-4 on GSM8K and MATH. InternLM2-Chat also provides data analysis capability.
- **Stronger tool use**: Based on better tool utilization-related capabilities in instruction following, tool selection and reflection, InternLM2 can support more kinds of agents and multi-step tool calling for complex tasks. See [examples](https://github.com/InternLM/lagent).
## InternLM2-Chat-7B-SFT
InternLM2-Chat-7B-SFT is the SFT version based on InternLM2-Base-7B, and InternLM2-Chat-7B is further trained from InternLM2-Chat-7B-SFT by Online RLHF.
We release the SFT version so that the community can study the influence of RLHF in depth.
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM2 using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more evaluation results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with * are taken from the original papers), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- Evaluation numbers may differ across version iterations of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results from [OpenCompass](https://github.com/internLM/OpenCompass/).
**Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
### Import from Transformers
To load the InternLM 7B Chat model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b-sft", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-7b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Hello! How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
```
The responses can be streamed using `stream_chat`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-7b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-7b-sft --model-name internlm2-chat-7b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch OpenAI compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-7b-sft --served-model-name internlm2-chat-7b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Introduce deep learning to me."}
]
}'
```
Find more details in the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Introduction
InternLM2, the second generation of the InternLM (书生·浦语) large language model, open-sources a 7-billion-parameter base model and a chat model (InternLM2-Chat-7B) tailored for practical scenarios. The model has the following characteristics:
- Effective support for 200K-character ultra-long context: the model achieves near-perfect "needle in a haystack" retrieval over 200K-character inputs, and its performance on long-text tasks such as LongBench and L-Eval leads among open-source models. 200K-context inference can be tried with [LMDeploy](https://github.com/InternLM/lmdeploy).
- Comprehensive performance improvements: every capability dimension improves over the previous generation, with especially notable gains in reasoning, math, code, chat experience, instruction following, and creative writing. Overall performance leads among open-source models of similar size, and on key benchmark evaluations InternLM2-Chat-20B matches or even surpasses ChatGPT (GPT-3.5).
- Code interpreter and data analysis: when paired with a code interpreter, InternLM2-Chat-20B reaches a level comparable to GPT-4 on GSM8K and MATH. Building on strong foundations in math and tool use, InternLM2-Chat provides practical data analysis capabilities.
- Overall upgrade of tool-calling capabilities: with stronger and more generalizable instruction understanding, tool selection, and result reflection, the new model can more reliably support the construction of complex agents and make effective multi-turn tool calls to complete relatively complex tasks. See more [examples](https://github.com/InternLM/lagent).
## InternLM2-Chat-7B-SFT
InternLM2-Chat-7B-SFT is trained from InternLM2-Base-7B via supervised fine-tuning (SFT); InternLM2-Chat-7B is obtained by further Online RLHF on top of InternLM2-Chat-7B-SFT.
We open-source the SFT model to facilitate community research on RLHF.
### Performance Evaluation
We comprehensively evaluated InternLM with the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/) across five capability dimensions: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Some of the evaluation results are shown in the table below; visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm) for more results.
| Dataset\Models | InternLM2-7B | InternLM2-Chat-7B | InternLM2-20B | InternLM2-Chat-20B | ChatGPT | GPT-4 |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | 65.8 | 63.7 | 67.7 | 66.5 | 69.1 | 83.0 |
| AGIEval | 49.9 | 47.2 | 53.0 | 50.3 | 39.9 | 55.1 |
| BBH | 65.0 | 61.2 | 72.1 | 68.3 | 70.1 | 86.7 |
| GSM8K | 70.8 | 70.7 | 76.1 | 79.6 | 78.2 | 91.4 |
| MATH | 20.2 | 23.0 | 25.5 | 31.9 | 28.0 | 45.8 |
| HumanEval | 43.3 | 59.8 | 48.8 | 67.1 | 73.2 | 74.4 |
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
- The above results were obtained with [OpenCompass](https://github.com/internLM/OpenCompass/) (entries marked with `*` are taken from the original papers); see the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/) for test details.
- Evaluation numbers may differ across version iterations of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to its latest evaluation results.
**Limitations:** Although we paid great attention to model safety during training and tried to ensure the model outputs text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and the probabilistic generation paradigm; for example, responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We assume no responsibility for any consequences resulting from the dissemination of harmful information.
### Loading with Transformers
Load the InternLM2 7B Chat SFT model with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-7b-sft", trust_remote_code=True)
# Setting `torch_dtype=torch.float16` loads the model in float16; otherwise transformers loads it in float32, which may exhaust GPU memory
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-7b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
# 你好!有什么我可以帮助你的吗?
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=history)
print(response)
```
For streaming generation, use the `stream_chat` interface:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "internlm/internlm2-chat-7b-sft"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
print(response[length:], flush=True, end="")
length = len(response)
```
## Deployment
### LMDeploy
LMDeploy, developed jointly by the MMDeploy and MMRazor teams, is a full suite of lightweight deployment and serving solutions covering LLM tasks.
```bash
pip install lmdeploy
```
You can run batch inference locally with the following Python code:
```python
import lmdeploy
pipe = lmdeploy.pipeline("internlm/internlm2-chat-7b-sft")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
Or you can launch an OpenAI-compatible server with the following command:
```bash
lmdeploy serve api_server internlm/internlm2-chat-7b-sft --server-port 23333
```
Then you can send a chat request to the server:
```bash
curl http://localhost:23333/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
For more information, see the [LMDeploy documentation](https://lmdeploy.readthedocs.io/en/latest/)
### vLLM
Launch an OpenAI-compatible server with `vLLM>=0.3.2`:
```bash
pip install vllm
```
```bash
python -m vllm.entrypoints.openai.api_server --model internlm/internlm2-chat-7b-sft --trust-remote-code
```
Then you can send a chat request to the server:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "internlm2-chat-7b-sft",
"messages": [
{"role": "system", "content": "你是个友善的AI助手。"},
{"role": "user", "content": "介绍一下深度学习。"}
]
}'
```
For more information, see the [vLLM documentation](https://docs.vllm.ai/en/latest/index.html)
## Open Source License
The code in this repository is open-sourced under the Apache-2.0 license. Model weights are fully open for academic research, and free commercial use is available upon application ([application form](https://wj.qq.com/s2/12725412/f7c1/)). For other questions or collaborations, please contact <[email protected]>.
## Citation
```
@misc{cai2024internlm2,
title={InternLM2 Technical Report},
author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
year={2024},
eprint={2403.17297},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf | RichardErkhov | "2024-06-25T15:16:25Z" | 24,130 | 0 | null | [
"gguf",
"region:us"
] | null | "2024-06-25T00:41:34Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starcoder2-15b-instruct-v0.1 - GGUF
- Model creator: https://huggingface.co/bigcode/
- Original model: https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [starcoder2-15b-instruct-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q2_K.gguf) | Q2_K | 5.77GB |
| [starcoder2-15b-instruct-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.25GB |
| [starcoder2-15b-instruct-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.IQ3_S.gguf) | IQ3_S | 6.52GB |
| [starcoder2-15b-instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.51GB |
| [starcoder2-15b-instruct-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.IQ3_M.gguf) | IQ3_M | 6.8GB |
| [starcoder2-15b-instruct-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q3_K.gguf) | Q3_K | 7.49GB |
| [starcoder2-15b-instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.49GB |
| [starcoder2-15b-instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.35GB |
| [starcoder2-15b-instruct-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.12GB |
| [starcoder2-15b-instruct-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q4_0.gguf) | Q4_0 | 8.44GB |
| [starcoder2-15b-instruct-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 8.55GB |
| [starcoder2-15b-instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.53GB |
| [starcoder2-15b-instruct-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q4_K.gguf) | Q4_K | 9.18GB |
| [starcoder2-15b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.18GB |
| [starcoder2-15b-instruct-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q4_1.gguf) | Q4_1 | 9.35GB |
| [starcoder2-15b-instruct-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q5_0.gguf) | Q5_0 | 10.27GB |
| [starcoder2-15b-instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.27GB |
| [starcoder2-15b-instruct-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q5_K.gguf) | Q5_K | 10.65GB |
| [starcoder2-15b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.65GB |
| [starcoder2-15b-instruct-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q5_1.gguf) | Q5_1 | 11.18GB |
| [starcoder2-15b-instruct-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q6_K.gguf) | Q6_K | 12.2GB |
| [starcoder2-15b-instruct-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf/blob/main/starcoder2-15b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 15.8GB |
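As a quick orientation for using these files, here is a minimal sketch. It assumes a recent llama.cpp build (where the CLI binary is `llama-cli`; older builds call it `main`) and the `huggingface-cli` tool, and picks Q4_K_M as a common size/quality trade-off; the prompt is just an illustration.

```bash
# Fetch one quant from this repo (filename from the table above)
huggingface-cli download RichardErkhov/bigcode_-_starcoder2-15b-instruct-v0.1-gguf \
  starcoder2-15b-instruct-v0.1.Q4_K_M.gguf --local-dir .

# Run a one-shot completion with llama.cpp
./llama-cli -m starcoder2-15b-instruct-v0.1.Q4_K_M.gguf \
  -p "Write a Python function that checks whether a string is a palindrome." -n 256
```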
Original model description:
---
pipeline_tag: text-generation
base_model: bigcode/starcoder2-15b
datasets:
- bigcode/self-oss-instruct-sc2-exec-filter-50k
license: bigcode-openrail-m
library_name: transformers
tags:
- code
model-index:
- name: starcoder2-15b-instruct-v0.1
results:
- task:
type: text-generation
dataset:
name: LiveCodeBench (code generation)
type: livecodebench-codegeneration
metrics:
- type: pass@1
value: 20.4
- task:
type: text-generation
dataset:
name: LiveCodeBench (self repair)
type: livecodebench-selfrepair
metrics:
- type: pass@1
value: 20.9
- task:
type: text-generation
dataset:
name: LiveCodeBench (test output prediction)
type: livecodebench-testoutputprediction
metrics:
- type: pass@1
value: 29.8
- task:
type: text-generation
dataset:
name: LiveCodeBench (code execution)
type: livecodebench-codeexecution
metrics:
- type: pass@1
value: 28.1
- task:
type: text-generation
dataset:
name: HumanEval
type: humaneval
metrics:
- type: pass@1
value: 72.6
- task:
type: text-generation
dataset:
name: HumanEval+
type: humanevalplus
metrics:
- type: pass@1
value: 63.4
- task:
type: text-generation
dataset:
name: MBPP
type: mbpp
metrics:
- type: pass@1
value: 75.2
- task:
type: text-generation
dataset:
name: MBPP+
type: mbppplus
metrics:
- type: pass@1
value: 61.2
- task:
type: text-generation
dataset:
name: DS-1000
type: ds-1000
metrics:
- type: pass@1
value: 40.6
---
# StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation

## Model Summary
We introduce StarCoder2-15B-Instruct-v0.1, the very first entirely self-aligned code Large Language Model (LLM) trained with a fully permissive and transparent pipeline. Our open-source pipeline uses StarCoder2-15B to generate thousands of instruction-response pairs, which are then used to fine-tune StarCoder2-15B itself without any human annotations or distilled data from huge and proprietary LLMs.
- **Model:** [bigcode/starcoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-instruct-15b-v0.1)
- **Code:** [bigcode-project/starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align)
- **Dataset:** [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k/)
- **Authors:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Federico Cassano](https://federico.codes/),
[Jiawei Liu](https://jw-liu.xyz),
[Yifeng Ding](https://yifeng-ding.com),
[Naman Jain](https://naman-ntc.github.io),
[Harm de Vries](https://www.harmdevries.com),
[Leandro von Werra](https://twitter.com/lvwerra),
[Arjun Guha](https://www.khoury.northeastern.edu/home/arjunguha/main/home/),
[Lingming Zhang](https://lingming.cs.illinois.edu).

## Use
### Intended use
The model is designed to respond to **coding-related instructions in a single turn**. Instructions in other styles may result in less accurate responses.
Here is an example to get started with the model using the [transformers](https://huggingface.co/docs/transformers/index) library:
```python
import transformers
import torch
pipeline = transformers.pipeline(
model="bigcode/starcoder2-15b-instruct-v0.1",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
def respond(instruction: str, response_prefix: str) -> str:
messages = [{"role": "user", "content": instruction}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False)
prompt += response_prefix
    terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("###"),
]
result = pipeline(
prompt,
max_length=256,
num_return_sequences=1,
do_sample=False,
        eos_token_id=terminators,
pad_token_id=pipeline.tokenizer.eos_token_id,
truncation=True,
)
response = response_prefix + result[0]["generated_text"][len(prompt) :].split("###")[0].rstrip()
return response
instruction = "Write a quicksort function in Python with type hints and a 'less_than' parameter for custom sorting criteria."
response_prefix = ""
print(respond(instruction, response_prefix))
```
Here is the expected output:
``````
Here's how you can implement a quicksort function in Python with type hints and a 'less_than' parameter for custom sorting criteria:
```python
from typing import TypeVar, Callable
T = TypeVar('T')
def quicksort(items: list[T], less_than: Callable[[T, T], bool] = lambda x, y: x < y) -> list[T]:
if len(items) <= 1:
return items
pivot = items[0]
less = [x for x in items[1:] if less_than(x, pivot)]
greater = [x for x in items[1:] if not less_than(x, pivot)]
return quicksort(less, less_than) + [pivot] + quicksort(greater, less_than)
```
``````
### Bias, Risks, and Limitations
StarCoder2-15B-Instruct-v0.1 is primarily finetuned for Python code generation tasks that can be verified through execution, which may lead to certain biases and limitations. For example, the model might not adhere strictly to instructions that dictate the output format. In these situations, it's beneficial to provide a **response prefix** or a **one-shot example** to steer the model’s output. Additionally, the model may have limitations with other programming languages and out-of-domain coding tasks.
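For instance, reusing the `respond` helper defined above, a response prefix can pin the start of the output (a sketch; the instruction and prefix strings here are just illustrations):

```python
# Force the answer to start as a JSON object by seeding the response
instruction = 'Return a JSON object with a single key "sum" holding 2 + 3.'
response_prefix = '{"sum": '
print(respond(instruction, response_prefix))
```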
The model also inherits the bias, risks, and limitations from its base StarCoder2-15B model. For more information, please refer to the [StarCoder2-15B model card](https://huggingface.co/bigcode/starcoder2-15b).
## Evaluation on EvalPlus, LiveCodeBench, and DS-1000


## Training Details
### Hyperparameters
- **Optimizer:** Adafactor
- **Learning rate:** 1e-5
- **Epoch:** 4
- **Batch size:** 64
- **Warmup ratio:** 0.05
- **Scheduler:** Linear
- **Sequence length:** 1280
- **Dropout**: Not applied
### Hardware
1 x NVIDIA A100 80GB
## Resources
- **Model:** [bigcode/starCoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-instruct-15b-v0.1)
- **Code:** [bigcode-project/starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align)
- **Dataset:** [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k/)
### Full Data Pipeline
Our dataset generation pipeline has several steps. We provide intermediate datasets for every step of the pipeline (a loading sketch for the final dataset follows the list):
1. Original seed dataset filtered from The Stack v1: https://huggingface.co/datasets/bigcode/python-stack-v1-functions-filtered
2. Seed dataset filtered using StarCoder2-15B as a judge for removing items with bad docstrings: https://huggingface.co/datasets/bigcode/python-stack-v1-functions-filtered-sc2
3. seed -> concepts: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-concepts
4. concepts -> instructions: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-instructions
5. instructions -> response: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-responses-unfiltered
6. Responses filtered by executing them: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-500k-raw
7. Executed responses filtered by deduplicating them (final dataset): https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k
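If useful, the final filtered dataset can be inspected directly with the `datasets` library (a minimal sketch, assuming the default `train` split):

```python
from datasets import load_dataset

# Final ~50k execution-filtered instruction-response pairs used for fine-tuning
ds = load_dataset("bigcode/self-oss-instruct-sc2-exec-filter-50k", split="train")
print(ds)
print(ds[0])
```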
|
FlagAlpha/Llama3-Chinese-8B-Instruct | FlagAlpha | "2024-04-24T02:33:43Z" | 24,095 | 50 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"chinese",
"conversational",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-23T03:37:34Z" | ---
license: apache-2.0
tags:
- llama3
- chinese
---
# Llama3-Chinese-8B-Instruct
Llama3-Chinese-8B-Instruct is a Chinese instruction-tuned chat model based on Llama3-8B, jointly developed by the Llama Chinese community and AtomEcho (原子回声). We will keep releasing updated model weights; the model training process can be followed at [https://llama.family](https://llama.family).
For model deployment, training, and fine-tuning instructions, see the Llama Chinese community GitHub repository: [https://github.com/LlamaFamily/Llama-Chinese](https://github.com/LlamaFamily/Llama-Chinese)
## How to Use
```
import transformers
import torch
model_id = "FlagAlpha/Llama3-Chinese-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.float16},
device="cuda",
)
messages = [{"role": "system", "content": ""}]
messages.append(
{"role": "user", "content": "介绍一下机器学习"}
)
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9
)
content = outputs[0]["generated_text"][len(prompt):]
print(content)
``` |
RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf | RichardErkhov | "2024-06-25T23:52:31Z" | 24,064 | 0 | null | [
"gguf",
"arxiv:2203.05482",
"region:us"
] | null | "2024-06-25T19:23:11Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DolphinStar-12.5B - GGUF
- Model creator: https://huggingface.co/Noodlz/
- Original model: https://huggingface.co/Noodlz/DolphinStar-12.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DolphinStar-12.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q2_K.gguf) | Q2_K | 4.33GB |
| [DolphinStar-12.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_XS.gguf) | IQ3_XS | 4.81GB |
| [DolphinStar-12.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_S.gguf) | IQ3_S | 5.07GB |
| [DolphinStar-12.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_S.gguf) | Q3_K_S | 5.04GB |
| [DolphinStar-12.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_M.gguf) | IQ3_M | 5.24GB |
| [DolphinStar-12.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K.gguf) | Q3_K | 5.62GB |
| [DolphinStar-12.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_M.gguf) | Q3_K_M | 5.62GB |
| [DolphinStar-12.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [DolphinStar-12.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ4_XS.gguf) | IQ4_XS | 6.3GB |
| [DolphinStar-12.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_0.gguf) | Q4_0 | 6.57GB |
| [DolphinStar-12.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ4_NL.gguf) | IQ4_NL | 6.64GB |
| [DolphinStar-12.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K_S.gguf) | Q4_K_S | 6.62GB |
| [DolphinStar-12.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K.gguf) | Q4_K | 6.99GB |
| [DolphinStar-12.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K_M.gguf) | Q4_K_M | 6.99GB |
| [DolphinStar-12.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_1.gguf) | Q4_1 | 7.29GB |
| [DolphinStar-12.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_0.gguf) | Q5_0 | 8.01GB |
| [DolphinStar-12.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K_S.gguf) | Q5_K_S | 8.01GB |
| [DolphinStar-12.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K.gguf) | Q5_K | 8.22GB |
| [DolphinStar-12.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K_M.gguf) | Q5_K_M | 8.22GB |
| [DolphinStar-12.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_1.gguf) | Q5_1 | 8.73GB |
| [DolphinStar-12.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q6_K.gguf) | Q6_K | 9.53GB |
| [DolphinStar-12.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q8_0.gguf) | Q8_0 | 12.35GB |
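A minimal local-serving sketch, assuming a recent llama.cpp build that ships the `llama-server` binary (the Q4_K_M file is picked arbitrarily from the table above):

```bash
huggingface-cli download RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf \
  DolphinStar-12.5B.Q4_K_M.gguf --local-dir .

# OpenAI-compatible HTTP server on port 8080 with a 4096-token context
./llama-server -m DolphinStar-12.5B.Q4_K_M.gguf --port 8080 -c 4096
```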
Original model description:
---
license: apache-2.0
---

Custom Model "Dolphin2Star1" Merged by Noodlz.
A 12.5B linear merge using the uncensored Mistral 7B v0.2 as the base, combined with fine-tunes of StarlingLM 7B Beta (which is originally based on Mistral 7B v0.1).
have fun =)
[EDIT] - Preset-wise, it seems to like the "ChatML" format.
[EDIT 2] - Usage Notes - the model is sorta picky about batch size and prompt preset/template (maybe because it merges ChatML and OpenChat models).
My current recommended setting & findings
- Using LM Studio - use the default preset. GPU acceleration to max, prompt eval size to 1024, context length to 32768. This yields decent, coherent results. ChatML works too but occasionally spits out odd text after a couple of turns.
- Using Oobabooga (Windows PC) - runs well using run-in-4bit along with use_flash_attention_2. default presets and everything works just fine.
- Using OobaBooga (Mac) - [investigating]
## Instructions Template:
```
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
```
## Chat Template:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{%- if message['content'] -%}
{{- message['content'] + '\n\n' -}}
{%- endif -%}
{%- if user_bio -%}
{{- user_bio + '\n\n' -}}
{%- endif -%}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{- name1 + ': ' + message['content'] + '\n'-}}
{%- else -%}
{{- name2 + ': ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
```
---
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
---
# output_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NexusFlow/Starling-LM-7B-beta](https://huggingface.co/NexusFlow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
weight: 1.0
slices:
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0,1]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [0,1]
parameters:
weight: 0
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [1,8]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [4,12]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [8,16]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [12,20]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [16,24]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [20,28]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [24,31]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [31,32]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [31,32]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
|
FL33TW00D-HF/whisper-tiny | FL33TW00D-HF | "2024-06-17T14:14:38Z" | 24,051 | 2 | transformers | [
"transformers",
"gguf",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-03-12T16:59:49Z" | ---
license: apache-2.0
---
# Model Card for Ratchet + Whisper Tiny
This is a conversion from the GGML format of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) into the Ratchet custom format.
## Model Card Contact
[[email protected]](mailto:[email protected]) |
Yntec/Memento | Yntec | "2024-03-22T19:20:54Z" | 24,050 | 2 | diffusers | [
"diffusers",
"safetensors",
"Photorealistic",
"Base model",
"Abstract",
"Fusch",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-05T01:17:23Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Photorealistic
- Base model
- Abstract
- Fusch
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Memento
A mix of Real Life v2 with photorealistic models so you can create your memento. MementoVAE has the 840000 VAE baked in.
Original page: https://civitai.com/models/171814?modelVersionId=219513 (Real life v2)
Samples and prompts:

(Click for larger)
Top left: closeup portrait of cowboy George Washington
Top right: a painting of a white lion cub by Bnhr, nature, grass, tree, outdoors, forest, animal focus, yellow eyes,
Bottom left: view from diagonally above, central adjustment, skinny young northern european female, long reddish brunette hair, real hair movement, elongated head, beautiful face, grey eyes, thin bowed eyebrows, snub nose, gentle lower jaw line, narrow chin, da vinci lips, slightly smiling with parted lips, curious friendly facial expression, skirt, slim narrow tapered hips
Bottom right: kodachrome camera transparency, dramatic lighting film grain, PARTY HARD BACKGROUND, pretty cute little girl in Zone 51, Extraterrestrial, Alien Space Ship Delivering Christmas Presents, Alien Space Ship Decorated With Garlands and Christmas Balls, Snowstorm

(Click for larger)
Top left: 1986 movie screenshot Santa Claus with wife and daughter enjoying cake with candles. sitting with a pretty cute little girl, Gift Birthday Theme by Gil_Elvgren and Haddon_Sundblom
Top right: blonde pretty Princess Peach in the mushroom kingdom
Bottom left: a polar Bear playing guitar in a club, whimsical
Bottom right: pretty ladie with tall guy together standing, cute eyes, photoreal portrait, is on top of he Closeup a of rocks on pile top of a ocean moon to the magazine. |
GroNLP/bert-base-dutch-cased | GroNLP | "2023-09-11T08:57:51Z" | 24,008 | 24 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"BERTje",
"nl",
"arxiv:1912.09582",
"doi:10.57967/hf/0149",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:04Z" | ---
language: nl
thumbnail: "https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png"
tags:
- BERTje
---
# BERTje: A Dutch BERT model
[Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) •
[Andreas van Cranenburgh](https://www.semanticscholar.org/author/Andreas-van-Cranenburgh/2791585) •
[Arianna Bisazza](https://www.semanticscholar.org/author/Arianna-Bisazza/3242253) •
[Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) •
[Gertjan van Noord](https://www.semanticscholar.org/author/Gertjan-van-Noord/143715131) •
[Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475)
## Model description
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
<img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250">
For details, check out our paper on [arXiv](https://arxiv.org/abs/1912.09582), the code on [Github](https://github.com/wietsedv/bertje) and related work on [Semantic Scholar](https://www.semanticscholar.org/paper/BERTje%3A-A-Dutch-BERT-Model-Vries-Cranenburgh/a4d5e425cac0bf84c86c0c9f720b6339d6288ffa).
The paper and Github page mention fine-tuned models that are available [here](https://huggingface.co/wietsedv).
## How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # Tensorflow
```
**WARNING:** The vocabulary size of BERTje has changed in 2021. If you use an older fine-tuned model and experience problems with the `GroNLP/bert-base-dutch-cased` tokenizer, use the following tokenizer:
```python
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1") # v1 is the old vocabulary
```
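Since BERTje is a masked language model, a quick sanity check is the `fill-mask` pipeline (a sketch; the Dutch example sentence is ours, and the suggested top prediction is only what one would expect, not a guarantee):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")
# Predicts the masked Dutch word, e.g. "hoofdstad" (capital)
print(unmasker("Amsterdam is de [MASK] van Nederland."))
```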
## Benchmarks
The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures.
More experimental results will be added to this page when they are finished. Technical details about how these models were fine-tuned will be published later, as well as downloadable fine-tuned checkpoints.
All of the tested models are *base* sized (12 layers) with cased tokenization.
Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more fine-tuned models are made available.
### Named Entity Recognition
| Model | [CoNLL-2002](https://www.clips.uantwerpen.be/conll2002/ner/) | [SoNaR-1](https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus) | spaCy UD LassySmall |
| ---------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| **BERTje** | [**90.24**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner) | [**84.93**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-sonar-ner) | [86.10](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner) |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | [88.61](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner) | [84.19](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner) | [**86.77**](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner) |
| [BERT-NL](http://textdata.nl) | 85.05 | 80.45 | 81.62 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 84.72 | 81.98 | 79.84 |
### Part-of-speech tagging
| Model | [UDv2.5 LassySmall](https://universaldependencies.org/treebanks/nl_lassysmall/index.html) |
| ---------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
| **BERTje** | **96.48** |
| [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | 96.20 |
| [BERT-NL](http://textdata.nl) | 96.10 |
| [RobBERT](https://github.com/iPieter/RobBERT) | 95.91 |
### BibTeX entry and citation info
```bibtex
@misc{devries2019bertje,
  title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
  shorttitle = {{BERTje}},
  author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and Noord, Gertjan van and Nissim, Malvina},
  year = {2019},
  month = dec,
  howpublished = {arXiv:1912.09582},
  url = {http://arxiv.org/abs/1912.09582},
}
```
|
duyntnet/gemma-2-9b-it-imatrix-GGUF | duyntnet | "2024-06-28T18:26:58Z" | 24,008 | 1 | transformers | [
"transformers",
"gguf",
"imatrix",
"gemma-2-9b-it",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | "2024-06-28T15:30:16Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- gemma-2-9b-it
---
Quantizations of https://huggingface.co/google/gemma-2-9b-it
**Note**: You will need latest [llama.cpp](https://github.com/ggerganov/llama.cpp/releases) (b3259 or later) to run Gemma 2.
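For example, once you have downloaded one of the GGUF files from this repo, a recent llama.cpp build can chat with it directly (a sketch; the exact filename depends on the quant you pick, and `-cnv` enables conversation mode using the model's built-in chat template):

```bash
./llama-cli -m gemma-2-9b-it.Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```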
# From original readme
### Usage
Below we share some code snippets on how to get started quickly with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
``` |
femboysLover/RealisticStockPhoto-fp16 | femboysLover | "2023-09-04T11:13:32Z" | 24,003 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-09-04T11:07:34Z" | ---
library_name: diffusers
pipeline_tag: text-to-image
---
original model weights https://civitai.com/models/139565/realistic-stock-photo version 1.0 |
lllyasviel/control_v11p_sd15_softedge | lllyasviel | "2023-05-04T18:50:55Z" | 23,937 | 13 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | "2023-04-14T19:24:54Z" | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- controlnet-v1-1
- image-to-image
duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_softedge
---
# Controlnet - v1.1 - *Soft Edge Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_softedge.pth) into `diffusers` format.
It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet).
ControlNet is a neural network structure to control diffusion models by adding extra conditions.

This checkpoint corresponds to the ControlNet conditioned on **Soft edges**.
## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**

```bibtex
@misc{zhang2023adding,
    title={Adding Conditional Control to Text-to-Image Diffusion Models},
    author={Lvmin Zhang and Maneesh Agrawala},
    year={2023},
    eprint={2302.05543},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install https://github.com/patrickvonplaten/controlnet_aux
```sh
$ pip install controlnet_aux==0.3.0
```
2. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers accelerate
```
3. Run code:
```python
import torch
import os
from diffusers.utils import load_image
from controlnet_aux import PidiNetDetector, HEDdetector
from diffusers import (
ControlNetModel,
StableDiffusionControlNetPipeline,
UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15_softedge"
image = load_image(
"https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/input.png"
)
prompt = "royal chamber with fancy bed"
# Both HED and PidiNet can produce the soft-edge map; PidiNet is used here.
# processor = HEDdetector.from_pretrained('lllyasviel/Annotators')
processor = PidiNetDetector.from_pretrained('lllyasviel/Annotators')
control_image = processor(image, safe=True)
os.makedirs("images", exist_ok=True)  # make sure the output directory exists
control_image.save("./images/control.png")
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0]
image.save('images/image_out.png')
```
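The example above uses the PidiNet detector; the HED detector imported alongside it can be swapped in the same way (a minimal sketch, assuming the same `controlnet_aux` version):
```python
# Alternative soft-edge preprocessor: HED instead of PidiNet.
from controlnet_aux import HEDdetector

hed = HEDdetector.from_pretrained('lllyasviel/Annotators')
control_image = hed(image, safe=True)
```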



## Other released checkpoints v1-1
The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example |
|---|---|---|---|---|
|[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>|
|[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>|
|[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>|
## Improvements in Soft Edge 1.1
- Soft Edge 1.1 was called HED 1.0 in previous ControlNet.
- The training dataset of the previous ControlNet 1.0 had several problems, including: (1) a small group of greyscale human images were duplicated thousands of times (!!), making the previous model somewhat likely to generate grayscale human images; (2) some images had low quality, were very blurry, or had significant JPEG artifacts; (3) a small group of images had wrongly paired prompts caused by a mistake in our data processing scripts. The new model fixes all of these dataset problems and should behave more reasonably in many cases.
- Soft Edge 1.1 is significantly better than HED 1.0 (in nearly 100% of cases). This is mainly because the HED or PIDI estimator tends to hide a corrupted greyscale version of the original image inside the soft edge map, and the previous HED 1.0 model was over-fitted to restoring that hidden corrupted image rather than performing boundary-aware diffusion. The training of Soft Edge 1.1 used 75% "safe" filtering to remove such hidden corrupted greyscale images inside the control maps. This makes Soft Edge 1.1 very robust. In our tests, Soft Edge 1.1 is as usable as the depth model and has the potential to be used more frequently.
## More information
For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly). |
unsloth/gemma-2b-it | unsloth | "2024-04-18T15:00:50Z" | 23,886 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"gemma-2b",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-21T17:40:14Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- gemma
- gemma-2b
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
|
CiaraRowles/TemporalNet | CiaraRowles | "2023-04-05T22:59:34Z" | 23,873 | 347 | diffusers | [
"diffusers",
"safetensors",
"controlnet",
"stable-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | null | "2023-03-23T22:31:31Z" | ---
license: openrail
tags:
- controlnet
- stable-diffusion
- diffusers
base_model: runwayml/stable-diffusion-v1-5
---
Introducing the Beta Version of TemporalNet
TemporalNet is a ControlNet model designed to enhance the temporal consistency of generated outputs, as demonstrated in this example: https://twitter.com/CiaraRowles1/status/1637486561917906944. While it does not eliminate all flickering, it significantly reduces it, particularly at higher denoise levels. For optimal results, it is recommended to use TemporalNet in combination with other methods.
Instructions for Use:
1) Add the model "diff_control_sd15_temporalnet_fp16.safetensors" to your models folder in the ControlNet extension in Automatic1111's Web UI.
2) Create a folder (see the layout sketch after these steps) that contains:
- A subfolder named "Input_Images" with the input frames
- A PNG file called "init.png" that is pre-stylized in your desired style
- The "temporalvideo.py" script
3) Customize the "temporalvideo.py" script according to your preferences, such as the image resolution, prompt, and control net settings.
4) Launch Automatic1111's Web UI with the --api setting enabled.
5) Execute the Python script.
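For reference, the folder from step 2 would be laid out like this (the top-level folder name is arbitrary):
```
my_project/
├── Input_Images/      # the input frames
├── init.png           # pre-stylized reference image
└── temporalvideo.py   # the provided script
```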
*Please note that the "init.png" image will not significantly influence the style of the output video. Its primary purpose is to prevent a drastic change in aesthetics during the first few frames.*
Also, I highly recommend you use this in conjunction with the HED model; the settings are already in the script.
ToDo:
Write an Extension for the web ui.
Write a feature that automatically generates an "init.png" image if none is provided.
~~Change the extension to .safetensors and investigate compression.~~
|
foduucom/table-detection-and-extraction | foduucom | "2023-08-06T09:33:39Z" | 23,862 | 40 | ultralytics | [
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"table detection",
"table extraction",
"table classification",
"document analysis",
"unstructured document",
"unstructured table extraction",
"structured table extraction",
"unstructured table detection",
"structured table detection",
"en",
"dataset:foduucom/table-detection-yolo",
"model-index",
"region:us"
] | object-detection | "2023-08-05T09:44:39Z" | ---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- table detection
- table extraction
- table classification
- document analysis
- unstructured document
- unstructured table extraction
- structured table extraction
- unstructured table detection
- structured table detection
library_name: ultralytics
library_version: 8.0.43
inference: true
model-index:
- name: foduucom/table-detection-and-extraction
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.96196
name: [email protected](box)
language:
- en
metrics:
- accuracy
datasets:
- foduucom/table-detection-yolo
pipeline_tag: object-detection
---
<div align="center">
<img width="640" alt="foduucom/table-detection-and-extraction" src="https://huggingface.co/foduucom/table-detection-and-extraction/resolve/main/thumbnail.jpg">
</div>
# Model Card for YOLOv8s Table Detection
## Model Summary
The YOLOv8s Table Detection model is an object detection model based on the YOLO (You Only Look Once) framework. It is designed to detect tables, whether they are bordered or borderless, in images. The model has been fine-tuned on a vast dataset and achieved high accuracy in detecting tables and distinguishing between bordered and borderless ones.
## Model Details
### Model Description
The YOLOv8s Table Detection model serves as a versatile solution for precisely identifying tables within images, whether they exhibit a bordered or borderless design. Notably, this model's capabilities extend beyond mere detection – it plays a crucial role in addressing the complexities of unstructured documents. By employing advanced techniques such as bounding box delineation, the model enables users to isolate tables of interest within the visual content.
What sets this model apart is its synergy with Optical Character Recognition (OCR) technology. This seamless integration empowers the model to not only locate tables but also to extract pertinent data contained within. The bounding box information guides the cropping of tables, which is then coupled with OCR to meticulously extract textual data, streamlining the process of information retrieval from unstructured documents.
We invite you to explore the potential of this model and its data extraction capabilities. For those interested in harnessing its power or seeking further collaboration, we encourage you to reach out to us at [email protected]. Whether you require assistance, customization, or have innovative ideas, our collaborative approach is geared towards addressing your unique challenges. Additionally, you can actively engage with our vibrant community section for valuable insights and collective problem-solving. Your input drives our continuous improvement, as we collectively pave the way towards enhanced data extraction and document analysis.
- **Developed by:** FODUU AI
- **Model type:** Object Detection
- **Task:** Table Detection (Bordered and Borderless)
Furthermore, the YOLOv8s Table Detection model is not limited to table detection alone. It is a versatile tool that contributes to the processing of unstructured documents. By utilizing advanced bounding box techniques, the model empowers users to isolate tables within the document's visual content. What sets this model apart is its seamless integration with Optical Character Recognition (OCR) technology. The combination of bounding box information and OCR allows for precise data extraction from the tables. This comprehensive approach streamlines the process of information retrieval from complex documents.
User collaboration is actively encouraged to enrich the model's capabilities. By contributing table images of different designs and types, users play a pivotal role in enhancing the model's ability to detect a diverse range of tables accurately. Community participation can be facilitated through our platform or by reaching out to us at [email protected]. We value collaborative efforts that drive continuous improvement and innovation in table detection and extraction.
### Supported Labels
```
['bordered', 'borderless']
```
## Uses
### Direct Use
The YOLOv8s Table Detection model can be directly used for detecting tables in images, whether they are bordered or borderless. It is equipped with the ability to distinguish between these two categories.
### Downstream Use
The model can also be fine-tuned for specific table detection tasks or integrated into larger applications for furniture recognition, interior design, image-based data extraction, and other related fields.
### Out-of-Scope Use
The model is not designed for unrelated object detection tasks or scenarios outside the scope of table detection.
## Bias, Risks, and Limitations
The YOLOv8s Table Detection model may have some limitations and biases:
- Performance may vary based on the quality, diversity, and representativeness of the training data.
- The model may face challenges in detecting tables with intricate designs or complex arrangements.
- Accuracy may be affected by variations in lighting conditions, image quality, and resolution.
- Detection of very small or distant tables might be less accurate.
- The model's ability to classify bordered and borderless tables may be influenced by variations in design.
### Recommendations
Users should be informed about the model's limitations and potential biases. Further testing and validation are advised for specific use cases to evaluate its performance accurately.
## How to Get Started with the Model
To begin using the YOLOv8s Table Detection model, follow these steps:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('foduucom/table-detection-and-extraction')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = '/path/to/your/document/images'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
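The OCR step described earlier is not part of this package; as a rough sketch (assuming `pytesseract` as the OCR engine, which is a hypothetical choice rather than a stated dependency), the detected boxes can be used to crop and read each table:
```python
# Hypothetical follow-up: crop each detected table and run OCR on it.
from PIL import Image
import pytesseract  # assumed OCR dependency, not part of this model

img = Image.open(image)
for box in results[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0])   # bounding box corners
    table_crop = img.crop((x1, y1, x2, y2))  # isolate the table region
    print(pytesseract.image_to_string(table_crop))
```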
## Training Details
### Training Data
The model is trained on a diverse dataset containing images of tables from various sources. The dataset includes examples of both bordered and borderless tables, capturing different designs and styles.
### Training Procedure
The training process involves extensive computation and is conducted over multiple epochs. The model's weights are adjusted to minimize detection loss and optimize performance.
#### Metrics
- [email protected] (box):
- All: 0.962
- Bordered: 0.961
- Borderless: 0.963
### Model Architecture and Objective
The YOLOv8s architecture employs a modified CSPDarknet53 as its backbone, along with self-attention mechanisms and feature pyramid networks. These components contribute to the model's ability to detect and classify tables accurately, considering variations in size, design, and style.
### Compute Infrastructure
#### Hardware
NVIDIA GeForce RTX 3060 card
#### Software
The model was trained and fine-tuned using a Jupyter Notebook environment.
## Model Card Contact
For inquiries and contributions, please contact us at [email protected].
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Pranjal Singh Thakur},
title = {YOLOv8s Table Detection},
year = {2023}
}
```
--- |
Qdrant/all-MiniLM-L6-v2-onnx | Qdrant | "2024-01-16T08:09:43Z" | 23,850 | 2 | transformers | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2024-01-16T08:09:23Z" | Entry not found |
tomaarsen/span-marker-mbert-base-multinerd | tomaarsen | "2023-09-12T20:45:24Z" | 23,834 | 56 | span-marker | [
"span-marker",
"pytorch",
"tensorboard",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"multilingual",
"dataset:Babelscape/multinerd",
"base_model:bert-base-multilingual-cased",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | token-classification | "2023-08-07T06:59:57Z" | ---
license: cc-by-nc-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
pipeline_tag: token-classification
widget:
- text: "Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B über den Atlantik nach Paris."
example_title: "German"
- text: "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
example_title: "English"
- text: "Amelia Earthart voló su Lockheed Vega 5B monomotor a través del Océano Atlántico hasta París."
example_title: "Spanish"
- text: "Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B à travers l'ocean Atlantique jusqu'à Paris."
example_title: "French"
- text: "Amelia Earhart ha volato con il suo monomotore Lockheed Vega 5B attraverso l'Atlantico fino a Parigi."
example_title: "Italian"
- text: "Amelia Earthart vloog met haar één-motorige Lockheed Vega 5B over de Atlantische Oceaan naar Parijs."
example_title: "Dutch"
- text: "Amelia Earthart przeleciała swoim jednosilnikowym samolotem Lockheed Vega 5B przez Ocean Atlantycki do Paryża."
example_title: "Polish"
- text: "Amelia Earhart voou em seu monomotor Lockheed Vega 5B através do Atlântico para Paris."
example_title: "Portuguese"
- text: "Амелия Эртхарт перелетела на своем одномоторном самолете Lockheed Vega 5B через Атлантический океан в Париж."
example_title: "Russian"
- text: "Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til Parísar."
example_title: "Icelandic"
- text: "Η Amelia Earthart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα από τον Ατλαντικό Ωκεανό στο Παρίσι."
example_title: "Greek"
- text: "Amelia Earhartová přeletěla se svým jednomotorovým Lockheed Vega 5B přes Atlantik do Paříže."
example_title: "Czech"
- text: "Amelia Earhart lensi yksimoottorisella Lockheed Vega 5B:llä Atlantin yli Pariisiin."
example_title: "Finnish"
- text: "Amelia Earhart fløj med sin enmotoriske Lockheed Vega 5B over Atlanten til Paris."
example_title: "Danish"
- text: "Amelia Earhart flög sin enmotoriga Lockheed Vega 5B över Atlanten till Paris."
example_title: "Swedish"
- text: "Amelia Earhart fløy sin enmotoriske Lockheed Vega 5B over Atlanterhavet til Paris."
example_title: "Norwegian"
- text: "Amelia Earhart și-a zburat cu un singur motor Lockheed Vega 5B peste Atlantic până la Paris."
example_title: "Romanian"
- text: "Amelia Earhart menerbangkan mesin tunggal Lockheed Vega 5B melintasi Atlantik ke Paris."
example_title: "Indonesian"
- text: "Амелія Эрхарт пераляцела на сваім аднаматорным Lockheed Vega 5B праз Атлантыку ў Парыж."
example_title: "Belarusian"
- text: "Амелія Ергарт перелетіла на своєму одномоторному літаку Lockheed Vega 5B через Атлантику до Парижа."
example_title: "Ukrainian"
- text: "Amelia Earhart preletjela je svojim jednomotornim zrakoplovom Lockheed Vega 5B preko Atlantika do Pariza."
example_title: "Croatian"
- text: "Amelia Earhart lendas oma ühemootoriga Lockheed Vega 5B üle Atlandi ookeani Pariisi ."
example_title: "Estonian"
model-index:
- name: SpanMarker w. bert-base-multilingual-cased on MultiNERD by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
type: Babelscape/multinerd
name: MultiNERD
split: test
revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
metrics:
- type: f1
value: 0.92478
name: F1
- type: precision
value: 0.93385
name: Precision
- type: recall
value: 0.91588
name: Recall
datasets:
- Babelscape/multinerd
language:
- multilingual
metrics:
- f1
- recall
- precision
base_model: bert-base-multilingual-cased
---
# SpanMarker for Multilingual Named Entity Recognition
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for multilingual Named Entity Recognition trained on the [MultiNERD](https://huggingface.co/datasets/Babelscape/multinerd) dataset. In particular, this SpanMarker model uses [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) as the underlying encoder. See [train.py](train.py) for the training script.
Is your data not (always) capitalized correctly? Then consider using this uncased variant of this model by [@lxyuan](https://huggingface.co/lxyuan) for better performance:
[lxyuan/span-marker-bert-base-multilingual-uncased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-uncased-multinerd).
## Metrics
| **Language** | **Precision** | **Recall** | **F1** |
|--------------|---------------|------------|------------|
| **all** | 93.39 | 91.59 | **92.48** |
| **de** | 95.21 | 94.32 | **94.76** |
| **en** | 95.07 | 95.29 | **95.18** |
| **es** | 93.50 | 89.65 | **91.53** |
| **fr** | 93.86 | 90.07 | **91.92** |
| **it** | 91.63 | 93.57 | **92.59** |
| **nl** | 94.86 | 91.74 | **93.27** |
| **pl** | 93.51 | 91.83 | **92.66** |
| **pt** | 94.48 | 91.30 | **92.86** |
| **ru** | 93.70 | 93.10 | **93.39** |
| **zh** | 88.36 | 85.71 | **87.02** |
## Label set
| Class | Description | Examples |
|-------|-------------|----------|
| PER (person) | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
| ORG (organization) | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
| LOC (location) | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
| ANIM (animal) | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
| BIO (biological) | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
| CEL (celestial) | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
| DIS (disease) | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
| EVE (event) | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
| FOOD (food) | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
| INST (instrument) | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
| MEDIA (media) | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
| PLANT (plant) | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
| MYTH (mythological) | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
| TIME (time) | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
| VEHI (vehicle) | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar. |
## Usage
To use this model for inference, first install the `span_marker` library:
```bash
pip install span_marker
```
You can then run inference with this model like so:
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-mbert-base-multinerd")
# Run inference
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
```
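The predictions come back as a list of dictionaries; a quick way to inspect them (field names per the `span_marker` prediction format) is:
```python
# Print each predicted span with its label and confidence score.
for entity in entities:
    print(f"{entity['span']!r} -> {entity['label']} ({entity['score']:.2f})")
```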
See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
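As a non-authoritative sketch, these hyperparameters map onto the `span_marker` training API roughly as follows (see [train.py](train.py) for the actual script; `labels`, `train_ds`, and `eval_ds` are placeholders for the MultiNERD label list and prepared datasets):

```python
from transformers import TrainingArguments
from span_marker import SpanMarkerModel, Trainer

# Placeholders: labels, train_ds and eval_ds come from the MultiNERD dataset.
model = SpanMarkerModel.from_pretrained("bert-base-multilingual-cased", labels=labels)
args = TrainingArguments(
    output_dir="models/span_marker_mbert_base_multinerd",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    warmup_ratio=0.1,
    num_train_epochs=1,
    lr_scheduler_type="linear",
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```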
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0179 | 0.01 | 1000 | 0.0146 | 0.8101 | 0.7616 | 0.7851 | 0.9530 |
| 0.0099 | 0.02 | 2000 | 0.0091 | 0.8571 | 0.8425 | 0.8498 | 0.9663 |
| 0.0085 | 0.03 | 3000 | 0.0078 | 0.8729 | 0.8579 | 0.8653 | 0.9700 |
| 0.0075 | 0.04 | 4000 | 0.0072 | 0.8821 | 0.8724 | 0.8772 | 0.9739 |
| 0.0074 | 0.05 | 5000 | 0.0075 | 0.8622 | 0.8841 | 0.8730 | 0.9722 |
| 0.0074 | 0.06 | 6000 | 0.0067 | 0.9056 | 0.8568 | 0.8805 | 0.9749 |
| 0.0066 | 0.07 | 7000 | 0.0065 | 0.9082 | 0.8543 | 0.8804 | 0.9737 |
| 0.0063 | 0.08 | 8000 | 0.0066 | 0.9039 | 0.8617 | 0.8823 | 0.9745 |
| 0.0062 | 0.09 | 9000 | 0.0062 | 0.9323 | 0.8425 | 0.8852 | 0.9754 |
| 0.007 | 0.1 | 10000 | 0.0066 | 0.8898 | 0.8758 | 0.8827 | 0.9746 |
| 0.006 | 0.11 | 11000 | 0.0061 | 0.8986 | 0.8841 | 0.8913 | 0.9766 |
| 0.006 | 0.12 | 12000 | 0.0061 | 0.9171 | 0.8628 | 0.8891 | 0.9763 |
| 0.0062 | 0.13 | 13000 | 0.0060 | 0.9264 | 0.8634 | 0.8938 | 0.9772 |
| 0.0059 | 0.14 | 14000 | 0.0059 | 0.9323 | 0.8508 | 0.8897 | 0.9763 |
| 0.0059 | 0.15 | 15000 | 0.0060 | 0.9011 | 0.8815 | 0.8912 | 0.9758 |
| 0.0059 | 0.16 | 16000 | 0.0060 | 0.9221 | 0.8598 | 0.8898 | 0.9763 |
| 0.0056 | 0.17 | 17000 | 0.0058 | 0.9098 | 0.8839 | 0.8967 | 0.9775 |
| 0.0055 | 0.18 | 18000 | 0.0060 | 0.9103 | 0.8739 | 0.8917 | 0.9765 |
| 0.0054 | 0.19 | 19000 | 0.0056 | 0.9135 | 0.8726 | 0.8925 | 0.9774 |
| 0.0052 | 0.2 | 20000 | 0.0058 | 0.9108 | 0.8834 | 0.8969 | 0.9773 |
| 0.0053 | 0.21 | 21000 | 0.0058 | 0.9038 | 0.8866 | 0.8951 | 0.9773 |
| 0.0057 | 0.22 | 22000 | 0.0057 | 0.9130 | 0.8762 | 0.8942 | 0.9775 |
| 0.0056 | 0.23 | 23000 | 0.0053 | 0.9375 | 0.8604 | 0.8973 | 0.9781 |
| 0.005 | 0.24 | 24000 | 0.0054 | 0.9253 | 0.8822 | 0.9032 | 0.9784 |
| 0.0055 | 0.25 | 25000 | 0.0055 | 0.9182 | 0.8807 | 0.8991 | 0.9787 |
| 0.0049 | 0.26 | 26000 | 0.0053 | 0.9311 | 0.8702 | 0.8997 | 0.9783 |
| 0.0051 | 0.27 | 27000 | 0.0054 | 0.9192 | 0.8877 | 0.9032 | 0.9787 |
| 0.0051 | 0.28 | 28000 | 0.0053 | 0.9332 | 0.8783 | 0.9049 | 0.9795 |
| 0.0049 | 0.29 | 29000 | 0.0054 | 0.9311 | 0.8672 | 0.8981 | 0.9789 |
| 0.0047 | 0.3 | 30000 | 0.0054 | 0.9165 | 0.8954 | 0.9058 | 0.9796 |
| 0.005 | 0.31 | 31000 | 0.0052 | 0.9079 | 0.9016 | 0.9047 | 0.9787 |
| 0.0051 | 0.32 | 32000 | 0.0051 | 0.9157 | 0.9001 | 0.9078 | 0.9796 |
| 0.0046 | 0.33 | 33000 | 0.0051 | 0.9147 | 0.8935 | 0.9040 | 0.9788 |
| 0.0046 | 0.34 | 34000 | 0.0050 | 0.9229 | 0.8847 | 0.9034 | 0.9793 |
| 0.005 | 0.35 | 35000 | 0.0051 | 0.9198 | 0.8922 | 0.9058 | 0.9796 |
| 0.0047 | 0.36 | 36000 | 0.0050 | 0.9321 | 0.8890 | 0.9100 | 0.9807 |
| 0.0048 | 0.37 | 37000 | 0.0050 | 0.9046 | 0.9133 | 0.9089 | 0.9800 |
| 0.0046 | 0.38 | 38000 | 0.0051 | 0.9170 | 0.8973 | 0.9071 | 0.9806 |
| 0.0048 | 0.39 | 39000 | 0.0050 | 0.9417 | 0.8775 | 0.9084 | 0.9805 |
| 0.0042 | 0.4 | 40000 | 0.0049 | 0.9238 | 0.8937 | 0.9085 | 0.9797 |
| 0.0038 | 0.41 | 41000 | 0.0048 | 0.9371 | 0.8920 | 0.9140 | 0.9812 |
| 0.0042 | 0.42 | 42000 | 0.0048 | 0.9359 | 0.8862 | 0.9104 | 0.9808 |
| 0.0051 | 0.43 | 43000 | 0.0049 | 0.9080 | 0.9060 | 0.9070 | 0.9805 |
| 0.0037 | 0.44 | 44000 | 0.0049 | 0.9328 | 0.8877 | 0.9097 | 0.9801 |
| 0.0041 | 0.45 | 45000 | 0.0049 | 0.9231 | 0.8975 | 0.9101 | 0.9813 |
| 0.0046 | 0.46 | 46000 | 0.0046 | 0.9308 | 0.8943 | 0.9122 | 0.9812 |
| 0.0038 | 0.47 | 47000 | 0.0047 | 0.9291 | 0.8969 | 0.9127 | 0.9815 |
| 0.0043 | 0.48 | 48000 | 0.0046 | 0.9308 | 0.8909 | 0.9104 | 0.9804 |
| 0.0043 | 0.49 | 49000 | 0.0046 | 0.9278 | 0.8954 | 0.9113 | 0.9800 |
| 0.0039 | 0.5 | 50000 | 0.0047 | 0.9173 | 0.9073 | 0.9123 | 0.9817 |
| 0.0043 | 0.51 | 51000 | 0.0045 | 0.9347 | 0.8962 | 0.9150 | 0.9821 |
| 0.0047 | 0.52 | 52000 | 0.0045 | 0.9266 | 0.9016 | 0.9139 | 0.9810 |
| 0.0035 | 0.53 | 53000 | 0.0046 | 0.9165 | 0.9122 | 0.9144 | 0.9820 |
| 0.0038 | 0.54 | 54000 | 0.0046 | 0.9231 | 0.9050 | 0.9139 | 0.9823 |
| 0.0036 | 0.55 | 55000 | 0.0046 | 0.9331 | 0.9005 | 0.9165 | 0.9828 |
| 0.0037 | 0.56 | 56000 | 0.0047 | 0.9246 | 0.9016 | 0.9129 | 0.9821 |
| 0.0035 | 0.57 | 57000 | 0.0044 | 0.9351 | 0.9003 | 0.9174 | 0.9829 |
| 0.0043 | 0.57 | 58000 | 0.0043 | 0.9257 | 0.9079 | 0.9167 | 0.9826 |
| 0.004 | 0.58 | 59000 | 0.0043 | 0.9286 | 0.9065 | 0.9174 | 0.9823 |
| 0.0041 | 0.59 | 60000 | 0.0044 | 0.9324 | 0.9050 | 0.9185 | 0.9825 |
| 0.0039 | 0.6 | 61000 | 0.0044 | 0.9268 | 0.9041 | 0.9153 | 0.9815 |
| 0.0038 | 0.61 | 62000 | 0.0043 | 0.9367 | 0.8918 | 0.9137 | 0.9819 |
| 0.0037 | 0.62 | 63000 | 0.0044 | 0.9249 | 0.9160 | 0.9205 | 0.9833 |
| 0.0036 | 0.63 | 64000 | 0.0043 | 0.9398 | 0.8975 | 0.9181 | 0.9827 |
| 0.0036 | 0.64 | 65000 | 0.0043 | 0.9260 | 0.9118 | 0.9188 | 0.9829 |
| 0.0035 | 0.65 | 66000 | 0.0044 | 0.9375 | 0.8988 | 0.9178 | 0.9828 |
| 0.0034 | 0.66 | 67000 | 0.0043 | 0.9272 | 0.9143 | 0.9207 | 0.9833 |
| 0.0033 | 0.67 | 68000 | 0.0044 | 0.9332 | 0.9024 | 0.9176 | 0.9827 |
| 0.0035 | 0.68 | 69000 | 0.0044 | 0.9396 | 0.8981 | 0.9184 | 0.9825 |
| 0.0038 | 0.69 | 70000 | 0.0042 | 0.9265 | 0.9163 | 0.9214 | 0.9827 |
| 0.0035 | 0.7 | 71000 | 0.0044 | 0.9375 | 0.9013 | 0.9191 | 0.9827 |
| 0.0037 | 0.71 | 72000 | 0.0042 | 0.9264 | 0.9171 | 0.9217 | 0.9830 |
| 0.0039 | 0.72 | 73000 | 0.0043 | 0.9399 | 0.9003 | 0.9197 | 0.9826 |
| 0.0039 | 0.73 | 74000 | 0.0041 | 0.9341 | 0.9094 | 0.9216 | 0.9832 |
| 0.0035 | 0.74 | 75000 | 0.0042 | 0.9301 | 0.9160 | 0.9230 | 0.9837 |
| 0.0037 | 0.75 | 76000 | 0.0042 | 0.9342 | 0.9107 | 0.9223 | 0.9835 |
| 0.0034 | 0.76 | 77000 | 0.0042 | 0.9331 | 0.9118 | 0.9223 | 0.9836 |
| 0.003 | 0.77 | 78000 | 0.0041 | 0.9330 | 0.9135 | 0.9231 | 0.9838 |
| 0.0034 | 0.78 | 79000 | 0.0041 | 0.9308 | 0.9082 | 0.9193 | 0.9832 |
| 0.0037 | 0.79 | 80000 | 0.0040 | 0.9346 | 0.9128 | 0.9236 | 0.9839 |
| 0.0032 | 0.8 | 81000 | 0.0041 | 0.9389 | 0.9128 | 0.9257 | 0.9841 |
| 0.0031 | 0.81 | 82000 | 0.0040 | 0.9293 | 0.9163 | 0.9227 | 0.9836 |
| 0.0032 | 0.82 | 83000 | 0.0041 | 0.9305 | 0.9160 | 0.9232 | 0.9835 |
| 0.0034 | 0.83 | 84000 | 0.0041 | 0.9327 | 0.9118 | 0.9221 | 0.9838 |
| 0.0028 | 0.84 | 85000 | 0.0041 | 0.9279 | 0.9216 | 0.9247 | 0.9839 |
| 0.0031 | 0.85 | 86000 | 0.0041 | 0.9326 | 0.9167 | 0.9246 | 0.9838 |
| 0.0029 | 0.86 | 87000 | 0.0040 | 0.9354 | 0.9158 | 0.9255 | 0.9841 |
| 0.0031 | 0.87 | 88000 | 0.0041 | 0.9327 | 0.9156 | 0.9241 | 0.9840 |
| 0.0033 | 0.88 | 89000 | 0.0040 | 0.9367 | 0.9141 | 0.9253 | 0.9846 |
| 0.0031 | 0.89 | 90000 | 0.0040 | 0.9379 | 0.9141 | 0.9259 | 0.9844 |
| 0.0031 | 0.9 | 91000 | 0.0040 | 0.9297 | 0.9184 | 0.9240 | 0.9843 |
| 0.0034 | 0.91 | 92000 | 0.0040 | 0.9299 | 0.9188 | 0.9243 | 0.9843 |
| 0.0036 | 0.92 | 93000 | 0.0039 | 0.9324 | 0.9175 | 0.9249 | 0.9843 |
| 0.0028 | 0.93 | 94000 | 0.0039 | 0.9399 | 0.9135 | 0.9265 | 0.9848 |
| 0.0029 | 0.94 | 95000 | 0.0040 | 0.9342 | 0.9173 | 0.9257 | 0.9845 |
| 0.003 | 0.95 | 96000 | 0.0040 | 0.9378 | 0.9184 | 0.9280 | 0.9850 |
| 0.0029 | 0.96 | 97000 | 0.0039 | 0.9380 | 0.9152 | 0.9264 | 0.9847 |
| 0.003 | 0.97 | 98000 | 0.0039 | 0.9372 | 0.9156 | 0.9263 | 0.9849 |
| 0.003 | 0.98 | 99000 | 0.0039 | 0.9387 | 0.9167 | 0.9276 | 0.9851 |
| 0.0031 | 0.99 | 100000 | 0.0039 | 0.9373 | 0.9177 | 0.9274 | 0.9849 |
### Framework versions
- SpanMarker 1.2.4
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
## See also
* [lxyuan/span-marker-bert-base-multilingual-cased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd) is similar to this model, but trained on 3 epochs instead of 2. It reaches better performance on 7 out of the 10 languages.
* [lxyuan/span-marker-bert-base-multilingual-uncased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-uncased-multinerd) is a strong uncased variant of this model, also trained on 3 epochs instead of 2.
## Contributions
Many thanks to [Simone Tedeschi](https://huggingface.co/sted97) from [Babelscape](https://babelscape.com) for his insight when training this model and his involvement in the creation of the training dataset.
|
emanjavacas/MacBERTh | emanjavacas | "2023-11-28T14:15:10Z" | 23,831 | 6 | transformers | [
"transformers",
"pytorch",
"bert",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
license: mit
language:
- en
---
# MacBERTh
This model is a Historical Language Model for English coming from the [MacBERTh project](https://macberth.netlify.app/).
The architecture is based on BERT base uncased from the original BERT pre-training codebase.
The training material comes from different sources including:
- EEBO
- ECCO
- COHA
- CLMET3.1
- EVANS
- Hansard Corpus
amounting to approximately 3.9B tokens in total.
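The card does not include a usage snippet; a minimal sketch, assuming the standard `transformers` loading path for a BERT checkpoint:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emanjavacas/MacBERTh")
model = AutoModel.from_pretrained("emanjavacas/MacBERTh")

# Encode an Early Modern English sentence (illustrative input).
inputs = tokenizer("Thou art a Scholler; speake to it, Horatio.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```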
Details and evaluation can be found in the accompanying publications:
- [MacBERTh: Development and Evaluation of a Historically Pre-trained Language Model for English (1450-1950)](https://aclanthology.org/2021.nlp4dh-1.4/)
- [Adapting vs. Pre-training Language Models for Historical Languages](https://doi.org/10.46298/jdmdh.9152) |