modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---
QuantFactory/llama-3-sqlcoder-8b-GGUF | QuantFactory | 2024-06-07T04:24:59Z | 634 | 0 | null | [
"gguf",
"code",
"text-generation",
"base_model:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2024-05-29T04:25:33Z | ---
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
base_model: defog/llama-3-sqlcoder-8b
---
# QuantFactory/llama-3-sqlcoder-8b-GGUF
This is a quantized version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b), created using llama.cpp.
## Model Description
A capable language model for text-to-SQL generation for Postgres, Redshift, and Snowflake that is on par with the most capable generalist frontier models.

Developed by: Defog, Inc.
Model type: Text to SQL
License: CC-BY-SA-4.0
Finetuned from model: Meta-Llama-3-8B-Instruct
## Demo Page
[https://defog.ai/sqlcoder-demo/](https://defog.ai/sqlcoder-demo/)
## Ideal prompt and inference parameters
Set the temperature to 0 and disable sampling (i.e., use greedy decoding).
### Prompt
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Generate a SQL query to answer this question: `{user_question}`
{instructions}
DDL statements:
{create_table_statements}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
The following SQL query best answers the question `{user_question}`:
```sql
```
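As a minimal sketch, the settings above can be applied with the `llama-cpp-python` bindings (the local GGUF file name, the example question, and the DDL below are illustrative assumptions, not part of this repo's documentation):

```python
# Minimal sketch: greedy decoding with llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name, question, and DDL are placeholders for illustration.
from llama_cpp import Llama

llm = Llama(model_path="llama-3-sqlcoder-8b.Q4_K_M.gguf", n_ctx=4096)

question = "How many users signed up last week?"
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    f"Generate a SQL query to answer this question: `{question}`\n"
    "DDL statements:\n"
    "CREATE TABLE users (id INT, created_at TIMESTAMP);"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    f"The following SQL query best answers the question `{question}`:\n"
    "```sql\n"
)

# temperature=0 with top_k=1 forces greedy decoding, matching the recommendation above.
out = llm(prompt, max_tokens=256, temperature=0.0, top_k=1, stop=["```"])
print(out["choices"][0]["text"])
```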
## Evaluation
This model was evaluated on SQL-Eval, a PostgreSQL-based evaluation framework developed by Defog for testing and alignment of model capabilities.
You can read more about the methodology behind SQL-Eval [here](https://defog.ai/blog/open-sourcing-sqleval/).
## Contact defog
Contact defog on X at [@defogdata](https://twitter.com/defogdata), or by email at [email protected] |
mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF | mradermacher | 2024-06-11T03:39:44Z | 634 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-gpt4-m2.0",
"base_model:jondurbin/airoboros-65b-gpt4-m2.0",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-10T09:20:06Z | ---
base_model: jondurbin/airoboros-65b-gpt4-m2.0
datasets:
- jondurbin/airoboros-gpt4-m2.0
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-65b-gpt4-m2.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
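For the split Q6_K file listed below, joining the parts is plain byte concatenation as described in those READMEs (equivalent to `cat part1 part2 > out`); a minimal Python sketch, with file names taken from the quant table:

```python
# Minimal sketch: join a multi-part GGUF by byte concatenation before loading it.
import shutil

parts = [
    "airoboros-65b-gpt4-m2.0.i1-Q6_K.gguf.part1of2",
    "airoboros-65b-gpt4-m2.0.i1-Q6_K.gguf.part2of2",
]

with open("airoboros-65b-gpt4-m2.0.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams the copy; avoids loading ~54 GB into RAM
```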
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ1_S.gguf) | i1-IQ1_S | 14.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ1_M.gguf) | i1-IQ1_M | 15.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 17.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ2_S.gguf) | i1-IQ2_S | 20.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ2_M.gguf) | i1-IQ2_M | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q2_K.gguf) | i1-Q2_K | 24.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 24.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 26.7 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ3_S.gguf) | i1-IQ3_S | 28.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 28.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ3_M.gguf) | i1-IQ3_M | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 31.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 34.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 34.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q4_0.gguf) | i1-Q4_0 | 37.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 37.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 39.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 45.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 46.3 | |
| [PART 1](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-65b-gpt4-m2.0-i1-GGUF/resolve/main/airoboros-65b-gpt4-m2.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 53.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF | CHE-72 | 2024-06-21T20:36:16Z | 634 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
] | text-generation | 2024-06-21T20:35:40Z | ---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -c 2048
```
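Once running, `llama-server` also exposes an OpenAI-compatible HTTP API; below is a minimal sketch for querying it from Python (the default host/port `127.0.0.1:8080` and the example question, borrowed from the widget above, are assumptions):

```python
# Minimal sketch: query a running llama-server via its OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user",
                      "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}],
        "temperature": 0.7,  # matches the widget's suggested inference temperature
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```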
|
CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF | CHE-72 | 2024-06-22T07:28:04Z | 634 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:01-ai/Yi-1.5-6B-Chat",
"license:apache-2.0",
"region:us"
] | null | 2024-06-22T07:27:49Z | ---
base_model: 01-ai/Yi-1.5-6B-Chat
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-6b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-6b-chat-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-6b-chat-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q4_K_S-GGUF --hf-file yi-1.5-6b-chat-q4_k_s.gguf -c 2048
```
|
huggingtweets/sexycuckolding | huggingtweets | 2021-08-14T12:11:30Z | 633 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/sexycuckolding/1628943086648/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1392455809330819072/POjhVAU1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cuckolding (female perspective)</div>
<div style="text-align: center; font-size: 14px;">@sexycuckolding</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cuckolding (female perspective).
| Data | Cuckolding (female perspective) |
| --- | --- |
| Tweets downloaded | 2651 |
| Retweets | 364 |
| Short tweets | 311 |
| Tweets kept | 1976 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/120lf3ey/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sexycuckolding's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gmuegp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gmuegp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/sexycuckolding')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
unicamp-dl/ptt5-large-portuguese-vocab | unicamp-dl | 2024-04-10T17:49:10Z | 633 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"tensorflow",
"pt",
"pt-br",
"dataset:brWaC",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language: pt
license: mit
tags:
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- brWaC
widget:
- text: "Texto de exemplo em português"
inference: false
---
# Portuguese T5 (aka "PTT5")
## Introduction
PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, which improves T5's performance on Portuguese sentence similarity and entailment tasks. It is available in three sizes (small, base and large) and with two vocabularies (Google's original T5 vocabulary and ours, trained on Portuguese Wikipedia).
For further information or requests, please go to [PTT5 repository](https://github.com/unicamp-dl/PTT5).
## Available models
| Model | Size | #Params | Vocabulary |
| :-: | :-: | :-: | :-: |
| [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 |
| [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 |
| [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 |
| [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese |
| **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** |
| [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese |
## Usage
```python
# Tokenizer
from transformers import T5Tokenizer
# PyTorch (bare model, baremodel + language modeling head)
from transformers import T5Model, T5ForConditionalGeneration
# Tensorflow (bare model, baremodel + language modeling head)
from transformers import TFT5Model, TFT5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
# PyTorch
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)
# TensorFlow
model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name)
```
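Continuing from the snippet above, a minimal generation sketch with the PyTorch model (the input sentence is arbitrary; as a pretrained model, PTT5 is most useful after task-specific fine-tuning):

```python
# Minimal generation sketch with the PyTorch model.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-portuguese-vocab'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model_pt = T5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("Texto de exemplo em português", return_tensors="pt")
generated = model_pt.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```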
# Citation
If you use PTT5, please cite:
```bibtex
@article{ptt5_2020,
  title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data},
  author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto},
  journal={arXiv preprint arXiv:2008.09144},
  year={2020}
}
```
|
castorini/monot5-3b-msmarco-10k | castorini | 2022-08-31T19:20:16Z | 633 | 12 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.02873",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-28T15:08:54Z | This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
This model is also the state of the art on the BEIR Benchmark.
- Paper: [No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval](https://arxiv.org/abs/2206.02873)
- [Repository](https://github.com/guilhermemr04/scaling-zero-shot-retrieval)
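For reference, a minimal sketch of monoT5-style pointwise reranking with plain `transformers` (the `Query: ... Document: ... Relevant:` template and true/false token scoring follow the monoT5 recipe; treat the details as assumptions and prefer pygaggle, linked above, as the reference implementation):

```python
# Hedged sketch: monoT5-style pointwise reranking; pygaggle is the reference implementation.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

name = "castorini/monot5-3b-msmarco-10k"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and sun on the oceans."
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:", return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)

# Relevance = probability mass on "true" vs "false" for the first generated token.
logits = out.scores[0][0]
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.3f}")
```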
|
timm/repvgg_b1g4.rvgg_in1k | timm | 2024-02-10T23:34:58Z | 633 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2101.03697",
"license:mit",
"region:us"
] | image-classification | 2023-03-22T07:20:49Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for repvgg_b1g4.rvgg_in1k
A RepVGG image classification model. Trained on ImageNet-1k by paper authors.
This model architecture is implemented using `timm`'s flexible [BYOBNet (Bring-Your-Own-Blocks Network)](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/byobnet.py).
BYOBNet allows configuration of:
* block / stage layout
* stem layout
* output stride (dilation)
* activation and norm layers
* channel and spatial / self-attention layers
...and also includes `timm` features common to many other architectures, including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* per-stage feature extraction
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 40.0
- GMACs: 8.1
- Activations (M): 10.6
- Image size: 224 x 224
- **Papers:**
- RepVGG: Making VGG-style ConvNets Great Again: https://arxiv.org/abs/2101.03697
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/DingXiaoH/RepVGG
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # torch.topk is used on the model output below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('repvgg_b1g4.rvgg_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b1g4.rvgg_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'repvgg_b1g4.rvgg_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{ding2021repvgg,
title={Repvgg: Making vgg-style convnets great again},
author={Ding, Xiaohan and Zhang, Xiangyu and Ma, Ningning and Han, Jungong and Ding, Guiguang and Sun, Jian},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13733--13742},
year={2021}
}
```
|
timm/resnet18.fb_ssl_yfcc100m_ft_in1k | timm | 2024-02-10T23:38:37Z | 633 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1905.00546",
"arxiv:1512.03385",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | 2023-04-05T18:03:30Z | ---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet18.fb_ssl_yfcc100m_ft_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Pretrained on a subset of YFCC100M using semi-supervised learning and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 11.7
- GMACs: 1.8
- Activations (M): 2.5
- Image size: 224 x 224
- **Papers:**
- Billion-scale semi-supervised learning for image classification: https://arxiv.org/abs/1905.00546
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/facebookresearch/semi-supervised-ImageNet1K-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # torch.topk is used on the model output below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnet18.fb_ssl_yfcc100m_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet18.fb_ssl_yfcc100m_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnet18.fb_ssl_yfcc100m_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@misc{yalniz2019billionscale,
title={Billion-scale semi-supervised learning for image classification},
author={I. Zeki Yalniz and Hervé Jégou and Kan Chen and Manohar Paluri and Dhruv Mahajan},
year={2019},
eprint={1905.00546},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
NEU-HAI/Llama-2-7b-alpaca-cleaned | NEU-HAI | 2023-08-24T02:51:32Z | 633 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"alpaca",
"en",
"dataset:yahma/alpaca-cleaned",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-22T18:04:30Z | ---
license: cc-by-nc-4.0
datasets:
- yahma/alpaca-cleaned
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
- alpaca
---
# Model Card for Llama-2-7b-alpaca-cleaned
<!-- Provide a quick summary of what the model is/does. -->
This model checkpoint is the Llama-2-7b fine-tuned on [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyper-parameters.
## Model Details
### Model Description
This model checkpoint is the Llama-2-7b fine-tuned on [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) with the original Alpaca fine-tuning hyper-parameters. \
The original Alpaca model is fine-tuned on Llama with the alpaca dataset by researchers from Stanford University.
- **Developed by:** NEU Human-centered AI Lab
- **Shared by [optional]:** NEU Human-centered AI Lab
- **Model type:** Text-generation
- **Language(s) (NLP):** English
- **License:** cc-by-nc-4.0 (comply with the alpaca-cleaned dataset)
- **Finetuned from model [optional]:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/meta-llama/Llama-2-7b
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is intended to be used for research purposes only in English, complying with [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca). \
The model has been fine-tuned on the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned) for assistant-like chat and general natural language generation tasks. \
The use of this model should also comply with the restrictions from [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The out-of-Scope use of this model should also comply with [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca) and [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b).
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
```
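Once loaded, you can generate text with the standard Alpaca prompt template. The sketch below is minimal and assumes the stanford_alpaca prompt format the model was fine-tuned with; the instruction text is just an example.
```python
# Minimal generation sketch using the Alpaca prompt template (stanford_alpaca recipe).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```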
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We use the [alpaca-cleaned dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned), which is the cleaned version of the original [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) created by researchers from Stanford University.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We follow the same training procedure and mostly the same hyper-parameters used to fine-tune the original Alpaca model on Llama. The procedure can be found in the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca).
#### Training Hyperparameters
```
--bf16 True \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
```
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
N/A
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
N/A
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
N/A
### Results
N/A
#### Summary
N/A
<!--
## Environmental Impact
Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** {{ hardware | default("[More Information Needed]", true)}}
- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}}
- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}}
- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}
-->
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
Please cite the [stanford_alpaca project](https://github.com/tatsu-lab/stanford_alpaca)
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
## Model Card Authors
Northeastern Human-centered AI Lab
## Model Card Contact
|
ntc-ai/SDXL-LoRA-slider.Studio-Ghibli-style | ntc-ai | 2024-02-06T00:33:20Z | 633 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-15T13:29:32Z |
---
language:
- en
thumbnail: "images/Studio Ghibli style_17_3.0.png"
widget:
- text: Studio Ghibli style
output:
url: images/Studio Ghibli style_17_3.0.png
- text: Studio Ghibli style
output:
url: images/Studio Ghibli style_19_3.0.png
- text: Studio Ghibli style
output:
url: images/Studio Ghibli style_20_3.0.png
- text: Studio Ghibli style
output:
url: images/Studio Ghibli style_21_3.0.png
- text: Studio Ghibli style
output:
url: images/Studio Ghibli style_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "Studio Ghibli style"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - Studio Ghibli style (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/Studio Ghibli style_17_-3.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_17_0.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_17_3.0.png" width=256 height=256 /> |
| <img src="images/Studio Ghibli style_19_-3.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_19_0.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_19_3.0.png" width=256 height=256 /> |
| <img src="images/Studio Ghibli style_20_-3.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_20_0.0.png" width=256 height=256 /> | <img src="images/Studio Ghibli style_20_3.0.png" width=256 height=256 /> |
See more at [https://sliders.ntcai.xyz/sliders/app/loras/42dfd05f-0912-4a6b-852f-62521308897b](https://sliders.ntcai.xyz/sliders/app/loras/42dfd05f-0912-4a6b-852f-62521308897b)
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
Studio Ghibli style
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.Studio-Ghibli-style', weight_name='Studio Ghibli style.safetensors', adapter_name="Studio Ghibli style")
# Activate the LoRA
pipe.set_adapters(["Studio Ghibli style"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, Studio Ghibli style"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14602+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges, opening up endless possibilities.
Your support on Patreon will allow us to continue developing new models and tools.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
OwenArli/Awanllm-Llama-3-8B-Cumulus-v0.2 | OwenArli | 2024-05-03T03:15:48Z | 633 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-02T22:16:36Z | ---
license: llama3
---
Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
This is by far the most completely uncensored Llama 3 8b instruct model. It will literally never refuse anything.
So as a reminder, with great power comes great responsibility.
In terms of reasoning and intelligence, this model is probably worse than the OG model because of the decensoring. However, if you have issues with refusals, this one will be superior simply because it will not refuse.
Quants will soon be uploaded here on HF, and the model will be up on our site https://awanllm.com for anyone to try.
OpenLLM Benchmark:

Training:
- Trained at a 4096 sequence length, while the base model uses 8192. From testing, it still performs fine at the full 8192 context.
- Training duration was around 3 days on an RTX 4090, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.
- Added DPO fine-tuning in addition to a more curated dataset for this v0.2 model.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
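For a quick test with transformers, here is a minimal chat-template sketch (loading in bfloat16 with `device_map="auto"` is an assumption; adjust to your hardware):
```python
# Minimal sketch: generate with the Llama 3 chat template via transformers.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "OwenArli/Awanllm-Llama-3-8B-Cumulus-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about rain."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```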
Quants:
FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v0.2
GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v0.2-GGUF
|
v000000/YamWizard28-7B-Q8_0-GGUF | v000000 | 2024-06-22T02:43:11Z | 633 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"mistral",
"llama-cpp",
"base_model:v000000/YamWizard28-7B-abliterated",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T00:14:43Z | ---
base_model: v000000/YamWizard28-7B-abliterated
library_name: transformers
tags:
- mergekit
- merge
- mistral
- llama-cpp
---
# v000000/YamWizard28-7B-Q8_0-GGUF
This model was converted to GGUF format from [`v000000/YamWizard28-7B`](https://huggingface.co/v000000/YamWizard28-7B) using llama.cpp
Refer to the [original model card](https://huggingface.co/v000000/YamWizard28-7B) for more details on the model.
### YamWizard28-7B
idk

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [fearlessdots/WizardLM-2-7B-abliterated](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: fearlessdots/WizardLM-2-7B-abliterated
layer_range: [0, 32]
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: fearlessdots/WizardLM-2-7B-abliterated
parameters:
t:
- filter: self_attn
value: [0.1, 0.6, 0.3, 0.8, 0.5]
- filter: mlp
value: [0.9, 0.4, 0.7, 0.2, 0.5]
- value: 0.5
dtype: bfloat16
```
### Prompt Format (Alpaca):
```bash
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
### Instruction:
{prompt}
### Response:
{output}
```
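A minimal llama-cpp-python sketch using the quant in this repo together with the Alpaca format above (the GGUF file name is an assumption; use the actual file from this repo):
```python
# Minimal sketch: run the Q8_0 GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="yamwizard28-7b-q8_0.gguf", n_ctx=4096)  # file name assumed
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nDescribe what the SLERP merge method does.\n\n### Response:\n"
)
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```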
|
Sygil/Sygil-Diffusion | Sygil | 2023-09-10T01:46:55Z | 632 | 36 | diffusers | [
"diffusers",
"stable-diffusion",
"sygil-diffusion",
"text-to-image",
"sygil-devs",
"finetune",
"stable-diffusion-1.5",
"en",
"ja",
"es",
"zh",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-31T12:09:07Z | ---
license: openrail++
language:
- en
- ja
- es
- zh
widget:
- text: environment art, realistic
example_title: Concept Art 1
- text: environment concept art, high quality
example_title: Concept Art 2
- text: environment,landscape, wallpaper
example_title: Concept Art 3
- text: a beautiful illustration of a fantasy forest
example_title: Fantasy Forest
tags:
- stable-diffusion
- sygil-diffusion
- text-to-image
- sygil-devs
- finetune
- stable-diffusion-1.5
inference: true
pinned: true
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
---
# About the model
-----------------
This model is a fine-tune of Stable Diffusion, trained on the [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset), with the big advantage of allowing the use of multiple namespaces (labeled tags) to control various parts of the final generation.
While current models usually are prone to “context errors” and need substantial negative prompting to set them on the right track, the use of namespaces in this model (e.g. “species:seal” or “studio:dc”) stops the model from misinterpreting a seal as the singer Seal, or DC Comics as Washington DC.
This model is also able to understand other languages besides English; currently it can partially understand prompts in Chinese, Japanese and Spanish. More training is already being done so the model can completely understand those languages and work just as well as it does with English prompts.
As the model is fine-tuned on a wide variety of content, it’s able to generate many types of images and compositions, and easily outperforms the original model when it comes to portraits, architecture, reflections, fantasy, concept art, anime, landscapes and a lot more without being hyper-specialized like other community fine-tunes that are currently available.
**Note: The prompt engineering techniques needed are slightly different from other fine-tunes and the original Stable Diffusion model, so while you can still use your favorite prompts, for best results you might need to tweak them to make use of namespaces. A more detailed guide will be available later on, but the tags and namespaces found in the [Dataset Explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) should be able to start you off on the right track.**
If you find my work useful, please consider supporting me on [GitHub Sponsors](https://github.com/sponsors/ZeroCool940711)!
This model is still in its infancy and it's meant to be constantly updated and trained with more and more data as time goes by, so feel free to give us feedback on our [Discord Server](https://discord.gg/ttM8Tm6wge) or on the discussions section on huggingface. We plan to improve it with more, better tags in the future, so any help is always welcome 😛
[](https://discord.gg/ttM8Tm6wge)
# Showcase

## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Sygil Diffusion in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to DPMSolverMultistepScheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
model_id = "Sygil/Sygil-Diffusion"
# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a beautiful illustration of a fantasy forest"
image = pipe(prompt).images[0]
image.save("fantasy_forest_illustration.png")
```
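Building on the snippet above, namespaced prompts can steer specific parts of a generation. This is a sketch: the `species:` namespace is one of the two mentioned in this card, and the rest of the prompt is illustrative; check the Dataset Explorer linked above for the full tag list.
```python
# Hypothetical namespaced prompt; "species:" is one of the namespaces named in this card.
prompt = "environment concept art, species:seal, high quality"
image = pipe(prompt).images[0]
image.save("namespaced_example.png")
```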
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (at the cost of speed).
## Available Checkpoints:
- #### Stable:
- [Sygil Diffusion v0.1](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.1.ckpt): Trained on Stable Diffusion 1.5 for 800,000 steps.
- [Sygil Diffusion v0.2](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.2.ckpt): Resumed from Sygil Diffusion v0.1 and trained for a total of 1.77 million steps.
- [Sygil Diffusion v0.3](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.3.ckpt): Resumed from Sygil Diffusion v0.2 and trained for a total of 2.01 million steps.
- [Sygil Diffusion v0.4](https://huggingface.co/Sygil/Sygil-Diffusion/blob/main/sygil-diffusion-v0.4.ckpt): Resumed from Sygil Diffusion v0.3 and trained for a total of 2.37 million steps.
- #### Beta:
- No active beta right now.
Note: Checkpoints under the Beta section are updated daily, or at least 3-4 times a week, usually the equivalent of 1-2 training sessions,
until they are stable enough to be moved into a proper release, which happens every 1 or 2 weeks.
While the beta checkpoints can be used as they are, only the latest version is kept on the repo; older checkpoints are removed when a new one
is uploaded to keep the repo clean. The HuggingFace inference API as well as the diffusers library will always use the latest beta checkpoint in the diffusers format.
For special cases we might make additional repositories to keep a copy of the diffusers model, e.g. when a model uses a different Stable Diffusion model as base (Stable Diffusion 1.5 vs 2.1).
## Training
**Training Data**:
The model was trained on the following dataset:
- [Imaginary Network Expanded Dataset](https://github.com/Sygil-Dev/INE-dataset) dataset.
**Hardware and others**
- **Hardware:** 1 x Nvidia RTX 3050 8GB GPU
- **Hours Trained:** 857 hours approximately.
- **Optimizer:** AdamW
- **Adam Beta 1**: 0.9
- **Adam Beta 2**: 0.999
- **Adam Weight Decay**: 0.01
- **Adam Epsilon**: 1e-8
- **Gradient Checkpointing**: True
- **Gradient Accumulations**: 400
- **Batch:** 1
- **Learning Rate:** 1e-7
- **Learning Rate Scheduler:** cosine_with_restarts
- **Learning Rate Warmup Steps:** 10,000
- **Lora unet Learning Rate**: 1e-7
- **Lora Text Encoder Learning Rate**: 1e-7
- **Resolution**: 512 pixels
- **Total Training Steps:** 2,370,200
Note: For the learning rate I'm testing something new. After switching from the `constant` scheduler to `cosine_with_restarts` once v0.3 was released, I noticed
it practically uses the optimal learning rate while trying to minimize the loss value. So, when a training session finishes, I start the next session from the latest
learning rate value shown during the last few steps of the previous session, which makes it decrease at a roughly constant rate over time. When I add a lot of data to the training dataset
at once, I move the learning rate back to 1e-7, and the scheduler then decreases it again as the model learns from the new data. This keeps the training
from overfitting and from using a learning rate so low that the model stops learning anything new for a while.
Developed by: [ZeroCool94](https://github.com/ZeroCool940711) at [Sygil-Dev](https://github.com/Sygil-Dev/)
## Community Contributions:
- [Kevin Turner (keturn)](https://huggingface.co/keturn): created the [INE-dataset-explorer](https://huggingface.co/spaces/Sygil/INE-dataset-explorer) space for better browsing of the INE dataset.
*This model card is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
# License
This model is open access and available to all, with a CreativeML Open RAIL++-M License further specifying rights and usage. [Please read the full license here](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) |
amd/resnet50 | amd | 2024-03-26T02:45:52Z | 632 | 0 | transformers | [
"transformers",
"onnx",
"resnet",
"image-classification",
"RyzenAI",
"vision",
"classification",
"pytorch",
"dataset:imagenet-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-04T09:23:41Z | ---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
tags:
- RyzenAI
- vision
- classification
- pytorch
---
# ResNet-50 v1.5
Quantized ResNet model that can be deployed with [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
## Model description
ResNet (Residual Network) was first introduced in the paper Deep Residual Learning for Image Recognition by He et al.
This model is ResNet50 v1.5 from [torchvision](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html).
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation
Follow [PyTorch Example](https://github.com/pytorch/examples/blob/main/imagenet/README.md#requirements) to prepare dataset.
### Model Evaluation
```bash
python eval_onnx.py --onnx_model ResNet_int.onnx --ipu --provider_config Path\To\vaip_config.json --data_dir /Path/To/Your/Dataset
```
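For a quick CPU-only sanity check of the quantized ONNX file with ONNX Runtime, a minimal sketch is below. The Ryzen AI (IPU) flow requires the Vitis AI execution provider and the config from the steps above; the CPU provider and dummy input here are just for a local smoke test.
```python
# Minimal ONNX Runtime sketch; runs the quantized model on CPU with a dummy input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("ResNet_int.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # NCHW, ImageNet-sized input
logits = session.run(None, {input_name: dummy})[0]
print("predicted class:", logits.argmax(axis=1))
```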
### Performance
|Metric |Accuracy on IPU|
| :----: | :----: |
|Top1/Top5| 76.17% / 92.86%|
```bibtex
@article{He2015,
author={Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title={Deep Residual Learning for Image Recognition},
journal={arXiv preprint arXiv:1512.03385},
year={2015}
}
``` |
nasiruddin15/Mistral-grok-instract-2-7B-slerp | nasiruddin15 | 2024-04-01T01:22:39Z | 632 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"HuggingFaceH4/mistral-7b-grok",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:HuggingFaceH4/mistral-7b-grok",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T23:53:42Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- HuggingFaceH4/mistral-7b-grok
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- HuggingFaceH4/mistral-7b-grok
license: mit
---
# Mistral-grok-instract-2-7B-slerp
Mistral-grok-instract-2-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: HuggingFaceH4/mistral-7b-grok
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nasiruddin15/Mistral-grok-instract-2-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mgoin/Meta-Llama-3-70B-Instruct-GPTQ | mgoin | 2024-04-18T23:12:22Z | 632 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-04-18T17:58:23Z | Entry not found |
Lewdiculous/Chaos_RP_l3_8B-GGUF-IQ-Imatrix | Lewdiculous | 2024-05-04T14:39:16Z | 632 | 14 | null | [
"gguf",
"roleplay",
"llama3",
"sillytavern",
"license:apache-2.0",
"region:us"
] | null | 2024-04-22T19:00:39Z | ---
tags:
- roleplay
- llama3
- sillytavern
- gguf
license: apache-2.0
---
> [!TIP]
> **Support:** <br>
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to... <br>
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br>
> I apologize for disrupting your experience.
**This is a Llama-3 land now, cowboys!**
"A chaotic force beckons for you, will you heed her call?"
GGUF-IQ-Imatrix quants for [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B).
> [!IMPORTANT]
> **Updated!**
> These quants have been redone with the fixes from [llama.cpp/pull/6920](https://github.com/ggerganov/llama.cpp/pull/6920) in mind. <br>
> Use **KoboldCpp version 1.64** or higher.
> [!NOTE]
> **Quant:** <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for up to 12288 context sizes.
> [!WARNING]
> Recommended presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/cope-llama-3-0.1) or [here](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets.** <br>
> This is all still highly experimental; modified configs were used to avoid the tokenizer issues. Let the authors know how it performs for you, as feedback is more important than ever now.
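For a quick local test outside of KoboldCpp, a minimal llama-cpp-python sketch (the quant file name is an assumption; pick one of the files in this repo, and the context size follows the note above):
```python
# Minimal sketch: load a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Chaos_RP_l3_8B-Q4_K_M-imat.gguf", n_ctx=12288)  # file name assumed
out = llm("Describe a chaotic sorceress in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```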
**Original model information:**
# Chaos RP

A chaotic force beckons for you, will you heed her call?
Built upon an intelligent foundation and tuned for roleplaying, this model will fulfill your wildest fantasies with the bare minimum of effort.
Enjoy! |
QuantFactory/Mistral-7B-v0.3-GGUF | QuantFactory | 2024-05-23T10:55:08Z | 632 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T06:32:19Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
base_model: mistralai/Mistral-7B-v0.3
---
# Mistral-7B-v0.3-GGUF
- This is quantized version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) created using llama.cpp
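Since this repo ships GGUF files, a minimal llama-cpp-python sketch is below (the quant file name is an assumption; pick one of the uploaded files):
```python
# Minimal sketch: download a quant from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("QuantFactory/Mistral-7B-v0.3-GGUF", "Mistral-7B-v0.3.Q4_K_M.gguf")  # file name assumed
llm = Llama(model_path=path, n_ctx=4096)
out = llm("The three primary colors are", max_tokens=64)
print(out["choices"][0]["text"])
```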
# Model Description
The Mistral-7B-v0.3 Large Language Model (LLM) is a Mistral-7B-v0.2 with extended vocabulary.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2/edit/main/README.md)
- Extended vocabulary to 32768
## Installation
It is recommended to use `mistralai/Mistral-7B-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
### Demo
After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.
```
mistral-demo $HOME/mistral_models/7B-v0.3
```
Should give something along the following lines:
```
This is a test of the emergency broadcast system. This is only a test.
If this were a real emergency, you would be told what to do.
This is a test
=====================
This is another test of the new blogging software. I’m not sure if I’m going to keep it or not. I’m not sure if I’m going to keep
=====================
This is a third test, mistral AI is very good at testing. 🙂
This is a third test, mistral AI is very good at testing. 🙂
This
=====================
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall |
airev-ai/Jais-Inception-70b-V1.1 | airev-ai | 2024-06-19T08:47:24Z | 632 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-18T22:54:58Z | ---
license: apache-2.0
---
# Jais-Inception-70b-V1.1
## Evals
- Winogrande 82.3
- GSM8K 88.0
- MMLU 83.5
- ARC 68.7
The AI model developed collaboratively by Airev and Inception stands as a cutting-edge solution, meticulously trained on a comprehensive synthetic Arabic dataset. This model leverages advanced machine learning techniques to achieve remarkable proficiency in understanding and processing Arabic language inputs. Its training on synthetic data ensures a diverse and robust learning foundation, enabling it to handle various linguistic nuances and complexities inherent to Arabic. The combined expertise of Airev and Inception has resulted in a highly capable model, designed to excel in a multitude of applications, ranging from natural language processing and machine translation to speech recognition and text analysis. This innovation represents a significant advancement in Arabic language AI, offering unparalleled accuracy and performance. |
Norod78/hebrew-gpt_neo-small | Norod78 | 2022-11-10T10:35:44Z | 631 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"gpt_neo",
"text-generation",
"he",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---
# hebrew-gpt_neo-small
Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). It was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
## Datasets
1. An assortment of various Hebrew corpora - I have made them available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew)
Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 corpora of monolingual data that was processed from the January-December 2018 Commoncrawl snapshots from the CC-Net repository. The size of this corpus is 6.1G, in the Hebrew language.
## Training Config
Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR>
## Usage
### Google Colab Notebook
Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR>
#### Simple usage sample code
```python
!pip install tokenizers==0.10.2 transformers==4.6.0
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id)
prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000
import numpy as np
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(seed)
model.to(device)
encoded_prompt = tokenizer.encode(
prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
print("input_ids = " + str(input_ids))
if input_ids is not None:
max_len += len(encoded_prompt[0])
if max_len > 2048:
max_len = 2048
print("Updated max_len = " + str(max_len))
stop_token = "<|endoftext|>"
new_lines = "\n\n\n"
sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=max_len,
top_k=50,
top_p=0.95,
num_return_sequences=sample_output_num
)
print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
text = tokenizer.decode(sample_output, skip_special_tokens=True)
# Remove all text after the stop token
text = text[: text.find(stop_token) if stop_token else None]
# Remove all text after 3 newlines
text = text[: text.find(new_lines) if new_lines else None]
print("\n{}: {}".format(i, text))
print("\n" + 100 * '-')
```
|
LiYuan/amazon-query-product-ranking | LiYuan | 2022-04-28T13:09:08Z | 631 | 11 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-27T14:12:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an [Amazon shopping query dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search). The code for the fine-tuning process can be found
[here](https://github.com/vanderbilt-data-science/sna). This model is uncased: it does
not make a difference between english and English.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.6617
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. We replaced its head with our shopping-relevance classification head and fine-tuned it on a 571,223-row training set while validating it on a 142,806-row dev set. Finally, we evaluated the model's performance on a held-out test set of 79,337 rows.
## Intended uses & limitations
DistilBERT is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version of DistilBERT is used to predict the relevance between one query and one product description. It also can be used to rerank the relevance order of products given one query for the amazon platform or other e-commerce platforms.
The main limitation is that this model focuses on queries and products from Amazon; if you apply it to other domains, it may perform poorly.
## How to use
You can use this model directly by downloading the trained weights and configurations like the below code snippet:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-query-product-ranking")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-query-product-ranking")
```
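A minimal scoring sketch continuing from the snippet above (the query and product strings are examples; the class labels come from `model.config.id2label`):
```python
# Minimal sketch: score the relevance of a (query, product) pair.
import torch

query = "wireless noise cancelling headphones"
product = "Bluetooth over-ear headphones with active noise cancellation and 30h battery life"
inputs = tokenizer(query, product, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # one probability per relevance class
```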
## Training and evaluation data
Download all the raw [dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search/dataset_files) from the Amazon KDD Cup website.
1. Concatenate all the product attributes from the product dataset
2. Join it with the training query dataset
3. Stratified-split the merged data into a 571,223-row training set, a 142,806-row validation set, and a 79,337-row test set
4. Train on the full training set
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 |
| 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
LTP/legacy | LTP | 2022-09-19T06:35:53Z | 631 | 3 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-08-14T05:05:24Z | 


| Language | version |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Python](python/interface/README.md) | [](https://pypi.org/project/ltp) [](https://pypi.org/project/ltp-core) [](https://pypi.org/project/ltp-extension) |
| [Rust](rust/ltp/README.md) | [](https://crates.io/crates/ltp) |
# LTP 4
LTP (Language Technology Platform) provides a suite of Chinese natural language processing tools that let users perform word segmentation, part-of-speech tagging, syntactic parsing, and other tasks on Chinese text.
## Citation
If you use LTP in your work, you can cite this paper:
```bibtex
@article{che2020n,
  title={N-LTP: An Open-source Neural Chinese Language Technology Platform with Pretrained Models},
author={Che, Wanxiang and Feng, Yunlong and Qin, Libo and Liu, Ting},
journal={arXiv preprint arXiv:2009.11616},
year={2020}
}
```
**Reference book:**
*[Natural Language Processing: A Pre-trained Model Approach](https://item.jd.com/13344628.html)* (authors: Wanxiang Che, Jiang Guo, Yiming Cui; chief reviewer: Ting Liu), co-authored by several scholars from the HIT Research Center for Social Computing and Information Retrieval (HIT-SCIR), has now been officially published. The book focuses on the new pre-trained-model-based approach to natural language processing, covering three major parts: fundamentals, pre-trained word vectors, and pre-trained models. It is a useful learning reference for LTP users.
### Release Notes
- 4.2.0
  - \[Structural change\] LTP has been split into 2 packages, making maintenance and training easier and the structure clearer
  - \[Legacy model\] To meet the wide demand for **inference speed**, the perceptron-based algorithms were rewritten in Rust; accuracy is on par with LTP 3 while speed is **3.55** times that of LTP v3, rising to **17.17** times with multithreading enabled, but currently only the word segmentation, POS tagging and named entity recognition tasks are supported
  - \[Deep learning model\] The PyTorch-based deep learning models support all 6 tasks (word segmentation / POS / NER / semantic role labeling / dependency parsing / semantic dependency parsing)
  - \[Other improvements\] Improved the model training method
    - \[Both\] Training scripts and examples are provided, so users can more conveniently train personalized models on their own private data
    - \[Deep learning model\] Training is configured via hydra, making it easy for users to adjust training hyper-parameters and extend LTP (e.g. by using Modules from other packages)
  - \[Other changes\] The decoding algorithms for word segmentation, dependency parsing (Eisner) and semantic dependency parsing (Eisner) are implemented in Rust for higher speed
  - \[New feature\] Models are uploaded to the [Huggingface Hub](https://huggingface.co/LTP) and downloaded automatically and faster; users can also upload their own trained models for LTP inference
  - \[Breaking change\] Switched to a Pipeline API for inference, which makes deeper performance optimization easier later on (e.g. SDP and SDPG overlap to a large extent, so reuse can speed up inference); see the [quick start section on Github](https://github.com/hit-scir/ltp) for usage
- 4.1.0
  - Added custom word segmentation and other features
  - Fixed some bugs
- 4.0.0
  - Developed on PyTorch, with a native Python interface
  - Models with different speed/accuracy trade-offs can be freely chosen as needed
  - 6 major tasks: word segmentation, POS tagging, NER, dependency parsing, semantic role labeling, semantic dependency parsing
## Quick Start
### [Python](python/interface/README.md)
```bash
pip install -U ltp ltp-core ltp-extension -i https://pypi.org/simple # install ltp
```
**Note:** If you run into any errors, please try reinstalling ltp with the command above; if the error persists, please report it in the Github issues.
```python
import torch
from ltp import LTP
ltp = LTP("LTP/small")  # the Small model is loaded by default
# move the model to the GPU
if torch.cuda.is_available():
    # ltp.cuda()
    ltp.to("cuda")
output = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner", "srl", "dep", "sdp"])
# results are returned in dict format
print(output.cws)  # print(output[0]) / print(output['cws']) # subscript access also works
print(output.pos)
print(output.sdp)
# word segmentation, POS tagging and NER implemented with the perceptron algorithm: fast, but slightly less accurate
ltp = LTP("LTP/legacy")
# cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "ner"]).to_tuple() # error: NER requires the results of the POS tagging task
cws, pos, ner = ltp.pipeline(["他叫汤姆去拿外衣。"], tasks=["cws", "pos", "ner"]).to_tuple()  # to_tuple automatically converts the result to tuple format
# results are returned in tuple format
print(cws, pos, ner)
```
**[Detailed documentation](python/interface/docs/quickstart.rst)**
### [Rust](rust/ltp/README.md)
```rust
use std::fs::File;
use itertools::multizip;
use ltp::{CWSModel, POSModel, NERModel, ModelSerde, Format, Codec};
fn main() -> Result<(), Box<dyn std::error::Error>> {
let file = File::open("data/legacy-models/cws_model.bin")?;
let cws: CWSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
let file = File::open("data/legacy-models/pos_model.bin")?;
let pos: POSModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
let file = File::open("data/legacy-models/ner_model.bin")?;
let ner: NERModel = ModelSerde::load(file, Format::AVRO(Codec::Deflate))?;
let words = cws.predict("他叫汤姆去拿外衣。")?;
let pos = pos.predict(&words)?;
let ner = ner.predict((&words, &pos))?;
for (w, p, n) in multizip((words, pos, ner)) {
println!("{}/{}/{}", w, p, n);
}
Ok(())
}
```
## Model Performance and Download Links
| Deep Learning Models | CWS | POS | NER | SRL | DEP | SDP | Speed (Sents/s) |
| :---------------------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :-----: |
| [Base](https://huggingface.co/LTP/base) | 98.7 | 98.5 | 95.4 | 80.6 | 89.5 | 75.2 | 39.12 |
| [Base1](https://huggingface.co/LTP/base1) | 99.22 | 98.73 | 96.39 | 79.28 | 89.57 | 76.57 | --.-- |
| [Base2](https://huggingface.co/LTP/base2) | 99.18 | 98.69 | 95.97 | 79.49 | 90.19 | 76.62 | --.-- |
| [Small](https://huggingface.co/LTP/small) | 98.4 | 98.2 | 94.3 | 78.4 | 88.3 | 74.7 | 43.13 |
| [Tiny](https://huggingface.co/LTP/tiny) | 96.8 | 97.1 | 91.6 | 70.9 | 83.8 | 70.1 | 53.22 |
| Perceptron Algorithm | CWS | POS | NER | Speed (Sents/s) | Notes |
| :-----------------------------------------: | :---: | :---: | :---: | :------: | :------------------------: |
| [Legacy](https://huggingface.co/LTP/legacy) | 97.93 | 98.41 | 94.28 | 21581.48 | [Performance details](rust/ltp/README.md) |
**Note: the perceptron algorithm speed above was measured with 16 threads enabled**
## Building the Wheel Package
```shell script
make bdist
```
## Bindings for Other Languages
**Perceptron algorithm**
- [Rust](rust/ltp)
- [C/C++](rust/ltp-cffi)
**Deep learning models**
- [Rust](https://github.com/HIT-SCIR/libltp/tree/master/ltp-rs)
- [C++](https://github.com/HIT-SCIR/libltp/tree/master/ltp-cpp)
- [Java](https://github.com/HIT-SCIR/libltp/tree/master/ltp-java)
## Authors
- Yunlong Feng \<\<[[email protected]](mailto:[email protected])>>
## License
1. The source code of the Language Technology Platform is made freely available to universities at home and abroad, the institutes of the Chinese Academy of Sciences, and individual researchers; however, if the above institutions or individuals use the platform for commercial purposes (such as joint enterprise projects), a fee is required.
2. Enterprises and public institutions other than those above must pay a fee if they apply to use the platform.
3. For any payment-related matters, please email [email protected] to discuss.
4. If you publish papers or obtain research results based on LTP, please state that you "used the Language Technology Platform (LTP) developed by the HIT Research Center for Social Computing and Information Retrieval" when publishing the papers or reporting the results.
   At the same time, please send an email to [email protected] with the title and venue of the published paper or reported result.
|
keremberke/yolov5m-garbage | keremberke | 2023-01-05T15:23:41Z | 631 | 10 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/garbage-object-detection",
"model-index",
"region:us"
] | object-detection | 2023-01-05T15:22:35Z |
---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.7
inference: false
datasets:
- keremberke/garbage-object-detection
model-index:
- name: keremberke/yolov5m-garbage
results:
- task:
type: object-detection
dataset:
type: keremberke/garbage-object-detection
name: keremberke/garbage-object-detection
split: validation
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.42718523764996413 # min: 0.0 - max: 1.0
name: [email protected]
---
<div align="center">
<img width="640" alt="keremberke/yolov5m-garbage" src="https://huggingface.co/keremberke/yolov5m-garbage/resolve/main/sample_visuals.jpg">
</div>
### How to use
- Install [yolov5](https://github.com/fcakyon/yolov5-pip):
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('keremberke/yolov5m-garbage')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights keremberke/yolov5m-garbage --epochs 10
```
**More models available at: [awesome-yolov5-models](https://github.com/keremberke/awesome-yolov5-models)**
|
timm/hrnet_w44.ms_in1k | timm | 2023-04-24T21:31:05Z | 631 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1908.07919",
"license:mit",
"region:us"
] | image-classification | 2023-04-24T21:29:57Z | ---
tags:
- image-classification
- timm
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for hrnet_w44.ms_in1k
A HRNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 67.1
- GMACs: 14.9
- Activations (M): 26.9
- Image size: 224 x 224
- **Papers:**
- Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919
- **Original:** https://github.com/HRNet/HRNet-Image-Classification
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('hrnet_w44.ms_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w44.ms_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hrnet_w44.ms_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{WangSCJDZLMTWLX19,
title={Deep High-Resolution Representation Learning for Visual Recognition},
author={Jingdong Wang and Ke Sun and Tianheng Cheng and
Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and
Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal = {TPAMI},
year={2019}
}
```
|
KietZer0/ViT_LFW_Model4 | KietZer0 | 2023-06-17T08:35:52Z | 631 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-06-17T06:15:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ViT_LFW_Model4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT_LFW_Model4
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1287
- Accuracy: 0.9705
- Precision: 0.9054
- Recall: 0.9583
- F1: 0.8838
## Model description
More information needed
## Intended uses & limitations
More information needed
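A minimal inference sketch with the transformers pipeline API (the image path is a placeholder; the class labels come from the fine-tuning dataset):
```python
# Minimal sketch: classify a face image with this fine-tuned ViT.
from transformers import pipeline

classifier = pipeline("image-classification", model="KietZer0/ViT_LFW_Model4")
print(classifier("face.jpg"))  # replace with a path or URL to your image
```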
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 3.4756 | 0.41 | 100 | 2.8779 | 0.6015 | 0.8461 | 0.3406 | 0.2698 |
| 2.6524 | 0.83 | 200 | 1.8112 | 0.7749 | 0.8298 | 0.5915 | 0.5064 |
| 1.6994 | 1.24 | 300 | 1.1829 | 0.8450 | 0.8065 | 0.7112 | 0.6160 |
| 1.3097 | 1.66 | 400 | 0.6849 | 0.9225 | 0.8808 | 0.8486 | 0.7908 |
| 0.5976 | 2.07 | 500 | 0.4778 | 0.9336 | 0.9015 | 0.8803 | 0.8293 |
| 0.412 | 2.49 | 600 | 0.4110 | 0.9299 | 0.8555 | 0.8988 | 0.8000 |
| 0.3165 | 2.9 | 700 | 0.3295 | 0.9262 | 0.8108 | 0.8787 | 0.7350 |
| 0.1537 | 3.32 | 800 | 0.2427 | 0.9520 | 0.8792 | 0.9333 | 0.8405 |
| 0.087 | 3.73 | 900 | 0.2373 | 0.9520 | 0.8989 | 0.9308 | 0.8562 |
| 0.0728 | 4.15 | 1000 | 0.2068 | 0.9483 | 0.8815 | 0.9264 | 0.8297 |
| 0.0305 | 4.56 | 1100 | 0.1759 | 0.9557 | 0.8692 | 0.9391 | 0.8279 |
| 0.0277 | 4.98 | 1200 | 0.1879 | 0.9446 | 0.8328 | 0.9197 | 0.7856 |
| 0.0126 | 5.39 | 1300 | 0.1759 | 0.9594 | 0.87 | 0.9333 | 0.8193 |
| 0.0137 | 5.81 | 1400 | 0.1595 | 0.9631 | 0.8771 | 0.9440 | 0.8396 |
| 0.0083 | 6.22 | 1500 | 0.1287 | 0.9705 | 0.9054 | 0.9583 | 0.8838 |
| 0.0078 | 6.64 | 1600 | 0.1295 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0064 | 7.05 | 1700 | 0.1322 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0062 | 7.47 | 1800 | 0.1299 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0053 | 7.88 | 1900 | 0.1307 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0049 | 8.3 | 2000 | 0.1295 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0041 | 8.71 | 2100 | 0.1302 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0036 | 9.13 | 2200 | 0.1310 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0037 | 9.54 | 2300 | 0.1311 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0028 | 9.96 | 2400 | 0.1301 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0031 | 10.37 | 2500 | 0.1308 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0026 | 10.79 | 2600 | 0.1304 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0023 | 11.2 | 2700 | 0.1299 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0024 | 11.62 | 2800 | 0.1315 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0022 | 12.03 | 2900 | 0.1321 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.002 | 12.45 | 3000 | 0.1321 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.002 | 12.86 | 3100 | 0.1332 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0017 | 13.28 | 3200 | 0.1327 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0016 | 13.69 | 3300 | 0.1328 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0015 | 14.11 | 3400 | 0.1336 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0015 | 14.52 | 3500 | 0.1343 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0015 | 14.94 | 3600 | 0.1345 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0014 | 15.35 | 3700 | 0.1344 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0013 | 15.77 | 3800 | 0.1354 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0013 | 16.18 | 3900 | 0.1357 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0012 | 16.6 | 4000 | 0.1365 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0011 | 17.01 | 4100 | 0.1357 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.001 | 17.43 | 4200 | 0.1361 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.001 | 17.84 | 4300 | 0.1364 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.001 | 18.26 | 4400 | 0.1379 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.001 | 18.67 | 4500 | 0.1375 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0009 | 19.09 | 4600 | 0.1374 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0009 | 19.5 | 4700 | 0.1374 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0009 | 19.92 | 4800 | 0.1382 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0008 | 20.33 | 4900 | 0.1385 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0007 | 20.75 | 5000 | 0.1389 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0007 | 21.16 | 5100 | 0.1391 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0007 | 21.58 | 5200 | 0.1392 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0007 | 21.99 | 5300 | 0.1397 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0007 | 22.41 | 5400 | 0.1401 | 0.9668 | 0.8910 | 0.9511 | 0.8592 |
| 0.0007 | 22.82 | 5500 | 0.1404 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 23.24 | 5600 | 0.1404 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 23.65 | 5700 | 0.1402 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 24.07 | 5800 | 0.1411 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 24.48 | 5900 | 0.1411 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 24.9 | 6000 | 0.1413 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 25.31 | 6100 | 0.1418 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0006 | 25.73 | 6200 | 0.1420 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 26.14 | 6300 | 0.1421 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 26.56 | 6400 | 0.1423 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 26.97 | 6500 | 0.1424 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0004 | 27.39 | 6600 | 0.1428 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 27.8 | 6700 | 0.1429 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 28.22 | 6800 | 0.1428 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 28.63 | 6900 | 0.1430 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 29.05 | 7000 | 0.1430 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 29.46 | 7100 | 0.1430 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
| 0.0005 | 29.88 | 7200 | 0.1430 | 0.9705 | 0.8963 | 0.9550 | 0.8666 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Yntec/526 | Yntec | 2023-11-02T21:52:44Z | 631 | 2 | diffusers | [
"diffusers",
"safetensors",
"General Purpose",
"Futuristic",
"Nature",
"526christian",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-02T18:04:19Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General Purpose
- Futuristic
- Nature
- 526christian
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# 526 Mix v12
Original page: https://civitai.com/models/15022?modelVersionId=19790
Sample and prompt:

Pretty CUTE girl. Fashion shoes. in the style of kyoani. By wlop
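A minimal diffusers sketch for trying the sample prompt above (scheduler and settings are left at their defaults; the output filename is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from this card.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/526", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Pretty CUTE girl. Fashion shoes. in the style of kyoani. By wlop"
image = pipe(prompt).images[0]
image.save("526_sample.png")  # illustrative filename
```
|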
second-state/Orca-2-13B-GGUF | second-state | 2024-03-20T07:47:50Z | 631 | 7 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"orca",
"orca2",
"microsoft",
"base_model:microsoft/Orca-2-13b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-21T06:54:56Z | ---
base_model: microsoft/Orca-2-13b
inference: false
library_name: transformers
license: other
license_name: microsoft-research-license
license_link: LICENSE
model_creator: Microsoft
model_name: Orca 2 13B
model_type: llama
tags:
- orca
- orca2
- microsoft
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Orca-2-13B-GGUF
## Original Model
[microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `5120`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Orca-2-13b-Q5_K_M.gguf llama-api-server.wasm -p chatml
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Orca-2-13b-Q5_K_M.gguf llama-chat.wasm -p chatml
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Orca-2-13b-Q2_K.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q2_K.gguf) | Q2_K | 2 | 5.43 GB| smallest, significant quality loss - not recommended for most purposes |
| [Orca-2-13b-Q3_K_L.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| small, substantial quality loss |
| [Orca-2-13b-Q3_K_M.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| very small, high quality loss |
| [Orca-2-13b-Q3_K_S.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| very small, high quality loss |
| [Orca-2-13b-Q4_0.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Orca-2-13b-Q4_K_M.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| medium, balanced quality - recommended |
| [Orca-2-13b-Q4_K_S.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| small, greater quality loss |
| [Orca-2-13b-Q5_0.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Orca-2-13b-Q5_K_M.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| large, very low quality loss - recommended |
| [Orca-2-13b-Q5_K_S.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| large, low quality loss - recommended |
| [Orca-2-13b-Q6_K.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q6_K.gguf) | Q6_K | 6 | 10.7 GB| very large, extremely low quality loss |
| [Orca-2-13b-Q8_0.gguf](https://huggingface.co/second-state/Orca-2-13B-GGUF/blob/main/Orca-2-13b-Q8_0.gguf) | Q8_0 | 8 | 13.8 GB| very large, extremely low quality loss - not recommended |
|
OwenArli/Awanllm-Llama-3-8B-Instruct-DPO-v0.1 | OwenArli | 2024-05-03T15:09:51Z | 631 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-03T14:53:47Z | ---
license: llama3
---
This model is based on Meta-Llama-3-8B-Instruct and is governed by the Meta Llama 3 license agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
It was fine-tuned with the DPO method using the following datasets:
- https://huggingface.co/datasets/Intel/orca_dpo_pairs
- https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
- https://huggingface.co/datasets/M4-ai/prm_dpo_pairs_cleaned
- https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
We are happy for anyone to try it out and give feedback. If the model proves popular, we will host it on our LLM API at https://awanllm.com.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
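As a sketch, the same format can be produced with the tokenizer's chat template (assuming the tokenizer ships the standard Llama-3 template; the messages are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwenArli/Awanllm-Llama-3-8B-Instruct-DPO-v0.1")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # illustrative
    {"role": "user", "content": "Hello!"},                          # illustrative
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # renders the <|begin_of_text|>... layout shown above
```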
Quants:
FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1
GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1-GGUF |
Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2 | Bluckr | 2024-05-17T04:13:31Z | 631 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"nlp",
"code",
"phi-3",
"chat",
"function-call",
"conversational",
"es",
"dataset:Bluckr/function-calling-assistant-spanish-pofi-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-12T16:17:36Z | ---
license: mit
language:
- es
pipeline_tag: text-generation
tags:
- nlp
- code
- phi-3
- chat
- function-call
inference:
  parameters:
    temperature: 0.7
widget:
  - messages:
      - role: user
        content: '### Input: Que sabes hacer? ### Response:'
datasets:
- Bluckr/function-calling-assistant-spanish-pofi-v2
---
<div style="text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64beeb8f4b4ff0d5097ddcfc/HF124f84-X7L_rPynRa4n.gif" alt="Pofi" width="300" style="display: block; margin: 0 auto;" />
</div>
Phi-3 adjusted to behave like the assistant Pofi; the training data follows the function-calling method.
It is a fine-tuned version of ["unsloth/Phi-3-mini-4k-instruct"](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct).
Pofi can:
| Utilities |
|-----------------------------|
| Setting alarms |
| Connecting to the web |
| Sending files |
| Sending messages |
| Saving strings of characters|
| Opening applications |
| Creating files |
| Manipulating the system |
## Simple Inference API
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2"
headers = {"Authorization": "Bearer %s" % token_id}  # token_id is your Hugging Face API token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

prompt = """### Input: cómo te llamas? ### Response:"""
output = query({
    "inputs": prompt
})
print(output)
```
# Response
```python
[{'generated_text': '### Input: cómo te llamas? ### Response: soy Pofi.'}]
```
## Unsloth Inference
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
```
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
```
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Bluckr/Phi-3-mini-4k-instruct-function-calling-assistant-spanish-pofi-v2",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)
```
```python
inputs = tokenizer(
[
alpaca_prompt.format(
"""""functions":[{'name': 'fnt_programa', 'description': 'el usuario solicita un programa.', 'parameters': [{'description': 'nombre del programa solicitado.', 'name': 'programa', 'required': True, 'type': 'string'}]},
{'name': 'fnt_buscar_web', 'description': 'el usuario solicita una busqueda en internet.', 'parameters': [{'description': 'busqueda especifica.', 'name': 'busqueda', 'required': False, 'type': 'string'}, {'description': 'página especifica para la busqueda', 'name': 'sitio', 'required': False, 'type': 'string'}]},
{'name': 'fnt_buscar_lugares', 'description': 'el usuario solicita la ubicación de un lugar.', 'parameters': [{'description': 'lugar especifico.', 'name': 'lugar', 'required': True, 'type': 'string'}, {'description': 'ubicación del lugar', 'name': 'ubicación', 'required': False, 'type': 'string'}]},
{'name': 'fnt_enviar_mensajes', 'description': 'el usuario desea enviar un mensaje.', 'parameters': [{'description': 'el usuario especifica a quien enviar el mensaje.', 'name': 'destinatario', 'required': True, 'type': 'string'}, {'description': 'contenido que desea enviar el usuario', 'name': 'mensaje', 'required': True, 'type': 'string'}]},
{'name': 'fnt_crear_archivo', 'description': 'el usuario desea crear un archivo.', 'parameters': [{'description': 'el usuario especifica el nombre del archivo.', 'name': 'nombre', 'required': False, 'type': 'string'}, {'description': 'ubicación donde se creará el archivo', 'name': 'ubicación', 'required': False, 'type': 'string'}, {'description': 'extensión del archivo', 'name': 'extensión', 'required': False, 'type': 'string'}]},
{'name': 'fnt_establecer_alarma', 'description': 'el usuario desea una alarma o recordatorio', 'parameters': [{'description': 'el usuario especifica el nombre de la alarma.', 'name': 'nombre', 'required': False, 'type': 'string'}, {'description': 'hora de la alarma', 'name': 'hora', 'required': True, 'type': 'string'}, {'description': 'día que se activará la alarma', 'name': 'día', 'required': False, 'type': 'string'}]},
{'name': 'fnt_enviar_archivos', 'description': 'el usuario solicita el envio de archivos.', 'parameters': [{'description': 'archivos especificos.', 'name': 'archivos', 'required': True, 'type': 'string'}, {'description': 'destino donde llegarán los archivos', 'name': 'destino', 'required': True, 'type': 'string'}]},
{'name': 'fnt_guardar_valores', 'description': 'el usuario solicita almacenar valores.', 'parameters': [{'description': 'valor a almacenar.', 'name': 'valor', 'required': True, 'type': 'string'}, {'description': 'lugar de almacenamiento', 'name': 'lugar', 'required': False, 'type': 'string'}]},
{'name': 'fnt_hora', 'description': 'el usuario solicita la hora', 'parameters': [{'description': 'ubicación donde la hora es solicitada.', 'name': 'ubicacion', 'required': True, 'type': 'string'}]},
{'name': 'fnt_clima', 'description': 'el usuario solicita el clima', 'parameters': [{'description': 'ubicación donde se solicita el clima.', 'name': 'ubicacion', 'required': True, 'type': 'string'}]},
{'name': 'fnt_significado', 'description': 'el usuario solicita el significado de una palabra', 'parameters': [{'description': 'palabra solicitada.', 'name': 'palabra', 'required': True, 'type': 'string'}]},""", # instruction
"Pofi envia el archivo de selfie.jpg a drive", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
# Response
```python
Response:\nEnviando el archivo de selfie.jpg a drive.{"function_call":{"name":"fnt_enviar_archivos","arguments":{"archivos":"selfie.jpg","destino":"drive"}}}<|endoftext|>']
``` |
lmstudio-community/AlchemistCoder-L-7B-GGUF | lmstudio-community | 2024-05-31T00:28:15Z | 631 | 2 | null | [
"gguf",
"code generation",
"text-generation",
"arxiv:2405.19265",
"base_model:internlm/AlchemistCoder-L-7B",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-05-31T00:20:23Z | ---
license: apache-2.0
tags:
- code generation
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
  param_count: 7b
  use_case: coding
  release_date: 29-05-2024
  model_creator: InternLM
  prompt_template: Alpaca
  system_prompt: none
  base_model: Llama 2
  original_repo: internlm/AlchemistCoder-L-7B
base_model: internlm/AlchemistCoder-L-7B
---
## 💫 Community Model> AlchemistCoder L 7B by InternLM
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [InternLM](https://huggingface.co/internlm)<br>
**Original model**: [AlchemistCoder-L-7B](https://huggingface.co/internlm/AlchemistCoder-L-7B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)<br>
## Model Summary:
AlchemistCoder is a series of coding models by InternLM.<br>
This model is tuned from Llama 2 and should excel at all coding-related tasks.
## Prompt template:
Choose the `MetaAI Llama 2 Chat` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
[INST]<<SYS>>
{System}
<</SYS>>[/INST]
[INST]
{User}
[/INST]
{Assistant}
```
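For reference, a small Python sketch that assembles this layout by hand (the system and user strings are illustrative):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    # Mirrors the Llama 2 Chat layout shown above.
    return (
        f"[INST]<<SYS>>\n{system}\n<</SYS>>[/INST]\n"
        f"[INST]\n{user}\n[/INST]\n"
    )

print(build_llama2_prompt("You are a helpful coding assistant.", "Reverse a string in Python."))
```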
## Technical Details
Training details:
- **AlchemistPrompts**: Designed as data-specific prompts for harmonizing inherent conflicts in multi-source data and mitigating the instruction/response misalignment at a fine-grained level.
- **Code Comprehension Tasks**: Sourced from the process of data construction, consisting of instruction evolution, data filtering, and code review.
- **Harmonized Multi-source Data**: Instruction tuned on 200M tokens, including 6 types of high-quality data.
- **Superior Model Performance**: Surpassing all the open-source models of the same size (6.7/7B), and rivaling or even beating larger models (15B/33B/70B/ChatGPT) on 6 code benchmarks.
- **Advanced generic capabilities**: Demonstrated by the significant improvements on MMLU, BBH, and GSM8K.
For more information, check out their paper here: https://arxiv.org/abs/2405.19265
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
mradermacher/AI-M3-10.7Bv2-GGUF | mradermacher | 2024-06-04T20:16:42Z | 631 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.3",
"en",
"base_model:sydonayrex/AI-M3-10.7Bv2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T18:51:35Z | ---
base_model: sydonayrex/AI-M3-10.7Bv2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sydonayrex/AI-M3-10.7Bv2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AI-M3-10.7Bv2-GGUF/resolve/main/AI-M3-10.7Bv2.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/airoboros-dpo-70b-3.3-i1-GGUF | mradermacher | 2024-06-09T06:08:26Z | 631 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:lmsys/lmsys-chat-1m",
"base_model:jondurbin/airoboros-dpo-70b-3.3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-08T12:15:55Z | ---
base_model: jondurbin/airoboros-dpo-70b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- lmsys/lmsys-chat-1m
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- llama-3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-dpo-70b-3.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/airoboros-dpo-70b-3.3-i1-GGUF/resolve/main/airoboros-dpo-70b-3.3.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
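For the multi-part Q6_K download above, the parts are a plain byte-split, so a minimal Python concatenation sketch looks like this (filenames taken from the table):

```python
import shutil

parts = [
    "airoboros-dpo-70b-3.3.i1-Q6_K.gguf.part1of2",
    "airoboros-dpo-70b-3.3.i1-Q6_K.gguf.part2of2",
]
with open("airoboros-dpo-70b-3.3.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # append raw bytes in order
```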
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mradermacher/Mermaid-Flow-MoE-Expert2-GGUF | mradermacher | 2024-06-10T22:03:08Z | 631 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TroyDoesAI/Mermaid-Flow-MoE-Expert2",
"endpoints_compatible",
"region:us"
] | null | 2024-06-09T00:58:23Z | ---
base_model: TroyDoesAI/Mermaid-Flow-MoE-Expert2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Flow-MoE-Expert2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Flow-MoE-Expert2-GGUF/resolve/main/Mermaid-Flow-MoE-Expert2.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Dendrite-8x7Bv1-i1-GGUF | mradermacher | 2024-06-17T05:53:02Z | 631 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Envoid/Dendrite-8x7Bv1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-14T04:06:20Z | ---
base_model: Envoid/Dendrite-8x7Bv1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Envoid/Dendrite-8x7Bv1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF/resolve/main/Dendrite-8x7Bv1.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
nsi319/legal-led-base-16384 | nsi319 | 2021-03-01T12:33:48Z | 630 | 10 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"summarization",
"en",
"license:mit",
"autotrain_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---
## LED for legal summarization of documents
This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for the **long document abstractive summarization** task. Input documents can be up to 16,384 tokens long.
## Training data
The **legal-led-base-16384** model was trained on the [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384")
padding = "max_length"
text="""On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. 
The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing."""
input_tokenized = tokenizer.encode(text, return_tensors='pt', padding=padding, max_length=6144, truncation=True)
summary_ids = model.generate(
    input_tokenized,
    num_beams=4,
    no_repeat_ngram_size=3,
    length_penalty=2,
    min_length=350,
    max_length=500,
)
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
### Summary Output
# On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission.
```
## Evaluation results
When the model is used for summarizing legal documents, it achieves the following results:
| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** |
| led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
|
thingsu/koDPR_question | thingsu | 2021-05-24T02:47:00Z | 630 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | Fine-tuned the kykim/bert-kor-base model as a dense passage retrieval (DPR) question encoder on the KLUE dataset.
Experiment results are available here: https://wandb.ai/thingsu/DenseRetrieval
Corpus: Korean Wikipedia Corpus
Training strategy:
- Pretrained model: kykim/bert-kor-base
- Inverse Cloze Task: 16 epochs, on KorQuAD v1.0 and the KLUE MRC dataset
- In-batch negatives: 12 epochs, on the KLUE MRC dataset, randomly sampling negatives from the sparse retrieval (TF-IDF) top 100 passages for each query
You need to use the Korean Wikipedia corpus.
<pre>
<code>
from transformers import AutoTokenizer, BertPreTrainedModel, BertModel
class BertEncoder(BertPreTrainedModel):
    def __init__(self, config):
        super(BertEncoder, self).__init__(config)
        self.bert = BertModel(config)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask, token_type_ids)
        pooled_output = outputs[1]
        return pooled_output
model_name = 'kykim/bert-kor-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
q_encoder = BertEncoder.from_pretrained("thingsu/koDPR_question")
p_encoder = BertEncoder.from_pretrained("thingsu/koDPR_context")
</code>
</pre>
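A short usage sketch, assuming the tokenizer and encoders above are loaded (the query text is illustrative):
<pre>
<code>
import torch

query = "대한민국의 수도는 어디인가?"  # illustrative query
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    q_emb = q_encoder(**inputs)  # pooled output, shape (1, hidden_size)
</code>
</pre>
|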
kyryl0s/gpt2-uk-zno-edition | kyryl0s | 2022-05-18T11:40:06Z | 630 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"uk",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-05-02T18:41:02Z | ---
license: afl-3.0
language: uk
---
## GPT2 trained to generate ЗНО (Ukrainian exam SAT type of thing) essays
Generated texts are not very cohesive yet but I'm working on it. <br />
The Hosted inference API outputs (on the right) are too short for some reason. Trying to fix it. <br />
Use the code from the example below. The model takes "ZNOTITLE: your essay title" inputs.
### Example of usage:
```python
from transformers import AlbertTokenizer, GPT2LMHeadModel
tokenizer = AlbertTokenizer.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
model = GPT2LMHeadModel.from_pretrained("kyryl0s/gpt2-uk-zno-edition")
input_ids = tokenizer.encode("ZNOTITLE: За яку працю треба більше поважати людину - за фізичну чи інтелектуальну?", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
    input_ids,
    do_sample=True,
    num_return_sequences=1,
    max_length=250
)

for i, out in enumerate(outputs):
    print("{}: {}".format(i, tokenizer.decode(out)))
``` |
lllyasviel/sd-controlnet-normal | lllyasviel | 2023-04-24T22:30:34Z | 630 | 25 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-02-24T07:07:02Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
---
# Controlnet - *Normal Map Version*
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **Normal Map Estimation**.
It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**

      @misc{zhang2023adding,
        title={Adding Conditional Control to Text-to-Image Diffusion Models},
        author={Lvmin Zhang and Maneesh Agrawala},
        year={2023},
        eprint={2302.05543},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Released Checkpoints
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet_openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet_scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|[lllyasviel/sd-controlnet_seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
1. Let's install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```
2. Run code:
```py
from PIL import Image
from transformers import pipeline
import numpy as np
import cv2
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from diffusers.utils import load_image
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-normal/resolve/main/images/toy.png").convert("RGB")
depth_estimator = pipeline("depth-estimation", model ="Intel/dpt-hybrid-midas" )
image = depth_estimator(image)['predicted_depth'][0]
image = image.numpy()
image_depth = image.copy()
image_depth -= np.min(image_depth)
image_depth /= np.max(image_depth)
bg_threhold = 0.4
x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
x[image_depth < bg_threhold] = 0
y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
y[image_depth < bg_threhold] = 0
z = np.ones_like(x) * np.pi * 2.0
image = np.stack([x, y, z], axis=2)
image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5
image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8)
image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained(
"fusing/stable-diffusion-v1-5-controlnet-normal", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
image = pipe("cute toy", image, num_inference_steps=20).images[0]
image.save('images/toy_normal_out.png')
```



### Training
The normal model was trained from an initial model and then a further extended model.
The initial normal model was trained on 25,452 normal-image, caption pairs from DIODE. The image captions were generated by BLIP. The model was trained for 100 GPU-hours with Nvidia A100 80G using Stable Diffusion 1.5 as a base model.
The extended normal model further trained the initial normal model on "coarse" normal maps. The coarse normal maps were generated using Midas to compute a depth map and then performing normal-from-distance. The model was trained for 200 GPU-hours with Nvidia A100 80G using the initial normal model as a base model.
### Blog post
For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet). |
TheBloke/StableBeluga-13B-GGUF | TheBloke | 2023-09-27T12:48:09Z | 630 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"dataset:conceptofmind/cot_submix_original",
"dataset:conceptofmind/flan2021_submix_original",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"arxiv:2307.09288",
"arxiv:2306.02707",
"base_model:stabilityai/StableBeluga-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-06T00:31:34Z | ---
language:
- en
license: llama2
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
model_name: StableBeluga 13B
base_model: stabilityai/StableBeluga-13B
inference: false
model_creator: Stability AI
model_type: llama
pipeline_tag: text-generation
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# StableBeluga 13B - GGUF
- Model creator: [Stability AI](https://huggingface.co/stabilityai)
- Original model: [StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Stability AI's StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible.
Here is an incomplate list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StableBeluga-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF)
* [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stablebeluga-13b.Q2_K.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [stablebeluga-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [stablebeluga-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [stablebeluga-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [stablebeluga-13b.Q4_0.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stablebeluga-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [stablebeluga-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [stablebeluga-13b.Q5_0.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stablebeluga-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [stablebeluga-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [stablebeluga-13b.Q6_K.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [stablebeluga-13b.Q8_0.gguf](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF/blob/main/stablebeluga-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/StableBeluga-13B-GGUF and below it, a specific filename to download, such as: stablebeluga-13b.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/StableBeluga-13B-GGUF stablebeluga-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/StableBeluga-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/StableBeluga-13B-GGUF stablebeluga-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m stablebeluga-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/StableBeluga-13B-GGUF", model_file="stablebeluga-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here's guides on using llama-cpp-python or ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Stability AI's StableBeluga 13B
# Stable Beluga 13B
Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability AI's best language models for free
## Model Description
`Stable Beluga 13B` is a Llama2 13B model finetuned on an Orca style Dataset
## Usage
Start chatting with `Stable Beluga 13B` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-13B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-13B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are Stable Beluga 13B, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Stable Beluga 13B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of Stable Beluga 13B
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 13B is an auto-regressive language model fine-tuned on Llama2 13B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 13B`) is licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga-13B/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Training Dataset
` Stable Beluga 13B` is trained on our internal Orca-style dataset
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
## Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Citations
```bibtext
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtext
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- original-model-card end -->
|
FL33TW00D-HF/whisper-small | FL33TW00D-HF | 2024-05-15T15:16:11Z | 630 | 0 | transformers | [
"transformers",
"gguf",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-12T17:00:05Z | ---
license: apache-2.0
---
# Model Card for Ratchet + Whisper Small
<!-- Provide a quick summary of what the model is/does. -->
This is a conversion from the GGML format of [openai/whisper-small](https://huggingface.co/openai/whisper-small) into the Ratchet custom format.
## Model Card Contact
[[email protected]](mailto:[email protected]) |
JohnJumon/fluency_accuracy | JohnJumon | 2024-03-15T16:58:48Z | 630 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-15T16:25:01Z | ---
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fluency_accuracy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fluency_accuracy
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5218
- Accuracy: 0.827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.4664 | 0.814 |
| No log | 2.0 | 250 | 0.4250 | 0.823 |
| No log | 3.0 | 375 | 0.5218 | 0.827 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Felladrin/gguf-multi-qa-MiniLM-L6-cos-v1 | Felladrin | 2024-04-30T16:27:44Z | 630 | 0 | null | [
"gguf",
"base_model:sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
"region:us"
] | null | 2024-04-30T14:55:05Z | ---
base_model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
---
GGUF version of [sentence-transformers/multi-qa-MiniLM-L6-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1). |
mradermacher/HelpingAI-9B-GGUF | mradermacher | 2024-06-16T22:59:31Z | 630 | 0 | transformers | [
"transformers",
"gguf",
"HelpingAI",
"Emotionally Intelligent",
"EQ",
"en",
"dataset:OEvortex/SentimentSynth",
"dataset:OEvortex/EmotionalIntelligence-10K",
"base_model:OEvortex/HelpingAI-9B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T00:53:57Z | ---
base_model: OEvortex/HelpingAI-9B
datasets:
- OEvortex/SentimentSynth
- OEvortex/EmotionalIntelligence-10K
language:
- en
library_name: transformers
license: other
license_link: LICENSE.md
license_name: helpingai
quantized_by: mradermacher
tags:
- HelpingAI
- Emotionally Intelligent
- EQ
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/OEvortex/HelpingAI-9B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-GGUF/resolve/main/HelpingAI-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lucianosb/boto-7B-v1.2-GGUF | lucianosb | 2024-05-23T11:48:13Z | 630 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T04:03:51Z | ---
language:
- pt
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_Model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
---
# Boto 7B 1.2 - GGUF
- Criador do Modelo: [Luciano Santa Brígida](https://lucianosb.com.br/)
- Modelo Original: [Boto-7B v1.2](https://huggingface.co/lucianosb/boto-7B-v1.2)
Boto-7B é um modelo de linguagem de 7 bilhões de parâmetros, otimizado a partir do Mistral-7B-intruct-v0.3.
Confira os [presets](https://huggingface.co/lucianosb/boto-7B-GGUF/tree/main/presets) para usar com [LM Studio](https://lmstudio.ai/).
## Arquivos Incluídos
| Nome | Método Quant | Bits | Tamanho | Desc |
| ---- | ---- | ---- | ---- | ----- |
| [boto-7B-v1.2-GGUF-unsloth.Q2_K.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q2_K.gguf) | q2_K | 2 | 2.72 GB | Quantização em 2-bit. Significativa perda de qualidade. Não-recomendado. |
| [boto-7B-v1.2-GGUF-unsloth.Q3_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_M.gguf) | q3_K_M| 3 | 3.52 GB | Quantização em 3-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q3_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_S.gguf) | q3_K_S | 3 | 3.17 GB | Quantização em 3-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q4_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_0.gguf) | q4_0 | 4 | 4.11 GB | Quantização em 4-bit. Prefira usar o Q3_K_M|
| [boto-7B-v1.2-GGUF-unsloth.Q4_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_K_S.gguf) | q4_K_S | 4 | 4.14 GB | Quantização em 4-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q3_K_L.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q3_K_L.gguf) | q3_K_L | 3 | 3.83 GB | Quantização em 3-bit com menor perda de qualidade. |
| [boto-7B-v1.2-GGUF-unsloth.Q4_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_K_M.gguf) | q4_K_M | 4 | 4.37 GB | Quantização em 4-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q4_1.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q4_1.gguf) | q4_1 | 4 | 4.56 GB | Quantização em 4-bit. Acurácia maior que q4_0 mas não tão boa quanto q5_0. Inferência mais rápida que os modelos q5. |
| [boto-7B-v1.2-GGUF-unsloth.Q5_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_0.gguf) | q5_0 | 5 | 5 GB | Quantização em 5-bit. Melhor acurácia, maior uso de recursos, inferência mais lenta. |
| [boto-7B-v1.2-GGUF-unsloth.Q5_1.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_1.gguf) | q5_1 | 5 | 5.45 GB | Quantização em 5-bit. Ainda Melhor acurácia, maior uso de recursos, inferência mais lenta. |
| [boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_K_M.gguf) | q5_K_M | 5 | 5.14 GB | Quantização em 5-bit. Melhor performance. Recomendado. |
| [boto-7B-v1.2-GGUF-unsloth.Q5_K_S.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q5_K_S.gguf) | q5_K_S | 5 | 5 GB | Quantização em 5-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q6_K.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q6_K.gguf) | q6_K | 6 | 5.95 GB | Quantização em 6-bit. |
| [boto-7B-v1.2-GGUF-unsloth.Q8_0.gguf](https://huggingface.co/lucianosb/boto-7B-v1.2-GGUF/blob/main/boto-7B-v1.2-GGUF-unsloth.Q8_0.gguf) | q8_0 | 8 | 7.7 GB | Quantização em 8-bit. Quase indistinguível do float16. Usa muitos recursos e é mais lento. |
**Observação**: os valores de RAM acima não pressupõem descarregamento de GPU. Se as camadas forem descarregadas para a GPU, isso reduzirá o uso de RAM e usará VRAM.
## Template
````
### Instrução:
{prompt}
### Resposta:
````
# Uploaded model
- **Developed by:** lucianosb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF | CHE-72 | 2024-06-21T18:55:17Z | 630 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-21T18:54:54Z | ---
base_model: Qwen/Qwen2-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF --hf-file qwen2-7b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF --hf-file qwen2-7b-instruct-q3_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF --hf-file qwen2-7b-instruct-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_S-GGUF --hf-file qwen2-7b-instruct-q3_k_s.gguf -c 2048
```
|
CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF | CHE-72 | 2024-06-22T06:26:29Z | 630 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"region:us"
] | text-generation | 2024-06-22T06:25:58Z | ---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF --hf-file phi-3-medium-128k-instruct-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF --hf-file phi-3-medium-128k-instruct-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF --hf-file phi-3-medium-128k-instruct-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q3_K_L-GGUF --hf-file phi-3-medium-128k-instruct-q3_k_l.gguf -c 2048
```
|
CoprolaliacPress/Asuka-Q4_K_M-GGUF | CoprolaliacPress | 2024-07-01T10:11:10Z | 630 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:CoprolaliacPress/Asuka",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T10:10:42Z | ---
base_model: CoprolaliacPress/Asuka
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# CoprolaliacPress/Asuka-Q4_K_M-GGUF
This model was converted to GGUF format from [`CoprolaliacPress/Asuka`](https://huggingface.co/CoprolaliacPress/Asuka) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Asuka) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CoprolaliacPress/Asuka-Q4_K_M-GGUF --hf-file asuka-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CoprolaliacPress/Asuka-Q4_K_M-GGUF --hf-file asuka-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CoprolaliacPress/Asuka-Q4_K_M-GGUF --hf-file asuka-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CoprolaliacPress/Asuka-Q4_K_M-GGUF --hf-file asuka-q4_k_m.gguf -c 2048
```
|
sentence-transformers/facebook-dpr-question_encoder-single-nq-base | sentence-transformers | 2024-05-07T15:47:23Z | 629 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# sentence-transformers/facebook-dpr-question_encoder-single-nq-base
This is a port of the [DPR Model](https://github.com/facebookresearch/DPR) to [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
model = AutoModel.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/facebook-dpr-question_encoder-single-nq-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DPR Model](https://github.com/facebookresearch/DPR) |
timm/vit_base_r50_s16_224.orig_in21k | timm | 2024-02-09T18:10:38Z | 629 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-feature-extraction",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2022-12-23T00:26:33Z | ---
license: apache-2.0
library_name: timm
tags:
- image-feature-extraction
- timm
datasets:
- imagenet-21k
---
# Model card for vit_base_r50_s16_224.orig_in21k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Pretrained on ImageNet-21k in JAX by paper authors, ported to PyTorch by Ross Wightman. This model does not have a classification head, useful for features and fine-tune only.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 97.9
- GMACs: 20.9
- Activations (M): 27.9
- Image size: 224 x 224
- **Papers:**
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_r50_s16_224.orig_in21k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_r50_s16_224.orig_in21k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
``` |
AIARTCHAN/anidosmixV2 | AIARTCHAN | 2023-03-21T10:06:29Z | 629 | 18 | diffusers | [
"diffusers",
"stable-diffusion",
"aiartchan",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-02-18T04:22:00Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- aiartchan
---
# anidosmixV2
[원본글](https://arca.live/b/aiart/70069660)
~[civitai](https://civitai.com/models/6437/anidosmixv2)~
# Download
~[civitai original 2.13GB](https://civitai.com/api/download/models/11922)~




|
ce-lery/japanese-mistral-300m-base | ce-lery | 2023-12-21T15:13:51Z | 629 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gguf",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:None",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-10T14:38:53Z | ---
base_model: None
tags:
- generated_from_trainer
model-index:
- name: checkpoints-mistral-300M-FA2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# japanese-mistral-300m-base
## Overview
Welcome to my model card!
This Model feature is ...
- Suppression of unknown word generation by using byte fallback in SentencePiece tokenizer and conversion to huggingface Tokenizers format
- Pretrained by wikipedia dataset and cc100 dataset
- Use of [Mistral 300M](https://huggingface.co/ce-lery/japanese-mistral-300m-base/blob/main/config.json)
Yukkuri shite ittene! (Take it easy!)
## How to use the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
MODEL_NAME = "ce-lery/japanese-mistral-300m-base"
torch.set_float32_matmul_precision('high')
DEVICE = "cuda"
if torch.cuda.is_available():
print("cuda")
DEVICE = "cuda"
else:
print("cpu")
DEVICE = "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME,use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
trust_remote_code=True,
).to(DEVICE)
# streamer = TextStreamer(tokenizer)
prompt = "大規模言語モデルとは、"
inputs = tokenizer(prompt, add_special_tokens=False,return_tensors="pt").to(model.device)
with torch.no_grad():
outputs = model.generate(
inputs["input_ids"],
max_new_tokens=256,
do_sample=True,
early_stopping=False,
top_p=0.95,
top_k=50,
temperature=0.9,
# streamer=streamer,
no_repeat_ngram_size=2,
num_beams=3
)
print(outputs.tolist()[0])
outputs_txt = tokenizer.decode(outputs[0])
print(outputs_txt)
```
## Recipe
If you want to reproduce this model, you can refer to [this GitHub repository](https://github.com/ce-lery/japanese-mistral-300m-recipe).
I wrote the recipe for constructing this model there. For example,
- Preprocess with sentencepiece
- Pretraining with flash attention2 and torch.compile and DeepSpeed
- Fine-tuning with databricks-dolly-15k-ja
If you find any mistakes or errors, please create an issue.
If you open a pull request, I'll be very happy!
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.95) and epsilon=0.0001
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
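For reference, these settings roughly correspond to the following `TrainingArguments` sketch; the `output_dir` and the `fp16` flag are illustrative assumptions (the card only states native AMP):
```python
from transformers import TrainingArguments

# sketch of the hyperparameters listed above; bookkeeping values are illustrative
args = TrainingArguments(
    output_dir="checkpoints-mistral-300M-FA2",
    learning_rate=6e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=64,   # 4 per device x 64 -> 256 effective batch
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-4,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                        # native AMP
)
```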
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.2911 | 0.12 | 5000 | 4.2914 |
| 3.9709 | 0.24 | 10000 | 3.9900 |
| 3.8229 | 0.36 | 15000 | 3.8388 |
| 3.7197 | 0.47 | 20000 | 3.7454 |
| 3.652 | 0.59 | 25000 | 3.6739 |
| 3.597 | 0.71 | 30000 | 3.6177 |
| 3.5554 | 0.83 | 35000 | 3.5770 |
| 3.536 | 0.95 | 40000 | 3.5582 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
bartowski/speechless-starcoder2-7b-GGUF | bartowski | 2024-03-10T18:47:20Z | 629 | 3 | transformers | [
"transformers",
"gguf",
"code",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:TokenBender/python_eval_instruct_51k",
"dataset:codefuse-ai/Evol-instruction-66k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T18:35:33Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
- TokenBender/python_eval_instruct_51k
- codefuse-ai/Evol-instruction-66k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
quantized_by: bartowski
---
## Llamacpp Quantizations of speechless-starcoder2-7b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization.
Original model: https://huggingface.co/uukuguy/speechless-starcoder2-7b
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [speechless-starcoder2-7b-Q8_0.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q8_0.gguf) | Q8_0 | 7.62GB | Extremely high quality, generally unneeded but max available quant. |
| [speechless-starcoder2-7b-Q6_K.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q6_K.gguf) | Q6_K | 5.89GB | Very high quality, near perfect, *recommended*. |
| [speechless-starcoder2-7b-Q5_K_M.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q5_K_M.gguf) | Q5_K_M | 5.12GB | High quality, very usable. |
| [speechless-starcoder2-7b-Q5_K_S.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q5_K_S.gguf) | Q5_K_S | 4.93GB | High quality, very usable. |
| [speechless-starcoder2-7b-Q5_0.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q5_0.gguf) | Q5_0 | 4.93GB | High quality, older format, generally not recommended. |
| [speechless-starcoder2-7b-Q4_K_M.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q4_K_M.gguf) | Q4_K_M | 4.40GB | Good quality, similar to 4.25 bpw. |
| [speechless-starcoder2-7b-Q4_K_S.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q4_K_S.gguf) | Q4_K_S | 4.12GB | Slightly lower quality with small space savings. |
| [speechless-starcoder2-7b-Q4_0.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q4_0.gguf) | Q4_0 | 4.04GB | Decent quality, older format, generally not recommended. |
| [speechless-starcoder2-7b-Q3_K_L.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q3_K_L.gguf) | Q3_K_L | 3.98GB | Lower quality but usable, good for low RAM availability. |
| [speechless-starcoder2-7b-Q3_K_M.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q3_K_M.gguf) | Q3_K_M | 3.59GB | Even lower quality. |
| [speechless-starcoder2-7b-Q3_K_S.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q3_K_S.gguf) | Q3_K_S | 3.09GB | Low quality, not recommended. |
| [speechless-starcoder2-7b-Q2_K.gguf](https://huggingface.co/bartowski/speechless-starcoder2-7b-GGUF/blob/main/speechless-starcoder2-7b-Q2_K.gguf) | Q2_K | 2.72GB | Extremely low quality, *not* recommended. |
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF | mradermacher | 2024-05-27T19:53:19Z | 629 | 1 | transformers | [
"transformers",
"gguf",
"nlp",
"code",
"multilingual",
"base_model:failspy/Phi-3-mini-128k-instruct-abliterated-v3",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T17:48:43Z | ---
base_model: failspy/Phi-3-mini-128k-instruct-abliterated-v3
language:
- multilingual
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
quantized_by: mradermacher
tags:
- nlp
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/failspy/Phi-3-mini-128k-instruct-abliterated-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
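For example, a single quant can also be fetched programmatically with `huggingface_hub` (a sketch; the Q4_K_M filename is taken from the table below):
```python
from huggingface_hub import hf_hub_download

# download one quant file from this repo into the local cache
path = hf_hub_download(
    repo_id="mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF",
    filename="Phi-3-mini-128k-instruct-abliterated-v3.Q4_K_M.gguf",
)
print(path)
```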
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3-mini-128k-instruct-abliterated-v3-GGUF/resolve/main/Phi-3-mini-128k-instruct-abliterated-v3.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/efficientformerv2_l.snap_dist_in1k | timm | 2024-02-10T23:30:29Z | 628 | 1 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2212.08059",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-02-03T21:08:01Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for efficientformerv2_l.snap_dist_in1k
An EfficientFormer-V2 image classification model. Pretrained with distillation on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.3
- GMACs: 2.6
- Activations (M): 18.5
- Image size: 224 x 224
- **Original:** https://github.com/snap-research/EfficientFormer
- **Papers:**
- Rethinking Vision Transformers for MobileNet Size and Speed: https://arxiv.org/abs/2212.08059
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('efficientformerv2_l.snap_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_l.snap_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (batch_size, num_features) shaped tensor
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformerv2_l.snap_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for efficientformerv2_l:
    # torch.Size([1, 40, 56, 56])
    # torch.Size([1, 80, 28, 28])
    # torch.Size([1, 192, 14, 14])
    # torch.Size([1, 384, 7, 7])
print(o.shape)
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 |
|efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 |
|efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 |
|efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 |
|efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 |
|efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 |
|efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 |
## Citation
```bibtex
@article{li2022rethinking,
title={Rethinking Vision Transformers for MobileNet Size and Speed},
author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian},
journal={arXiv preprint arXiv:2212.08059},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
timm/rexnet_300.nav_in1k | timm | 2024-02-10T23:32:20Z | 628 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2007.00992",
"license:mit",
"region:us"
] | image-classification | 2023-03-20T20:36:05Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for rexnet_300.nav_in1k
A ReXNet image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 34.7
- GMACs: 3.4
- Activations (M): 22.4
- Image size: 224 x 224
- **Papers:**
- Rethinking Channel Dimensions for Efficient Model Design: https://arxiv.org/abs/2007.00992
- **Original:** https://github.com/clovaai/rexnet
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('rexnet_300.nav_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_300.nav_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 112, 112])
# torch.Size([1, 116, 56, 56])
# torch.Size([1, 183, 28, 28])
# torch.Size([1, 386, 14, 14])
# torch.Size([1, 554, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'rexnet_300.nav_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3840, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|crop_pct|
|-------------------------|------|------|-----------|--------|--------|
|rexnetr_300.sw_in12k_ft_in1k|84.53 |97.252|34.81 |288 |1.0 |
|rexnetr_200.sw_in12k_ft_in1k|83.164|96.648|16.52 |288 |1.0 |
|rexnet_300.nav_in1k |82.772|96.232|34.71 |224 |0.875 |
|rexnet_200.nav_in1k |81.652|95.668|16.37 |224 |0.875 |
|rexnet_150.nav_in1k |80.308|95.174|9.73 |224 |0.875 |
|rexnet_130.nav_in1k |79.478|94.68 |7.56 |224 |0.875 |
|rexnet_100.nav_in1k |77.832|93.886|4.8 |224 |0.875 |
## Citation
```bibtex
@misc{han2021rethinking,
title={Rethinking Channel Dimensions for Efficient Model Design},
author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
year={2021},
eprint={2007.00992},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Locutusque/gpt2-medium-conversational | Locutusque | 2023-08-01T04:23:40Z | 628 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:Locutusque/ColumnedChatCombined",
"dataset:tatsu-lab/alpaca",
"doi:10.57967/hf/1029",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-16T04:57:35Z | ---
license: openrail
datasets:
- Locutusque/ColumnedChatCombined
- tatsu-lab/alpaca
language:
- en
metrics:
- bleu
- perplexity
- loss
- reward
- penalty
pipeline_tag: text-generation
---
# Model Card
## Model Details
- Model Name: gpt2-medium-conversational (prototype)
- Model Type: Language Modeling
- Task: Generating Conversational Responses
- Hardware: 1x RTX 3060
- Description: This model is trained on a dataset of conversations between a user and an AI assistant, with the goal of generating a coherent and relevant response to the user's input. It uses the GPT-2 architecture, a state-of-the-art transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The model is fine-tuned on the conversational data using maximum likelihood estimation, and is evaluated based on its ability to generate responses that are both grammatically correct and semantically relevant to the user's input.
- Unfortunately, this is not the full model. The full model had much better performance but no longer exists due to a data loss incident.
## Intended Use
This model is intended to be used for generating conversational responses in a variety of contexts, such as chatbots, virtual assistants, and customer service applications. It is designed to provide natural and engaging responses to user input, with a focus on maintaining a consistent tone and style throughout the conversation. The model is suitable for use in both text-based and voice-based interfaces, and can be easily integrated into existing applications using the PyTorch and Transformers frameworks.
## Training Data
The model is trained on a large dataset of conversational data, consisting of interactions between users and an AI assistant. The data is preprocessed to remove any sensitive information and is formatted in a way that is suitable for training a language model. The training data is split into a training set and a validation set, with the training set used to update the model parameters and the validation set used to evaluate the model performance. The model was trained on 302,000 examples over 502,505 steps, achieving decent metrics.
## Model Architecture
The model architecture used in this model is GPT-2, a transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The GPT-2 architecture consists of a multi-layered decoder-only transformer, with self-attention mechanisms that allow the model to capture long-term dependencies and generate coherent text.
## Evaluation Metrics
The model is evaluated based on several metrics, including loss, reward, penalty, BLEU score, and perplexity. The loss metric is calculated during training and reflects the difference between the predicted output and the actual output. The reward metric is based on the number of correct words generated by the model, while the penalty metric penalizes the model for repeating words consecutively. The BLEU score measures the similarity between the generated text and the ground truth text, while the perplexity metric measures how well the model is able to predict the next word in a sequence. During validation, the model achieved the following metrics:
- BLEU score: 9.7
- perplexity: 5
- loss: 1.2
## Limitations and Bias
This model is not suitable for all use cases due to its limited training time on a weak computer. As a result, it may produce irrelevant or nonsensical responses. Additionally, it has not been fine-tuned to remember the chat history, is unable to provide follow-up responses, and it does not know the answer to many questions (it was only fine-tuned to respond in a conversational way). For optimal performance, we recommend using a GPU with at least 8GB of VRAM and downloading the model manually instead of using the Transformers library. Here's how you should deploy the model:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
start_token = "<|ASSISTANT|>"
end_token = "<|"
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel.from_pretrained('gpt2-medium')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.add_special_tokens({'eos_token': '<|End|>'})
special_tokens = {
"additional_special_tokens": ["<|USER|>", "<|SYSTEM|>", "<|ASSISTANT|>"]
}
tokenizer.add_special_tokens(special_tokens)
model.resize_token_embeddings(len(tokenizer))
model.load_state_dict(torch.load("path/to/model"))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def generate_text(model, tokenizer, prompt, max_length=256):
prompt = f'<|USER|> {prompt} <|ASSISTANT|> '
input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt").to(device)
attention_mask = torch.ones_like(input_ids).to(device)
output = model.generate(input_ids,
max_length=max_length,
do_sample=True,
top_k=35,
top_p=0.80,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
attention_mask=attention_mask)
output_ids = tokenizer.decode(output[0], skip_special_tokens=False)
return output_ids
# Loop to interact with the model
while True:
prompt = input("Enter a prompt (or 'q' to quit): ")
if prompt == "q":
break
output_text = generate_text(model, tokenizer, prompt)
text_between_tokens = output_text[output_text.find(start_token) + len(start_token):]
out = text_between_tokens[:text_between_tokens.find(end_token)]
print(out)
```
## Deploying and training the model
The model has been fine-tuned on a specific input format that goes like this ```"<|USER|> {user prompt} <|ASSISTANT|> {model prediction} <|End|>".``` For the best performance from the model, the input text should be as follows ```<|USER|> {dataset prompt} <|ASSISTANT|> ``` and the target/label should be as follows ```<|USER|> {dataset prompt} <|ASSISTANT|> {dataset output} <|End|>```
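For clarity, the input/target construction described above can be sketched as follows (the helper name and example strings are illustrative):
```python
def build_example(user_prompt: str, assistant_reply: str):
    # inference-time input: the model completes after the assistant tag
    model_input = f"<|USER|> {user_prompt} <|ASSISTANT|> "
    # training target: the full exchange terminated by the end token
    label = f"<|USER|> {user_prompt} <|ASSISTANT|> {assistant_reply} <|End|>"
    return model_input, label

model_input, label = build_example("Hello, how are you?", "I'm doing well, thank you!")
``` |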
Yntec/Reddit | Yntec | 2023-09-01T07:17:35Z | 628 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"nutbutter",
"acheong08",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-26T11:20:49Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- nutbutter
- acheong08
inference: false
---
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW.
# Reddit
A mix of RedditAlpha and REV 1.0, with the Color101VAE baked in.
Sample and prompt:

cute pretty girl, sitting, detailed chibi eyes, holding super soaker, beautiful detailed legs, cowgirl, gorgeous detailed hair, cowboy hat, magazine ad, iconic, 1943, from the movie, sharp focus. visible brushstrokes by kyoani and clay mann
Original page:
https://civitai.com/models/5216?modelVersionId=6048
# RedditOmega
A model made by mistake by using Weighted Sum 0.3 instead of 0.7, but it's a nice model still.

# RedditAlpha
A mix of F222 with subreddit-v3 (many attempts were made to merge in subreddit-v4 through v6, but all of them failed). This is an unsafe model and should only be used for research purposes.
# Recipes
Weighted Sum 0.5 F222 + subreddit-v3 = RedditBeta
Add Difference 1.0 sd-1.5 + (RedditBeta - sd-1.4) = RedditAlpha
Weighted Sum 0.3 REV + RedditAlpha = RedditOmega
Weighted Sum 0.7 REV + RedditAlpha = RedditZeta
Bake VAE Color 101 = Reddit
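For reference, these operations follow the usual checkpoint-merger semantics; over raw state dicts they amount to the following sketch (`alpha` is the listed weight, and matching keys between models are assumed):
```python
def weighted_sum(a, b, alpha):
    # Weighted Sum alpha: (1 - alpha) * A + alpha * B
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

def add_difference(a, b, c, multiplier=1.0):
    # Add Difference: A + multiplier * (B - C)
    return {k: a[k] + multiplier * (b[k] - c[k]) for k in a}
``` |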
nickprock/mmarco-sentence-flare-it | nickprock | 2023-12-03T16:25:19Z | 628 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"it",
"dataset:unicamp-dl/mmarco",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2023-09-28T12:13:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
license: apache-2.0
datasets:
- unicamp-dl/mmarco
language:
- it
library_name: sentence-transformers
model-index:
- name: mmarco-sentence-flare-it
results:
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 22.299932750504368
- type: f1
value: 20.147804322480262
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.40753194351042
- type: f1
value: 25.187141587127705
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 30.67175493186678
- type: cos_sim_spearman
value: 37.92638638971281
- type: euclidean_pearson
value: 37.47072224334179
- type: euclidean_spearman
value: 39.23036609148336
- type: manhattan_pearson
value: 42.92657347688227
- type: manhattan_spearman
value: 43.93955531904934
---
# mmarco-sentence-flare-it
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "Quante persone vivono a Londra?"
docs = ["A Londra vivono circa 9 milioni di persone", "Londra è conosciuta per il suo quartiere finanziario"]
#Load the model
model = SentenceTransformer('nickprock/mmarco-sentence-flare-it')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
return embeddings
# Sentences we want sentence embeddings for
query = "Quante persone vivono a Londra?"
docs = ["A Londra vivono circa 9 milioni di persone", "Londra è conosciuta per il suo quartiere finanziario"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("nickprock/mmarco-sentence-flare-it")
model = AutoModel.from_pretrained("nickprock/mmarco-sentence-flare-it")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
print("Query:", query)
for doc, score in doc_score_pairs:
print(score, doc)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nickprock/mmarco-sentence-flare-it)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1500,
"warmup_steps": 7500,
"weight_decay": 0.01
}
```
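Put together (and reusing `model` from the usage section above), this configuration corresponds to a `fit()` call along the lines of the sketch below; the one-row triplet dataset is a placeholder, and `TripletLoss` defaults to the Euclidean distance listed above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

# placeholder triplet data: (anchor, positive, negative)
train_examples = [InputExample(texts=["query", "relevant passage", "irrelevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model, triplet_margin=5)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    steps_per_epoch=1500,
    scheduler="WarmupLinear",
    warmup_steps=7500,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```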
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
More information about the [base model here](https://huggingface.co/osiria/flare-it/) |
bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF | bartowski | 2024-05-26T18:04:29Z | 628 | 1 | null | [
"gguf",
"text-generation",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"license:llama3",
"region:us"
] | text-generation | 2024-05-26T17:47:14Z | ---
license: llama3
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-Alpha-Centauri-v0.1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a> for quantization.
Original model: https://huggingface.co/fearlessdots/Llama-3-Alpha-Centauri-v0.1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
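With llama-cpp-python, recent versions read this template from the GGUF metadata, so the chat API can be used directly (a sketch; the filename assumes the Q4_K_M quant from the table below):
```python
from llama_cpp import Llama

# load a local GGUF file with an 8K context window
llm = Llama(model_path="Llama-3-Alpha-Centauri-v0.1-Q4_K_M.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short space-opera opening line."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```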
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Alpha-Centauri-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Alpha-Centauri-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Alpha-Centauri-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Alpha-Centauri-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Alpha-Centauri-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Alpha-Centauri-v0.1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-Alpha-Centauri-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Alpha-Centauri-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Alpha-Centauri-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Alpha-Centauri-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Alpha-Centauri-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Alpha-Centauri-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Alpha-Centauri-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Alpha-Centauri-v0.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-Alpha-Centauri-v0.1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-Alpha-Centauri-v0.1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF/blob/main/Llama-3-Alpha-Centauri-v0.1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF --include "Llama-3-Alpha-Centauri-v0.1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-Alpha-Centauri-v0.1-GGUF --include "Llama-3-Alpha-Centauri-v0.1-Q8_0.gguf/*" --local-dir Llama-3-Alpha-Centauri-v0.1-Q8_0
```
You can either specify a new local-dir (Llama-3-Alpha-Centauri-v0.1-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also an option for AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
melmoth/ru-rope-t5-small-instruct | melmoth | 2024-06-01T17:28:55Z | 628 | 20 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"ru",
"en",
"dataset:Vikhrmodels/Flan_translated_300k",
"dataset:d0rj/OpenOrca-ru",
"arxiv:2205.05131",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-05-30T13:45:18Z | ---
library_name: transformers
license: apache-2.0
datasets:
- Vikhrmodels/Flan_translated_300k
- d0rj/OpenOrca-ru
language:
- ru
- en
---
# Model Card for ru-rope-t5-small-instruct
The small version of the Russian Rotary Position Embedding T5 model, after instruction tuning
## Model Details
The model was trained on a Russian corpus with a mix of English using the [Mixture-of-Denoisers](https://arxiv.org/abs/2205.05131v1) pre-training method from [UL2](https://huggingface.co/google/ul2), on sequences of length 1024.
Training with Flash Attention 2 is possible because the positional bias has been replaced with rotary position encoding.
- **Model type:** [RoPE T5](https://huggingface.co/melmoth/ru-rope-t5-small-instruct/blob/main/t5.py)
- **Language(s) (NLP):** Russian, English
## Uses
Finetuning for downstream tasks
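A minimal loading sketch is below; the custom RoPE attention lives in the repo's `t5.py`, so `trust_remote_code=True` is assumed to be required, and the prompt is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "melmoth/ru-rope-t5-small-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)

# illustrative instruction: "Translate to English: Hello, world!"
inputs = tokenizer("Переведи на английский: Привет, мир!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```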
## Bias, Risks, and Limitations
Despite the instruction tuning, using the model in zero-shot mode is not recommended due to its small size
## Training Details
### Training Data
A corpus of Russian texts from [Vikhr](https://huggingface.co/Vikhrmodels) filtered by [FRED-T5-1.7B](https://huggingface.co/ai-forever/FRED-T5-1.7B) perplexity. The instructions are a translated English set
### Training Procedure
AdamWScale is used instead of Adafactor for stable training without loss explosions
#### Metrics

## Model Card Contact
[@TheMelmoth](https://t.me/TheMelmoth) |
RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf | RichardErkhov | 2024-06-02T21:50:05Z | 628 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T12:39:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
psyonic-cetacean-20B - GGUF
- Model creator: https://huggingface.co/jebcarter/
- Original model: https://huggingface.co/jebcarter/psyonic-cetacean-20B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [psyonic-cetacean-20B.Q2_K.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q2_K.gguf) | Q2_K | 6.91GB |
| [psyonic-cetacean-20B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [psyonic-cetacean-20B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [psyonic-cetacean-20B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [psyonic-cetacean-20B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [psyonic-cetacean-20B.Q3_K.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q3_K.gguf) | Q3_K | 9.04GB |
| [psyonic-cetacean-20B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [psyonic-cetacean-20B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [psyonic-cetacean-20B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [psyonic-cetacean-20B.Q4_0.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q4_0.gguf) | Q4_0 | 10.52GB |
| [psyonic-cetacean-20B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [psyonic-cetacean-20B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [psyonic-cetacean-20B.Q4_K.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q4_K.gguf) | Q4_K | 11.22GB |
| [psyonic-cetacean-20B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [psyonic-cetacean-20B.Q4_1.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q4_1.gguf) | Q4_1 | 11.67GB |
| [psyonic-cetacean-20B.Q5_0.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q5_0.gguf) | Q5_0 | 12.83GB |
| [psyonic-cetacean-20B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [psyonic-cetacean-20B.Q5_K.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q5_K.gguf) | Q5_K | 13.18GB |
| [psyonic-cetacean-20B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [psyonic-cetacean-20B.Q5_1.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q5_1.gguf) | Q5_1 | 13.98GB |
| [psyonic-cetacean-20B.Q6_K.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q6_K.gguf) | Q6_K | 15.28GB |
| [psyonic-cetacean-20B.Q8_0.gguf](https://huggingface.co/RichardErkhov/jebcarter_-_psyonic-cetacean-20B-gguf/blob/main/psyonic-cetacean-20B.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: other
license_name: microsoft-research-license
tags:
- storywriting
- text adventure
- not-for-all-audiences
---

---
Presenting the FP16 files for Psyonic-Cetacean-20B! This is an experimental Llama2-based stack merge based on the models and recipe below:
- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
```yaml
slices:
- sources:
- model: Orca2flat
layer_range: [0, 16]
- sources:
- model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
layer_range: [8, 24]
- sources:
- model: Orca2flat
layer_range: [17, 32]
- sources:
- model: LLaMA2-13B-Psyfighter2 (FP16 not yet available)
layer_range: [25, 40]
merge_method: passthrough
dtype: float16
```
Note: while we did run an inverted merge, the output was not satisfactory and will not be released.
We first flattened the additional ChatML vocabulary tokens out of Orca-2-13B, then performed a stack merge with Psyfighter-2-13B. The results surprised us with their vividness, freshness of prose, obedience to instruction prompting, and formatting cohesion.
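A stack merge like the recipe above can typically be reproduced with [mergekit](https://github.com/arcee-ai/mergekit); this sketch assumes the YAML is saved locally and the model paths point at the FP16 weights:
```
pip install mergekit
mergekit-yaml psyonic-cetacean.yml ./psyonic-cetacean-20b
```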
This model is focused on storywriting and text adventure, with a side order of Assistant and Chat functionality. Like its ancestor Psyfighter-2 this model will function better if you let it improvise and riff on your concepts rather than feeding it an excess of detail.
Additionally, either the removal of the ChatML vocab or the stack merging process itself has resulted in not only an uncensored model but an actively anti-censored model, so please be aware that this model can and will kill you during adventures or output NSFW material if prompted accordingly.
During testing, the model exhibited an especially strong affinity for science fiction and space opera writing, while handling fantasy elements quite well and horror elements slightly less so. Refer to the Psyfighter-2 model card for best prompting practices.
Despite that, we have tested the model out to 16000 context via Rope scaling and the model does not drive towards NSFW on its own. It will follow your tone and style very well.
Please enjoy, and if you encounter anything exciting or weird, please reach out to me at [[email protected]].
Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model, and to gelukuMLG for the model mascot!
|
QuantFactory/Hathor_RP-v.01-L3-8B-GGUF | QuantFactory | 2024-06-18T05:43:59Z | 628 | 0 | null | [
"gguf",
"text-generation",
"en",
"base_model:Nitral-AI/Hathor_RP-v.01-L3-8B",
"license:other",
"region:us"
] | text-generation | 2024-06-14T06:38:26Z | ---
license: other
language:
- en
pipeline_tag: text-generation
base_model: Nitral-AI/Hathor_RP-v.01-L3-8B
---
# QuantFactory/Hathor_RP-v.01-L3-8B-GGUF
This is a quantized version of [Nitral-AI/Hathor_RP-v.01-L3-8B](https://huggingface.co/Nitral-AI/Hathor_RP-v.01-L3-8B) created using llama.cpp
# Model Description

# "Hathor-v0.1 is a model based on the LLaMA 3 architecture: Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance. Making it an ideal tool for a wide range of applications; such as creative writing, educational support and human/computer interaction."
# Recommended ST Presets: [Hathor Presets](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets)
# Notes: Hathor is trained on 3 epochs of private RP data, synthetic Opus instructions, and a mix of light/classical novel data. (Heavily WIP) |
THUDM/cogvlm-grounding-generalist-hf | THUDM | 2023-12-11T02:05:59Z | 627 | 14 | transformers | [
"transformers",
"safetensors",
"text-generation",
"custom_code",
"arxiv:2311.03079",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-11-17T12:34:00Z | # CogVLM
**CogVLM** is a powerful **open-source visual language model** (**VLM**). CogVLM-17B has 10 billion vision parameters and 7 billion language parameters. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flicker30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and rank the 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., **surpassing or matching PaLI-X 55B**. CogVLM can also [chat with you](http://36.103.203.44:7861/) about images.
<div align="center">
<img src="https://github.com/THUDM/CogVLM/raw/main/assets/metrics-min.png" alt="img" style="zoom: 50%;" />
</div>
# Quickstart
```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained('lmsys/vicuna-7b-v1.5')
model = AutoModelForCausalLM.from_pretrained(
'THUDM/cogvlm-grounding-generalist-hf',
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True
).to('cuda').eval()
query = 'Can you provide a description of the image and include the coordinates [[x0,y0,x1,y1]] for each mentioned object?'
image = Image.open(requests.get('https://github.com/THUDM/CogVLM/blob/main/examples/4.jpg?raw=true', stream=True).raw).convert('RGB')
inputs = model.build_conversation_input_ids(tokenizer, query=query, images=[image])
inputs = {
'input_ids': inputs['input_ids'].unsqueeze(0).to('cuda'),
'token_type_ids': inputs['token_type_ids'].unsqueeze(0).to('cuda'),
'attention_mask': inputs['attention_mask'].unsqueeze(0).to('cuda'),
'images': [[inputs['images'][0].to('cuda').to(torch.bfloat16)]],
}
gen_kwargs = {"max_length": 2048, "do_sample": False}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
print(tokenizer.decode(outputs[0]))
```
# Method
CogVLM model comprises four fundamental components: a vision transformer (ViT) encoder, an MLP adapter, a pretrained large language model (GPT), and a **visual expert module**. See [Paper](https://github.com/THUDM/CogVLM/blob/main/assets/cogvlm-paper.pdf) for more details.
<div align="center">
<img src="https://github.com/THUDM/CogVLM/raw/main/assets/method-min.png" style="zoom:50%;" />
</div>
# License
The code in this repository is open source under the [Apache-2.0 license](https://github.com/THUDM/CogVLM/raw/main/LICENSE), while the use of the CogVLM model weights must comply with the [Model License](https://github.com/THUDM/CogVLM/raw/main/MODEL_LICENSE).
# Citation
If you find our work helpful, please consider citing the following papers
```
@article{wang2023cogvlm,
title={CogVLM: Visual Expert for Pretrained Language Models},
author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
year={2023},
eprint={2311.03079},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
solidrust/LemonadeRP-4.5.3-AWQ | solidrust | 2024-04-18T23:31:15Z | 627 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"roleplay",
"en",
"license:cc-by-4.0",
"text-generation-inference",
"awq",
"region:us"
] | text-generation | 2024-04-18T16:36:04Z | ---
license: cc-by-4.0
language:
- en
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- roleplay
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# KatyTheCutie/LemonadeRP-4.5.3 AWQ
- Model creator: [KatyTheCutie](https://huggingface.co/KatyTheCutie)
- Original model: [LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)

## Model Summary
8192 context length, with reports of contexts up to 32K working!

A 7B roleplay-focused model; creativity and fewer clichés are the focus of this merge.
SillyTavern settings:


Models used in merge:
- NeverSleep/Noromaid-7B-0.4-DPO
- cgato/Thespis-7b-v0.5-SFTTest-2Epoch
- NurtureAI/neural-chat-7b-v3-1-16k
- cgato/Thespis-CurtainCall-7b-v0.2.2
- tavtav/eros-7b-test
|
pengHTYX/MacLab-Era3D-512-6view | pengHTYX | 2024-05-28T16:44:13Z | 627 | 12 | diffusers | [
"diffusers",
"safetensors",
"license:apache-2.0",
"diffusers:StableUnCLIPImg2ImgPipeline",
"region:us"
] | null | 2024-05-28T12:15:14Z | ---
license: apache-2.0
---
|
mradermacher/Solstice-11B-v1-GGUF | mradermacher | 2024-06-05T07:25:54Z | 627 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Himitsui/Lewd-Assistant-v1",
"base_model:Sao10K/Solstice-11B-v1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T21:46:56Z | ---
base_model: Sao10K/Solstice-11B-v1
datasets:
- Himitsui/Lewd-Assistant-v1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/Solstice-11B-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Solstice-11B-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
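None of the quants listed here are split into parts, but for repos that do ship multi-part files, joining them is a plain byte-level concatenation. A hedged sketch (the part file names below are hypothetical; check the actual file listing):

```python
import shutil
from pathlib import Path

# Hypothetical part names; real split files follow a similar .partXofY scheme.
parts = sorted(Path(".").glob("Solstice-11B-v1.Q8_0.gguf.part*"))
with open("Solstice-11B-v1.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # streamed copy; avoids loading GBs into RAM
```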
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.IQ3_XS.gguf) | IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Solstice-11B-v1-GGUF/resolve/main/Solstice-11B-v1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf | RichardErkhov | 2024-06-05T11:55:52Z | 627 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-05T11:46:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2-emailgen-V2 - GGUF
- Model creator: https://huggingface.co/postbot/
- Original model: https://huggingface.co/postbot/distilgpt2-emailgen-V2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [distilgpt2-emailgen-V2.Q2_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q2_K.gguf) | Q2_K | 0.06GB |
| [distilgpt2-emailgen-V2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [distilgpt2-emailgen-V2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [distilgpt2-emailgen-V2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [distilgpt2-emailgen-V2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [distilgpt2-emailgen-V2.Q3_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q3_K.gguf) | Q3_K | 0.07GB |
| [distilgpt2-emailgen-V2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [distilgpt2-emailgen-V2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [distilgpt2-emailgen-V2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [distilgpt2-emailgen-V2.Q4_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q4_0.gguf) | Q4_0 | 0.08GB |
| [distilgpt2-emailgen-V2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [distilgpt2-emailgen-V2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [distilgpt2-emailgen-V2.Q4_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q4_K.gguf) | Q4_K | 0.08GB |
| [distilgpt2-emailgen-V2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [distilgpt2-emailgen-V2.Q4_1.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q4_1.gguf) | Q4_1 | 0.08GB |
| [distilgpt2-emailgen-V2.Q5_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q5_0.gguf) | Q5_0 | 0.09GB |
| [distilgpt2-emailgen-V2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [distilgpt2-emailgen-V2.Q5_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q5_K.gguf) | Q5_K | 0.09GB |
| [distilgpt2-emailgen-V2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [distilgpt2-emailgen-V2.Q5_1.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q5_1.gguf) | Q5_1 | 0.09GB |
| [distilgpt2-emailgen-V2.Q6_K.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q6_K.gguf) | Q6_K | 0.1GB |
| [distilgpt2-emailgen-V2.Q8_0.gguf](https://huggingface.co/RichardErkhov/postbot_-_distilgpt2-emailgen-V2-gguf/blob/main/distilgpt2-emailgen-V2.Q8_0.gguf) | Q8_0 | 0.12GB |
Original model description:
---
license: apache-2.0
tags:
- generated_from_trainer
- distilgpt2
- email generation
- email
datasets:
- aeslc
- postbot/multi-emails-100k
widget:
- text: "Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus will be on the exam"
example_title: "email to prof"
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and ask about office hours"
example_title: "office hours"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning Harold,\n\nI was wondering when the next"
example_title: "event"
- text: "URGENT - I need the TPS reports"
example_title: "URGENT"
- text: "Hi Archibald,\n\nI hope this email finds you extremely well."
example_title: "emails that find you"
- text: "Hello there.\n\nI just wanted to reach out and check in to"
example_title: "checking in"
- text: "Hello <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if you've enjoyed your time with us"
example_title: "work well"
- text: "Hi <NAME>,\n\nI hope this email finds you well. I wanted to reach out and see if we could catch up"
example_title: "catch up"
- text: "I'm <NAME> and I just moved into the area and wanted to reach out and get some details on where I could get groceries and"
example_title: "grocery"
parameters:
min_length: 4
max_length: 128
length_penalty: 0.8
no_repeat_ngram_size: 2
do_sample: False
num_beams: 8
early_stopping: True
repetition_penalty: 5.5
---
# distilgpt2-emailgen: V2
[](https://colab.research.google.com/gist/pszemraj/d1c2d88b6120cca4ca7df078ea1d1e50/scratchpad.ipynb)
Why write the rest of your email when you can generate it?
```python
from transformers import pipeline
model_tag = "postbot/distilgpt2-emailgen-V2"
generator = pipeline(
'text-generation',
model=model_tag,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
result = generator(
prompt,
max_length=64,
do_sample=False,
early_stopping=True,
) # generate
print(result[0]['generated_text'])
```
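The widget `parameters` in this card's metadata suggest beam-search settings for this model. A sketch applying them, assuming the same pipeline as above:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="postbot/distilgpt2-emailgen-V2")

# generation settings taken from the card's widget parameters
result = generator(
    "Hello,\n\nFollowing up on the bubblegum shipment.",
    min_length=4,
    max_length=128,
    length_penalty=0.8,
    no_repeat_ngram_size=2,
    do_sample=False,
    num_beams=8,
    early_stopping=True,
    repetition_penalty=5.5,
)
print(result[0]["generated_text"])
```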
## Model description
This model is a fine-tuned version of `distilgpt2` on the postbot/multi-emails-100k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9126
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters (run 1/2)
TODO
### Training hyperparameters (run 2/2)
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9045 | 1.0 | 789 | 2.0006 |
| 1.8115 | 2.0 | 1578 | 1.9557 |
| 1.8501 | 3.0 | 2367 | 1.9110 |
| 1.7376 | 4.0 | 3156 | 1.9126 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__distilgpt2-emailgen-V2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.59 |
| ARC (25-shot) | 20.99 |
| HellaSwag (10-shot) | 26.78 |
| MMLU (5-shot) | 25.53 |
| TruthfulQA (0-shot) | 46.51 |
| Winogrande (5-shot) | 52.01 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.31 |
|
howdi2000/dell_v7-gguf | howdi2000 | 2024-06-25T06:58:33Z | 627 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T06:52:51Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** howdi2000
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yntec/SCMIX_NightSkyMeina | Yntec | 2024-06-30T10:07:50Z | 627 | 0 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"abyssorangemix",
"Pastel",
"Getsc",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-06-30T08:24:54Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- abyssorangemix
- Pastel
- Getsc
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# SCMIX_NightSkyMeina
Safetensors version of scmix_NSM with the kl-f8-anime2 VAE baked in. Original page: https://civitai.com/models/19809?modelVersionId=23539
Comparison:

(Click for larger)
Samples and prompts:

(Click for larger)
Top left: 1girl, solo, japanese clothes, tray, apron, purple hair, open mouth, kimono, flower, cup, smile, sandals, floral print, ahoge, holding, purple eyes, full body, hair ornament, wa maid, holding tray, teacup, short hair, hair bun, hair up, hair flower, standing, tabi,looking at viewer,
Top right: retro videogames, robert jordan pepperoni pizza, josephine wall winner, hidari, roll20 illumination, radiant light, sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED CHIBI EYES, Magazine ad, iconic, 1943, Cartoon, sharp focus, 4k, towel. comic art on canvas by kyoani and ROSSDRAWS and watched
Bottom left: Full body picture of a pretty cute little girl making cake in school, detailed brown eyes, short smile, beautiful and aesthetic, intricate, neat hair, highly detailed, detailed face, smooth, sharp focus, chiaroscuro, magazine ad, 1949, 2D Game Art, anime on canvas, rossdraws, clay mann, CHIBI ART, light novel cover art
Bottom right: 1990 movie screenshot. beautiful wife with young husband and daughter. festive scene at a copper brewery with a wooden keg of beer in the center. sitting cute little girl. Display mugs of dark beer. faces. accompanied Shirley by halloween ingredients
(it doesn't pass the husbando test...) |
timm/coatnet_rmlp_1_rw_224.sw_in1k | timm | 2023-05-10T23:47:37Z | 626 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"arxiv:2111.09883",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-01-20T21:27:22Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for coatnet_rmlp_1_rw_224.sw_in1k
A timm specific CoAtNet image classification model (w/ an MLP Log-CPB: continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm` specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.

All models with the string `tf` exactly match TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
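All of the variants above can be instantiated through `timm.create_model`; a small sketch using one representative name per family from the comparison tables below (no pretrained weights needed just to inspect sizes):

```python
import timm

variants = [
    "coatnet_nano_rw_224.sw_in1k",       # CoAtNet
    "maxvit_nano_rw_256.sw_in1k",        # MaxViT
    "coatnext_nano_rw_224.sw_in1k",      # CoAtNeXt
    "maxxvit_rmlp_nano_rw_256.sw_in1k",  # MaxxViT
    "maxxvitv2_nano_rw_256.sw_in1k",     # MaxxViT-V2
]
for name in variants:
    model = timm.create_model(name, pretrained=False)
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    print(f"{name}: {n_params:.1f}M params")
```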
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 41.7
- GMACs: 7.8
- Activations (M): 35.5
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('coatnet_rmlp_1_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_1_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'coatnet_rmlp_1_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
katanaml-org/invoices-donut-model-v1 | katanaml-org | 2023-05-11T17:57:22Z | 626 | 35 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-to-text",
"en",
"dataset:katanaml-org/invoices-donut-data-v1",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-03-13T20:51:57Z | ---
license: mit
language:
- en
pipeline_tag: image-to-text
datasets:
- katanaml-org/invoices-donut-data-v1
---
## Sparrow - Data extraction from documents with ML
This model is a fine-tuned version of the Donut base model on invoice data. It aims to verify how well Donut performs on enterprise documents.

Mean accuracy on the test set: 0.96
Inference:

Training loss:

Sparrow on [GitHub](https://github.com/katanaml/sparrow)
Sample invoice [docs](https://github.com/katanaml/sparrow/tree/main/sparrow-ui/docs/images) to use for inference (docs up to 500 were used for fine-tuning, use docs from 500 for inference)
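A minimal inference sketch under standard Donut assumptions; that the repo ships a `DonutProcessor`, and that `<s>` is the task prompt, are assumptions to verify against the repo config:

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("katanaml-org/invoices-donut-model-v1")
model = VisionEncoderDecoderModel.from_pretrained("katanaml-org/invoices-donut-model-v1")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("invoice.jpg").convert("RGB")  # e.g. one of the sample docs above
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

# "<s>" is an assumed task prompt; check the repo's tokenizer/config for the real one
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    return_dict_in_generate=True,
)
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task prompt token
print(processor.token2json(sequence))
```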
Our website [KatanaML](https://www.katanaml.io)
On [Twitter](https://twitter.com/katana_ml) |
timm/regnety_160.sw_in12k_ft_in1k | timm | 2024-02-10T23:33:49Z | 626 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-12k",
"arxiv:2003.13678",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-03-21T06:45:18Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-12k
---
# Model card for regnety_160.sw_in12k_ft_in1k
A RegNetY-16GF image classification model. Pretrained on ImageNet-12k and fine-tuned on ImageNet-1k by Ross Wightman in `timm`.
The `timm` RegNet implementation includes a number of enhancements not present in other implementations (see the usage sketch after this list), including:
* stochastic depth
* gradient checkpointing
* layer-wise LR decay
* configurable output stride (dilation)
* configurable activation and norm layers
* option for a pre-activation bottleneck block used in RegNetV variant
* only known RegNetZ model definitions with pretrained weights
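A sketch exercising some of these options; argument names follow the `timm` API, but verify them against your installed `timm` version (layer-wise LR decay lives in the `timm` training scripts, not the model itself):

```python
import timm

model = timm.create_model(
    "regnety_160.sw_in12k_ft_in1k",
    pretrained=True,
    drop_path_rate=0.1,  # stochastic depth rate
    output_stride=16,    # dilate later stages instead of striding to 32
)
model.set_grad_checkpointing(True)  # trade compute for memory during training
```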
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 83.6
- GMACs: 16.0
- Activations (M): 23.0
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- Designing Network Design Spaces: https://arxiv.org/abs/2003.13678
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('regnety_160.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_160.sw_in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 224, 56, 56])
# torch.Size([1, 448, 28, 28])
# torch.Size([1, 1232, 14, 14])
# torch.Size([1, 3024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'regnety_160.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`.
|model |img_size|top1 |top5 |param_count|gmacs|macts |
|-------------------------|--------|------|------|-----------|-----|------|
|[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 |
|[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 |
|[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 |
|[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 |
|[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 |
|[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 |
|[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49|
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 |
|[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 |
|[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 |
|[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 |
|[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 |
|[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83|
|[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 |
|[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 |
|[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 |
|[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 |
|[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 |
|[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 |
|[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 |
|[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 |
|[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 |
|[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 |
|[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 |
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 |
|[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 |
|[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 |
|[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 |
|[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 |
|[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 |
|[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 |
|[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 |
|[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 |
|[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 |
|[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 |
|[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 |
|[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 |
|[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 |
|[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 |8.04 |
|[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 |
|[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 |
|[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 |
|[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 |
|[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 |
|[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 |
|[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 |
|[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 |
|[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 |
|[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 |
|[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 |
|[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 |
|[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 |
|[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 |
|[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 |
|[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 |
|[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 |
|[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 |
|[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 |
|[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 |
|[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 |
|[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 |
|[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 |
|[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 |
|[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 |
|[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 |
|[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 |
|[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 |
|[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 |
## Citation
```bibtex
@InProceedings{Radosavovic2020,
title = {Designing Network Design Spaces},
author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Doll{\'a}r},
booktitle = {CVPR},
year = {2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
VityaVitalich/bert-tiny-sst2 | VityaVitalich | 2023-10-02T13:24:17Z | 626 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sst2",
"base_model:M-FAC/bert-tiny-finetuned-sst2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-10-02T13:13:18Z | ---
base_model: M-FAC/bert-tiny-finetuned-sst2
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2
type: sst2
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8279816513761468
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert Tiny for SST2
This model is a fine-tuned version of [M-FAC/bert-tiny-finetuned-sst2](https://huggingface.co/M-FAC/bert-tiny-finetuned-sst2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4771
- Accuracy: 0.8280
## Usage Example
```python
from transformers import BertTokenizer, BertForSequenceClassification, TrainingArguments, Trainer, DataCollatorWithPadding
import datasets
import numpy as np
model = BertForSequenceClassification.from_pretrained('VityaVitalich/bert-tiny-sst2')
tokenizer = BertTokenizer.from_pretrained('VityaVitalich/bert-tiny-sst2')
def create_data(tokenizer):
train_set = datasets.load_dataset('sst2', split='train').remove_columns(['idx'])
val_set = datasets.load_dataset('sst2', split='validation').remove_columns(['idx'])
def tokenize_func(examples):
return tokenizer(examples["sentence"], max_length=128, padding='max_length', truncation=True)
encoded_dataset_train = train_set.map(tokenize_func, batched=True)
encoded_dataset_test = val_set.map(tokenize_func, batched=True)
data_collator = DataCollatorWithPadding(tokenizer)
return encoded_dataset_train, encoded_dataset_test, data_collator
encoded_dataset_train, encoded_dataset_test, data_collator = create_data(tokenizer)

def compute_metrics(eval_pred):
    # accuracy metric passed to the Trainer below
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
training_args = TrainingArguments(
output_dir='./results',
learning_rate=3e-5,
per_device_train_batch_size=128,
per_device_eval_batch_size=128,
load_best_model_at_end=True,
num_train_epochs=5,
weight_decay=0.1,
fp16=True,
fp16_full_eval=True,
evaluation_strategy="epoch",
seed=42,
save_strategy = "epoch",
save_total_limit=5,
logging_strategy="epoch",
report_to="all",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=encoded_dataset_train,
eval_dataset=encoded_dataset_test,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.evaluate(encoded_dataset_test)
```
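For quick predictions without the `Trainer`, the standard pipeline also works; note the labels may surface as `LABEL_0`/`LABEL_1` unless `id2label` is set in the config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="VityaVitalich/bert-tiny-sst2")
print(classifier("a charming and often affecting journey"))
```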
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2313 | 1.0 | 527 | 0.4771 | 0.8280 |
| 0.2057 | 2.0 | 1054 | 0.4937 | 0.8257 |
| 0.1949 | 3.0 | 1581 | 0.5121 | 0.8177 |
| 0.1904 | 4.0 | 2108 | 0.5100 | 0.8200 |
| 0.1879 | 5.0 | 2635 | 0.5137 | 0.8211 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
PowerInfer/prosparse-llama-2-7b-gguf | PowerInfer | 2024-03-13T10:03:02Z | 626 | 2 | transformers | [
"transformers",
"gguf",
"sparsellama",
"feature-extraction",
"custom_code",
"en",
"arxiv:2402.13516",
"license:llama2",
"region:us"
] | feature-extraction | 2024-02-20T08:34:00Z | ---
license: llama2
language:
- en
---
# ProSparse-LLaMA-2-7B-GGUF
- Original model: [SparseLLM/ProSparse-LLaMA-2-7B](https://huggingface.co/SparseLLM/prosparse-llama-2-7b)
- Converted & distributed by: [THUNLP](https://nlp.csai.tsinghua.edu.cn/), [ModelBest](https://modelbest.cn), and [PowerInfer](https://huggingface.co/PowerInfer)
This model is the downstream distribution of [SparseLLM/ProSparse-LLaMA-2-7B](https://huggingface.co/SparseLLM/prosparse-llama-2-7b) in PowerInfer GGUF format, consisting of the LLM weights and the activation predictor weights.

Note: `prosparse-llama-2-7b-clip15.gguf` is a variant GGUF version of the same model with different activation predictors, which are trained with data reserving only the top 15% of activation values. Compared with `prosparse-llama-2-7b.gguf`, this variant has higher predicted sparsity and inference speed, but suffers from relatively lower activation recall.
### Citation
Please kindly cite using the following BibTeX:
```bibtex
@article{song2024prosparse,
title={{ProSparse}: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models},
author={Song, Chenyang and Han, Xu and Zhang, Zhengyan and Hu, Shengding and Shi, Xiyu and Li, Kuai and Chen, Chen and Liu, Zhiyuan and Li, Guangli and Yang, Tao and Sun, Maosong},
year={2024},
journal={arXiv preprint arXiv:2402.13516},
url={https://arxiv.org/pdf/2402.13516.pdf}
}
```
|
adventureshin/Bingsu-concept | adventureshin | 2024-03-03T06:52:07Z | 626 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vision-text-dual-encoder",
"feature-extraction",
"clip",
"ko",
"arxiv:2004.09813",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-03T05:09:23Z | ---
tags:
- clip
language: ko
license: mit
---
# vitB32_bert_ko_small_clip
[openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) + [lassl/bert-ko-small](https://huggingface.co/lassl/bert-ko-small) CLIP Model
[training code(github)](https://github.com/Bing-su/KoCLIP_training_code)
## Train
Following SBERT's [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), `lassl/bert-ko-small` was trained to replicate the text embeddings of the `openai/clip-vit-base-patch32` text model. Unlike the paper, mean pooling was not used; the Hugging Face model's default pooling was kept as-is.
Training data: [AIHub Korean-English translation (parallel) corpus](https://aihub.or.kr/aidata/87)
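As a rough, hypothetical sketch of this distillation objective (the actual training code is in the GitHub repository linked above), the Korean student is pushed toward the English teacher's CLIP text embedding on parallel sentence pairs:
```python
# Hypothetical sketch of the SBERT-style distillation objective (arXiv:2004.09813):
# the Korean student is trained to reproduce the English teacher's CLIP text
# embedding on parallel (English, Korean) sentence pairs via an MSE loss.
import torch
from transformers import AutoModel, AutoTokenizer, CLIPTextModelWithProjection, CLIPTokenizer

teacher = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32").eval()
teacher_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
student = AutoModel.from_pretrained("lassl/bert-ko-small")
student_tok = AutoTokenizer.from_pretrained("lassl/bert-ko-small")

en, ko = "Two cats are sleeping.", "고양이 두 마리가 자고 있다."  # toy parallel pair
with torch.no_grad():
    target = teacher(**teacher_tok(en, return_tensors="pt")).text_embeds
student_out = student(**student_tok(ko, return_tensors="pt")).pooler_output
# A learned projection bridges the differing hidden sizes (an assumption here).
proj = torch.nn.Linear(student.config.hidden_size, target.shape[-1])
loss = torch.nn.functional.mse_loss(proj(student_out), target)
loss.backward()
```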
## How to Use
#### 1.
```python
import requests
from PIL import Image
from transformers import VisionTextDualEncoderProcessor, VisionTextDualEncoderModel # or Auto...
model = VisionTextDualEncoderModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = VisionTextDualEncoderProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["고양이 두 마리", "개 두 마리"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
```
```pycon
>>> probs
tensor([[0.9756, 0.0244]], grad_fn=<SoftmaxBackward0>)
```
#### 2.
```python
from transformers import AutoModel, AutoProcessor, pipeline
model = AutoModel.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
processor = AutoProcessor.from_pretrained("Bingsu/vitB32_bert_ko_small_clip")
pipe = pipeline("zero-shot-image-classification", model=model, feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
result = pipe(images=url, candidate_labels=["고양이 한 마리", "고양이 두 마리", "고양이 두 마리와 리모컨 두 개"], hypothesis_template="{}")
```
```pycon
>>> result
[{'score': 0.871887743473053, 'label': '고양이 두 마리와 리모컨 두 개'},
{'score': 0.12316706776618958, 'label': '고양이 두 마리'},
{'score': 0.004945191089063883, 'label': '고양이 한 마리'}]
```
|
RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf | RichardErkhov | 2024-06-05T17:00:25Z | 626 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-05T16:48:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
distilgpt2-HC3 - GGUF
- Model creator: https://huggingface.co/pszemraj/
- Original model: https://huggingface.co/pszemraj/distilgpt2-HC3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [distilgpt2-HC3.Q2_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q2_K.gguf) | Q2_K | 0.06GB |
| [distilgpt2-HC3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [distilgpt2-HC3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [distilgpt2-HC3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [distilgpt2-HC3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.IQ3_M.gguf) | IQ3_M | 0.07GB |
| [distilgpt2-HC3.Q3_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q3_K.gguf) | Q3_K | 0.07GB |
| [distilgpt2-HC3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q3_K_M.gguf) | Q3_K_M | 0.07GB |
| [distilgpt2-HC3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q3_K_L.gguf) | Q3_K_L | 0.07GB |
| [distilgpt2-HC3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.IQ4_XS.gguf) | IQ4_XS | 0.07GB |
| [distilgpt2-HC3.Q4_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q4_0.gguf) | Q4_0 | 0.08GB |
| [distilgpt2-HC3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.IQ4_NL.gguf) | IQ4_NL | 0.08GB |
| [distilgpt2-HC3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q4_K_S.gguf) | Q4_K_S | 0.08GB |
| [distilgpt2-HC3.Q4_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q4_K.gguf) | Q4_K | 0.08GB |
| [distilgpt2-HC3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q4_K_M.gguf) | Q4_K_M | 0.08GB |
| [distilgpt2-HC3.Q4_1.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q4_1.gguf) | Q4_1 | 0.08GB |
| [distilgpt2-HC3.Q5_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q5_0.gguf) | Q5_0 | 0.09GB |
| [distilgpt2-HC3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q5_K_S.gguf) | Q5_K_S | 0.09GB |
| [distilgpt2-HC3.Q5_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q5_K.gguf) | Q5_K | 0.09GB |
| [distilgpt2-HC3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q5_K_M.gguf) | Q5_K_M | 0.09GB |
| [distilgpt2-HC3.Q5_1.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q5_1.gguf) | Q5_1 | 0.09GB |
| [distilgpt2-HC3.Q6_K.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q6_K.gguf) | Q6_K | 0.1GB |
| [distilgpt2-HC3.Q8_0.gguf](https://huggingface.co/RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf/blob/main/distilgpt2-HC3.Q8_0.gguf) | Q8_0 | 0.12GB |
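As a hedged example (not part of the original quantization card), any file in the table above can be fetched and run locally with `huggingface_hub` and `llama-cpp-python`:
```python
# Sketch: download one quant from the table above and run it with
# llama-cpp-python. The "<answer>" marker follows the turn format described
# in the original model card below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/pszemraj_-_distilgpt2-HC3-gguf",
    filename="distilgpt2-HC3.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
print(llm("What is a language model? <answer>", max_tokens=64)["choices"][0]["text"])
```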
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- generated_from_trainer
- chatgpt
- HC3
datasets:
- pszemraj/HC3-textgen-qa
metrics:
- accuracy
widget:
- text: 'Review: Best cast iron skillet you will ever buy. Is this review positive
or negative? <answer>'
example_title: Sentiment analysis
- text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
He chose her because <answer>
example_title: Coreference resolution
- text: 'On a shelf, there are five books: a gray book, a red book, a purple book,
a blue book, and a black book. Here''s the puzzle, <answer>'
example_title: Logic puzzles
- text: The two men running to become New York City's next mayor will face off in
their first debate Wednesday night <answer>
example_title: Reading comprehension
- text: Is it true that if I have five 5-hour energy drinks in a single 24-hour period,
I get 25 hours of energy and spontaneously explode? <answer>
example_title: 5 hour energy
- text: what happens if you train a smaller model on a dataset of reinforcement-learning
optimized model responses? <answer>
example_title: deep learning advice
inference:
parameters:
temperature: 0.6
max_length: 96
no_repeat_ngram_size: 4
repetition_penalty: 1.5
eta_cutoff: 0.0008
renormalize_logits: true
pipeline_tag: text-generation
model-index:
- name: distilgpt2-HC3
results: []
---
# distilgpt2-HC3
> what happens if you train a smaller model on a dataset of chatGPT responses?
This happens.

## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the "chatgpt answers" column of the `Hello-SimpleAI/HC3` dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9983
- Accuracy: 0.5441
## Intended uses & limitations
Despite how it sounds, this model only has 80m parameters and will likely not be factually accurate most of the time.
## Training and evaluation data
Modifications made w.r.t. original dataset:
- drop all rows that did not have a chatGPT answer
- if a row (_i.e. ELI5 question, etc_) had more than one response (_from chatGPT_), randomly choose one of the responses as the answer to the question
- the "question" and chatGPT answer were combined into a single string for that row as follows: `QUESTION_TEXT <answer> CHATGPT_ANSWER_TEXT <end_answer>`
- `<answer>` and `<end_answer>` serve as added tokens to help the model learn "turns" in the conversation (a usage sketch follows this list)
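As an illustrative sketch (not from the original card), inference should follow the same turn format; the sampling parameters below mirror a subset of the widget config in this card's metadata, and the question is hypothetical:
```python
# Illustrative sketch: prompts should end with the "<answer>" token described
# above; parameters mirror the widget config in this card's metadata.
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/distilgpt2-HC3")
prompt = "What is a token? <answer>"  # hypothetical question
out = generator(
    prompt,
    max_length=96,
    do_sample=True,
    temperature=0.6,
    no_repeat_ngram_size=4,
    repetition_penalty=1.5,
)
print(out[0]["generated_text"])
```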
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 3208
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 6.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485 | 0.98 | 41 | 2.1457 | 0.5158 |
| 2.0757 | 1.98 | 82 | 2.0584 | 0.5304 |
| 1.966 | 2.98 | 123 | 2.0210 | 0.5376 |
| 1.8602 | 3.98 | 164 | 2.0012 | 0.5422 |
| 1.8089 | 4.98 | 205 | 1.9977 | 0.5436 |
| 1.7698 | 5.98 | 246 | 1.9983 | 0.5441 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__distilgpt2-HC3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |28.18|
|AI2 Reasoning Challenge (25-Shot)|24.66|
|HellaSwag (10-Shot) |27.99|
|MMLU (5-Shot) |23.95|
|TruthfulQA (0-shot) |42.10|
|Winogrande (5-shot) |50.36|
|GSM8k (5-shot) | 0.00|
|
google/t5-efficient-mini | google | 2023-01-24T16:48:02Z | 625 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-MINI (Deep-Narrow version)
T5-Efficient-MINI is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-mini** - is of model type **Mini** with no variations.
It has **31.23** million parameters and thus requires *ca.* **124.92 MB** of memory in full precision (*fp32*)
or **62.46 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model (a minimal PyTorch sketch also follows these lists):
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
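As a minimal, hypothetical sketch (assuming a PyTorch setup, a toy summarization pair, and an illustrative learning rate), a single fine-tuning step looks like this:
```python
# Minimal sketch of one fine-tuning step for this pretrained-only checkpoint.
# The task prefix and the toy example pair are illustrative assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/t5-efficient-mini")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-mini")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

inputs = tokenizer("summarize: The tower is 324 metres tall.", return_tensors="pt")
labels = tokenizer("The tower is 324m tall.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # cross-entropy over target tokens
loss.backward()
optimizer.step()
```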
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
lansinuote/diffsion_from_scratch.params | lansinuote | 2023-04-14T05:03:29Z | 625 | 1 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-04-14T05:01:23Z | Entry not found |
kpyu/video-blip-opt-2.7b-ego4d | kpyu | 2023-05-17T21:04:01Z | 625 | 14 | transformers | [
"transformers",
"pytorch",
"blip-2",
"text2text-generation",
"vision",
"image-to-text",
"video-to-text",
"image-captioning",
"video-captioning",
"visual-question-answering",
"en",
"arxiv:2301.12597",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-05-17T17:15:11Z | ---
language: en
license: mit
tags:
- vision
- image-to-text
- video-to-text
- image-captioning
- video-captioning
- visual-question-answering
pipeline_tag: image-to-text
---
# VideoBLIP, OPT-2.7b, fine-tuned on Ego4D
VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters) as its LLM backbone.
## Model description
VideoBLIP is an augmented BLIP-2 that can handle videos.
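As a rough, hypothetical sketch of the video side (the model-specific preprocessing and generation code lives in the official repository linked below), handling a video mostly amounts to sampling a fixed number of frames before handing them to a processor:
```python
# Hypothetical frame-sampling sketch using OpenCV; the model-specific
# preprocessing/generation code is in the official VideoBLIP repository.
import cv2
import numpy as np

def sample_frames(path: str, num_frames: int = 8) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return np.stack(frames)  # (num_frames, H, W, 3), ready for an image processor

frames = sample_frames("cooking.mp4")  # hypothetical file
```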
## Bias, Risks, Limitations, and Ethical Considerations
VideoBLIP-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
VideoBLIP has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.
### How to use
For code examples, please refer to the [official repository](https://github.com/yukw777/VideoBLIP). |
TheBloke/sqlcoder2-GGUF | TheBloke | 2023-10-12T06:15:51Z | 625 | 25 | transformers | [
"transformers",
"gguf",
"starcoder",
"code",
"text-generation",
"en",
"base_model:defog/sqlcoder2",
"license:other",
"region:us"
] | text-generation | 2023-10-10T13:11:17Z | ---
base_model: defog/sqlcoder2
inference: false
language:
- en
license: other
model_creator: Defog.ai
model_name: Sqlcoder2
model_type: starcoder
pipeline_tag: text-generation
prompt_template: "## Task\nGenerate a SQL query to answer the following question:\n\
`{prompt}`\n\n### Database Schema\nThis query will run on a database whose schema\
\ is represented in this string:\nCREATE TABLE products (\n product_id INTEGER\
\ PRIMARY KEY, -- Unique ID for each product\n name VARCHAR(50), -- Name of the\
\ product\n price DECIMAL(10,2), -- Price of each unit of the product\n quantity\
\ INTEGER -- Current quantity in stock\n);\n\nCREATE TABLE sales (\n sale_id INTEGER\
\ PRIMARY KEY, -- Unique ID for each sale\n product_id INTEGER, -- ID of product\
\ sold\n customer_id INTEGER, -- ID of customer who made purchase\n salesperson_id\
\ INTEGER, -- ID of salesperson who made the sale\n sale_date DATE, -- Date the\
\ sale occurred\n quantity INTEGER -- Quantity of product sold\n);\n\n-- sales.product_id\
\ can be joined with products.product_id\n\n### SQL\nGiven the database schema,\
\ here is the SQL query that answers `{prompt}`:\n```sql\n"
quantized_by: TheBloke
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sqlcoder2 - GGUF
- Model creator: [Defog.ai](https://huggingface.co/defog)
- Original model: [Sqlcoder2](https://huggingface.co/defog/sqlcoder2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Defog.ai's Sqlcoder2](https://huggingface.co/defog/sqlcoder2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sqlcoder2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sqlcoder2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sqlcoder2-GGUF)
* [Defog.ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/defog/sqlcoder2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Sqlcoder
```
## Task
Generate a SQL query to answer the following question:
`{prompt}`
### Database Schema
This query will run on a database whose schema is represented in this string:
CREATE TABLE products (
product_id INTEGER PRIMARY KEY, -- Unique ID for each product
name VARCHAR(50), -- Name of the product
price DECIMAL(10,2), -- Price of each unit of the product
quantity INTEGER -- Current quantity in stock
);
CREATE TABLE sales (
sale_id INTEGER PRIMARY KEY, -- Unique ID for each sale
product_id INTEGER, -- ID of product sold
customer_id INTEGER, -- ID of customer who made purchase
salesperson_id INTEGER, -- ID of salesperson who made the sale
sale_date DATE, -- Date the sale occurred
quantity INTEGER -- Quantity of product sold
);
-- sales.product_id can be joined with products.product_id
### SQL
Given the database schema, here is the SQL query that answers `{prompt}`:
```sql
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sqlcoder2.Q2_K.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q2_K.gguf) | Q2_K | 2 | 6.73 GB| 9.23 GB | smallest, significant quality loss - not recommended for most purposes |
| [sqlcoder2.Q3_K_S.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q3_K_S.gguf) | Q3_K_S | 3 | 6.93 GB| 9.43 GB | very small, high quality loss |
| [sqlcoder2.Q3_K_M.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q3_K_M.gguf) | Q3_K_M | 3 | 8.18 GB| 10.68 GB | very small, high quality loss |
| [sqlcoder2.Q4_0.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q4_0.gguf) | Q4_0 | 4 | 8.99 GB| 11.49 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sqlcoder2.Q4_K_S.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q4_K_S.gguf) | Q4_K_S | 4 | 9.06 GB| 11.56 GB | small, greater quality loss |
| [sqlcoder2.Q3_K_L.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q3_K_L.gguf) | Q3_K_L | 3 | 9.08 GB| 11.58 GB | small, substantial quality loss |
| [sqlcoder2.Q4_K_M.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q4_K_M.gguf) | Q4_K_M | 4 | 9.96 GB| 12.46 GB | medium, balanced quality - recommended |
| [sqlcoder2.Q5_0.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q5_0.gguf) | Q5_0 | 5 | 10.93 GB| 13.43 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sqlcoder2.Q5_K_S.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q5_K_S.gguf) | Q5_K_S | 5 | 10.93 GB| 13.43 GB | large, low quality loss - recommended |
| [sqlcoder2.Q5_K_M.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q5_K_M.gguf) | Q5_K_M | 5 | 11.54 GB| 14.04 GB | large, very low quality loss - recommended |
| [sqlcoder2.Q6_K.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q6_K.gguf) | Q6_K | 6 | 12.99 GB| 15.49 GB | very large, extremely low quality loss |
| [sqlcoder2.Q8_0.gguf](https://huggingface.co/TheBloke/sqlcoder2-GGUF/blob/main/sqlcoder2.Q8_0.gguf) | Q8_0 | 8 | 16.82 GB| 19.32 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/sqlcoder2-GGUF and below it, a specific filename to download, such as: sqlcoder2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/sqlcoder2-GGUF sqlcoder2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/sqlcoder2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sqlcoder2-GGUF sqlcoder2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m sqlcoder2.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "## Task\nGenerate a SQL query to answer the following question:\n`{prompt}`\n\n### Database Schema\nThis query will run on a database whose schema is represented in this string:\nCREATE TABLE products (\n product_id INTEGER PRIMARY KEY, -- Unique ID for each product\n name VARCHAR(50), -- Name of the product\n price DECIMAL(10,2), -- Price of each unit of the product\n quantity INTEGER -- Current quantity in stock\n);\n\nCREATE TABLE sales (\n sale_id INTEGER PRIMARY KEY, -- Unique ID for each sale\n product_id INTEGER, -- ID of product sold\n customer_id INTEGER, -- ID of customer who made purchase\n salesperson_id INTEGER, -- ID of salesperson who made the sale\n sale_date DATE, -- Date the sale occurred\n quantity INTEGER -- Quantity of product sold\n);\n\n-- sales.product_id can be joined with products.product_id\n\n### SQL\nGiven the database schema, here is the SQL query that answers `{prompt}`:\n```sql"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/sqlcoder2-GGUF", model_file="sqlcoder2.Q4_K_M.gguf", model_type="starcoder", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
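For orientation, a minimal hypothetical sketch of the llama-cpp-python route (consult the guides above for the authoritative, up-to-date API):
```python
# Minimal hypothetical sketch; on older LangChain versions the import is
# `from langchain.llms import LlamaCpp` instead.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="sqlcoder2.Q4_K_M.gguf",
    n_gpu_layers=32,  # 0 for CPU-only
    n_ctx=2048,
    temperature=0.0,  # deterministic decoding suits SQL generation
)
print(llm.invoke("## Task\nGenerate a SQL query to answer the following question: ..."))
```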
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Defog.ai's Sqlcoder2
# Defog SQLCoder
Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries.
[Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🤗 HF Repo](https://huggingface.co/defog/sqlcoder2) | [♾️ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐦 Twitter](https://twitter.com/defogdata)
## TL;DR
SQLCoder is a 15B parameter model that outperforms `gpt-3.5-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models. When fine-tuned on a given schema, it also outperforms `gpt-4`
SQLCoder is fine-tuned on a base StarCoder model.
## Results on novel datasets not seen in training
| model | perc_correct |
|-|-|
| gpt4-2023-10-04 | 82.0 |
| defog-sqlcoder2 | 74.5 |
| gpt4-2023-08-28 | 74.0 |
| defog-sqlcoder-7b | 71.0 |
| gpt-3.5-2023-10-04 | 66.0 |
| claude-2 | 64.5 |
| gpt-3.5-2023-08-28 | 61.0 |
| claude_instant_1 | 61.0 |
| text-davinci-003 | 52.5 |
## License
The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license, with additional responsible use restrictions added. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms.
## Training
Defog's SQLCoder was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework.
You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/).
## Results by question category
We classified each generated question into one of 5 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
| query_category | gpt-4 | sqlcoder2-15b | sqlcoder-7b | gpt-3.5 | claude-2 | claude-instant | gpt-3 |
|:-----------------|--------:|----------------:|--------------:|----------:|-----------:|-----------------:|--------:|
| date | 72 | 76 | 64 | 68 | 52 | 48 | 32 |
| group_by | 91.4 | 80 | 82.9 | 77.1 | 71.4 | 71.4 | 71.4 |
| order_by | 82.9 | 77.1 | 74.3 | 68.6 | 74.3 | 74.3 | 68.6 |
| ratio | 80 | 60 | 54.3 | 37.1 | 57.1 | 45.7 | 25.7 |
| join | 82.9 | 77.1 | 74.3 | 71.4 | 65.7 | 62.9 | 57.1 |
| where | 80 | 77.1 | 74.3 | 74.3 | 62.9 | 60 | 54.3 |
## Using SQLCoder
You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](./inference.py) on a [sample database schema](./metadata.sql).
```bash
python inference.py -q "Question about the sample database goes here"
# Sample question:
# Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two.
```
You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo), or run SQLCoder in Colab [here](https://colab.research.google.com/drive/13BIKsqHnPOBcQ-ba2p77L5saiepTIwu0#scrollTo=ZpbVgVHMkJvC)
## Hardware Requirements
SQLCoder has been tested on an A100 40GB GPU with `bfloat16` weights. You can also load an 8-bit and 4-bit quantized version of the model on consumer GPUs with 20GB or more of memory – like RTX 4090, RTX 3090, and Apple M2 Pro, M2 Max, or M2 Ultra Chips with 20GB or more of memory.
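As a hedged illustration of the quantized loading mentioned above (the exact flags may differ across transformers/bitsandbytes versions):
```python
# Sketch: 8-bit quantized load with transformers + bitsandbytes, matching the
# "8-bit ... on consumer GPUs" note above. Flags are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder2")
model = AutoModelForCausalLM.from_pretrained(
    "defog/sqlcoder2",
    load_in_8bit=True,   # requires the bitsandbytes package and a CUDA GPU
    device_map="auto",
)
```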
## Todo
- [x] Open-source the v1 model weights
- [x] Train the model on more data, with higher data variance
- [ ] Tune the model further with Reward Modelling and RLHF
- [ ] Pretrain a model from scratch that specializes in SQL analysis
<!-- original-model-card end -->
|
mmnga/japanese-stablelm-base-gamma-7b-gguf | mmnga | 2023-10-25T13:32:59Z | 625 | 3 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2023-10-25T04:09:50Z | ---
license: apache-2.0
---
# japanese-stablelm-base-gamma-7b-gguf
A GGUF-format conversion of [japanese-stablelm-base-gamma-7b, released by stabilityai](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).
Other models are available here:
3B models
[mmnga/japanese-stablelm-3b-4e1t-base-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-base-gguf)
[mmnga/japanese-stablelm-3b-4e1t-instruct-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-instruct-gguf)
7B models
[mmnga/japanese-stablelm-base-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-base-gamma-7b-gguf)
[mmnga/japanese-stablelm-instruct-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-instruct-gamma-7b-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-base-gamma-7b-q4_0.gguf' -n 128 -p '今夜の晩御飯のレシピを紹介します。'
```
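The same GGUF file can also be loaded from Python with `llama-cpp-python`; this is an illustrative sketch (point `model_path` at whichever quant you downloaded):
```python
# Hypothetical Python equivalent of the CLI example above, via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="japanese-stablelm-base-gamma-7b-q4_0.gguf")
out = llm("今夜の晩御飯のレシピを紹介します。", max_tokens=128)
print(out["choices"][0]["text"])
```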
|
knowledgator/UTC-DeBERTa-large | knowledgator | 2024-05-15T13:39:19Z | 625 | 13 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"NER",
"token classification",
"information extraction",
"question answering",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-01-14T09:10:28Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: token-classification
tags:
- NER
- token classification
- information extraction
- question answering
---
**UTC-DeBERTa-large** - universal token classifier
***🚀 Meet the first prompt-tuned universal token classification model 🚀***
This is a model based on [DeBERTaV3-large](https://huggingface.co/microsoft/deberta-v3-large) that was trained on multiple token classification tasks or tasks that can be represented in this way.
Such multi-task fine-tuning enabled better generalization; even small models can be used for zero-shot named entity recognition and demonstrate good performance on reading comprehension tasks.
The model can be used for the following tasks:
* Named entity recognition (NER);
* Question answering;
* Relation extraction;
* Coreference resolution;
* Text cleaning;
* Summarization;
#### How to use
We recommend using the model with the transformers `ner` pipeline:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
def process(text, prompt, treshold=0.5):
    """
    Processes text by preparing the prompt and adjusting indices.

    Args:
        text (str): The text to process
        prompt (str): The prompt to prepend to the text
        treshold (float): Minimum score a predicted span needs to be kept

    Returns:
        list: A list of dicts with adjusted spans and scores
    """
    # Concatenate prompt and text for the full input
    input_ = f"{prompt}\n{text}"
    results = nlp(input_)  # Run NLP on full input
    processed_results = []
    prompt_length = len(prompt)  # Get prompt length
    for result in results:
        # Skip spans whose score is below the treshold
        if result['score'] < treshold:
            continue
        # Adjust indices by subtracting the prompt length
        start = result['start'] - prompt_length
        # If the indices belong to the prompt, skip the span
        if start < 0:
            continue
        end = result['end'] - prompt_length
        # Extract span from original text using adjusted indices
        span = text[start:end]
        # Create processed result dict
        processed_result = {
            'span': span,
            'start': start,
            'end': end,
            'score': result['score']
        }
        processed_results.append(processed_result)
    return processed_results
tokenizer = AutoTokenizer.from_pretrained("knowledgator/UTC-DeBERTa-large")
model = AutoModelForTokenClassification.from_pretrained("knowledgator/UTC-DeBERTa-large")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy = 'first')
```
To use the model for **zero-shot named entity recognition**, we recommend to utilize the following prompt:
```python
prompt = """Identify the following entity classes in the text:
computer
Text:
"""
text = """Apple was founded as Apple Computer Company on April 1, 1976, by Steve Wozniak, Steve Jobs (1955–2011) and Ronald Wayne to develop and sell Wozniak's Apple I personal computer.
It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977. The company's second computer, the Apple II, became a best seller and one of the first mass-produced microcomputers.
Apple went public in 1980 to instant financial success."""
results = process(text, prompt)
print(results)
```
To try the model on **question answering**, just specify a question and a text passage:
```python
question = """Who are the founders of Microsoft?"""
text = """Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800.
During his career at Microsoft, Gates held the positions of chairman, chief executive officer, president and chief software architect, while also being the largest individual shareholder until May 2014."""
results = process(text, question)
print(results)
```
For **text cleaning**, please specify the following prompt; it will recognize the parts of the text that should be erased:
```python
prompt = """Clean the following text extracted from the web matching not relevant parts:"""
text = """The mechanism of action was characterized using native mass spectrometry, the thermal shift-binding assay, and enzymatic kinetic studies (Figure ). In the native mass spectrometry binding assay, compound 23R showed dose-dependent binding to SARS-CoV-2 Mpro, similar to the positive control GC376, with a binding stoichiometry of one drug per monomer (Figure A).
Similarly, compound 23R showed dose-dependent stabilization of the SARS-CoV-2 Mpro in the thermal shift binding assay with an apparent Kd value of 9.43 μM, a 9.3-fold decrease compared to ML188 (1) (Figure B). In the enzymatic kinetic studies, 23R was shown to be a noncovalent inhibitor with a Ki value of 0.07 μM (Figure C, D top and middle panels). In comparison, the Ki for the parent compound ML188 (1) is 2.29 μM.
The Lineweaver–Burk or double-reciprocal plot with different compound concentrations yielded an intercept at the Y-axis, suggesting that 23R is a competitive inhibitor similar to ML188 (1) (Figure C, D bottom panel). Buy our T-shirts for the lowerst prices you can find!!! Overall, the enzymatic kinetic studies confirmed that compound 23R is a noncovalent inhibitor of SARS-CoV-2 Mpro."""
results = process(text, prompt)
print(results)
```
It's possible to use the model for **relation extraction**; it extracts all relations between entities in N*C operations, where N is the number of entities and C is the number of classes:
```python
rex_prompt="""
Identify target entity given the following relation: "{}" and the following source entity: "{}"
Text:
"""
text = """Dr. Paul Hammond, a renowned neurologist at Johns Hopkins University, has recently published a paper in the prestigious journal "Nature Neuroscience". """
entity = "Paul Hammond"
relation = "worked at"
prompt = rex_prompt.format(relation, entity)
results = process(text, prompt)
print(results)
```
To **find similar entities** in the text, consider the following example:
```python
ent_prompt = "Find all '{}' mentions in the text:"
text = """Several studies have reported its pharmacological activities, including anti-inflammatory, antimicrobial, and antitumoral effects. The effect of E-anethole was studied in the osteosarcoma MG-63 cell line, and the antiproliferative activity was evaluated by an MTT assay. It showed a GI50 value of 60.25 μM with apoptosis induction through the mitochondrial-mediated pathway. Additionally, it induced cell cycle arrest at the G0/G1 phase, up-regulated the expression of p53, caspase-3, and caspase-9, and down-regulated Bcl-xL expression. Moreover, the antitumoral activity of anethole was assessed against oral tumor Ca9-22 cells, and the cytotoxic effects were evaluated by MTT and LDH assays. It demonstrated a LD50 value of 8 μM, and cellular proliferation was 42.7% and 5.2% at anethole concentrations of 3 μM and 30 μM, respectively. It was reported that it could selectively and in a dose-dependent manner decrease cell proliferation and induce apoptosis, as well as induce autophagy, decrease ROS production, and increase glutathione activity. The cytotoxic effect was mediated through NF-kB, MAP kinases, Wnt, caspase-3 and -9, and PARP1 pathways. Additionally, treatment with anethole inhibited cyclin D1 oncogene expression, increased cyclin-dependent kinase inhibitor p21WAF1, up-regulated p53 expression, and inhibited the EMT markers."""
entity = "anethole"
prompt = ent_prompt.format(entity)
results = process(text, prompt)
print(results)
```
Currently, **summarization** with the UTC model works poorly; however, we want to highlight the potential of this approach and the use cases where it is beneficial:
```python
prompt = "Summarize the following text, highlighting the most important sentences:"
text = """Apple was founded as Apple Computer Company on April 1, 1976, by Steve Wozniak, Steve Jobs (1955–2011) and Ronald Wayne to develop and sell Wozniak's Apple I personal computer. It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977. The company's second computer, the Apple II, became a best seller and one of the first mass-produced microcomputers. Apple went public in 1980 to instant financial success. The company developed computers featuring innovative graphical user interfaces, including the 1984 original Macintosh, announced that year in a critically acclaimed advertisement called "1984". By 1985, the high cost of its products, and power struggles between executives, caused problems. Wozniak stepped back from Apple and pursued other ventures, while Jobs resigned and founded NeXT, taking some Apple employees with him.
Apple Inc. is an American multinational technology company headquartered in Cupertino, California. Apple is the world's largest technology company by revenue, with US$394.3 billion in 2022 revenue. As of March 2023, Apple is the world's biggest company by market capitalization. As of June 2022, Apple is the fourth-largest personal computer vendor by unit sales and the second-largest mobile phone manufacturer in the world. It is considered one of the Big Five American information technology companies, alongside Alphabet (parent company of Google), Amazon, Meta Platforms, and Microsoft.
As the market for personal computers expanded and evolved throughout the 1990s, Apple lost considerable market share to the lower-priced duopoly of the Microsoft Windows operating system on Intel-powered PC clones (also known as "Wintel"). In 1997, weeks away from bankruptcy, the company bought NeXT to resolve Apple's unsuccessful operating system strategy and entice Jobs back to the company. Over the next decade, Jobs guided Apple back to profitability through a number of tactics including introducing the iMac, iPod, iPhone and iPad to critical acclaim, launching the "Think different" campaign and other memorable advertising campaigns, opening the Apple Store retail chain, and acquiring numerous companies to broaden the company's product portfolio. When Jobs resigned in 2011 for health reasons, and died two months later, he was succeeded as CEO by Tim Cook"""
results = process(text, prompt)
print(results)
```
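**Coreference resolution** is listed among the supported tasks but has no snippet in the original card; the following is a hypothetical example in the same style, where the exact prompt wording is our assumption rather than the authors':
```python
# Hypothetical coreference-style prompt, reusing the process() helper above.
# The prompt phrasing is an assumption; tune it for your data.
coref_prompt = "Find all mentions in the text that refer to the entity: '{}'"
text = """Dr. Paul Hammond joined Johns Hopkins University in 2010. He now leads the neurology department, and his lab focuses on neurodegeneration."""
entity = "Paul Hammond"
prompt = coref_prompt.format(entity)
results = process(text, prompt)
print(results)
```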
### Benchmarking
Below is a table that highlights the performance of UTC models on the [CrossNER](https://huggingface.co/datasets/DFKI-SLT/cross_ner) dataset. The values represent the Micro F1 scores, with the estimation done at the word level.
| Model | AI | Literature | Music | Politics | Science |
|----------------------|--------|------------|--------|----------|---------|
| UTC-DeBERTa-small | 0.8492 | 0.8792 | 0.864 | 0.9008 | 0.85 |
| UTC-DeBERTa-base | 0.8452 | 0.8587 | 0.8711 | 0.9147 | 0.8631 |
| UTC-DeBERTa-large | 0.8971 | 0.8978 | 0.9204 | 0.9247 | 0.8779 |
### Future reading
Check our blog post, ["As GPT4 but for token classification"](https://medium.com/p/9b5a081fbf27), where we highlight possible use cases of the model and why next-token prediction is not the only way to achieve amazing zero-shot capabilities.
While most of the AI industry is focused on generative AI and decoder-based models, we are committed to developing encoder-based models.
We aim to achieve the same level of generalization for such models as their decoder counterparts. Encoders have several wonderful properties, such as bidirectional attention, and they are the best choice for many information extraction tasks in terms of efficiency and controllability.
### Feedback
We value your input! Share your feedback and suggestions to help us improve our models.
Fill out the feedback [form](https://forms.gle/5CPFFuLzNWznjcpL7)
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models.
Join [Discord](https://discord.gg/dkyeAgs9DG)
|
MBZUAI/MobiLlama-05B | MBZUAI | 2024-02-28T05:17:17Z | 625 | 35 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"nlp",
"code",
"custom_code",
"en",
"dataset:LLM360/AmberDatasets",
"arxiv:2402.16840",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T16:34:16Z | ---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
datasets:
- LLM360/AmberDatasets
---
# MobiLlama-05B
<center><img src="MobileLLaMa.png" alt="mobillama logo" width="300"/></center>
MobiLlama-05B is a Small Language Model with **0.5 billion** parameters. It was trained using the Amber data sources [Amber-Dataset](https://huggingface.co/datasets/LLM360/AmberDatasets).
## Model Summary
"Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development. However, LLMs do not suit well for scenarios that require on-device processing, energy efficiency, low memory footprint, and response efficiency. These requisites are crucial for privacy, security, and sustainable deployment. This paper explores the ‘less is more’ paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource-constrained devices. Our primary contribution is the introduction of an accurate and fully transparent open-source 0.5 billion (0.5B) parameter SLM, named MobiLlama, catering to the specific needs of resource-constrained computing with an emphasis on enhanced performance with reduced resource demands. MobiLlama is a SLM design that initiates from a larger model and applies a careful parameter sharing scheme to reduce both the pre-training and the deployment cost. Our work strives to not only bridge the gap in open-source SLMs but also ensures full transparency, where complete training data pipeline, training code, model weights, and over 300 checkpoints along with evaluation codes are available on our [Github](https://github.com/mbzuai-oryx/MobiLlama).
[Arxiv Paper Link](https://arxiv.org/abs/2402.16840)
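To make the parameter-sharing idea concrete, here is a toy, hypothetical sketch (not the actual MobiLlama code) in which a single feed-forward network is reused by every transformer block — the kind of sharing the paper applies. Dimensions follow the hyperparameter table below.
```python
# Toy sketch of cross-layer parameter sharing: a single FFN instance is
# reused by every transformer block, cutting the per-layer FFN parameters.
import torch.nn as nn

class SharedFFNBlock(nn.Module):
    def __init__(self, attn: nn.Module, shared_ffn: nn.Module):
        super().__init__()
        self.attn = attn       # per-layer attention (not shared)
        self.ffn = shared_ffn  # the SAME module object in every block

    def forward(self, x):
        x = x + self.attn(x, x, x, need_weights=False)[0]
        return x + self.ffn(x)

d_model, d_ff, n_heads, n_layers = 2048, 5632, 32, 22  # from the table below
shared_ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
blocks = nn.ModuleList(
    [SharedFFNBlock(nn.MultiheadAttention(d_model, n_heads, batch_first=True), shared_ffn)
     for _ in range(n_layers)]
)
```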
## Model Description
- **Model type:** Small Language Model (SLM) built using the architecture design of LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Training Code](https://github.com/mbzuai-oryx/MobiLlama)
- [Data Preparation](https://github.com/LLM360/amber-data-prep)
- [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; trust_remote_code is required for the custom MobiLlama code
model = AutoModelForCausalLM.from_pretrained("MBZUAI/MobiLlama-05B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-05B", trust_remote_code=True)
model.to('cuda')

text = "I was walking towards the river when "
input_ids = tokenizer(text, return_tensors="pt").to('cuda').input_ids
# Generate a continuation and decode only the newly generated tokens
outputs = model.generate(input_ids, max_length=1000, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
## Training DataMix
| Subset | Tokens (Billion) |
| ----------- | ----------- |
| Arxiv | 30.00 |
| Book | 28.86 |
| C4 | 197.67 |
| Refined-Web | 665.01 |
| StarCoder | 291.92 |
| StackExchange | 21.75 |
| Wikipedia | 23.90 |
| Total | 1259.13 |
## Hyperparameters
| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 0.52B |
| Hidden Size | 2048 |
| Intermediate Size (MLPs) | 5632 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 22 |
| RMSNorm ɛ | 1e-5 |
| Max Seq Length | 2048 |
| Vocab Size | 32000 |
## Evaluation
| Evaluation Benchmark | MobiLlama-0.5B | MobiLlama-0.8B | MobiLlama-1.2B |
| ----------- | ----------- | ----------- | ----------- |
| HellaSwag | 52.52 | 54.09 | 62.99 |
| MMLU | 26.45 | 26.92 | 24.23 |
| Arc Challenge | 29.52 | 30.20 | 34.55 |
| TruthfulQA | 38.05 | 38.48 | 35.57 |
| CrowsPairs | 64.03 | 64.82 | 68.12 |
| PIQA | 72.03 | 73.17 | 75.29 |
| Race | 33.68 | 33.37 | 35.31 |
| SIQA | 40.22 | 41.60 | 41.96 |
| Winogrande | 57.53 | 57.45 | 61.08 |
## Citation
**BibTeX:**
```bibtex
@misc{thawakar2024mobillama,
title={MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT},
author={Omkar Thawakar and Ashmal Vayani and Salman Khan and Hisham Cholakkal and Rao Muhammad Anwer and Michael Felsberg and Timothy Baldwin and Eric P. Xing and Fahad Shahbaz Khan},
year={2024},
eprint={2402.16840},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
PrunaAI/stablelm-2-12b-GGUF-smashed | PrunaAI | 2024-04-22T03:55:13Z | 625 | 1 | null | [
"gguf",
"pruna-ai",
"region:us"
] | null | 2024-04-22T02:32:59Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/stablelm-2-12b-GGUF-smashed and below it, a specific filename to download, such as: stablelm-2-12b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/stablelm-2-12b-GGUF-smashed stablelm-2-12b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/stablelm-2-12b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/stablelm-2-12b-GGUF-smashed stablelm-2-12b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m stablelm-2-12b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
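For example, the introductory command above becomes:

```shell
./main -ngl 35 -m stablelm-2-12b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -i -ins
```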
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./stablelm-2-12b.IQ3_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<s>[INST] {prompt} [/INST]", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./stablelm-2-12b.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Kkonjeong/wav2vec2-base-korean | Kkonjeong | 2024-06-20T16:03:03Z | 625 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"ko",
"dataset:kresnik/zeroth_korean",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-30T14:12:09Z |
---
library_name: transformers
datasets:
- kresnik/zeroth_korean
language:
- ko
metrics:
- cer
---
# Model Card for wav2vec2-base-korean
## Model Details
### Model Description
This model is a fine-tuned version of Facebook's wav2vec2-base model, adapted for Korean language recognition using the Zeroth-Korean dataset. The model has been trained to transcribe Korean speech into text, specifically utilizing the unique jamo characters of the Korean language.
- **Developed by:** [Jeonghyeon Park, Jaeyoung Kim]
- **Model type:** Speech-to-Text
- **Language(s) (NLP):** Korean
- **License:** Apache 2.0
- **Finetuned from model [optional]:** facebook/wav2vec2-base
### Model Sources
- **Repository:** [github.com/KkonJJ/wav2vec2-base-korean]
## Uses
### Direct Use
The model can be used directly for transcribing Korean speech to text without additional fine-tuning. It is particularly useful for applications requiring accurate Korean language recognition, such as voice assistants, transcription services, and language learning tools.
### Downstream Use [optional]
This model can be integrated into larger systems that require speech recognition capabilities, such as automated customer service, voice-controlled applications, and more.
### Out-of-Scope Use
This model is not suitable for recognizing languages other than Korean or for tasks that require understanding context beyond the transcription of spoken Korean.
## Bias, Risks, and Limitations
### Recommendations
Users should be aware of the limitations of the model, including potential biases in the training data which may affect the accuracy for certain dialects or speakers. It is recommended to evaluate the model's performance on a representative sample of the intended application domain.
## How to Get Started with the Model
To get started with the model, use the code below:
```python
!pip install transformers[torch] accelerate -U
!pip install datasets torchaudio -U
!pip install jiwer jamo
!pip install tensorboard
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torchaudio
from jamo import h2j, j2hcj
model_name = "Kkonjeong/wav2vec2-base-korean"
model = Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2Processor.from_pretrained(model_name)
model.to("cuda")
model.eval()
def load_and_preprocess_audio(file_path):
speech_array, sampling_rate = torchaudio.load(file_path)
if sampling_rate != 16000:
resampler = torchaudio.transforms.Resample(sampling_rate, 16000)
speech_array = resampler(speech_array)
input_values = processor(speech_array.squeeze().numpy(), sampling_rate=16000).input_values[0]
return input_values
def predict(file_path):
input_values = load_and_preprocess_audio(file_path)
input_values = torch.tensor(input_values).unsqueeze(0).to("cuda")
with torch.no_grad():
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
return transcription
audio_file_path = "your_audio_file.wav"
transcription = predict(audio_file_path)
print("Transcription:", transcription)
```
## Training Details
### Training Data
The model was trained using the Zeroth-Korean dataset, a collection of Korean speech data. This dataset includes audio recordings and their corresponding transcriptions.
### Training Procedure
#### Preprocessing
Special characters were removed from the transcriptions, and the text was converted to jamo characters to better align with the Korean language's phonetic structure.
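A minimal sketch of what this preprocessing step can look like, using the same `jamo` helpers imported in the code above; the exact character filter used during training is an assumption:

```python
import re
from jamo import h2j, j2hcj

def preprocess_transcription(text):
    # Drop everything except Hangul syllables, digits, and spaces (assumed filter)
    text = re.sub(r"[^가-힣0-9\s]", "", text)
    # Decompose syllables into jamo, then map them to compatibility jamo characters
    return j2hcj(h2j(text))

print(preprocess_transcription("안녕하세요!"))  # e.g. 'ㅇㅏㄴㄴㅕㅇㅎㅏㅅㅔㅇㅛ'
```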
#### Training Hyperparameters
- **Training regime:** Mixed precision (fp16)
- **Batch size:** 32
- **Learning rate:** 1e-4
- **Number of epochs:** 10
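These values map onto `transformers` training arguments roughly as follows; this is a sketch, and `output_dir` (plus any option not listed above) is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-korean",  # assumed
    per_device_train_batch_size=32,
    learning_rate=1e-4,
    num_train_epochs=10,
    fp16=True,  # mixed precision, as stated above
)
```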
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated using the test split of the Zeroth-Korean dataset.
#### Metrics
The primary evaluation metric used was the Character Error Rate (CER), which measures the percentage of characters that are incorrect in the transcription compared to the reference text.
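CER can be computed with the `jiwer` package installed in the usage section above, for example:

```python
from jiwer import cer

reference = "안녕하세요"
hypothesis = "안넝하세요"  # one substituted character out of five
print(cer(reference, hypothesis))  # -> 0.2
```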
### Results
- **Final CER:** 0.073
#### Summary
The model achieved a CER of 7.3%, indicating good performance on the Zeroth-Korean dataset.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
- **Hardware Type:** NVIDIA A100
- **Hours used:** Approximately 8 hours
## Technical Specifications
### Model Architecture and Objective
The model architecture is based on wav2vec2.0, designed to convert audio input into text output by modeling the phonetic structure of speech.
### Compute Infrastructure
#### Hardware
- **GPUs:** NVIDIA A100
#### Software
- **Framework:** PyTorch
- **Libraries:** Transformers, Datasets, Torchaudio, Jiwer, Jamo
**BibTeX:**
```bibtex
@misc{park2024wav2vec2basekorean,
author = {Jeonghyeon Park and Jaeyoung Kim},
title = {wav2vec2-base-korean},
year = {2024},
publisher = {Hugging Face},
note = {https://huggingface.co/Kkonjeong/wav2vec2-base-korean}
}
```
**APA:**
Park, J., & Kim, J. (2024). wav2vec2-base-korean. Hugging Face. https://huggingface.co/Kkonjeong/wav2vec2-base-korean
## Model Card Authors [optional]
[Jeonghyeon Park, Jaeyoung Kim]
## Model Card Contact
For more information, contact [[email protected], [email protected]].
|
lpiccinelli/unidepth-v2-vits14 | lpiccinelli | 2024-06-12T12:50:18Z | 625 | 0 | UniDepth | [
"UniDepth",
"pytorch",
"safetensors",
"monocular-metric-depth-estimation",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"region:us"
] | null | 2024-06-12T12:49:57Z | ---
library_name: UniDepth
tags:
- monocular-metric-depth-estimation
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/lpiccinelli-eth/UniDepth
- Docs: [More Information Needed] |
BaunRobotics/merged-tinybaun-k12-hf | BaunRobotics | 2024-06-14T10:47:13Z | 625 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-06-14T10:43:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
just1nseo/mapo_1e-6 | just1nseo | 2024-06-27T12:56:51Z | 625 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-06-27T07:24:26Z | Entry not found |
huawei-noah/TernaryBERT_SST-2 | huawei-noah | 2020-10-16T03:16:54Z | 624 | 0 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | Entry not found |
Abdou/vit-swin-base-224-gpt2-image-captioning | Abdou | 2023-04-29T08:55:48Z | 624 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"image-to-text",
"en",
"dataset:coco",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-02-05T09:22:39Z | ---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: vit-swin-base-224-gpt2-image-captioning
results: []
license: mit
language:
- en
pipeline_tag: image-to-text
---
# vit-swin-base-224-gpt2-image-captioning
This model is a [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) model fine-tuned on 60% of the [COCO2014](https://huggingface.co/datasets/HuggingFaceM4/COCO) dataset.
It achieves the following results on the testing set:
- Loss: 0.7989
- Rouge1: 53.1153
- Rouge2: 24.2307
- Rougel: 51.5002
- Rougelsum: 51.4983
- Bleu: 17.7765
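Metrics like these can be computed with the `evaluate` library; the snippet below is a sketch rather than the exact evaluation script, and the example strings are illustrative:

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

predictions = ["Two cows laying in a field with a sky background."]
references = [["Two cows are lying in a grassy field under a blue sky."]]

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))
```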
## Model description
The model was initialized with [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) as the vision encoder and [gpt2](https://huggingface.co/gpt2) as the decoder.
## Intended uses & limitations
You can use this model for image captioning only.
## How to use
You can either use the simple pipeline API:
```python
from transformers import pipeline
image_captioner = pipeline("image-to-text", model="Abdou/vit-swin-base-224-gpt2-image-captioning")
# infer the caption
caption = image_captioner("http://images.cocodataset.org/test-stuff2017/000000000019.jpg")[0]['generated_text']
print(f"caption: {caption}")
```
Or initialize everything for more flexibility:
```python
from transformers import VisionEncoderDecoderModel, GPT2TokenizerFast, ViTImageProcessor
import torch
import os
import urllib.parse as parse
from PIL import Image
import requests
# a function to determine whether a string is a URL or not
def is_url(string):
try:
result = parse.urlparse(string)
return all([result.scheme, result.netloc, result.path])
except:
return False
# a function to load an image
def load_image(image_path):
if is_url(image_path):
return Image.open(requests.get(image_path, stream=True).raw)
elif os.path.exists(image_path):
return Image.open(image_path)
# a function to perform inference
def get_caption(model, image_processor, tokenizer, image_path):
image = load_image(image_path)
# preprocess the image
img = image_processor(image, return_tensors="pt").to(device)
# generate the caption (using greedy decoding by default)
output = model.generate(**img)
# decode the output
caption = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
return caption
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the fine-tuned image captioning model and corresponding tokenizer and image processor
model = VisionEncoderDecoderModel.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning")
image_processor = ViTImageProcessor.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning")
# target image
url = "http://images.cocodataset.org/test-stuff2017/000000000019.jpg"
# get the caption
caption = get_caption(model, image_processor, tokenizer, url)
print(f"caption: {caption}")
```
Output:
```
Two cows laying in a field with a sky background.
```
## Training procedure
You can check [this guide](https://www.thepythoncode.com/article/image-captioning-with-pytorch-and-transformers-in-python) to learn how this model was fine-tuned.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|
| 1.0018 | 0.38 | 2000 | 0.8860 | 38.6537 | 13.8145 | 35.3932 | 35.3935 | 8.2448 | 11.2946 |
| 0.8827 | 0.75 | 4000 | 0.8395 | 40.0458 | 14.8829 | 36.5321 | 36.5366 | 9.1169 | 11.2946 |
| 0.8378 | 1.13 | 6000 | 0.8140 | 41.2736 | 15.9576 | 37.5504 | 37.5512 | 9.871 | 11.2946 |
| 0.7913 | 1.51 | 8000 | 0.8012 | 41.6642 | 16.1987 | 37.8786 | 37.8891 | 10.0786 | 11.2946 |
| 0.7794 | 1.89 | 10000 | 0.7933 | 41.9119 | 16.3738 | 38.1062 | 38.1292 | 10.288 | 11.2946 |
Total training time: ~5 hours on an NVIDIA A100 GPU.
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 |
timm/dla60.in1k | timm | 2023-04-24T21:13:04Z | 624 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1707.06484",
"license:bsd-3-clause",
"region:us"
] | image-classification | 2023-04-24T19:34:24Z | ---
tags:
- image-classification
- timm
library_name: timm
license: bsd-3-clause
datasets:
- imagenet-1k
---
# Model card for dla60.in1k
A DLA (Deep Layer Aggregation) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 22.0
- GMACs: 4.3
- Activations (M): 10.2
- Image size: 224 x 224
- **Papers:**
- Deep Layer Aggregation: https://arxiv.org/abs/1707.06484
- **Original:** https://github.com/ucbdrive/dla
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('dla60.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla60.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 32, 112, 112])
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'dla60.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1024, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{yu2018deep,
title={Deep layer aggregation},
author={Yu, Fisher and Wang, Dequan and Shelhamer, Evan and Darrell, Trevor},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
year={2018}
}
```
|
jayparmr/icbinp_v8_inpaint_v2 | jayparmr | 2023-06-01T14:51:38Z | 624 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-01T14:49:57Z | Entry not found |
MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF | MaziyarPanahi | 2024-02-28T13:56:01Z | 624 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"pytorch",
"tensorboard",
"safetensors",
"text-generation",
"axolotl",
"7b",
"generated_from_trainer",
"conversational",
"afr",
"amh",
"ara",
"aze",
"bel",
"ben",
"bul",
"cat",
"ceb",
"ces",
"cym",
"dan",
"deu",
"ell",
"eng",
"epo",
"est",
"eus",
"fin",
"fil",
"fra",
"fry",
"gla",
"gle",
"glg",
"guj",
"hat",
"hau",
"heb",
"hin",
"hun",
"hye",
"ibo",
"ind",
"isl",
"ita",
"jav",
"jpn",
"kan",
"kat",
"kaz",
"khm",
"kir",
"kor",
"kur",
"lao",
"lav",
"lat",
"lit",
"ltz",
"mal",
"mar",
"mkd",
"mlg",
"mlt",
"mon",
"mri",
"msa",
"mya",
"nep",
"nld",
"nor",
"nso",
"nya",
"ory",
"pan",
"pes",
"pol",
"por",
"pus",
"ron",
"rus",
"sin",
"slk",
"slv",
"smo",
"sna",
"snd",
"som",
"sot",
"spa",
"sqi",
"srp",
"sun",
"swa",
"swe",
"tam",
"tel",
"tgk",
"tha",
"tur",
"twi",
"ukr",
"urd",
"uzb",
"vie",
"xho",
"yid",
"yor",
"zho",
"zul",
"dataset:CohereForAI/aya_dataset",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-Aya-101"
] | text-generation | 2024-02-28T13:30:52Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- pytorch
- tensorboard
- safetensors
- mistral
- text-generation
- axolotl
- 7b
- generated_from_trainer
- conversational
- afr
- amh
- ara
- aze
- bel
- ben
- bul
- cat
- ceb
- ces
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fin
- fil
- fra
- fry
- gla
- gle
- glg
- guj
- hat
- hau
- heb
- hin
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kan
- kat
- kaz
- khm
- kir
- kor
- kur
- lao
- lav
- lat
- lit
- ltz
- mal
- mar
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nld
- nor
- nso
- nya
- ory
- pan
- pes
- pol
- por
- pus
- ron
- rus
- sin
- slk
- slv
- smo
- sna
- snd
- som
- sot
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tgk
- tha
- tur
- twi
- ukr
- urd
- uzb
- vie
- xho
- yid
- yor
- zho
- zul
- dataset:CohereForAI/aya_dataset
- base_model:mistralai/Mistral-7B-Instruct-v0.2
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Mistral-7B-Instruct-Aya-101-GGUF
base_model: MaziyarPanahi/Mistral-7B-Instruct-Aya-101
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Mistral-7B-Instruct-Aya-101](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101)
## Description
[MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF) contains GGUF format model files for [MaziyarPanahi/Mistral-7B-Instruct-Aya-101](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
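As a sanity check on the 4.5 bpw figure quoted for Q4_K above, here is a rough per-super-block bit count; the exact on-disk layout is an assumption, chosen only to be consistent with the description above:

```python
weights = 8 * 32                  # 8 blocks of 32 weights = 256 weights per super-block
weight_bits = weights * 4         # 4-bit quantized weights
scale_min_bits = 8 * (6 + 6)      # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16          # assumed fp16 super-block scale and min
total_bits = weight_bits + scale_min_bits + superblock_bits
print(total_bits / weights)       # -> 4.5 bits per weight
```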
</details>

## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF) and below it, a specific filename to download, such as: Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Mistral-7B-Instruct-Aya-101-GGUF Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
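For example, the command above becomes:

```shell
./main -ngl 35 -m Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -i -ins
```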
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Mistral-7B-Instruct-Aya-101-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
InvestmentResearchAI/LLM-ADE_tiny-v0.001 | InvestmentResearchAI | 2024-06-28T08:43:14Z | 624 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"finance",
"conversational",
"en",
"arxiv:2404.13028",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T09:16:40Z | ---
language:
- en
license: mit
tags:
- finance
pipeline_tag: text-generation
widget:
- example_title: Easy
text: '<|im_start|>user
How do call options benefit the buyer?<|im_end|>
<|im_start|>assistant
'
- example_title: Medium
text: '<|im_start|>user
Why might a trader choose to quickly exit a losing position, even if they still
believe in the original trade idea?<|im_end|>
<|im_start|>assistant
'
- example_title: Hard
text: '<|im_start|>user
In the context of Harry Markowitz''s Portfolio Selection theory, what does an
''efficient'' portfolio refer to?<|im_end|>
<|im_start|>assistant
'
inference:
parameters:
temperature: 0.2
min_new_tokens: 20
max_new_tokens: 250
---
# AlphaBlind Tiny v0.001
Our proof-of-concept (POC) for the LLM-ADE framework (https://arxiv.org/abs/2404.13028): a very early, initial version of TinyLlama that processes and ingests llm-ade-fin_data-subset-earnings-10k and other financial data with the LLM-ADE framework.
Note: This model has not been thoroughly tested and is very small - it can run on a MacBook Pro. Please do not use this version of the model as-is.
kkatiz/THAI-BLIP-2 | kkatiz | 2024-05-06T08:46:35Z | 624 | 6 | transformers | [
"transformers",
"safetensors",
"blip-2",
"visual-question-answering",
"image-to-text",
"th",
"base_model:Salesforce/blip2-opt-2.7b-coco",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-04-25T22:00:29Z | ---
library_name: transformers
license: mit
language:
- th
pipeline_tag: image-to-text
base_model: Salesforce/blip2-opt-2.7b-coco
---
## THAI-BLIP-2
fine-tuned for the image-captioning task from [blip2-opt-2.7b-coco](https://huggingface.co/Salesforce/blip2-opt-2.7b-coco) with MSCOCO2017 Thai captions.
## How to use:
```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from PIL import Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("kkatiz/THAI-BLIP-2")
model = Blip2ForConditionalGeneration.from_pretrained("kkatiz/THAI-BLIP-2", device_map=device, torch_dtype=torch.bfloat16)
img = Image.open("Your image...")
inputs = processor(images=img, return_tensors="pt").to(device, torch.bfloat16)
# Adjust your `max_length`
generated_ids = model.generate(**inputs, max_length=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
``` |
ketanthakur603/mental-health-conversation-chaatbot | ketanthakur603 | 2024-05-10T06:38:31Z | 624 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-10T06:33:22Z | Entry not found |
mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF | mradermacher | 2024-06-01T16:28:55Z | 624 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DataGuard/Llama-3-22B-Instruct-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2024-05-31T07:42:15Z | ---
base_model: DataGuard/Llama-3-22B-Instruct-v0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DataGuard/Llama-3-22B-Instruct-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 8.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 10.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 13.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-22B-Instruct-v0.1-i1-GGUF/resolve/main/Llama-3-22B-Instruct-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 18.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf | RichardErkhov | 2024-06-02T22:32:13Z | 624 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-06-02T12:11:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CleverGirl-20b-Blended - GGUF
- Model creator: https://huggingface.co/athirdpath/
- Original model: https://huggingface.co/athirdpath/CleverGirl-20b-Blended/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CleverGirl-20b-Blended.Q2_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q2_K.gguf) | Q2_K | 6.91GB |
| [CleverGirl-20b-Blended.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [CleverGirl-20b-Blended.IQ3_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [CleverGirl-20b-Blended.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [CleverGirl-20b-Blended.IQ3_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [CleverGirl-20b-Blended.Q3_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q3_K.gguf) | Q3_K | 9.04GB |
| [CleverGirl-20b-Blended.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [CleverGirl-20b-Blended.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [CleverGirl-20b-Blended.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [CleverGirl-20b-Blended.Q4_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q4_0.gguf) | Q4_0 | 10.52GB |
| [CleverGirl-20b-Blended.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.IQ4_NL.gguf) | IQ4_NL | 10.57GB |
| [CleverGirl-20b-Blended.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q4_K_S.gguf) | Q4_K_S | 10.59GB |
| [CleverGirl-20b-Blended.Q4_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q4_K.gguf) | Q4_K | 11.22GB |
| [CleverGirl-20b-Blended.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q4_K_M.gguf) | Q4_K_M | 11.22GB |
| [CleverGirl-20b-Blended.Q4_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q4_1.gguf) | Q4_1 | 11.67GB |
| [CleverGirl-20b-Blended.Q5_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q5_0.gguf) | Q5_0 | 12.83GB |
| [CleverGirl-20b-Blended.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q5_K_S.gguf) | Q5_K_S | 12.83GB |
| [CleverGirl-20b-Blended.Q5_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q5_K.gguf) | Q5_K | 13.18GB |
| [CleverGirl-20b-Blended.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [CleverGirl-20b-Blended.Q5_1.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q5_1.gguf) | Q5_1 | 13.98GB |
| [CleverGirl-20b-Blended.Q6_K.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q6_K.gguf) | Q6_K | 15.28GB |
| [CleverGirl-20b-Blended.Q8_0.gguf](https://huggingface.co/RichardErkhov/athirdpath_-_CleverGirl-20b-Blended-gguf/blob/main/CleverGirl-20b-Blended.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
---
This model is CleverGirl and CleverGirl-Inverted blended together, an experiment on the nature of Frankenstein merges. The CleverGirl line is made from Sao10K/Mythical-Destroyer-V2-L2-13B and athirdpath/Orca-2-13b-Alpaca-Uncensored.
She can be a little strange, but lives up to her name:


Looking forward to comparing the leaderboard scores between this and the unblended version; subjectively, this model feels both smarter and more creative after my "frankenstein slice normalization".
```
models:
  - model: athirdpath/CleverGirl-20b
  - model: athirdpath/CleverGirl-20b-Inverted
merge_method: slerp
base_model: athirdpath/CleverGirl-20b
parameters:
  t:
    - value: 0.5
dtype: float16
```
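For reference, a recipe like the one above would typically be applied with mergekit's CLI; this is a sketch only, and the config filename is hypothetical:

```
pip install mergekit
mergekit-yaml clevergirl-blended.yml ./CleverGirl-20b-Blended --cuda
```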
|
Tencent-Hunyuan/HunyuanDiT-Diffusers-Distilled | Tencent-Hunyuan | 2024-06-05T08:03:51Z | 624 | 4 | diffusers | [
"diffusers",
"safetensors",
"en",
"arxiv:2405.08748",
"license:other",
"diffusers:HunyuanDiTPipeline",
"region:us"
] | text-to-image | 2024-06-05T07:32:16Z | ---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/LICENSE.txt
language:
- en
---
<!-- ## **HunyuanDiT** -->
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/logo.png" height=100>
</p>
# Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
[[Arxiv]](https://arxiv.org/abs/2405.08748) [[project page]](https://dit.hunyuan.tencent.com/) [[github]](https://github.com/Tencent/HunyuanDiT)
This repo contains the distilled Hunyuan-DiT in 🤗 [Diffusers](https://github.com/huggingface/diffusers) format.
It supports 25-step text-to-image generation.
## Dependency
Please install PyTorch first, following the instructions at [https://pytorch.org](https://pytorch.org)
Install the latest version of transformers with `pip`:
```
pip install --upgrade transformers
```
Then install the latest GitHub version of 🤗 Diffusers with `pip`:
```
pip install git+https://github.com/huggingface/diffusers.git
```
## Example Usage with 🤗 Diffusers
```py
import torch
from diffusers import HunyuanDiTPipeline
pipe = HunyuanDiTPipeline.from_pretrained("Tencent-Hunyuan/HunyuanDiT-Diffusers-Distilled", torch_dtype=torch.float16)
pipe.to("cuda")
# You may also use an English prompt, as HunyuanDiT supports both English and Chinese
# prompt = "An astronaut riding a horse"
prompt = "一个宇航员在骑马"
# The distilled model supports 25-step sampling (see above)
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("astronaut.png")
```
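If GPU memory is tight, diffusers' built-in offloading can be used in place of the `pipe.to("cuda")` call above; a minimal sketch, assuming `accelerate` is installed:

```py
# Optional: offload submodules to CPU between forward passes to save VRAM.
# Use this instead of pipe.to("cuda"); requires `pip install accelerate`.
pipe.enable_model_cpu_offload()
```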

## 📈 Comparisons
To comprehensively compare the generation capabilities of HunyuanDiT and other models, we constructed a four-dimensional test set covering Text-Image Consistency, Excluding AI Artifacts, Subject Clarity, and Aesthetics. More than 50 professional evaluators performed the evaluation.
<p align="center">
<table>
<thead>
<tr>
<th rowspan="2">Model</th> <th rowspan="2">Open Source</th> <th>Text-Image Consistency (%)</th> <th>Excluding AI Artifacts (%)</th> <th>Subject Clarity (%)</th> <th rowspan="2">Aesthetics (%)</th> <th rowspan="2">Overall (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>SDXL</td> <td> ✔ </td> <td>64.3</td> <td>60.6</td> <td>91.1</td> <td>76.3</td> <td>42.7</td>
</tr>
<tr>
<td>PixArt-α</td> <td> ✔</td> <td>68.3</td> <td>60.9</td> <td>93.2</td> <td>77.5</td> <td>45.5</td>
</tr>
<tr>
<td>Playground 2.5</td> <td>✔</td> <td>71.9</td> <td>70.8</td> <td>94.9</td> <td>83.3</td> <td>54.3</td>
</tr>
<tr>
<td>SD 3</td> <td>✘</td> <td>77.1</td> <td>69.3</td> <td>94.6</td> <td>82.5</td> <td>56.7</td>
</tr>
<tr>
<td>MidJourney v6</td><td>✘</td> <td>73.5</td> <td>80.2</td> <td>93.5</td> <td>87.2</td> <td>63.3</td>
</tr>
<tr>
<td>DALL-E 3</td><td>✘</td> <td>83.9</td> <td>80.3</td> <td>96.5</td> <td>89.4</td> <td>71.0</td>
</tr>
<tr style="font-weight: bold; background-color: #f2f2f2;">
<td>Hunyuan-DiT</td><td>✔</td> <td>74.2</td> <td>74.3</td> <td>95.4</td> <td>86.6</td> <td>59.0</td>
</tr>
</tbody>
</table>
</p>
## 🎥 Visualization
* **Chinese Elements**
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/chinese elements understanding.png" height=220>
</p>
* **Long Text Input**
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/long text understanding.png" height=310>
</p>
## 🔥🔥🔥 Tencent Hunyuan Bot
Welcome to [Tencent Hunyuan Bot](https://hunyuan.tencent.com/bot/chat), where you can explore our innovative products through multi-round conversations! |