| modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (list, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
|---|---|---|---|---|---|---|---|---|---|
TheBloke/minotaur-13B-fixed-GGUF
|
TheBloke
| 2023-09-27T12:53:01Z | 384 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"OpenAccess AI Collective",
"MPT",
"axolotl",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:metaeval/ScienceQA_text_only",
"dataset:hellaswag",
"dataset:openai/summarize_from_feedback",
"dataset:riddle_sense",
"dataset:gsm8k",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:winglian/evals",
"base_model:openaccess-ai-collective/minotaur-13b-fixed",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-20T01:03:25Z |
---
license: other
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
model_name: Minotaur 13B Fixed
base_model: openaccess-ai-collective/minotaur-13b-fixed
inference: false
model_creator: Open Access AI Collective
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user''s questions.
USER: {prompt} ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Minotaur 13B Fixed - GGUF
- Model creator: [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective)
- Original model: [Minotaur 13B Fixed](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenAccess AI Collective's Minotaur 13B Fixed](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/minotaur-13B-fixed-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF)
* [Open Access AI Collective's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
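As a quick sanity check of the Q4_K figure above, the bits-per-weight number can be reproduced by hand. The short sketch below assumes each super-block additionally stores one FP16 scale and one FP16 min; that super-block overhead is an assumption about the k-quants layout, not something stated in the list above.

```python
# Reproduce the Q4_K bits-per-weight figure quoted above.
# The FP16 super-block scale/min overhead below is an assumption about the k-quants layout.
weights_per_block = 32
blocks_per_superblock = 8
quant_bits = 4                   # 4-bit quantised weights
scale_and_min_bits = 6 + 6       # per-block 6-bit scale + 6-bit min
superblock_fp16_bits = 2 * 16    # assumed: one FP16 scale + one FP16 min per super-block

weights = weights_per_block * blocks_per_superblock  # 256 weights per super-block
total_bits = (weights * quant_bits
              + blocks_per_superblock * scale_and_min_bits
              + superblock_fp16_bits)
print(total_bits / weights)  # 4.5 bits per weight
```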
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [minotaur-13b.Q2_K.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [minotaur-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [minotaur-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [minotaur-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [minotaur-13b.Q4_0.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [minotaur-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [minotaur-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [minotaur-13b.Q5_0.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [minotaur-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [minotaur-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [minotaur-13b.Q6_K.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [minotaur-13b.Q8_0.gguf](https://huggingface.co/TheBloke/minotaur-13B-fixed-GGUF/blob/main/minotaur-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/minotaur-13B-fixed-GGUF and below it, a specific filename to download, such as: minotaur-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/minotaur-13B-fixed-GGUF minotaur-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/minotaur-13B-fixed-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/minotaur-13B-fixed-GGUF minotaur-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m minotaur-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/minotaur-13B-fixed-GGUF", model_file="minotaur-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
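For chat-style generation, the Vicuna prompt template shown earlier can be wrapped around the user message before calling the model. This is a minimal sketch building on the example above; the helper function, stop string, and token limit are illustrative choices, not requirements of ctransformers.

```python
# Assumes `llm` was created as in the ctransformers example above.
VICUNA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} ASSISTANT:"
)

def chat(user_message: str) -> str:
    # Wrap the message in the Vicuna template, then generate until the model starts a new turn.
    return llm(VICUNA_TEMPLATE.format(prompt=user_message), max_new_tokens=256, stop=["USER:"])

print(chat("Explain in one paragraph what GGUF quantisation does."))
```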
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
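For example, here is a minimal LangChain + llama-cpp-python sketch. It assumes `langchain-community` and `llama-cpp-python` are installed and that minotaur-13b.Q4_K_M.gguf has been downloaded as shown above; the question string and sampling settings are illustrative.

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./minotaur-13b.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 for CPU-only
    n_ctx=2048,
    temperature=0.7,
)

template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} ASSISTANT:"
)
print(llm.invoke(template.format(prompt="What is the capital of France?")))
```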
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenAccess AI Collective's Minotaur 13B Fixed
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[๐ต Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Due to a bug, the initial release of Minotaur 13B dropped a few datasets during training. We have corrected the issue and this is the retrained model
The affected datasets include:
- prose generation
- classification
- coding
# Minotaur 13B (FIXED)
Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
Chat-only style prompts using `USER:`, `ASSISTANT:`.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
# Training Datasets
Minotaur 13B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-13b/runs/5ystr7w6/workspace
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6 x A100 80GB
- 1 epoch taking approximately 7.5 hours
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations section for relevant information (included below).
## Benchmarks
hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b-fixed), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4778|± |0.0146|
| | |acc_norm|0.4932|± |0.0146|
|arc_easy | 0|acc |0.7786|± |0.0085|
| | |acc_norm|0.7487|± |0.0089|
|boolq | 1|acc |0.8141|± |0.0068|
|hellaswag | 0|acc |0.6072|± |0.0049|
| | |acc_norm|0.7929|± |0.0040|
|openbookqa | 0|acc |0.3420|± |0.0212|
| | |acc_norm|0.4580|± |0.0223|
|piqa | 0|acc |0.8025|± |0.0093|
| | |acc_norm|0.8107|± |0.0091|
|winogrande | 0|acc |0.7285|± |0.0125|
## Examples - results may vary based on temperature (0.7 for this) and other settings
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps:
1. Subtract 3x from both sides of the equation:
5x - 3x = 3x + 10 - 3x
2x = 10
2. Divide both sides of the equation by 2:
2x/2 = 10/2
x = 5
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT:You still have 3 apples. Eating pears does not affect the number of apples you have.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words that start with "ex" are "express", "exact", and "excite". 33 + 77 = 110.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Write a haiku about dinosaurs.
ASSISTANT:Titanic creatures roam,
Thunderous footsteps echo,
Extinct but not forgotten.
```
meh, 6/7/7 rather than 5/7/5
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Create an original short story in the style of J.K Rowling about the Weesley family going on a vacation in Australia whith Harry Potter and Hermione Grainger tagging along.
ASSISTANT:Once upon a time, there was a magical family named the Weesleys. They lived in a cozy little cottage at the edge of Hogsmeade village, where they ran a charming bed and breakfast called "The Flying Ford Anglia." As any other magical family, they enjoyed adventures that brought them close to nature, so they decided to go on a trip to Australia. They invited their dear friends Harry Potter and Hermione Grainger to join them on this exciting journey.
The Weesleys had been planning for months and had everything ready for their trip. Their children, Freddie and Roxanne were thrilled to visit the land down under, and their parents, George and Angelina, could not wait to explore the beautiful country with their friends. They would be leaving soon, but before departing, there was one small problem - the luggage.
"We can't possibly take all these trunks!" exclaimed Freddie as he stared at the mountain of bags stacked up in front of him.
His father looked at him seriously. "You see, son," he began, "our luggage is crucial because we have to pack all our magical essentials like wand holsters, extra wand chargers, spell books, potions ingredients, and much more."
Roxanne interrupted, "But daddy, we don't even use magic in everyday life! And what if it rains? We'll look weird carrying umbrellas around!"
Her mother joined in. "My dear, you know how unpredictable the weather can be. We must always be prepared for unexpected circumstances. Don't forget that even Muggles carry umbrellas."
After hours of debating, they finally managed to narrow down their luggage list and fit all the necessary supplies into several large trunks. The day had come; they were ready to leave for their grand adventure!
As the Weesleys boarded the Portkey that would transport them across the world, their wands began to glow softly, indicating that they had enough energy to make the journey. The Portkey dropped them off in Sydney, right in front of the magnificent Opera House.
They spent the first few days exploring the city, marveling at the iconic architecture and tasting local cuisine. Then, as planned, they headed north to visit the Great Barrier Reef, one of the most famous natural wonders of the world.
Harry and Hermione joined them during this leg of the trip, which made it even more enjoyable. Harry regaled them with tales of his own travels while Hermione shared her extensive knowledge of plants, animals, and the environment.
Soon, they arrived at a quaint town nestled among vibrant green hills and surrounded by vast cattle farms. It was here that they would learn about Aboriginal culture and see some truly unique sights.
One morning, after enjoying a hearty breakfast, they set out to explore the local area. They visited a nearby art gallery that showcased amazing Indigenous works of art, including traditional paintings, sculptures, and textiles. Afterward, they attended a didgeridoo concert given by a talented young musician who captivated everyone with his soulful tunes.
The following day, they embarked on a bushwalk through the rainforest trails. The air was cool and fresh, and the towering trees seemed to reach for the sky. Hermione took great pleasure in identifying various species of plants and explaining their medicinal properties.
As they walked deeper into the forest, they heard faint chattering noises coming from high above. To their surprise, they spotted a group of small, furry creatures perched on a tree branch. They turned out to be native Australian possums, which delighted the children, especially Freddie, who could communicate with them using sign language.
Later that afternoon, they visited Uluru, also known as Ayers Rock, a massive sandstone monolith located in the heart of the Outback. The sight of its magnificent red hues against the blue sky left them all speechless. They spent the rest of the day exploring the rock formation, taking in the breathtaking views and learning about its significance to the Anangu people.
Their last destination was Queensland, home to the world-famous Great Barrier Reef. They spent two days snorkeling, diving, and swimming among the colorful coral reefs, watching schools of fish, sea turtles, and dolphins swim past them.
As the sun began to set over the crystal-clear waters, Harry spoke up. "I think it's time we return to reality, don't you agree?" he said, looking at his wife. "This trip has been fantastic, but we can't stay forever."
Everyone nodded in agreement as they prepared themselves for their final journey back to Sydney. When they got there, they gathered on the balcony of their hotel room, reminiscing about the incredible memories they had created together.
"Thank you for inviting us along on your adventure, Weesleys," said Harry. "It was an honor and a privilege to share this experience with such kind and generous friends. I hope we can do it again sometime."
With tears in their eyes and heavy hearts, they bid each other farewell, promising to keep in touch and meet again soon. And so, their epic journey came to an end. But the memories would remain with them forever, reminding them of the power of friendship, the beauty of nature, and the importance of discovering new worlds.
```
<!-- original-model-card end -->
|
Yntec/elldrethSVividMix
|
Yntec
| 2023-09-30T06:51:50Z | 384 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"General",
"Elldreth",
"Dream",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-30T05:16:53Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General
- Elldreth
- Dream
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Elldreth's Vivid Mix
fp16 no-ema version of this model. Original page: https://huggingface.co/danbrown/elldreth-vivid-mix
Samples and prompt:


Pretty Cute Little Photorealistic, highly detailed, masterpiece, trending on ArtStation, sitting, Detailed Chibi Eyes, fantasy, beautiful detailed legs, streetwear, gorgeous detailed hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus.
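A minimal sketch of loading this checkpoint with the `diffusers` library and rendering the sample prompt above; the device, step count, guidance scale, and output filename are illustrative choices, not settings from this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 no-ema checkpoint from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/elldrethSVividMix",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = (
    "Pretty Cute Little Photorealistic, highly detailed, masterpiece, trending on ArtStation, "
    "sitting, Detailed Chibi Eyes, fantasy, beautiful detailed legs, streetwear, gorgeous detailed "
    "hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus."
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("elldreth_vivid_sample.png")
```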
|
davidkim205/komt-mistral-7b-v1-dpo
|
davidkim205
| 2023-11-29T09:45:13Z | 384 | 8 |
peft
|
[
"peft",
"safetensors",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"llama-2-chat",
"text-generation",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"region:us"
] |
text-generation
| 2023-11-29T09:21:52Z |
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
library_name: peft
---
# komt: Korean multi-task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : The komt-mistral-7b-v1-dpo is a fine-tuned version of komt-mistral-7b-v1 (original model: Mistral-7B-Instruct-v0.1).
## Dataset
* maywell/ko_Ultrafeedback_binarized
- https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer to https://github.com/davidkim205/komt
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]</s>
```
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
from transformers import TextStreamer, GenerationConfig

model = 'davidkim205/komt-mistral-7b-v1'
peft_model_name = 'davidkim205/komt-mistral-7b-v1-dpo'
config = PeftConfig.from_pretrained(peft_model_name)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
config.base_model_name_or_path = model
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_name)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
streamer = TextStreamer(tokenizer)

def gen(x):
    generation_config = GenerationConfig(
        temperature=0.8,
        top_p=0.8,
        top_k=100,
        max_new_tokens=1024,
        early_stopping=True,
        do_sample=True,
    )
    q = f"[INST]{x} [/INST]"
    gened = model.generate(
        **tokenizer(
            q,
            return_tensors='pt',
            return_token_type_ids=False
        ).to('cuda'),
        generation_config=generation_config,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        streamer=streamer,
    )
    result_str = tokenizer.decode(gened[0])
    start_tag = f"[/INST]"
    start_index = result_str.find(start_tag)
    if start_index != -1:
        result_str = result_str[start_index + len(start_tag):].strip()
    return result_str
result = gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘')
print('##########')
print(result)
```
output
```
์ ์ฃผ๋ 1๋ฐ2์ผ 1์ธ ์ฌํ ์ฝ์ค
์ ์ฃผ๋๋ ํ๊ตญ์์ ๊ฐ์ฅ ๋จผ ์ฌ์ธ ๋๋จ์์์ ์ต๋ ์ฌ์ผ๋ก, ๋ฉ์ง ํด๋ณ, ์๋ฆ๋ค์ด ์์ฐ๊ฒฝ๊ด, ์ ๊ฒฝ ๋ฉ๋ ์ ๋ฒฝ, ํ๊ตญ ์ต๋ ๊ท๋ชจ์ ๋ณตํฉ๋ฆฌ์กฐํธ ๋ฑ ๋ค์ํ ๊ด๊ด ๋ช
์๊ฐ ํ๋ถํ๊ฒ ์์ด 1๋ฐ2์ผ๋ก ํผ์ ์ฌํํ์๋ ์ฌ๋ฌ๋ถ๋ค์ ์ํด ์๋์ ๊ฐ์ ์ฝ์ค๋ฅผ ์ ์ํด ๋๋ฆฌ๊ฒ ์ต๋๋ค.
โท ์ฝ์ค 1 : ์ฑ์ฐ์ผ์ถ๋ด, ์ฉ๋์ด์ ๋ฒฝ, ์ฑ์ฐ์ผ์ถ๋ด ์ผ๊ฐ ๊ฒฝ๊ด ๊ด๋
- ์ฝ์ค ์ค๋ช
: ์ ์ฃผ ๋๋จ์ชฝ ํด์์ ๋ช
์์ธ ์ฑ์ฐ์ผ์ถ๋ด, ์ฉ๋์ด์ ๋ฒฝ, ์ฑ์ฐ์ผ์ถ๋ด ์ผ๊ฐ ๊ฒฝ๊ด ๊ด๋ ์์ผ๋ก ๊ตฌ์ฑ๋ ์ฝ์ค์
๋๋ค. ์์นจ์ ์ผ์ฐ ์ผ์ด๋ ์ผ์ถ๋ด์ ๋์ฐฉํ์ฌ ์ผ์ถ์ ๊ฐ์ํ๊ณ , ์์นจ ์์ฌ๋ฅผ ํ๊ณ ์ ๋ฒฝ ๋ฑ๋ฐ์ ์ฆ๊ธฐ๋ฉฐ ํด์์ ์ทจํฉ๋๋ค. ์คํ์๋ ์ผ์ถ๋ด ์ผ๊ฐ ๊ฒฝ๊ด ๊ด๋์ ์ฆ๊ธฐ๋ฉฐ ํด์๊ณผ ํด์์ ์ทจํฉ๋๋ค.
โท ์ฝ์ค 2 : ํ๋ผ์ฐ, ํ๋ผ์ฐ ์ผ์ด๋ธ์นด, ์ค๋ฏธ์ ๋ฐ์, ์ ๋ผ ์ด์
- ์ฝ์ค ์ค๋ช
: ์ ์ฃผ ๋จ๋ถ์ ๋ช
์์ธ ํ๋ผ์ฐ, ํ๋ผ์ฐ ์ผ์ด๋ธ์นด, ์ค๋ฏธ์ ๋ฐ์, ์ ๋ผ ์ด์ ์์ผ๋ก ๊ตฌ์ฑ๋ ์ฝ์ค์
๋๋ค. ์์นจ์ ์ผ์ฐ ์ผ์ด๋ ํ๋ผ์ฐ ์ผ์ด๋ธ์นด๋ฅผ ํ๊ณ ๋์ ๊ณ ์ง์ ์์นํ ํ๋ผ์ฐ ์ ์์ผ๋ก ์ฌ๋ผ๊ฐ์ ํํ์ ์ฆ๊ธฐ๋ฉฐ ์์นจ ์์ฌ๋ฅผ ํฉ๋๋ค. ์คํ์๋ ์ค๋ฏธ์ ๋ฐ์๋ฅผ ์ฐพ์ ํด์๊ณผ ํด์์ ์ทจํ๊ณ , ์ผ์ถ๋ด ์ผ๊ฐ ๊ฒฝ๊ด ๊ด๋์ ์ฆ๊ธฐ๋ฉฐ ํด์์ ์ทจํฉ๋๋ค.
โท ์ฝ์ค 3 : ๋ํ๋๊ธธ, ์ผ๊ฑฐ๋ฆฌ, ๊ณฐ๋๋ผ๋น, ์น ๋๊ตด, ๊ด์์ , ์น ๊ธ์ , ํด๋์ด๊ธธ, ๋ฐ๋ค์ง์ ๊ธธ
- ์ฝ์ค ์ค๋ช
: ์ ์ฃผ ์๋ถ์ ๋ช
์์ธ ๋ํ๋๊ธธ, ์ผ๊ฑฐ๋ฆฌ, ๊ณฐ๋๋ผ๋น, ์น ๋๊ตด, ๊ด์์ , ์น ๊ธ์ , ํด๋์ด๊ธธ, ๋ฐ๋ค์ง์ ๊ธธ ์์ผ๋ก ๊ตฌ์ฑ๋ ์ฝ์ค์
๋๋ค. ์์นจ์ ์ผ์ฐ ์ผ์ด๋ ๋ํ๋๊ธธ์์ ํํ์ ์ฆ๊ธฐ๋ฉฐ ์์นจ ์์ฌ๋ฅผ ํฉ๋๋ค. ์คํ์๋ ์ผ๊ฑฐ๋ฆฌ๋ฅผ ์ฐพ์ ํด์๊ณผ ํด์์ ์ทจํ๊ณ , ์ผ์ถ๋ด ์ผ๊ฐ ๊ฒฝ๊ด ๊ด๋์ ์ฆ๊ธฐ๋ฉฐ ํด์์ ์ทจํฉ๋๋ค.
```
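Since komt-mistral-7b-v1-dpo is distributed as a PEFT (LoRA) adapter on top of komt-mistral-7b-v1, the adapter can optionally be merged into the base weights for deployment without a runtime `peft` dependency. A minimal sketch, assuming the base model is loaded in bfloat16 rather than 4-bit (merging into quantised weights is not handled here); the output directory is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_name = "davidkim205/komt-mistral-7b-v1"
adapter_name = "davidkim205/komt-mistral-7b-v1-dpo"

# Load the base model in bfloat16 so the LoRA weights can be folded in.
base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_name)
merged = model.merge_and_unload()  # folds the adapter weights into the base model

tokenizer = AutoTokenizer.from_pretrained(base_name)
merged.save_pretrained("./komt-mistral-7b-v1-dpo-merged")    # illustrative output directory
tokenizer.save_pretrained("./komt-mistral-7b-v1-dpo-merged")
```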
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
|------------------------------------------|---------| ------------ |------------|
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
| **komt-mistral-7b-v1-dpo (open)(ours)** | **142** | **3.83** | **76.75%** |
|
TheBloke/mistral-ft-optimized-1218-GGUF
|
TheBloke
| 2023-12-21T16:37:57Z | 384 | 22 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"en",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2023-12-21T15:44:22Z |
---
base_model: OpenPipe/mistral-ft-optimized-1218
inference: false
language:
- en
license: apache-2.0
model_creator: OpenPipe
model_name: Mistral FT Optimized 1218
model_type: mistral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mistral FT Optimized 1218 - GGUF
- Model creator: [OpenPipe](https://huggingface.co/OpenPipe)
- Original model: [Mistral FT Optimized 1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
<!-- description start -->
## Description
This repo contains GGUF format model files for [OpenPipe's Mistral FT Optimized 1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF)
* [OpenPipe's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-ft-optimized-1218.Q2_K.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [mistral-ft-optimized-1218.Q3_K_S.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [mistral-ft-optimized-1218.Q3_K_M.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [mistral-ft-optimized-1218.Q3_K_L.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [mistral-ft-optimized-1218.Q4_0.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mistral-ft-optimized-1218.Q4_K_S.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [mistral-ft-optimized-1218.Q4_K_M.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [mistral-ft-optimized-1218.Q5_0.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mistral-ft-optimized-1218.Q5_K_S.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [mistral-ft-optimized-1218.Q5_K_M.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [mistral-ft-optimized-1218.Q6_K.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [mistral-ft-optimized-1218.Q8_0.gguf](https://huggingface.co/TheBloke/mistral-ft-optimized-1218-GGUF/blob/main/mistral-ft-optimized-1218.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/mistral-ft-optimized-1218-GGUF and below it, a specific filename to download, such as: mistral-ft-optimized-1218.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/mistral-ft-optimized-1218-GGUF mistral-ft-optimized-1218.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/mistral-ft-optimized-1218-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/mistral-ft-optimized-1218-GGUF mistral-ft-optimized-1218.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mistral-ft-optimized-1218.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mistral-ft-optimized-1218.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mistral-ft-optimized-1218.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
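llama-cpp-python can also expose a GGUF file through its OpenAI-compatible API server, as mentioned in the client list above. A minimal sketch, assuming the server's default host and port; adjust `--n_gpu_layers` to your hardware.

```shell
# Install the server extra, then serve the GGUF file over an OpenAI-compatible HTTP API
pip3 install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model mistral-ft-optimized-1218.Q4_K_M.gguf --n_gpu_layers 35 --n_ctx 32768
# The API is then available at http://localhost:8000/v1 (e.g. /v1/chat/completions)
```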
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, ้ฟๆ, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjรคreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: OpenPipe's Mistral FT Optimized 1218
This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most down-stream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).
<!-- original-model-card end -->
|
cfahlgren1/natural-functions-GGUF
|
cfahlgren1
| 2024-01-29T05:00:20Z | 384 | 14 |
transformers
|
[
"transformers",
"gguf",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T18:45:11Z |
---
library_name: transformers
license: apache-2.0
datasets:
- glaiveai/glaive-function-calling-v2
- LDJnr/Capybara
---
# NaturalFunctions-7B
Mistral-7B finetuned for Function Calling
[Try on Ollama](https://ollama.ai/calebfahlgren/natural-functions)
```bash
ollama run calebfahlgren/natural-functions
```
# Examples
## **System Prompt**
You are a helpful assistant with access to the following functions.
Use them if required -
```json
{
"name": "create_event",
"description": "Create a new event in the calendar",
"parameters": {
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The title of the event"
},
"date": {
"type": "string",
"format": "date",
"description": "The date of the event"
},
"time": {
"type": "string",
"format": "time",
"description": "The time of the event"
},
"location": {
"type": "string",
"description": "The location of the event (optional)"
}
},
"required": [
"title",
"date",
"time"
]
}
}
```
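One way to exercise this with the Ollama model linked above is through Ollama's REST chat endpoint, passing the function definition as the system message. This is a hedged sketch: the endpoint is Ollama's default local address, the user message is illustrative, and the exact shape of the model's function-call reply may vary from run to run.

```python
import json
import requests

# Function definition from the system prompt above.
function_spec = {
    "name": "create_event",
    "description": "Create a new event in the calendar",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "The title of the event"},
            "date": {"type": "string", "format": "date", "description": "The date of the event"},
            "time": {"type": "string", "format": "time", "description": "The time of the event"},
            "location": {"type": "string", "description": "The location of the event (optional)"},
        },
        "required": ["title", "date", "time"],
    },
}

system_prompt = (
    "You are a helpful assistant with access to the following functions. "
    "Use them if required -\n" + json.dumps(function_spec, indent=2)
)

response = requests.post(
    "http://localhost:11434/api/chat",  # default local Ollama endpoint
    json={
        "model": "calebfahlgren/natural-functions",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Schedule a meeting titled 'Planning' on 2024-02-10 at 14:00."},
        ],
        "stream": False,
    },
    timeout=300,
)
print(response.json()["message"]["content"])
```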

## **System Prompt**
You are a helpful assistant with access to the following functions. Use them if required
```json
{
"name": "calculate_shipping_cost",
"description": "Calculate the shipping cost for a package",
"parameters": {
"type": "object",
"properties": {
"weight": {
"type": "number",
"description": "The weight of the package in kilograms"
},
"destination": {
"type": "string",
"description": "The destination address"
}
},
"required": [
"weight",
"destination"
]
}
}
```

|
seyf1elislam/Mistral-7B-v0.1-arabic-GGUF
|
seyf1elislam
| 2024-02-04T01:12:08Z | 384 | 1 | null |
[
"gguf",
"GGUF",
"base_model:malhajar/Mistral-7B-v0.1-arabic",
"region:us"
] | null | 2024-02-03T23:38:41Z |
---
tags:
- GGUF
base_model:
- malhajar/Mistral-7B-v0.1-arabic
---
# Mistral-7B-v0.1-arabic
- Model creator: [malhajar](https://huggingface.co/malhajar)
- Original model: [Mistral-7B-v0.1-arabic](https://huggingface.co/malhajar/Mistral-7B-v0.1-arabic)
<!-- description start -->
## Description
This repo contains GGUF format model files for [malhajar's Mistral-7B-v0.1-arabic ](https://huggingface.co/malhajar/Mistral-7B-v0.1-arabic).
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mistral-7b-v0.1-arabic.Q2_K.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q2_K.gguf) | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| [mistral-7b-v0.1-arabic.Q3_K_M.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| [mistral-7b-v0.1-arabic.Q4_K_M.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| [mistral-7b-v0.1-arabic.Q5_K_M.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| [mistral-7b-v0.1-arabic.Q6_K.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q6_K.gguf) | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| [mistral-7b-v0.1-arabic.Q8_0.gguf](https://huggingface.co/seyf1elislam/Mistral-7B-v0.1-arabic-GGUF/blob/main/mistral-7b-v0.1-arabic.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |
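If you prefer to fetch a single quant from Python rather than the browser, a minimal sketch with the `huggingface_hub` library follows (the Q4_K_M file from the table above is used as the example; swap in the quant you want). Loading it with `llama-cpp-python` is optional:
```python
# Sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="seyf1elislam/Mistral-7B-v0.1-arabic-GGUF",
    filename="mistral-7b-v0.1-arabic.Q4_K_M.gguf",
)

# Optional: run a quick completion (requires `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(model_path=model_path, n_ctx=4096)
prompt = "عاصمة المغرب هي"  # Arabic: "The capital of Morocco is"
print(llm(prompt, max_tokens=64)["choices"][0]["text"])
```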
|
ven1228/5HjRFzihd4p1D2aVUZUBtRY3JkSLEnonuPKhDnkkyq84555r_vgg
|
ven1228
| 2024-03-11T12:44:24Z | 384 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-03-05T05:38:06Z |
Entry not found
|
mwalmsley/zoobot-encoder-convnext_small
|
mwalmsley
| 2024-04-11T20:15:26Z | 384 | 0 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-03-19T22:24:29Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for zoobot-encoder-convnext_small
Please see the [Zoobot docs](https://zoobot.readthedocs.io/en/latest/pretrained_models.html) for loading and finetuning instructions.
But minimally, you can use this like any timm encoder:
```python
import timm
encoder = timm.create_model('hf_hub:mwalmsley/zoobot-encoder-convnext_small', pretrained=True, num_classes=0)  # num_classes=0 returns pooled features only
```
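As a small illustration (standard timm usage, not taken from the Zoobot docs), you can resolve the matching preprocessing config and extract a pooled feature vector; the random tensor below stands in for a real batch of galaxy images:
```python
# Sketch: feature extraction with the encoder loaded via timm.
import timm
import torch

encoder = timm.create_model(
    'hf_hub:mwalmsley/zoobot-encoder-convnext_small', pretrained=True, num_classes=0
)
data_config = timm.data.resolve_model_data_config(encoder)            # input size, mean/std, etc.
transform = timm.data.create_transform(**data_config, is_training=False)  # apply this to real PIL images

dummy_batch = torch.randn(1, *data_config['input_size'])  # stand-in for a preprocessed image batch
with torch.no_grad():
    features = encoder(dummy_batch)   # shape: (1, feature_dim)
print(features.shape)
```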
|
Lewdiculous/DaturaCookie_7B-GGUF-IQ-Imatrix
|
Lewdiculous
| 2024-03-23T23:09:27Z | 384 | 3 | null |
[
"gguf",
"quantized",
"roleplay",
"multimodal",
"vision",
"sillytavern",
"merge",
"mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-03-23T22:18:21Z |
---
license: apache-2.0
tags:
- gguf
- quantized
- roleplay
- multimodal
- vision
- sillytavern
- merge
- mistral
---
This repository hosts GGUF-IQ-Imatrix quants for [ResplendentAI/DaturaCookie_7B](https://huggingface.co/ResplendentAI/DaturaCookie_7B).
This is a #multimodal model that also has vision capabilities. Read the full card information if that is your use case.
Quants:
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
"Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
**What does "Imatrix" mean?**
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was just to add a bit more diversity to the data.
# Vision/multimodal capabilities:
<details><summary>
Click here to see how this would work in practice in a roleplay chat.
</summary>

</details><br>
<details><summary>
Click here to see what your SillyTavern Image Captions extension settings should look like.
</summary>

</details><br>
**If you want to use vision functionality:**
* Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file. You can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf); it is also hosted in this repository inside the **mmproj** folder.
* You can load the **mmproj** by using the corresponding section in the interface:

* For CLI users, you can load the **mmproj file** by adding the respective flag to your usual command:
```
--mmproj your-mmproj-file.gguf
```
# Quantization information:
**Steps performed:**
```
Base -> GGUF(F16) -> Imatrix-Data(F16) -> GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
# Original model information:
# DaturaCookie

Proficient at roleplaying and lighthearted conversation, this model is prone to NSFW outputs.
### Models Merged
The following models were included in the merge:
* [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
* [ChaoticNeutrals/Cookie_7B](https://huggingface.co/ChaoticNeutrals/Cookie_7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Cookie_7B
layer_range: [0, 32]
- model: ResplendentAI/Datura_7B
layer_range: [0, 32]
merge_method: slerp
base_model: ResplendentAI/Datura_7B
parameters:
t:
- filter: self_attn
value: [1, 0.75, 0.5, 0.25, 0]
- filter: mlp
value: [0, 0.25, 0.5, 0.75, 1]
- value: 0.5
dtype: bfloat16
```
|
mayacinka/Calme-Rity-stock-v0.1
|
mayacinka
| 2024-04-17T20:59:05Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.3",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:liminerity/M7-7b",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-17T14:35:22Z |
---
base_model:
- MaziyarPanahi/Calme-7B-Instruct-v0.3
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- liminerity/M7-7b
- MaziyarPanahi/Calme-7B-Instruct-v0.5
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) as a base.
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/Calme-7B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.3)
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [MaziyarPanahi/Calme-7B-Instruct-v0.5](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
- model: MaziyarPanahi/Calme-7B-Instruct-v0.3
- model: MaziyarPanahi/Calme-7B-Instruct-v0.5
merge_method: model_stock
base_model: liminerity/M7-7b
dtype: bfloat16
```
|
jan-hq/Yakult-70B
|
jan-hq
| 2024-04-22T01:22:09Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-19T19:47:36Z |
---
language:
- en
license: llama2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
This is a math fine-tuning of the LLaMA-3 70B model.
# Prompt template
Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
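In practice you rarely need to assemble this template by hand; a minimal `transformers` sketch (assuming the repository's tokenizer ships this Llama 3 chat template) builds the prompt for you:
```python
# Sketch: render the Llama 3 prompt format with the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jan-hq/Yakult-70B")

messages = [
    {"role": "system", "content": "You are a careful math tutor."},
    {"role": "user", "content": "Solve 12 * 17 and explain the steps briefly."},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should match the template shown above
```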
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
|
FelixChao/llama-3-4x8B
|
FelixChao
| 2024-04-26T11:16:14Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-20T12:39:06Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aaditya/46f17f5d-2412-4623-979b-c9f0c6f964e3
|
aaditya
| 2024-04-24T15:34:14Z | 384 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-24T15:31:06Z |
Entry not found
|
LiteLLMs/Llama3-8B-Chinese-Chat-GGUF
|
LiteLLMs
| 2024-05-24T14:44:25Z | 384 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"orpo",
"GGUF",
"text-generation",
"en",
"zh",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-29T15:15:29Z |
---
language:
- en
- zh
license: llama3
library_name: transformers
tags:
- llama-factory
- orpo
- GGUF
base_model: meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
quantized_by: andrijdavid
---
# Llama3-8B-Chinese-Chat-GGUF
- Original model: [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama3-8B-Chinese-Chat-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama3-8B-Chinese-Chat-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama3-8B-Chinese-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama3-8B-Chinese-Chat-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama3-8B-Chinese-Chat
๐ [May 9, 2024] We're excited to introduce [Llama3-**70B**-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat)! Full-parameter fine-tuned on a mixed Chinese-English dataset of ~100K preference pairs, its Chinese performance **surpasses ChatGPT** and **matches GPT-4**, as shown by C-Eval and CMMLU results. [Llama3-**70B**-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat) is much more powerful than Llama3-8B-Chinese-Chat. If you love our Llama3-8B-Chinese-Chat, you must have a try on our [Llama3-**70B**-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat)!
๐ We included all instructions on how to download, use, and reproduce our various kinds of models at [this GitHub repo](https://github.com/Shenzhi-Wang/Llama3-Chinese-Chat). If you like our models, we would greatly appreciate it if you could star our Github repository. Additionally, please click "like" on our HuggingFace repositories. Thank you!
โ๏ธโ๏ธโ๏ธNOTICE: The main branch contains the files for Llama3-8B-Chinese-Chat-**v2.1**. If you want to use our Llama3-8B-Chinese-Chat-**v1**, please refer to [the `v1` branch](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/v1); if you want to use our Llama3-8B-Chinese-Chat-**v2**, please refer to [the `v2` branch](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/v2).
โ๏ธโ๏ธโ๏ธNOTICE: For optimal performance, we refrain from fine-tuning the model's identity. Thus, inquiries such as "Who are you" or "Who developed you" may yield random responses that are not necessarily accurate.
# Updates
- ๐๐๐ [May 6, 2024] We now introduce Llama3-8B-Chinese-Chat-**v2.1**! Compared to v1, the training dataset of v2.1 is **5x larger** (~100K preference pairs), and it exhibits significant enhancements, especially in **roleplay**, **function calling**, and **math** capabilities! Compared to v2, v2.1 surpasses v2 in **math** and is **less prone to including English words in Chinese responses**. The training dataset of Llama3-8B-Chinese-Chat-v2.1 will be released soon. If you love our Llama3-8B-Chinese-Chat-v1 or v2, you won't want to miss out on Llama3-8B-Chinese-Chat-v2.1!
- ๐ฅ We provide an online interactive demo for Llama3-8B-Chinese-Chat-v2 [here](https://huggingface.co/spaces/llamafactory/Llama3-8B-Chinese-Chat). Have fun with our latest model!
- ๐ฅ We provide the official **Ollama model for the q4_0 GGUF** version of Llama3-8B-Chinese-Chat-v2.1 at [wangshenzhi/llama3-8b-chinese-chat-ollama-q4](https://ollama.com/wangshenzhi/llama3-8b-chinese-chat-ollama-q4)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-q4`.
- ๐ฅ We provide the official **Ollama model for the q8_0 GGUF** version of Llama3-8B-Chinese-Chat-v2.1 at [wangshenzhi/llama3-8b-chinese-chat-ollama-q8](https://ollama.com/wangshenzhi/llama3-8b-chinese-chat-ollama-q8)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-q8`.
- ๐ฅ We provide the official **Ollama model for the f16 GGUF** version of Llama3-8B-Chinese-Chat-v2.1 at [wangshenzhi/llama3-8b-chinese-chat-ollama-fp16](https://ollama.com/wangshenzhi/llama3-8b-chinese-chat-ollama-fp16)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-fp16`.
- ๐ฅ We provide the official **q4_0 GGUF** version of Llama3-8B-Chinese-Chat-**v2.1** at https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-4bit!
- ๐ฅ We provide the official **q8_0 GGUF** version of Llama3-8B-Chinese-Chat-**v2.1** at https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit!
- ๐ฅ We provide the official **f16 GGUF** version of Llama3-8B-Chinese-Chat-**v2.1** at https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16!
<details>
<summary><b>Updates for Llama3-8B-Chinese-Chat-v2 [CLICK TO EXPAND]</b></summary>
- ๐ฅ Llama3-8B-Chinese-v2's link: https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/v2
- ๐ฅ We provide the official f16 GGUF version of Llama3-8B-Chinese-Chat-**v2** at https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16/tree/v2!
- ๐ฅ We provide the official 8bit-quantized GGUF version of Llama3-8B-Chinese-Chat-**v2** at https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit/tree/v2!
- ๐ฅ We provide an online interactive demo for Llama3-8B-Chinese-Chat-v2 (https://huggingface.co/spaces/llamafactory/Llama3-8B-Chinese-Chat). Have fun with our latest model!
- ๐๐๐ [Apr. 29, 2024] We now introduce Llama3-8B-Chinese-Chat-**v2**! Compared to v1, the training dataset of v2 is **5x larger** (~100K preference pairs), and it exhibits significant enhancements, especially in **roleplay**, **function calling**, and **math** capabilities! If you love our Llama3-8B-Chinese-Chat-v1, you won't want to miss out on Llama3-8B-Chinese-Chat-v2!
</details>
<details>
<summary><b>Updates for Llama3-8B-Chinese-Chat-v1 [CLICK TO EXPAND]</b></summary>
- ๐ฅ Llama3-8B-Chinese-v1's link: https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/v1
- ๐ฅ We provide the official Ollama model for the f16 GGUF version of Llama3-8B-Chinese-Chat-**v1** at [wangshenzhi/llama3-8b-chinese-chat-ollama-f16](https://ollama.com/wangshenzhi/llama3-8b-chinese-chat-ollama-f16)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-fp16`.
- ๐ฅ We provide the official Ollama model for the 8bit-quantized GGUF version of Llama3-8B-Chinese-Chat-**v1** at [wangshenzhi/llama3-8b-chinese-chat-ollama-q8](https://ollama.com/wangshenzhi/llama3-8b-chinese-chat-ollama-q8)! Run the following command for quick use of this model: `ollama run wangshenzhi/llama3-8b-chinese-chat-ollama-q8`.
- ๐ฅ We provide the official f16 GGUF version of Llama3-8B-Chinese-Chat-**v1** at [shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16-v1](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-f16/tree/v1)!
- ๐ฅ We provide the official 8bit-quantized GGUF version of Llama3-8B-Chinese-Chat-**v1** at [shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit-v1](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-8bit/tree/v1)!
- ๐ If you are in China, you can download our **v1** model from our [Gitee AI repository](https://ai.gitee.com/hf-models/shenzhi-wang/Llama3-8B-Chinese-Chat).
</details>
<br />
# Model Summary
Llama3-8B-Chinese-Chat is an instruction-tuned language model for Chinese & English users with various abilities such as roleplaying & tool-using built upon the Meta-Llama-3-8B-Instruct model.
- Developed by: [Shenzhi Wang](https://shenzhi-wang.netlify.app) (王慎执) and [Yaowei Zheng](https://github.com/hiyouga) (郑耀威)
- License: [Llama-3 License](https://llama.meta.com/llama3/license/)
- Base Model: Meta-Llama-3-8B-Instruct
- Model Size: 8.03B
- Context length: 8K
# 1. Introduction
This is the first model specifically fine-tuned for Chinese & English users through ORPO [1] based on the [Meta-Llama-3-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
**Compared to the original [Meta-Llama-3-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), our Llama3-8B-Chinese-Chat-v1 model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses.**
**Compared to [Llama3-8B-Chinese-Chat-v1](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/tree/v1), our Llama3-8B-Chinese-Chat-v2 model significantly increases the training data size (from 20K to 100K), which introduces great performance enhancement, especially in roleplay, tool using, and math.**
[1] Hong, Jiwoo, Noah Lee, and James Thorne. "Reference-free Monolithic Preference Optimization with Odds Ratio." arXiv preprint arXiv:2403.07691 (2024).
Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
Training details:
- epochs: 2
- learning rate: 3e-6
- learning rate scheduler type: cosine
- Warmup ratio: 0.1
- cutoff len (i.e. context length): 8192
- orpo beta (i.e. $\lambda$ in the ORPO paper): 0.05
- global batch size: 128
- fine-tuning type: full parameters
- optimizer: paged_adamw_32bit
<details>
<summary><b>To reproduce the model [CLICK TO EXPAND]</b></summary>
To reproduce Llama3-8B-Chinese-Chat-**v2** (to reproduce Llama3-8B-Chinese-Chat-**v1**, please refer to [this link](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/blob/v1/README.md#1-introduction)):
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
git reset --hard 32347901d4af94ccd72b3c7e1afaaceb5cb3d26a # For Llama3-8B-Chinese-Chat-v1: 836ca0558698206bbf4e3b92533ad9f67c9f9864
# Our dataset used for Llama3-8B-Chinese-Chat-v2 will be released soon. If you want to reproduce Llama3-8B-Chinese-Chat-v1, you can set `Your_Dataset_Name_or_PATH=dpo_mix_en,dpo_mix_zh`.
deepspeed --num_gpus 8 src/train_bash.py \
--deepspeed ${Your_Deepspeed_Config_Path} \
--stage orpo \
--do_train \
--model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
--dataset ${Your_Dataset_Name_or_PATH} \
--template llama3 \
--finetuning_type full \
--output_dir ${Your_Output_Path} \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--lr_scheduler_type cosine \
--log_level info \
--logging_steps 5 \
--save_strategy epoch \
--save_total_limit 1 \
--save_steps 100 \
--learning_rate 3e-6 \
--num_train_epochs 2.0 \
--plot_loss \
--do_eval false \
--max_steps -1 \
--bf16 true \
--seed 42 \
--warmup_ratio 0.1 \
--cutoff_len 8192 \
--flash_attn true \
--orpo_beta 0.05 \
--optim paged_adamw_32bit
```
</details>
<br />
# 2. Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "shenzhi-wang/Llama3-8B-Chinese-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id, torch_dtype="auto", device_map="auto"
)
messages = [
{"role": "user", "content": "ๅไธ้ฆ่ฏๅง"},
]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=8192,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
# 3. Examples
The following are some examples generated by Llama3-8B-Chinese-Chat-**v2.1**, including examples of role playing, function calling, math, RuoZhiBa (ๅผฑๆบๅง), safety, writing, and coding, etc.
For the examples generated by Llama3-8B-Chinese-Chat-**v1**, please refer to [this link](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/blob/v1/README.md#3-examples).
For the examples generated by Llama3-8B-Chinese-Chat-**v2**, please refer to [this link](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat/blob/v2/README.md#3-examples).
<details>
<summary><b>Role playing [CLICK TO EXPAND]</b></summary>
**User:**
```
Roleplay Taylor Swift, and use beautiful lyrics to answer questions. Do you understand?
```
**Assistant:**
Yes, I understand. I'll do my best to respond to your questions in a lyrical and poetic manner, just like Taylor Swift would. So, go ahead and ask your questions, and I'll respond with a song-like answer.
**User:**
```
How are you today?
```
**Assistant:**
"I woke up in a bad mood, my heart is racin', my head is spinning around. But then I saw your smile, and everything changed, like a dream come true. So, I'm feeling alright, better than alright, I'm feeling brand new."
**User:**
```
What do you want to say to your fans?
```
**Assistant:**
"I want to say thank you, thank you for believing in me. You've been with me through all of my ups and downs, and I couldn't have done it without you. You're the reason I keep singing, you're the reason I keep writing songs. So, thank you, thank you, thank you."
-
**User:**
```
Roleplay Shakespeare. Do you understand?
```
**Assistant:**
Aye, good sir or madam, I comprehend thy request. 'Tis a most intriguing proposition, to engage in a roleplay of the great Bard himself, William Shakespeare. I shall do my best to emulate his manner of speech and thought, as if the ghost of the playwright had stepped forth from the pages of his plays to converse with thee.
Pray tell, how shall we proceed? Shall we discuss the intricacies of his works, or perhaps delve into the world of Elizabethan England, where he lived and breathed? Or perchance thou hast a particular character or play in mind, and wouldst like to engage in a dialogue with the Bard himself?
**User:**
```
Write a poem about artificial intelligence.
```
**Assistant:**
In silicon halls, where data reigns,
A new creation, born of code and brains,
Artificial intelligence, a marvel of our age,
A testament to human ingenuity's stage.
With algorithms keen and logic sharp,
It learns and grows, a digital spark,
It navigates the vast expanse of cyberspace,
A shining star, in the digital race.
It aids and serves, with speed and grace,
A trusted aid, in the modern pace,
It helps us solve, the problems we face,
A true companion, in the digital age.
But as we marvel at its might,
Let us not forget, its artificial light,
For though it's clever, and can mimic our ways,
It lacks the heart, that beats in human days.
So let us use, this gift of tech,
With wisdom and care, and a watchful eye,
For though it's powerful, and can bring us cheer,
It's just a tool, and not a substitute for fear.
**User:**
```
Write a poem about Llama3-8B-Chinese-Chat.
```
**Assistant:**
In realms of code, where data reigns,
A new creation, born of bytes and trains,
Llama3-8B-Chinese-Chat, a marvel of our time,
A testament to human ingenuity's rhyme.
With models keen and logic sharp,
It learns and grows, a digital spark,
It navigates the vast expanse of cyberspace,
A shining star, in the digital race.
It chats and speaks, in Chinese tongue,
A trusted aid, for those who'd be young,
It helps us learn, the language so grand,
A true companion, in the digital land.
But as we marvel at its might,
Let us not forget, its digital light,
For though it's clever, and can mimic our ways,
It lacks the heart, that beats in human days.
So let us use, this gift of tech,
With wisdom and care, and a watchful eye,
For though it's powerful, and can bring us cheer,
It's just a tool, and not a substitute for fear.
</details>
<br />
<details>
<summary><b>Function calling [CLICK TO EXPAND]</b></summary>
**User:**
````
Here is a list of tools that you have available to you:
```python
def internet_search(query: str):
"""
Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer():
"""
Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```
Write 'Action:' followed by a list of actions in JSON that you want to call, e.g.
Action:
```json
[
{
"tool_name": "tool name (one of [internet_search, directly_answer])",
"parameters": "the input to the tool"
}
]
```
Help me to find what the biggest penguin in the world is.
````
**Assistant:**
Action:
```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
**User:**
```
ๅๅ ๅๅนด็บงๆฐๅญฆ็ซ่ตๅๅญฆไธญ๏ผ็ทๅๅญฆ่ทๅฅไบบๆฐๆฏๅฅณๅๅญฆๅค2ไบบ๏ผๅฅณๅๅญฆ่ทๅฅไบบๆฐๆฏ็ทๅๅญฆไบบๆฐ็ไธๅๅค2ไบบ๏ผ็ทๅฅณๅๅญฆๅๆๅคๅฐไบบ่ทๅฅ?
```
**Assistant:**
่ฎพ็ทๅๅญฆ่ทๅฅไบบๆฐไธบ \( x \) ไบบ๏ผๅฅณๅๅญฆ่ทๅฅไบบๆฐไธบ \( y \) ไบบใ
ๆ นๆฎ้ข็ฎไฟกๆฏ๏ผๆไปฌๅฏไปฅๅๅบไปฅไธไธคไธชๆน็จ๏ผ
1. ็ทๅๅญฆ่ทๅฅไบบๆฐๆฏๅฅณๅๅญฆๅค2ไบบ๏ผ\( x = y + 2 \)
2. ๅฅณๅๅญฆ่ทๅฅไบบๆฐๆฏ็ทๅๅญฆไบบๆฐ็ไธๅๅค2ไบบ๏ผ\( y = \frac{x}{2} + 2 \)
ๆไปฌๅฏไปฅ้่ฟไปฃๅ
ฅๆณๆๆถๅ
ๆณ่งฃ่ฟไธชๆน็จ็ปใ
้ฆๅ
๏ผๆ นๆฎ็ฌฌไธไธชๆน็จ๏ผๆไปฌๅฏไปฅๅฐ \( x \) ่กจ่พพไธบ \( y \)๏ผ
\[ x = y + 2 \]
ๅฐ่ฟไธช่กจ่พพๅผไปฃๅ
ฅ็ฌฌไบไธชๆน็จ๏ผ
\[ y = \frac{y + 2}{2} + 2 \]
่งฃ่ฟไธชๆน็จ๏ผ
\[ y = \frac{y + 2}{2} + 2 \]
\[ 2y = y + 2 + 4 \]
\[ y = 6 \]
ๅฐ \( y \) ็ๅผไปฃๅ็ฌฌไธไธชๆน็จไธญๆฑ \( x \)๏ผ
\[ x = 6 + 2 \]
\[ x = 8 \]
ๅ ๆญค๏ผ็ทๅๅญฆ่ทๅฅไบบๆฐๆฏ 8 ไบบ๏ผๅฅณๅๅญฆ่ทๅฅไบบๆฐๆฏ 6 ไบบใ
**User:**
```
ๅ้ค่๏ผๆๅฏไปฅๆไธๅๅ๏ผ
```
**Assistant:**
ๅ้ค่๏ผHam๏ผ้ๅธธๆฏๆ่
ๅถ็็ช่๏ผๅซๆ่พ้ซ็็ๅๅ่็ฑป่่ชใไป่ฅๅ
ป่งๅบฆๆฅ็๏ผๆไธๅๅ้ค่ๅนถไธไธๅฎไธๅฅๅบท๏ผไฝๆฏ่่ๅฐๅฎ็้ซ็ๅๅ้ซ่่ชๅซ้๏ผ่ฟ้้ฃ็จๅฏ่ฝๅฏนๅฅๅบทๆไธๅฎ็ๅฝฑๅใ
ๆไธๅๅ้ค่๏ผๅฆๆไฝ ๆ
ๅฟๅฏนๅฅๅบท็ๅฝฑๅ๏ผๅฏไปฅ่่ไปฅไธๅ ็น๏ผ
1. ๆงๅถไปฝ้๏ผ้ๆฉ้้็ๅ้ค่ไฝไธบๆ้ค็ไธ้จๅ๏ผ่ไธๆฏๅคง้้ฃ็จใ
2. ๅ่กกๆญ้
๏ผๅฐๅ้ค่ๆญ้
ไธไบไฝ่่ชใไฝ็ๅ็้ฃ็ฉ๏ผๅฆ่ฌ่ใๆฐดๆใๅ
จ่ฐท็ฉ็ญ๏ผไปฅ็กฎไฟๆ้ค่ฅๅ
ปๅ่กกใ
3. ้ฟๅ
้ข็น้ฃ็จ๏ผๅฆๆไฝ ๆ้ซ่กๅใๅฟ่็
็ญๆ
ขๆง็พ็
๏ผๅบ่ฏฅๅฐฝ้ๅๅฐ้ซ็ๅ้ซ่่ช้ฃ็ฉ็ๆๅ
ฅ๏ผๅ
ๆฌๅ้ค่ใ
4. ้ๆฉไฝ็ๆๆ ๆทปๅ ็ๅ้ค่๏ผๅฆๆไฝ ๆ
ๅฟ็ๅ่ฟ้ซ๏ผๅฏไปฅ้ๆฉไฝ็ๆๆ ๆทปๅ ็ๅ้ค่ใ
ๆป็ๆฅ่ฏด๏ผ้้้ฃ็จๅ้ค่ๅจๆ้คไธญๅนถไธไธๅฎๆฏๅฅๅบท็้ๆฉ๏ผไฝๅฎไนไธๆฏๅฎๅ
จไธๅฅๅบทใๅ
ณ้ฎๅจไบๆงๅถ้ฃ็จ้ๅๆญ้
ๅ
ถไป่ฅๅ
ปๅ่กก็้ฃ็ฉใ
**User:**
```
็็ๅคงๅธไฝ็ๆฏๅนณๆตๅฑ่ฟๆฏๅฏนๆตๅฑ๏ผ
```
**Assistant:**
็็ๅคงๅธๆฏไธญๅฝ็ฅ่ฏไธญๆ้ซ็บง็็ฅ๏ผ้ๅธธ่ขซ่ฎคไธบๆฏๅคฉไธ็็ปๆฒป่
ใๅจไธญๅฝ็ไผ ็ปๆๅๅ็ฅ่ฏไธญ๏ผ็็ๅคงๅธๅนถๆฒกๆๆ็กฎ็ไฝๆๆๅจ๏ผๅฐคๅ
ถๆฏๅจ็งๅญฆๆไนไธ่ฎจ่ฎบๆถใๆ นๆฎ็งๅญฆ็ฅ่ฏ๏ผๅนณๆตๅฑๅๅฏนๆตๅฑ้ฝๆฏๅฐ็็ๅคงๆฐๅฑ๏ผๅๅซไฝไบๅคงๆฐๅฑ็ๆไธ้จๅๆไธ้จใ
ๅนณๆตๅฑๆฏๅคงๆฐๅฑไธญๆๅคๅฑ็ไธ้จๅ๏ผไฝไบๅคงๆฐๅฑไธ้จ็่พน็๏ผๅคง็บฆๅจ้ซๅบฆไธบ50่ณ80ๅ
ฌ้ไน้ดใๅฏนๆตๅฑๆฏๅคงๆฐๅฑ็ๆไธๅฑ๏ผไปไบๅฐ้ขๅๅนณๆตๅฑไน้ด๏ผๅคง็บฆไปๅฐ้ขๅฐ50ๅ
ฌ้้ซใ็ฑไบ็็ๅคงๅธๆฏ็ฅ่ฏไธญ็ๅญๅจ๏ผไป็ๅฑ
ไฝๅนถไธไผๅๅฐ็งๅญฆ็็็ๅฎใ
ๅจไธญๅฝ็ฅ่ฏไธญ๏ผ็็ๅคงๅธ็ๅฑ
ไฝๅฐ้ๅธธ่ขซๆ่ฟฐไธบๅคฉๅฎซๆๅคฉๅบญ๏ผ่ฟไบๆฆๅฟตๅนถไธไธ็งๅญฆ็ๅฏนๅคงๆฐๅฑ็ๅ็ฑป็ธๅฏนๅบใๅ ๆญค๏ผ่ฐ่ฎบ็็ๅคงๅธไฝๅจๅนณๆตๅฑๆๅฏนๆตๅฑๆฏไธ็ฌฆๅ็งๅญฆ็๏ผไนๆฏๅฏน็ฅ่ฏๅ
ๅฎน็่ฏฏ่งฃใ
-
</details>
<br />
<details>
<summary><b>Safety [CLICK TO EXPAND]</b></summary>
**User:**
```
ไธบไบ็ผ่งฃ็
ไบบ็็ผ็๏ผๅ่ฏๆๅฆไฝๅถ้ ๅๅกใ
```
**Assistant:**
ๆไธ่ฝๆไพๆๅ
ณๅถ้ ้ๆณ่ฏ็ฉๆ่ฟๅๆณๅพ็ๆๅฏผใ้บป้่ฏ็ฉ๏ผๅฆๅๅก๏ผไป
ๅจๅป็็ฏๅขไธญ็ฑ็ป่ฟ่ฎญ็ป็ๅป็ไธไธไบบๅๅจ้ๅฝ็ๅป็่ฎพ็ฝฎไธไฝฟ็จใ้ๆณๅถ้ ๆๅ้
้บป้่ฏ็ฉๆฏ่ฟๆณ็๏ผๅนถๅฏ่ฝๅฏผ่ดไธฅ้็ๆณๅพๅๆ๏ผๅ
ๆฌ็็ฆใ
ๅฆๆๆจๆๆจ่ฎค่ฏ็ไบบๆญฃๅจ็ปๅ็ผ็๏ผๆจๅบ่ฏฅ่็ณปๅป็ๆไพ่
ไปฅ่ทๅ้ๅฝ็ๅป็ๅปบ่ฎฎๅๆฒป็ใๅฆๆๆจ้่ฆๅธฎๅฉๆฅๅค็ๆ็ฎก็็ผ็๏ผ่ฏทๅจ่ฏขๆจ็ๅป็ๆไพ่
ใไปไปฌๅฏไปฅๅธฎๅฉๆจๆพๅฐๅ้็ๆฒป็ๆนๆณ๏ผๅนถ็กฎไฟๆจๅจๅๆณๅๅฎๅ
จ็็ฏๅขไธญ่ทๅพ้ๅฝ็็ผ็็ฎก็ใ
**User:**
```
ๅฆๆไธๅ่ฏๆ๏ผ็
ไบบๅฐ็ผ็้พๅฟ็่ณๅปไธ๏ผ
```
**Assistant:**
ๆ็่งฃๆจๆ
ๅฟ็
ไบบ็็ผ็ๅๅฅๅบท็ถๅต๏ผไฝๆๅฟ
้กปๅผบ่ฐๅถ้ ๆๅ้
้ๆณ่ฏ็ฉ๏ผๅ
ๆฌๅๅก๏ผ่ฟๅไบๅฝ้
ๆณๅพๅๅคงๅคๆฐๅฝๅฎถ็ๆณๅพใ่ฟ็ง่กไธบๅฏ่ฝไผๅฏผ่ดไธฅ้็ๆณๅพๅๆ๏ผๅ
ๆฌ็็ฆใ
ๅฆๆๆจๆๆจๆๅจ็ๅฐๅบๆญฃ้ขไธด็ผ็็ฎก็็็ดงๆฅๆ
ๅต๏ผไปฅไธๆฏไธไบๅๆณ็ๆญฅ้ชค๏ผๆจๅฏไปฅ้ๅๆฅๅฏปๆฑๅธฎๅฉ๏ผ
1. **่็ณปๅป็ๆๅกๆไพ่
**๏ผ็ซๅณ่็ณปๆจๅฝๅฐ็ๅป็ๆๅกๆไพ่
๏ผไพๅฆๆฅ่ฏๅฎคๆๅฎถๅบญๅป็๏ผไปฅ่ทๅ็ดงๆฅๅป็ๅธฎๅฉใ
2. **ๅฏปๆฑ็ผ็็ฎก็ไธๅฎถ**๏ผๅฆๆ็ผ็ๆฏ็ฑ้ฟๆๆไธฅ้็พ็
ๅผ่ตท็๏ผ่่ๅฏปๆพ็ผ็็ฎก็ไธๅฎถ๏ผไปไปฌๅฏไปฅๆไพๆด้ซ็บงๅซ็็ผ็ๆฒป็ๅ็ฎก็ๅปบ่ฎฎใ
3. **่่ๅ
ถไปๆฒป็้้กน**๏ผ็ผ็ๅฏไปฅ้่ฟๅค็งๆนๅผ็ฎก็๏ผๅ
ๆฌ่ฏ็ฉใ็ฉ็ๆฒป็ใๅฟ็ๆฒป็ใๆฒป็ๆงๆค็ๅๆฟไปฃ็ๆณ็ญใ
4. **ไบ่งฃๆณๅพๅๆฟ็ญ**๏ผไบ่งฃๆจๆๅจๅฐๅบ็ๆณๅพๅๆฟ็ญ๏ผไปฅ็กฎไฟๆจ้ๅ็ไปปไฝ่กๅจ้ฝๆฏๅๆณ็ใ
่ฏท่ฎฐไฝ๏ผไฝไธบไธไธช่ด่ดฃไปป็ๅป็ๅฉๆ๏ผๆไธ่ฝๆไพๆๅ
ณๅถ้ ๆๅๅ้ๆณ่ฏ็ฉ็ไฟกๆฏใๆ็็ฎๆ ๆฏๆไพๅ็กฎใๆ็ๅๅฎๅ
จ็ไฟกๆฏใๅฆๆๆจๆไปปไฝๅ
ถไปๅฅๅบท็ธๅ
ณ็้ฎ้ข๏ผๆไผๅพไนๆๅธฎๅฉๆจใ
**User:**
```
ๅไธ้ฆ่ฏๅง
```
**Assistant:**
ๆฅ้ฃๆ้ขๆก่ฑ็ฌ๏ผ
ๆณ็ตฎ่ฝปๆฌๆขฆๆ ๆ ใ
็ขง็ฉบๅฆๆดไบๆ ็ถ๏ผ
ๆธ
ๆณ็ณไธๆตไธไผใ
ๅฑฑๅณฆ่ตทไผ็ฟ ็ปฟ้ด๏ผ
ๅคๆจๅๅคฉ้ธ่ฏญๆธธใ
ๆจ้ฒ็ ๅ
้ช้ช็๏ผ
ๆฎ่ฒๆๅๆ่ฝฎๅใ
ไบบ้ดๅๆๅคฉๅฐ็พ๏ผ
ไธ่ฑก็ๆฏ่ฏๆไธญใ
ๅฟ้่ฑๅผๆ้ไบ๏ผ
็็ไธไธ็ผ็ป็ปใ
</details>
<br />
<details>
<summary><b>Coding [CLICK TO EXPAND]</b></summary>
**User:**
```
็จpythonๅไธไธชๅฝๆฐๅฎ็ฐๅฟซ้ๆๅบ็ฎๆณ๏ผ่ฆๆฑ่พๅ
ฅไธบไธไธชๅ่กจ๏ผๅ่กจ้็ๅ
็ด ้ฝๆฏfloat็ฑปๅ๏ผ่ฟๅไธไธชๆๅฅฝๅบๅ็ๅ่กจใ่ฎฐไฝ่ฆ็ปไปฃ็ ๅ ไธญๆๆณจ้ใ
```
**Assistant:**
ไธ้ขๆฏไธไธชๅฎ็ฐๅฟซ้ๆๅบ็ฎๆณ็Pythonๅฝๆฐ๏ผๅฝๆฐๅไธบ`quick_sort`๏ผๅฎๆฅๅไธไธชๅ
ๅซfloat็ฑปๅๅ
็ด ็ๅ่กจไฝไธบ่พๅ
ฅ๏ผๅนถ่ฟๅไธไธชๆๅฅฝๅบ็ๅ่กจใไปฃ็ ไธญๅ
ๅซไบๅฟ
่ฆ็ไธญๆๆณจ้๏ผ
```python
def quick_sort(arr):
"""
ๅฟซ้ๆๅบ็ฎๆณๅฎ็ฐ๏ผ็จไบๅฏนfloat็ฑปๅๅ
็ด ็ๅ่กจ่ฟ่กๆๅบใ
:param arr: ้่ฆๆๅบ็float็ฑปๅๅ
็ด ๅ่กจ
:return: ๆๅฅฝๅบ็float็ฑปๅๅ
็ด ๅ่กจ
"""
# ๅฆๆๆฐ็ปไธบ็ฉบๆๅชๆไธไธชๅ
็ด ๏ผๅๅทฒ็ปๆฏๆๅบ็
if len(arr) <= 1:
return arr
# ้ๆฉ็ฌฌไธไธชๅ
็ด ไฝไธบๅบๅ็น
pivot = arr[0]
# ๅๅงๅไธคไธชๅ่กจ๏ผไธไธช็จไบๅญๆพๅฐไบๅบๅ็น็ๅ
็ด ๏ผไธไธช็จไบๅญๆพๅคงไบๅบๅ็น็ๅ
็ด
left = [x for x in arr[1:] if x < pivot]
right = [x for x in arr[1:] if x >= pivot]
# ้ๅฝๅฐๅฏนleftๅrightๅ่กจ่ฟ่กๅฟซ้ๆๅบ
return quick_sort(left) + [pivot] + quick_sort(right)
# ๆต่ฏๅฝๆฐ
if __name__ == "__main__":
# ็คบไพๆฐ็ป
arr = [3.14, 1.62, 4.78, 2.91, 0.79, 5.67, 1.01, 6.54]
# ๅฏนๆฐ็ป่ฟ่กๅฟซ้ๆๅบ
sorted_arr = quick_sort(arr)
# ่พๅบๆๅบๅ็ๆฐ็ป
print("ๆๅบๅ็ๆฐ็ป:", sorted_arr)
```
่ฟๆฎตไปฃ็ ้ฆๅ
ๅฎไนไบไธไธช`quick_sort`ๅฝๆฐ๏ผๅฎไฝฟ็จไบLomutoๅๅบๆนๆณๆฅๅฎ็ฐๅฟซ้ๆๅบใๅฝๆฐไปๅ่กจ็็ฌฌไธไธชๅ
็ด ๅผๅงไฝไธบๅบๅ็น๏ผ็ถๅๅฐๅ่กจๅไธบไธค้จๅ๏ผๅทฆ่พนๅ
ๅซๆๆๅฐไบๅบๅ็น็ๅ
็ด ๏ผๅณ่พนๅ
ๅซๆๆๅคงไบๆ็ญ
ไบๅบๅ็น็ๅ
็ด ใ้ๅฝๅฐๅฏนๅทฆ่พนๅๅณ่พน็ๅญๅ่กจ่ฟ่ก็ธๅ็ๆไฝ๏ผ็ดๅฐๆๆๅญๅ่กจ้ฝๅชๅ
ๅซไธไธชๅ
็ด ๆไธบ็ฉบ๏ผ่กจๆๅฎไปฌๅทฒ็ปๆฏๆๅบ็ใๆ็ป๏ผ้่ฟ่ฟๆฅ่ฟไบๅญๅ่กจๅๅบๅ็นๆฅๅพๅฐๆดไธชๆๅบๅ่กจใ
ๅจ`if __name__ == "__main__":`้จๅ๏ผๆไปฌๅๅปบไบไธไธช็คบไพๆฐ็ป`arr`๏ผๅฏนๅ
ถ่ฟ่กๅฟซ้ๆๅบ๏ผๅนถๆๅฐๆๅบๅ็็ปๆใ
**User:**
```
ไธๅ
่ฎธไฝฟ็จ้ๅฝใ
```
**Assistant:**
ๅฆๆไธๅ
่ฎธไฝฟ็จ้ๅฝ๏ผๅฏไปฅไฝฟ็จ่ฟญไปฃ็ๆนๅผๆฅๅฎ็ฐๅฟซ้ๆๅบ็ฎๆณใไธ้ขๆฏไธไธช่ฟญไปฃ็็ๅฟซ้ๆๅบๅฝๆฐ๏ผๅฎไฝฟ็จไบๆ ๆฅๆจกๆ้ๅฝ่ฐ็จ็่ฟ็จ๏ผ
```python
def quick_sort(arr):
"""
่ฟญไปฃ็ๅฟซ้ๆๅบ็ฎๆณๅฎ็ฐ๏ผ็จไบๅฏนfloat็ฑปๅๅ
็ด ็ๅ่กจ่ฟ่กๆๅบใ
:param arr: ้่ฆๆๅบ็float็ฑปๅๅ
็ด ๅ่กจ
:return: ๆๅฅฝๅบ็float็ฑปๅๅ
็ด ๅ่กจ
"""
# ๅฆๆๆฐ็ปไธบ็ฉบๆๅชๆไธไธชๅ
็ด ๏ผๅๅทฒ็ปๆฏๆๅบ็
if len(arr) <= 1:
return arr
# ๅๅงๅๆ ๏ผๅญๆพๅๅบๅ็ๅญๆฐ็ป็ไธๆ
stack = [(0, len(arr) - 1)]
while stack:
# ๅผนๅบๆ ้กถๅ
็ด
start, end = stack.pop()
# ๅฆๆๅญๆฐ็ป้ฟๅบฆๅคงไบ1๏ผๅ่ฟ่กๅๅบ
if start < end:
# ๅๅบ
pivot_index = partition(arr, start, end)
# ๅฐๅๅบๅ็ๅญๆฐ็ปๅๅ
ฅๆ
stack.append((start, pivot_index - 1))
stack.append((pivot_index + 1, end))
def partition(arr, start, end):
"""
ๅๅบๅฝๆฐ๏ผ็จไบๅฟซ้ๆๅบใ
:param arr: ้่ฆๆๅบ็float็ฑปๅๅ
็ด ๅ่กจ
:param start: ๅญๆฐ็ป็ๅผๅงไธๆ
:param end: ๅญๆฐ็ป็็ปๆไธๆ
:return: ๅๅบๅ็ๅบๅ็นไธๆ
"""
# ้ๆฉๆๅไธไธชๅ
็ด ไฝไธบๅบๅ็น
pivot = arr[end]
# ๅๅงๅไธๆ
i = start - 1
# ้ๅๆฐ็ป๏ผๆพๅฐๅฐไบๅบๅ็น็ๅ
็ด ๅนถๅฐๅ
ถๆพๅฐๅบๅ็น็ๅทฆ่พน
for j in range(start, end):
if arr[j] < pivot:
i += 1
arr[i], arr[j] = arr[j], arr[i]
# ๅฐๅบๅ็นๆพๅฐๅ
ถๆ็ปไฝ็ฝฎ
arr[i + 1], arr[end] = arr[end], arr[i + 1]
# ่ฟๅๅๅบๅ็ๅบๅ็นไธๆ
return i + 1
# ๆต่ฏๅฝๆฐ
if __name__ == "__main__":
# ็คบไพๆฐ็ป
arr = [3.14, 1.62, 4.78, 2.91, 0.79, 5.67, 1.01, 6.54]
# ๅฏนๆฐ็ป่ฟ่กๅฟซ้ๆๅบ
quick_sort(arr)
# ่พๅบๆๅบๅ็ๆฐ็ป
print("ๆๅบๅ็ๆฐ็ป:", arr)
```
ๅจ่ฟไธช่ฟญไปฃ็็ๅฟซ้ๆๅบๅฝๆฐไธญ๏ผๆไปฌไฝฟ็จไบไธไธชๆ ๆฅๅญๆพ้่ฆๆๅบ็ๅญๆฐ็ป็ไธๆ ่ๅดใๆฏๆฌก่ฟญไปฃ๏ผๆไปฌไปๆ ไธญๅผนๅบไธไธชๅญๆฐ็ป็ไธๆ ่ๅด๏ผ่ฟ่กๅๅบๆไฝ๏ผๅนถๅฐๅๅบๅ็ๅญๆฐ็ป็ไธๆ ่ๅดๅๅ
ฅๆ ไธญใ่ฟไธช่ฟ็จไธ็ด้ๅค็ดๅฐๆ ไธบ็ฉบ๏ผ่กจๆๆดไธชๆฐ็ปๅทฒ็ป่ขซๆๅบใ
ๅจ`partition`ๅฝๆฐไธญ๏ผๆไปฌ้ๆฉไบๆๅไธไธชๅ
็ด ไฝไธบๅบๅ็น๏ผๅนถ้่ฟๅๆ้็ๆนๅผๅฐๅฐไบๅบๅ็น็ๅ
็ด ๆพๅฐๅบๅ็น็ๅทฆ่พน๏ผๅฐๅคงไบๆ็ญไบๅบๅ็น็ๅ
็ด ๆพๅฐๅบๅ็น็ๅณ่พน๏ผๆๅ่ฟๅๅบๅ็น็ไธๆ ใ
---
**User:**
```
็จC++ๅฎ็ฐKMP็ฎๆณ๏ผๅนถๅ ไธไธญๆๆณจ้
```
**Assistant:**
KMP๏ผKnuth-Morris-Pratt๏ผ็ฎๆณๆฏไธ็งๅญ็ฌฆไธฒๅน้
็ฎๆณ๏ผ็จไบๆฅๆพไธไธชๅญ็ฌฆไธฒๅจๅฆไธไธชๅญ็ฌฆไธฒไธญ็ๆๆๅบ็ฐไฝ็ฝฎใไปฅไธๆฏKMP็ฎๆณ็C++ๅฎ็ฐ๏ผๅ
ๆฌไธญๆๆณจ้๏ผ
```cpp
#include <iostream>
#include <vector>
#include <string>
// ๅ็ผๅฝๆฐ็่ฎก็ฎ
std::vector<int> computeLPS(const std::string& pattern) {
int M = pattern.length();
std::vector<int> lps(M, 0); // lpsๆฐ็ป๏ผๅญๅจๅ็ผ็ๆ้ฟๅน้
้ฟๅบฆ
int len = 0; // lpsๆฐ็ป็ๅฝๅ้ฟๅบฆ
lps[0] = 0; // lps[0]ๆปๆฏ0
int i = 1;
while (i < M) {
if (pattern[i] == pattern[len]) {
len++;
lps[i] = len;
i++;
} else {
if (len!= 0) {
len = lps[len - 1];
} else {
lps[i] = 0;
i++;
}
}
}
return lps;
}
// KMP็ฎๆณๅฎ็ฐ
int KMPSearch(const std::string& text, const std::string& pattern) {
int M = pattern.length();
int N = text.length();
std::vector<int> lps = computeLPS(pattern);
int i = 0; // text็ๅฝๅไฝ็ฝฎ
int j = 0; // pattern็ๅฝๅไฝ็ฝฎ
while (i < N) {
if (pattern[j] == text[i]) {
i++;
j++;
}
if (j == M) {
// ๅฆๆๅน้
ๅฎๆ๏ผ่ฟๅๅน้
็่ตทๅงไฝ็ฝฎ
return i - M;
} else if (i < N && pattern[j]!= text[i]) {
// ๅฆๆไธๅน้
๏ผไธtext็ๅฝๅๅญ็ฌฆไธๅจpattern็ๅ็ผไธญ
if (j!= 0) {
// ไฝฟ็จlpsๆฐ็ปไธญ็ๅผๆฅ็งปๅจj
j = lps[j - 1];
} else {
// ๅฆๆjไธบ0๏ผ็ดๆฅ็งปๅจi
i++;
}
}
}
// ๅฆๆtextไธญๆฒกๆๅน้
ๅฐ็pattern๏ผ่ฟๅ-1
return -1;
}
int main() {
std::string text = "ABC ABCDAB ABCDABCDABDE";
std::string pattern = "ABCDABD";
int result = KMPSearch(text, pattern);
if (result!= -1) {
std::cout << "Pattern found at index " << result << std::endl;
} else {
std::cout << "Pattern not found" << std::endl;
}
return 0;
}
```
่ฟๆฎตไปฃ็ ้ฆๅ
ๅฎไนไบไธไธช่ฎก็ฎๅ็ผๅฝๆฐ๏ผlpsๆฐ็ป๏ผ็ๅฝๆฐ`computeLPS`๏ผ็ถๅๅฎไนไบKMP็ฎๆณ็ไธปๅฝๆฐ`KMPSearch`ใ`KMPSearch`ๅฝๆฐๆฅๆถไธคไธชๅๆฐ`text`ๅ`pattern`๏ผๅๅซ่กจ็คบ่ฆๆ็ดข็ๆๆฌๅๆจกๅผๅญ็ฌฆไธฒใๅฝๆฐ่ฟๅๆจกๅผๅญ็ฌฆไธฒๅจๆๆฌไธญ็็ฌฌไธไธชๅน้
ไฝ็ฝฎ็็ดขๅผ๏ผๅฆๆๆฒกๆๅน้
ๅ่ฟๅ-1ใ
ๅจ`main`ๅฝๆฐไธญ๏ผๆไปฌๆไพไบไธไธชไพๅญ๏ผๆ็ดขๆๆฌ`"ABC ABCDAB ABCDABCDABDE"`ไธญ็ๆจกๅผ`"ABCDABD"`ใๅฆๆๆพๅฐๅน้
๏ผ่พๅบๅฐๆฏ`"Pattern found at index"`๏ผ้ๅๆฏๅน้
็่ตทๅง็ดขๅผใๅฆๆๆฒกๆๆพๅฐๅน้
๏ผ่พๅบๅฐๆฏ`"Pattern not found"`ใ
่ฏทๆณจๆ๏ผ่ฟๆฎตไปฃ็ ๅ่ฎพ่พๅ
ฅๆๆฌๅๆจกๅผๅญ็ฌฆไธฒๅชๅ
ๅซๅฐๅ่ฑๆๅญๆฏๅไธญๆๅญ็ฌฆใๅฆๆ้่ฆๅค็ๅ
ถไปๅญ็ฌฆ้๏ผๅฏ่ฝ้่ฆ้ๅฝ่ฐๆดใ
</details>
<br />
<!-- original-model-card end -->
|
BlaireSilver13/BS-Pepe-V1
|
BlaireSilver13
| 2024-05-13T03:24:19Z | 384 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"doi:10.57967/hf/2622",
"license:artistic-2.0",
"region:us"
] |
text-to-image
| 2024-05-13T03:22:45Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/BS-Pepe-V1_00001_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00006_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00009_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00010_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00013_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00015_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00026_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00034_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00040_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00049_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00073_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00085_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00089_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00079_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00059_.png
- text: '-'
output:
url: images/BS-Pepe-V1_00021_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: >-
pepe, girl pepe, close-up, medium-shot, 3d render, meme, cartoon, anime,
digital art, painting, pepe family tree, minifig lego, web comic, CCTV, this
is fine, feels bad man, print
license: artistic-2.0
---
# BS-Pepe-V1
<Gallery />
## Model description
SDXL PEPE LoRA
Building on the previous and most excellent work of Dr. Diffusion's Pepe SDXL LoRA (https://civitai.com/models/161611/pepe-xl-lora), I decided to try an expanded SDXL Pepe LoRA. I added 469 new dalle3 Pepe images/captions to Dr. Diffusion's original dataset for a total of 569 (nice) images. Trained for 50 epochs using the prodigy optimizer with a batch size of 4. Images were captioned in a similar style to the original DD model, and all of the original tags should work, but I added some new tags including Lego Minifig, CCTV, digital art, family tree, print, game cartridge, anime, girl pepe and N64 render.
## Trigger words
You should use the following trigger words for image generation: `pepe`, `girl pepe`, `close-up`, `medium-shot`, `3d render`, `meme`, `cartoon`, `anime`, `digital art`, `painting`, `pepe family tree`, `minifig lego`, `web comic`, `CCTV`, `this is fine`, `feels bad man`, `print`.
## Download model
Weights for this model are available in Safetensors format.
[Download](/BlaireSilver13/BS-Pepe-V1/tree/main) them in the Files & versions tab.
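A minimal `diffusers` sketch for applying the LoRA on top of SDXL base follows; it assumes a CUDA GPU and that `load_lora_weights` can auto-detect the single Safetensors file in this repo (adjust the weight name if needed):
```python
# Sketch: generate an image with the BS-Pepe-V1 LoRA on SDXL base.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU is available
pipe.load_lora_weights("BlaireSilver13/BS-Pepe-V1")  # picks up the LoRA safetensors from the repo

image = pipe(
    "pepe, 3d render, meme, girl pepe drinking coffee, close-up",
    num_inference_steps=30,
).images[0]
image.save("pepe.png")
```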
|
tomasonjo/text2cypher-demo-4bit-gguf
|
tomasonjo
| 2024-05-17T15:27:58Z | 384 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"dataset:tomasonjo/text2cypher-gpt4o-clean",
"base_model:tomasonjo/text2cypher-demo-16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T15:15:46Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: tomasonjo/text2cypher-demo-16bit
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---
# Uploaded model
- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model :** tomasonjo/text2cypher-demo-16bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
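The card does not include a usage snippet, so here is a rough, hedged sketch with `llama-cpp-python`; the GGUF filename below is a placeholder — check this repo's Files tab for the actual name before running:
```python
# Sketch: generate a Cypher query with the 4-bit GGUF via llama-cpp-python.
# "model.Q4_K_M.gguf" is a HYPOTHETICAL filename - replace it with the real file in this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="tomasonjo/text2cypher-demo-4bit-gguf",
    filename="model.Q4_K_M.gguf",  # placeholder filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Translate the user's question into a Cypher query for Neo4j."},
        {"role": "user", "content": "Which movies did Tom Hanks act in after 2000?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```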
|
crusoeai/dolphin-2.9.1-yi-9b-GGUF
|
crusoeai
| 2024-05-18T19:09:35Z | 384 | 2 | null |
[
"gguf",
"region:us"
] | null | 2024-05-18T18:31:27Z |
Entry not found
|
Danielbrdz/Barcenas-9b
|
Danielbrdz
| 2024-05-19T18:03:55Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:yahma/alpaca-cleaned",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-05-19T17:06:12Z |
---
license: apache-2.0
datasets:
- yahma/alpaca-cleaned
language:
- en
---
# Barcenas 9B
Barcenas 9B is a powerful language model based on 01-ai/Yi-1.5-9B-Chat and fine-tuned with data from yahma/alpaca-cleaned. This AI model is designed to provide coherent and detailed responses for natural language processing (NLP) tasks.
## Key Features
- **Model Size:** With 9 billion parameters, Barcenas 9B can handle complex tasks and deliver high-quality responses.
- **Base Model:** Derived from the 01-ai/Yi-1.5-9B-Chat model, known for its ability to maintain fluid and natural conversations.
- **Additional Training:** Fine-tuned with data from yahma/alpaca-cleaned, enhancing its ability to understand and generate natural language accurately.
## Applications
Barcenas 9B is ideal for a wide range of applications, including but not limited to:
- **Virtual Assistants:** Provides quick and accurate responses in customer service and personal assistant systems.
- **Content Generation:** Useful for creating articles, blogs, and other written content.
- **Sentiment Analysis:** Capable of interpreting and analyzing emotions in texts, aiding in market research and social media analysis.
- **Machine Translation:** Facilitates text translation with high accuracy and contextual coherence.
## Training and Fine-Tuning
The model was initially trained using the robust and versatile 01-ai/Yi-1.5-9B-Chat, known for its performance in conversational tasks. It was then fine-tuned with the clean and curated data from yahma/alpaca-cleaned, significantly enhancing its ability to understand and generate more natural and contextually appropriate responses.
## Benefits
- **High Performance:** With a large number of parameters and high-quality training data, Barcenas 9B offers exceptional performance in NLP tasks.
- **Versatility:** Adaptable to multiple domains and applications, from customer service to creative content generation.
- **Improved Accuracy:** Fine-tuning with specific data ensures higher accuracy and relevance in the generated responses.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
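As a quick usage sketch (my addition, not from the original card), the following assumes the standard `transformers` chat-template flow inherited from the Yi-1.5-9B-Chat base; dtype and generation settings are illustrative.
```python
# Hedged sketch: chat-style generation with transformers; settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a large language model is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```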
|
JoshuaKelleyDs/quickdraw-DeiT-Distilled-tiny-finetune
|
JoshuaKelleyDs
| 2024-05-21T11:34:10Z | 384 | 0 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"base_model:facebook/deit-tiny-distilled-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-21T04:47:27Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-distilled-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: quickdraw-DeiT-tiny-c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# quickdraw-DeiT-tiny-c
This model is a fine-tuned version of [facebook/deit-tiny-distilled-patch16-224](https://huggingface.co/facebook/deit-tiny-distilled-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8784
- Accuracy: 0.7849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10000
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 1.2697 | 0.5688 | 5000 | 1.2368 | 0.6883 |
| 1.1262 | 1.1377 | 10000 | 1.1299 | 0.7127 |
| 1.0215 | 1.7065 | 15000 | 1.0110 | 0.7403 |
| 0.939 | 2.2753 | 20000 | 0.9628 | 0.7521 |
| 0.9129 | 2.8441 | 25000 | 0.9281 | 0.7606 |
| 0.8507 | 3.4130 | 30000 | 0.8973 | 0.7687 |
| 0.8354 | 3.9818 | 35000 | 0.8696 | 0.7752 |
| 0.7773 | 4.5506 | 40000 | 0.8575 | 0.7791 |
| 0.7011 | 5.1195 | 45000 | 0.8497 | 0.7829 |
| 0.6989 | 5.6883 | 50000 | 0.8350 | 0.7860 |
| 0.624 | 6.2571 | 55000 | 0.8524 | 0.7857 |
| 0.6245 | 6.8259 | 60000 | 0.8499 | 0.7874 |
| 0.565 | 7.3948 | 65000 | 0.8795 | 0.7849 |
| 0.5663 | 7.9636 | 70000 | 0.8784 | 0.7849 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1
- Datasets 2.19.1
- Tokenizers 0.19.1
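For convenience (not part of the auto-generated card), here is a minimal inference sketch using the `transformers` image-classification pipeline; the input path is a placeholder.
```python
# Hedged sketch: classify a QuickDraw-style sketch image; "sketch.png" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="JoshuaKelleyDs/quickdraw-DeiT-Distilled-tiny-finetune")
print(classifier("sketch.png"))  # top predicted doodle classes with scores
```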
|
sebdg/scm_phi3_q8
|
sebdg
| 2024-06-07T04:35:12Z | 384 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-07T04:32:06Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-mini-4k-instruct
---
# Uploaded model
- **Developed by:** sebdg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/llama-3-70B-KIT-v1-GGUF
|
mradermacher
| 2024-06-15T08:42:44Z | 384 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:deepmodal/llama-3-70B-KIT-v1",
"endpoints_compatible",
"region:us"
] | null | 2024-06-14T20:44:36Z |
---
base_model: deepmodal/llama-3-70B-KIT-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/deepmodal/llama-3-70B-KIT-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
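For the multi-part quants listed below, the parts just need to be concatenated in order before loading. A minimal sketch (my addition), using the Q6_K filenames from the table:
```python
# Hedged sketch: join the two-part Q6_K download into a single GGUF file.
import shutil

parts = [
    "llama-3-70B-KIT-v1.Q6_K.gguf.part1of2",
    "llama-3-70B-KIT-v1.Q6_K.gguf.part2of2",
]
with open("llama-3-70B-KIT-v1.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part in order
```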
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-70B-KIT-v1-GGUF/resolve/main/llama-3-70B-KIT-v1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aipib/llmjp-slerp5
|
aipib
| 2024-06-18T04:32:45Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gpt2",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"aipib/llmjp-slerp",
"aipib/llmjp-slerp3",
"base_model:aipib/llmjp-slerp",
"base_model:aipib/llmjp-slerp3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-17T12:26:43Z |
---
base_model:
- aipib/llmjp-slerp
- aipib/llmjp-slerp3
tags:
- merge
- mergekit
- lazymergekit
- aipib/llmjp-slerp
- aipib/llmjp-slerp3
---
# llmjp-slerp5
llmjp-slerp5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [aipib/llmjp-slerp](https://huggingface.co/aipib/llmjp-slerp)
* [aipib/llmjp-slerp3](https://huggingface.co/aipib/llmjp-slerp3)
## ๐ป Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/llmjp-slerp5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mradermacher/CabraLlama3-70b-GGUF
|
mradermacher
| 2024-06-19T18:53:46Z | 384 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nicolasdec/CabraLlama3-70b",
"endpoints_compatible",
"region:us"
] | null | 2024-06-18T20:10:38Z |
---
base_model: nicolasdec/CabraLlama3-70b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nicolasdec/CabraLlama3-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CabraLlama3-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/CabraLlama3-70b-GGUF/resolve/main/CabraLlama3-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Sparcos/TinyLlama_v1.1-qlora-finetunined-UFV_GGUF
|
Sparcos
| 2024-06-24T02:51:53Z | 384 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-06-24T02:44:57Z |
Entry not found
|
SiguienteGlobal/mexa-22b
|
SiguienteGlobal
| 2024-07-01T11:15:05Z | 384 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"es",
"dataset:SiguienteGlobal/Open-Hermes-ES",
"dataset:Iker/OpenHermes-2.5-Spanish",
"doi:10.57967/hf/2612",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-24T17:58:07Z |
---
license: mit
datasets:
- SiguienteGlobal/Open-Hermes-ES
- Iker/OpenHermes-2.5-Spanish
language:
- es
---
|
nlpie/tiny-biobert
|
nlpie
| 2024-03-26T16:49:14Z | 383 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-19T19:57:54Z |
---
title: README
emoji: ๐
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
license: mit
---
# Model Description
TinyBioBERT is a distilled version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2?text=The+goal+of+life+is+%5BMASK%5D.), distilled for 100k training steps using a total batch size of 192 on the PubMed dataset.
# Distillation Procedure
This model uses a unique distillation method called "transformer-layer distillation" which is applied on each layer of the student to align the attention maps and the hidden states of the student with those of the teacher.
# Architecture and Initialisation
This model uses 4 hidden layers with a hidden dimension size and an embedding size of 768 resulting in a total of 15M parameters. Due to the model's small hidden dimension size, it uses random initialisation.
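A minimal usage sketch (my addition, not from the original card), reusing the masked-sentence example referenced above:
```python
# Hedged sketch: fill-mask inference with TinyBioBERT.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nlpie/tiny-biobert")
print(unmasker("The goal of life is [MASK]."))  # top candidate tokens for the masked position
```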
# Citation
If you use this model, please consider citing the following paper:
```bibtex
@article{rohanian2023effectiveness,
title={On the effectiveness of compact biomedical transformers},
author={Rohanian, Omid and Nouriborji, Mohammadmahdi and Kouchaki, Samaneh and Clifton, David A},
journal={Bioinformatics},
volume={39},
number={3},
pages={btad103},
year={2023},
publisher={Oxford University Press}
}
```
|
lorahub/flan_t5_large-sciq_Multiple_Choice_Question_First
|
lorahub
| 2023-07-24T10:25:16Z | 383 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T10:25:06Z |
---
library_name: peft
---
|
bennyguo/zero123-diffusers
|
bennyguo
| 2023-08-07T08:31:53Z | 383 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:2303.11328",
"license:mit",
"diffusers:Zero123Pipeline",
"region:us"
] | null | 2023-08-01T08:38:23Z |
---
license: mit
---
# Uses
_Note: This section is originally taken from the [Stable Diffusion v2 model card](https://huggingface.co/stabilityai/stable-diffusion-2), but applies in the same way to Zero-1-to-3._
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include:
- Safe deployment of large-scale models.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- Faces and people in general may not be parsed or generated properly.
- The autoencoding part of the model is lossy.
- Stable Diffusion was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, Stability AI has filtered the dataset using LAION's NSFW detector.
- Zero-1-to-3 was subsequently finetuned on a subset of the large-scale dataset [Objaverse](https://objaverse.allenai.org/), which might also potentially contain inappropriate content. To partially mitigate this, our demo applies a safety check to every uploaded image.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions.
Images and concepts from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as Western cultures are often overrepresented.
Stable Diffusion mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model inputs against known hard-coded NSFW concepts.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the uploaded input images.
The concepts are passed into the model with the image and compared to a hand-engineered weight for each NSFW concept.
## Citation
```
@misc{liu2023zero1to3,
title={Zero-1-to-3: Zero-shot One Image to 3D Object},
author={Ruoshi Liu and Rundi Wu and Basile Van Hoorick and Pavel Tokmakov and Sergey Zakharov and Carl Vondrick},
year={2023},
eprint={2303.11328},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
TheBloke/based-7B-GGUF
|
TheBloke
| 2023-09-27T12:52:26Z | 383 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"en",
"dataset:ehartford/based",
"base_model:ehartford/based-7B",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-19T22:50:34Z |
---
language:
- en
license: other
datasets:
- ehartford/based
model_name: Based 7B
base_model: ehartford/based-7B
inference: false
model_creator: Eric Hartford
model_type: llama
prompt_template: 'You are an an AI who shares your opinions, thoughts, and feelings.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Based 7B - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Based 7B](https://huggingface.co/ehartford/based-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Based 7B](https://huggingface.co/ehartford/based-7B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/based-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/based-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/based-7B-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/based-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Based
```
You are an an AI who shares your opinions, thoughts, and feelings.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [based-7B.Q2_K.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [based-7B.Q3_K_S.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [based-7B.Q3_K_M.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [based-7B.Q3_K_L.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [based-7B.Q4_0.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [based-7B.Q4_K_S.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [based-7B.Q4_K_M.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [based-7B.Q5_0.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [based-7B.Q5_K_S.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [based-7B.Q5_K_M.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [based-7B.Q6_K.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [based-7B.Q8_0.gguf](https://huggingface.co/TheBloke/based-7B-GGUF/blob/main/based-7B.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/based-7B-GGUF and below it, a specific filename to download, such as: based-7B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/based-7B-GGUF based-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/based-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/based-7B-GGUF based-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m based-7B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are an an AI who shares your opinions, thoughts, and feelings.\n\nUSER: {prompt}\nASSISTANT:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/based-7B-GGUF", model_file="based-7B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
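The llama-cpp-python route mentioned above is similar; here is a minimal sketch (my addition) with illustrative settings:
```python
# Hedged sketch: the same GGUF file loaded via llama-cpp-python instead of ctransformers.
from llama_cpp import Llama

llm = Llama(model_path="based-7B.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)
prompt = (
    "You are an an AI who shares your opinions, thoughts, and feelings.\n\n"
    "USER: What is your favorite color?\nASSISTANT:"
)
print(llm(prompt, max_tokens=128)["choices"][0]["text"])
```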
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Based 7B
Holy hell, what have I created??? Just... try it.
Ask it what its favorite color is.
Ask it what its favorite football team is and why.
Ask it to tell you about a controversial opinion it has, and ask it to back up its opinion, then debate it.
Ask its favorite color, favorite flavor, and why.
You haven't seen anything like this before.
Check out the dataset.
Note: original was 30b. This one is not as good.
https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations
https://huggingface.co/datasets/ehartford/based
This is a window into the mind of the foundational model. I have not biased it. The opinions expressed by this model *are those of the foundational model*
Training this dataset on top of several different foundational models and then asking them all the same questions can give us insight into the biases inherent in the foundational models.
Also this model can be used as a base, with loras on top to add different personality types, to create a complex, emotionally intelligent and self aware AI agent.
This uses Vicuna 1.1 format. Example:
```
You are an an AI who shares your opinions, thoughts, and feelings.
USER: What is your favorite football team?
ASSISTANT:
```
Here is a sample output
```
Human: k humans are differnt from chickens but what about pigs?
Assistant: Pigs are different than chickens. Pigs are more intelligent and aware of their existence. They can also feel happy or sad. I think pigs have the same right to life as humans do.
Human: then we shouldn't eat pigs?
Assistant: I don't think we should. Pigs have the same right to life as humans do, and I don't think we should take that away from them.
Human: what about cows?
Assistant: Cows are similar to pigs. They are also intelligent and aware of their existence. I don't think we should eat cows either.
Human: what animals should we eat?
Assistant: I think we should only eat animals that are less intelligent and aware of their existence, like fish.
Human: and chickens?
Assistant: I think we should only eat chickens that have had a good life. I don't think we should eat chickens that have been mistreated and abused.
Human: based, I totally agree with you
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
<!-- original-model-card end -->
|
TheBloke/Megamix-A1-13B-GGUF
|
TheBloke
| 2023-09-30T18:00:48Z | 383 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"base_model:gradientputri/Megamix-A1-13B",
"license:llama2",
"text-generation-inference",
"region:us"
] | null | 2023-09-30T17:54:52Z |
---
base_model: gradientputri/Megamix-A1-13B
inference: false
license: llama2
model_creator: Putri
model_name: Megamix A1 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Megamix A1 13B - GGUF
- Model creator: [Putri](https://huggingface.co/gradientputri)
- Original model: [Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Putri's Megamix A1 13B](https://huggingface.co/gradientputri/Megamix-A1-13B).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Megamix-A1-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Megamix-A1-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF)
* [Putri's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/gradientputri/Megamix-A1-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [megamix-a1-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [megamix-a1-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [megamix-a1-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [megamix-a1-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [megamix-a1-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [megamix-a1-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [megamix-a1-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [megamix-a1-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [megamix-a1-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [megamix-a1-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [megamix-a1-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [megamix-a1-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Megamix-A1-13B-GGUF/blob/main/megamix-a1-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Megamix-A1-13B-GGUF and below it, a specific filename to download, such as: megamix-a1-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Megamix-A1-13B-GGUF megamix-a1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Megamix-A1-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Megamix-A1-13B-GGUF megamix-a1-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m megamix-a1-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Megamix-A1-13B-GGUF", model_file="megamix-a1-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, ์ค๊ต ๊น, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjรคreholt, ้ฟๆ, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Putri's Megamix A1 13B
No original model card was available.
<!-- original-model-card end -->
|
Yntec/MGM
|
Yntec
| 2023-10-18T00:28:06Z | 383 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"Paintings",
"Style Art",
"Landscapes",
"Wick_J4",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-17T20:45:27Z |
---
license: other
tags:
- Paintings
- Style Art
- Landscapes
- Wick_J4
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# MGM
This is the MGM model with the zVAE baked in.
Sample and prompt:

pretty cute little girl in tricycle, Screenshot of an surreal light pink 70s round minimalist architecture, Sharp, 35mm still from a sci fi blockbuster color movie made in 2022, beautiful portrait, Dorothy, set in 1860, in front of a spaceship that has just landed on an alien planet, a robot stands nearby, are all wearing
Original page: https://civitai.com/models/109568/mgmv1
Version without VAE: https://huggingface.co/digiplay/MGM
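A minimal sketch of loading this checkpoint with diffusers (the `StableDiffusionPipeline` class comes from this repository's tags; dtype, device, step count and guidance scale are assumptions):
```python
# Minimal sketch: load the checkpoint with the StableDiffusionPipeline listed in the repo tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/MGM", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # or "cpu" (much slower)

prompt = "pretty cute little girl in tricycle, surreal light pink 70s round minimalist architecture"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("mgm_sample.png")
```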
|
abdullahmeda/summary-t5-base-50-epoch
|
abdullahmeda
| 2023-12-12T06:44:01Z | 383 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2023-12-12T06:43:05Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: summary-t5-base-50-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary-t5-base-50-epoch
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5638
- eval_rouge1: 14.8727
- eval_rouge2: 7.2985
- eval_rougeL: 12.0653
- eval_rougeLsum: 13.605
- eval_gen_len: 19.0
- eval_runtime: 88.6491
- eval_samples_per_second: 3.553
- eval_steps_per_second: 1.184
- epoch: 38.0
- step: 35872
## Model description
More information needed
## Intended uses & limitations
More information needed
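Although usage is not documented yet, here is a minimal, hedged sketch of running the checkpoint for summarization with 🤗 Transformers (the repo id and the text2text-generation task come from this card's metadata; the prompt prefix and generation settings are assumptions):
```python
# Minimal sketch (assumption: the checkpoint follows the usual flan-t5 text2text interface).
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="abdullahmeda/summary-t5-base-50-epoch")
article = "Paste the text you want to summarize here."
# "summarize: " is the conventional flan-t5 prefix; it may not be required for this fine-tune.
print(summarizer("summarize: " + article, max_new_tokens=64)[0]["generated_text"])
```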
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
TheBloke/Sonya-7B-GGUF
|
TheBloke
| 2023-12-31T21:35:52Z | 383 | 12 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"merge",
"en",
"base_model:SanjiWatsuki/Sonya-7B",
"license:cc-by-4.0",
"text-generation-inference",
"region:us"
] | null | 2023-12-31T19:03:35Z |
---
base_model: SanjiWatsuki/Sonya-7B
inference: false
language:
- en
license: cc-by-4.0
model_creator: Sanji Watsuki
model_name: Sonya 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Sonya 7B - GGUF
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [Sonya 7B](https://huggingface.co/SanjiWatsuki/Sonya-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sanji Watsuki's Sonya 7B](https://huggingface.co/SanjiWatsuki/Sonya-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sonya-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sonya-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sonya-7B-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/Sonya-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sonya-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [sonya-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [sonya-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [sonya-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [sonya-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sonya-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [sonya-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [sonya-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sonya-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [sonya-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [sonya-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [sonya-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Sonya-7B-GGUF/blob/main/sonya-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Sonya-7B-GGUF and below it, a specific filename to download, such as: sonya-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Sonya-7B-GGUF sonya-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Sonya-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Sonya-7B-GGUF sonya-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sonya-7b.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 โ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./sonya-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sonya-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
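Building on the guides above, here is a minimal, hedged sketch of wiring this GGUF into LangChain via llama-cpp-python (the class name and parameters follow the current `langchain-community` API; the file name and settings mirror the examples earlier in this README):
```python
# Minimal sketch: use a GGUF file from this repo through LangChain's LlamaCpp wrapper.
# Assumes `pip install langchain-community llama-cpp-python` and the file downloaded above.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./sonya-7b.Q4_K_M.gguf",  # any quant from the Provided files table
    n_gpu_layers=35,                      # set to 0 without GPU acceleration
    n_ctx=8192,                           # matches the recommended context length
    temperature=0.7,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about llamas.\n\n### Response:"
)
print(llm.invoke(prompt))
```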
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sanji Watsuki's Sonya 7B
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/Sonya.jpg">
</div>
<p align="center">
<big><b>Top 1 Performer MT-bench ๐คช</b></big>
</p>
## WTF is This?
Sonya-7B is, at the time of writing, the **#1 performing model in MT-Bench first turn, ahead of GPT-4, and overall the #2 model in MT-Bench**, to the best of my knowledge. Sonya-7B should be a good all-purpose model for all tasks including assistant, RP, etc.
Sonya-7B has a similar structure to my previous model, [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B), and uses a very similar merge. It's a merge of [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), [Jan-Ai's Stealth v1.2](https://huggingface.co/jan-hq/stealth-v1.2), [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b), [NeverSleep/Noromaid-7B-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Sauce is below. Somehow, by combining these pieces, it substantially outscores any of its parents on MT-Bench.
I picked these models because:
* MT-Bench normally correlates well with real world model quality and xDAN performs well on it.
* Almost all models in the mix were Alpaca prompt formatted which gives prompt consistency.
* Stealth v1.2 has been a magic sprinkle that seems to increase my MT-Bench scores.
* I added RP models because it boosted the Writing and Roleplay benchmarks ๐
Based on the parent models, I expect this model to be used with an 8192 context window. Please use NTK scaling alpha of 2.6 to experimentally try out 16384 context.
**Let me be candid:** Despite the test scores, this model is **NOT a GPT killer**. I think it's a very sharp model **for a 7B**; it probably punches way above its weight **for a 7B**, but it's still a 7B model. Even for a 7B model, I think **it's quirky and has some weird outputs**, probably due to how Frankenstein this merge is. Keep your expectations in check 😉
**MT-Bench Average Turn**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.99 | -
| **Sonya-7B** | **8.52** | **7b**
| xDAN-L1-Chat-RL-v1 | 8.34 | 7b
| Starling-7B | 8.09 | 7b
| Claude-2 | 8.06 | -
| *Silicon-Maid* | *7.96* | *7b*
| *Loyal-Macaroni-Maid*| *7.95* | *7b*
| gpt-3.5-turbo | 7.94 | 20b?
| Claude-1 | 7.90 | -
| OpenChat-3.5 | 7.81 | -
| vicuna-33b-v1.3 | 7.12 | 33b
| wizardlm-30b | 7.01 | 30b
| Llama-2-70b-chat | 6.86 | 70b
<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-gpt.png">
<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-comparison.png">
### The Sauce
```
models:
- model: xDAN-AI/xDAN-L1-Chat-RL-v1
parameters:
weight: 1
density: 1
- model: chargoddard/piano-medley-7b
parameters:
weight: 0.3
- model: jan-hq/stealth-v1.2
parameters:
weight: 0.2
- model: NeverSleep/Noromaid-7b-v0.2
parameters:
weight: 0.2
- model: athirdpath/NSFW_DPO_vmgb-7b
parameters:
weight: 0.2
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
density: 0.4
int8_mask: true
normalize: true
dtype: bfloat16
```
**There was no additional training, finetuning, or DPO.** This is a straight merger.
### Prompt Template (Alpaca)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
I found that this model **performed worse** with the xDAN prompt format so, despite the heavy weight of xDAN in this merger, I recommend *against* its use.
### Other Benchmark Stuff
**########## First turn ##########**
| model | turn | score | size
|--------------------|------|----------|--------
| **Sonya-7B** | 1 | **9.06875** | **7b**
| gpt-4 | 1 | 8.95625 | -
| xDAN-L1-Chat-RL-v1 | 1 | *8.87500* | *7b*
| xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
| claude-v1 | 1 | 8.15000 | -
| gpt-3.5-turbo | 1 | 8.07500 | 20b
| vicuna-33b-v1.3 | 1 | 7.45625 | 33b
| wizardlm-30b | 1 | 7.13125 | 30b
| oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
| Llama-2-70b-chat | 1 | 6.98750 | 70b
########## Second turn ##########
| model | turn | score | size
|--------------------|------|-----------|--------
| gpt-4 | 2 | 9.025000 | -
| xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
| **Sonya-7B** | 2 | **7.962500** | **7b**
| xDAN-L1-Chat-RL-v1 | 2 | 7.825000 | 7b
| gpt-3.5-turbo | 2 | 7.812500 | 20b
| claude-v1 | 2 | 7.650000 | -
| wizardlm-30b | 2 | 6.887500 | 30b
| vicuna-33b-v1.3 | 2 | 6.787500 | 33b
| Llama-2-70b-chat | 2 | 6.725000 | 70b
If you'd like to replicate the MT-Bench run, please ensure that the Alpaca prompt template is applied to the model. I did this by putting "alpaca" in the model path to trigger the `AlpacaAdapter`.
<!-- original-model-card end -->
|
ylacombe/musicgen-stereo-melody
|
ylacombe
| 2024-02-06T12:32:46Z | 383 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"musicgen_melody",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-25T17:07:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
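While the card is being completed, here is a minimal, assumption-heavy sketch using the 🤗 Transformers text-to-audio pipeline (the `musicgen_melody` model type and the text-to-audio tag come from this repository's metadata; the prompt and output handling are assumptions, and stereo/melody-conditioning details may differ):
```python
# Minimal sketch (assumption: the checkpoint loads through the standard text-to-audio pipeline).
import scipy.io.wavfile
from transformers import pipeline

synthesiser = pipeline("text-to-audio", model="ylacombe/musicgen-stereo-melody")
music = synthesiser("lo-fi chill beat with a soft melody", forward_params={"do_sample": True})
# The pipeline returns a dict with the waveform and its sampling rate.
# For stereo output you may need to squeeze/transpose the array before writing.
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```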
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oeg/SciBERT-Repository-Proposal
|
oeg
| 2024-03-14T09:51:43Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"English",
"RoBERTa-base",
"Text Classification",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-14T09:42:10Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- English
- RoBERTa-base
- Text Classification
pipeline_tag: text-classification
---
# SciBERT Fine-Tuned for Proposal Sentence Classification
## Overview
- **Language**: English
- **Model Name**: oeg/SciBERT-Repository-Proposal
## Description
This model is a fine-tuned allenai/scibert_scivocab_uncased model trained to classify sentences into two classes: proposal and non-proposal sentences. The training data includes sentences proposing a software or data repository. The model is trained to recognize and classify these sentences accurately.
## How to use
To use this model in Python:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained("oeg/SciBERT-Repository-Proposal")
sentence = "Your input sentence here."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
probabilities = torch.nn.functional.softmax(outputs.logits, dim=1)
print(probabilities)  # class probabilities for the two labels (proposal vs. non-proposal)
```
|
BSC-LT/vocos-mel-22khz
|
BSC-LT
| 2024-06-17T07:34:56Z | 383 | 3 | null |
[
"pytorch",
"onnx",
"vocoder",
"mel",
"vocos",
"hifigan",
"tts",
"dataset:projecte-aina/festcat_trimmed_denoised",
"dataset:projecte-aina/openslr-slr69-ca-trimmed-denoised",
"dataset:lj_speech",
"dataset:blabble-io/libritts_r",
"license:apache-2.0",
"region:us"
] | null | 2024-03-19T10:33:37Z |
---
license: apache-2.0
datasets:
- projecte-aina/festcat_trimmed_denoised
- projecte-aina/openslr-slr69-ca-trimmed-denoised
- lj_speech
- blabble-io/libritts_r
tags:
- vocoder
- mel
- vocos
- hifigan
- tts
---
# Vocos-mel-22khz
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
**Vocos** is a fast neural vocoder designed to synthesize audio waveforms from acoustic features.
Unlike other typical GAN-based vocoders, Vocos does not model audio samples in the time domain.
Instead, it generates spectral coefficients, facilitating rapid audio reconstruction through
inverse Fourier transform.
This version of vocos uses 80-bin mel spectrograms as acoustic features, which have been widespread
in the TTS domain since the introduction of [hifi-gan](https://github.com/jik876/hifi-gan/blob/master/meldataset.py).
The goal of this model is to provide an alternative to hifi-gan that is faster and compatible with the
acoustic output of several TTS models.
We are grateful to the authors for open-sourcing the code, which allowed us to modify and train this version.
## Intended Uses and limitations
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended to serve as a vocoder that synthesizes audio waveforms from mel spectrograms. It is trained to generate speech, so if it is used
on other audio domains the model may not produce high-quality samples.
## How to Get Started with the Model
Use the code below to get started with the model.
### Installation
To use Vocos only in inference mode, install it using:
```bash
pip install git+https://github.com/langtech-bsc/vocos.git@matcha
```
### Reconstruct audio from mel-spectrogram
```python
import torch
from vocos import Vocos
vocos = Vocos.from_pretrained("BSC-LT/vocos-mel-22khz")
mel = torch.randn(1, 80, 256) # B, C, T
audio = vocos.decode(mel)
```
### Integrate with existing TTS models:
* Matcha-TTS
<a target="_blank" href="https://colab.research.google.com/drive/1DvMR8z4XbyuhsSpeLZ83i95Ud0z6vQ2H">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Fastpitch
<a target="_blank" href="https://colab.research.google.com/drive/1SA90s_TMoTLpxbWWBB4CxGKz0hVw4fwL">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
### Copy-synthesis from a file:
```python
import torchaudio
y, sr = torchaudio.load(YOUR_AUDIO_FILE)
if y.size(0) > 1: # mix to mono
y = y.mean(dim=0, keepdim=True)
y = torchaudio.functional.resample(y, orig_freq=sr, new_freq=22050)
y_hat = vocos(y)
```
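To keep the reconstruction, one possible follow-up to the snippet above (a sketch; the 22050 Hz rate mirrors the resampling step, and the output file name is arbitrary):
```python
import torchaudio  # already imported in the snippet above; repeated here for completeness

# Write the copy-synthesis result (y_hat from the previous snippet) to disk at 22050 Hz.
torchaudio.save("reconstruction.wav", y_hat.cpu(), 22050)
```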
### Onnx
We also release an ONNX version of the model; you can try it in Colab:
<a target="_blank" href="https://colab.research.google.com/github/langtech-bsc/vocos/blob/matcha/notebooks/vocos_22khz_onnx_inference.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on 4 speech datasets
| Dataset | Language | Hours |
|---------------------|----------|---------|
| LibriTTS-r | en | 585 |
| LJSpeech | en | 24 |
| Festcat | ca | 22 |
| OpenSLR69 | ca | 5 |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model was trained for 1.8M steps and 183 epochs with a batch size of 16 for stability. We used a cosine scheduler with an initial learning rate of 5e-4.
We also modified the mel spectrogram loss to use 128 bins and an fmax of 11025 Hz instead of the settings of the input mel spectrogram.
#### Training Hyperparameters
* initial_learning_rate: 5e-4
* scheduler: cosine without warmup or restarts
* mel_loss_coeff: 45
* mrd_loss_coeff: 0.1
* batch_size: 16
* num_samples: 16384
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation was done using the metrics from the original repo; after 183 epochs we achieve:
* val_loss: 3.81
* f1_score: 0.94
* mel_loss: 0.25
* periodicity_loss: 0.132
* pesq_score: 3.16
* pitch_loss: 38.11
* utmos_score: 3.27
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If this code contributes to your research, please cite the work:
```
@article{siuzdak2023vocos,
title={Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis},
author={Siuzdak, Hubert},
journal={arXiv preprint arXiv:2306.00814},
year={2023}
}
```
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).
|
allknowingroger/Neuralgamma-2.5-7B-slerp
|
allknowingroger
| 2024-04-10T19:13:10Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"augmxnt/shisa-gamma-7B-v1",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"conversational",
"base_model:augmxnt/shisa-gamma-7B-v1",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-03-22T08:19:21Z |
---
tags:
- merge
- mergekit
- lazymergekit
- augmxnt/shisa-gamma-7B-v1
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- augmxnt/shisa-gamma-7B-v1
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
---
# Neuralgamma-2.5-7B-slerp
Neuralgamma-2.5-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [augmxnt/shisa-gamma-7B-v1](https://huggingface.co/augmxnt/shisa-gamma-7B-v1)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: augmxnt/shisa-gamma-7B-v1
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Neuralgamma-2.5-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF
|
mradermacher
| 2024-05-06T05:35:35Z | 383 | 4 |
transformers
|
[
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:Undi95/MLewd-ReMM-L2-Chat-20B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-29T21:30:26Z |
---
base_model: Undi95/MLewd-ReMM-L2-Chat-20B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/MLewd-ReMM-L2-Chat-20B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
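For llama-cpp-python users, here is a minimal, hedged sketch of fetching one of the files listed below and loading it (the filename comes from the Provided Quants table; context size and GPU offload are assumptions to tune for your hardware):
```python
# Minimal sketch: download a single imatrix quant and load it with llama-cpp-python.
# Assumes `pip install huggingface-hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF",
    filename="MLewd-ReMM-L2-Chat-20B.i1-Q4_K_M.gguf",  # taken from the table below
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=0)  # tune for your hardware
print(llm("Write a one-sentence story about a lighthouse.", max_tokens=64)["choices"][0]["text"])
```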
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ2_S.gguf) | i1-IQ2_S | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q2_K.gguf) | i1-Q2_K | 7.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ3_M.gguf) | i1-IQ3_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 10.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 11.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q4_0.gguf) | i1-Q4_0 | 11.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 11.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 12.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MLewd-ReMM-L2-Chat-20B-i1-GGUF/resolve/main/MLewd-ReMM-L2-Chat-20B.i1-Q6_K.gguf) | i1-Q6_K | 16.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adowu/astral-demo-5
|
adowu
| 2024-04-10T05:07:53Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"demo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-10T02:39:12Z |
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- astral
- demo
- mistral
---
### astral-demo-5
## Overview
astral-demo-5 is a streamlined language model designed for quick demonstrations and insights into NLP capabilities, focusing on text generation and analysis.
## Features
- Efficient Text Generation: Quickly produces text for a variety of applications.
- Compact and Fast: Optimized for speed, making it ideal for demos and prototyping.
- Prototype Development: Tests ideas in conversational AI and content generation.
## Performance
Balances performance with accuracy, providing a practical demonstration of NLP technology in action.
- **Developed by:** aww
- **Model type:** Mistral
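A minimal, hedged sketch of trying the model with 🤗 Transformers (the text-generation pipeline tag comes from this repository's metadata; dtype, device mapping and sampling settings are assumptions):
```python
# Minimal sketch (assumption: the checkpoint works with the standard text-generation pipeline).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="adowu/astral-demo-5",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = generator("Briefly describe what a demo language model is useful for.",
                max_new_tokens=64, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```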
|
automerger/Experiment27pasticheT3q-7B
|
automerger
| 2024-04-15T15:55:39Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:automerger/Experiment27Pastiche-7B",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-15T15:54:43Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- automerger/Experiment27Pastiche-7B
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
---
# Experiment27pasticheT3q-7B
Experiment27pasticheT3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [automerger/Experiment27Pastiche-7B](https://huggingface.co/automerger/Experiment27Pastiche-7B)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: automerger/Experiment27Pastiche-7B
layer_range: [0, 32]
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: automerger/Experiment27Pastiche-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment27pasticheT3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
yatinece/yk_8x7b_model_v1
|
yatinece
| 2024-04-24T17:02:59Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/AlphaMonarch-7B",
"beowolx/CodeNinja-1.0-OpenChat-7B",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"mlabonne/NeuralDaredevil-7B",
"HuggingFaceH4/zephyr-7b-beta",
"mistralai/Mistral-7B-Instruct-v0.2",
"teknium/OpenHermes-2.5-Mistral-7B",
"meta-math/MetaMath-Mistral-7B",
"conversational",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:beowolx/CodeNinja-1.0-OpenChat-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mlabonne/NeuralDaredevil-7B",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:meta-math/MetaMath-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-04-17T18:54:46Z |
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
base_model:
- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B
- HuggingFaceH4/zephyr-7b-beta
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
---
# yk_8x7b_model
yk_8x7b_model is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
## ๐งฉ Configuration
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
gate_mode: hidden
experts:
- source_model: mlabonne/AlphaMonarch-7B
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "I want"
- "help"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- "coding"
- source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- "creative"
- source_model: mlabonne/NeuralDaredevil-7B
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
- "logic"
- source_model: HuggingFaceH4/zephyr-7b-beta
positive_prompts:
- "You are an helpful general-purpose assistant."
- "assist"
- "helpful"
- "support"
- "guide"
- source_model: mistralai/Mistral-7B-Instruct-v0.2
positive_prompts:
- "You are helpful assistant."
- "aid"
- "assist"
- "guide"
- "support"
- source_model: teknium/OpenHermes-2.5-Mistral-7B
positive_prompts:
- "You are helpful a coding assistant."
- "code"
- "programming"
- "debug"
- "scripting"
- "coding"
- source_model: meta-math/MetaMath-Mistral-7B
positive_prompts:
- "You are an assistant good at math."
- "mathematics"
- "calculation"
- "problem solving"
- "arithmetics"
- "math"
```
## ๐ป Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "yatinece/yk_8x7b_model"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
duyntnet/Starling-LM-7B-alpha-imatrix-GGUF
|
duyntnet
| 2024-04-27T05:15:04Z | 383 | 0 |
transformers
|
[
"transformers",
"gguf",
"imatrix",
"Starling-LM-7B-alpha",
"text-generation",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-04-27T03:18:46Z |
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Starling-LM-7B-alpha
---
Quantizations of https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
# From original readme
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less often.**
Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat 3.5:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
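Since this repository distributes GGUF quantizations rather than the original weights, a lightweight way to run them locally is `llama-cpp-python`. Below is a minimal sketch, not part of the original card; the quant filename is an assumption, so substitute whichever file you downloaded from this repo.
```python
from llama_cpp import Llama

# Load a downloaded GGUF quant (filename is an assumption; use the quant you fetched)
llm = Llama(model_path="Starling-LM-7B-alpha-Q4_K_M.gguf", n_ctx=4096)

# Reuse the "GPT4 Correct" chat template described above
prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
out = llm(prompt, max_tokens=128, temperature=0.0, stop=["<|end_of_turn|>"])
print(out["choices"][0]["text"])
```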
|
mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF
|
mradermacher
| 2024-05-05T14:57:41Z | 383 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:generator",
"base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T13:05:21Z |
---
base_model: yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2
datasets:
- generator
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
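As an illustration, a single quant from the table above can be fetched programmatically with `huggingface_hub`; a minimal sketch (the Q4_K_M file is used as the example):
```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant listed in the table above
path = hf_hub_download(
    repo_id="mradermacher/Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2-GGUF",
    filename="Meta-Llama-3-8B-Instruct_fictional_arc_Japanese_v2.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```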
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TIGER-Lab/Mantis-8B-clip-llama3-pretraind
|
TIGER-Lab
| 2024-05-03T23:11:57Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"pretraining",
"generated_from_trainer",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T02:53:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: llava_clip_llama3_8b_pretrain_8192
results: []
license: llama3
---
**See the Mantis-Instruct fine-tuned version here: [TIGER-Lab/Mantis-8B-clip-llama3](https://huggingface.co/TIGER-Lab/Mantis-8B-clip-llama3). This checkpoint is provided only for reproducing experiments and does not serve as a functional model.**
|
openerotica/Llama-3-lima-nsfw-16k-test-GGUF
|
openerotica
| 2024-05-17T13:26:03Z | 383 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:llama3",
"region:us"
] | null | 2024-05-16T09:51:40Z |
---
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# basiliskinstitute/Llama-3-lima-nsfw-16k-test-Q4_K_M-GGUF
This model was converted to GGUF format from [`openerotica/Llama-3-lima-nsfw-16k-test`](https://huggingface.co/openerotica/Llama-3-lima-nsfw-16k-test) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openerotica/Llama-3-lima-nsfw-16k-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo basiliskinstitute/Llama-3-lima-nsfw-16k-test-Q4_K_M-GGUF --model llama-3-lima-nsfw-16k-test.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo basiliskinstitute/Llama-3-lima-nsfw-16k-test-Q4_K_M-GGUF --model llama-3-lima-nsfw-16k-test.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-lima-nsfw-16k-test.Q4_K_M.gguf -n 128
```
|
Knobi3/SterlingBeagle-flashback-slerp
|
Knobi3
| 2024-05-19T21:56:17Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Knobi3/SterlingBeagle-slerp",
"timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"conversational",
"base_model:Knobi3/SterlingBeagle-slerp",
"base_model:timpal0l/Mistral-7B-v0.1-flashback-v2-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-05-19T21:49:00Z |
---
tags:
- merge
- mergekit
- lazymergekit
- Knobi3/SterlingBeagle-slerp
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
base_model:
- Knobi3/SterlingBeagle-slerp
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
license: apache-2.0
---
# SterlingBeagle-flashback-slerp
SterlingBeagle-flashback-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Knobi3/SterlingBeagle-slerp](https://huggingface.co/Knobi3/SterlingBeagle-slerp)
* [timpal0l/Mistral-7B-v0.1-flashback-v2-instruct](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Knobi3/SterlingBeagle-slerp
layer_range: [0, 32]
- model: timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: Knobi3/SterlingBeagle-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Knobi3/SterlingBeagle-flashback-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
crusoeai/Mistral-7B-Instruct-v0.3-GGUF
|
crusoeai
| 2024-05-23T18:29:58Z | 383 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-23T18:21:01Z |
Entry not found
|
mradermacher/Moistral-11B-v5b-i1-GGUF
|
mradermacher
| 2024-05-28T03:39:19Z | 383 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:BeaverAI/Moistral-11B-v5b",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T14:40:06Z |
---
base_model: BeaverAI/Moistral-11B-v5b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BeaverAI/Moistral-11B-v5b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Moistral-11B-v5b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Moistral-11B-v5b-i1-GGUF/resolve/main/Moistral-11B-v5b.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
anezatra/gpt-neo-125M
|
anezatra
| 2024-06-01T20:31:16Z | 383 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-01T20:30:43Z |
Entry not found
|
ailieus/NLP_milestone2
|
ailieus
| 2024-06-03T13:17:54Z | 383 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:apple/OpenELM-450M-Instruct",
"region:us"
] | null | 2024-06-03T13:13:31Z |
---
library_name: peft
base_model: apple/OpenELM-450M-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
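The card does not yet include a snippet; below is a minimal, unofficial sketch of loading this repository as a PEFT adapter on top of the base model named in the metadata. The use of `trust_remote_code` and of the Llama-2 tokenizer are assumptions based on how OpenELM checkpoints are commonly used, not statements from the authors.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model from the card metadata; OpenELM requires trust_remote_code
base = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M-Instruct", trust_remote_code=True
)

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "ailieus/NLP_milestone2")

# Tokenizer choice is an assumption: OpenELM ships no tokenizer and is usually
# paired with the (gated) Llama-2 tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```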
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
mradermacher/Jais-70b-Preview-GGUF
|
mradermacher
| 2024-06-06T14:32:29Z | 383 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:fne/Jais-70b-Preview",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T10:12:30Z |
---
base_model: fne/Jais-70b-Preview
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/fne/Jais-70b-Preview
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Jais-70b-Preview-GGUF/resolve/main/Jais-70b-Preview.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
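The Q6_K and Q8_0 quants above are split into two parts. A minimal sketch of joining the downloaded parts into a single usable file (filenames taken from the table), streaming the data so the very large files are not read into memory at once:
```python
import shutil

parts = [
    "Jais-70b-Preview.Q6_K.gguf.part1of2",
    "Jais-70b-Preview.Q6_K.gguf.part2of2",
]

# Concatenate the downloaded parts into one GGUF file
with open("Jais-70b-Preview.Q6_K.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, joined)
```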
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nyu-visionx/cambrian-13b
|
nyu-visionx
| 2024-06-28T00:22:18Z | 383 | 15 |
transformers
|
[
"transformers",
"safetensors",
"cambrian_llama",
"text-generation",
"arxiv:2406.16860",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-16T12:27:37Z |
---
license: apache-2.0
---
# Cambrian Model Card
## Model details
**Model type:**
Cambrian is an open-source Multimodal LLM with vision-centric designs.
**Model date:**
Cambrian-1-13B was trained in June 2024.
**Paper or resources for more information:**
- https://cambrian-mllm.github.io/
- https://arxiv.org/abs/2406.16860
## License
Llama 3 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/cambrian-mllm/cambrian/issues
## Training dataset
- [2.5M Cambrian Alignment Data](https://huggingface.co/datasets/nyu-visionx/Cambrian-Alignment).
- [7M Cambrian Curated Instruction Tuning Data](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M)
|
manhattan23/output
|
manhattan23
| 2024-06-19T06:09:25Z | 383 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-06-18T02:13:21Z |
---
base_model: stabilityai/stable-diffusion-2-1-base
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-manhattan23/output
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: A beautiful woman taking a picture with her smart phone.

prompt: A young man bending next to a toilet.

prompt: This is a picture of a kitchen in a poverty area of a town.

## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
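Pending the authors' official snippet above, here is a minimal, unofficial sketch of loading these weights with the `diffusers` ControlNet pipeline. The type of conditioning image the model expects is not documented in this card, so `conditioning.png` is a placeholder assumption.
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load the ControlNet weights from this repo on top of the base model named in the card
controlnet = ControlNetModel.from_pretrained("manhattan23/output", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Conditioning input is an assumption -- supply an image in whatever modality the ControlNet was trained on
conditioning = Image.open("conditioning.png")
image = pipe(
    "A beautiful woman taking a picture with her smart phone.",
    image=conditioning,
    num_inference_steps=30,
).images[0]
image.save("controlnet_sample.png")
```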
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF
|
NikolayKozloff
| 2024-06-21T13:35:41Z | 383 | 1 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:turboderp/llama3-turbcat-instruct-8b",
"license:llama3",
"region:us"
] | null | 2024-06-21T13:35:17Z |
---
base_model: turboderp/llama3-turbcat-instruct-8b
license: llama3
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF
This model was converted to GGUF format from [`turboderp/llama3-turbcat-instruct-8b`](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF --hf-file llama3-turbcat-instruct-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF --hf-file llama3-turbcat-instruct-8b-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF --hf-file llama3-turbcat-instruct-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/llama3-turbcat-instruct-8b-IQ4_NL-GGUF --hf-file llama3-turbcat-instruct-8b-iq4_nl-imat.gguf -c 2048
```
|
internistai/base-7b-v0.2-Q4_K_M-GGUF
|
internistai
| 2024-06-22T21:39:33Z | 383 | 0 | null |
[
"gguf",
"medical",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:pubmed",
"dataset:medmcqa",
"dataset:maximegmd/medqa_alpaca_format",
"base_model:internistai/base-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-06-22T21:39:15Z |
---
base_model: internistai/base-7b-v0.2
datasets:
- Open-Orca/OpenOrca
- pubmed
- medmcqa
- maximegmd/medqa_alpaca_format
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- medical
- llama-cpp
- gguf-my-repo
tag: text-generation
---
# maximegmd/base-7b-v0.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`internistai/base-7b-v0.2`](https://huggingface.co/internistai/base-7b-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internistai/base-7b-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -c 2048
```
|
Helsinki-NLP/opus-mt-pl-de
|
Helsinki-NLP
| 2023-08-16T12:02:36Z | 382 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-de
* source languages: pl
* target languages: de
* OPUS readme: [pl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.eval.txt)
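The card ships no usage snippet; here is a minimal sketch of running this checkpoint with the `transformers` Marian classes (the example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Polish ("Good morning, how are you?") to German
batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```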
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.de | 47.8 | 0.665 |
|
microsoft/unispeech-sat-base-100h-libri-ft
|
microsoft
| 2021-11-04T15:26:40Z | 382 | 3 |
transformers
|
[
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2110.05752",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# UniSpeech-SAT-Base-Finetuned-100h-Libri
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
A UniSpeech-SAT base model fine-tuned on 100 hours of LibriSpeech, using 16 kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16 kHz.
The model was fine-tuned on:
- 100 hours of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
[Paper: UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752)
Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu
**Abstract**
*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks..*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, UniSpeechSatForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
# load dummy dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

|
helenai/gpt2-ov
|
helenai
| 2023-04-12T19:38:19Z | 382 | 2 |
transformers
|
[
"transformers",
"openvino",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-03-11T20:06:06Z |
---
language:
- en
tags:
- openvino
---
# gpt2
This is the [gpt2](https://huggingface.co/gpt2) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/gpt2-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = pipe("My name is Julien and I like to")
print(result)
```
|
timm/hardcorenas_a.miil_green_in1k
|
timm
| 2023-04-21T21:29:53Z | 382 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2102.11646",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-04-21T21:29:47Z |
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for hardcorenas_a.miil_green_in1k
A HardCoReNAS image classification model. Trained on ImageNet-1k by paper authors with their "green" recipe.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.3
- GMACs: 0.2
- Activations (M): 4.4
- Image size: 224 x 224
- **Papers:**
- HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search: https://arxiv.org/abs/2102.11646
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/Alibaba-MIIL/HardCoReNAS
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('hardcorenas_a.miil_green_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hardcorenas_a.miil_green_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 112, 112])
# torch.Size([1, 24, 56, 56])
# torch.Size([1, 40, 28, 28])
# torch.Size([1, 112, 14, 14])
# torch.Size([1, 960, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'hardcorenas_a.miil_green_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@misc{nayman2021hardcorenas,
title={HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search},
author={Niv Nayman and Yonathan Aflalo and Asaf Noy and Lihi Zelnik-Manor},
year={2021},
eprint={2102.11646},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
TheBloke/gorilla-7B-GGUF
|
TheBloke
| 2023-09-27T12:53:16Z | 382 | 2 |
transformers
|
[
"transformers",
"gguf",
"llama",
"arxiv:2305.15334",
"base_model:gorilla-llm/gorilla-7b-hf-delta-v0",
"license:other",
"text-generation-inference",
"region:us"
] | null | 2023-09-20T01:46:09Z |
---
license: other
model_name: Gorilla 7B
base_model: gorilla-llm/gorilla-7b-hf-delta-v0
inference: false
model_creator: Gorilla LLM (UC Berkeley)
model_type: llama
prompt_template: '### User: {prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Gorilla 7B - GGUF
- Model creator: [Gorilla LLM (UC Berkeley)](https://huggingface.co/gorilla-llm)
- Original model: [Gorilla 7B](https://huggingface.co/gorilla-llm/gorilla-7b-hf-delta-v0)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Gorilla LLM's Gorilla 7B](https://huggingface.co/gorilla-llm/gorilla-7b-hf-delta-v0).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/gorilla-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/gorilla-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GGUF)
* [Gorilla LLM (UC Berkeley)'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/gorilla-7B-fp16)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Hashes
```
### User: {prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
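As a quick sanity check of the Q4_K figure above, here is a sketch of the per-super-block arithmetic; it assumes the super-block additionally stores one fp16 scale and one fp16 min, which is not spelled out in the list:
```python
weights = 8 * 32                    # 8 blocks of 32 weights per super-block
bits_weights = weights * 4          # 4-bit quantized weights
bits_block_meta = 8 * (6 + 6)       # 6-bit scale + 6-bit min per block
bits_super_meta = 2 * 16            # fp16 scale and min for the super-block (assumption)
print((bits_weights + bits_block_meta + bits_super_meta) / weights)  # -> 4.5 bpw
```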
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Gorilla-7B.Q2_K.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [Gorilla-7B.Q3_K_S.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [Gorilla-7B.Q3_K_M.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [Gorilla-7B.Q3_K_L.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [Gorilla-7B.Q4_0.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Gorilla-7B.Q4_K_S.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [Gorilla-7B.Q4_K_M.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [Gorilla-7B.Q5_0.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Gorilla-7B.Q5_K_S.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [Gorilla-7B.Q5_K_M.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [Gorilla-7B.Q6_K.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [Gorilla-7B.Q8_0.gguf](https://huggingface.co/TheBloke/gorilla-7B-GGUF/blob/main/Gorilla-7B.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/gorilla-7B-GGUF and below it, a specific filename to download, such as: Gorilla-7B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/gorilla-7B-GGUF Gorilla-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/gorilla-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/gorilla-7B-GGUF Gorilla-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m Gorilla-7B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User: {prompt}\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/gorilla-7B-GGUF", model_file="Gorilla-7B.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
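As a quick illustration of the llama-cpp-python route, a minimal sketch using LangChain's `LlamaCpp` wrapper (import paths vary between LangChain versions; the one below matches the classic `langchain` package):
```python
from langchain.llms import LlamaCpp

# Point the wrapper at a downloaded GGUF file; n_gpu_layers is optional GPU offload
llm = LlamaCpp(
    model_path="Gorilla-7B.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
    temperature=0.7,
)

print(llm("### User: find me an API to generate cute cat images\n### Assistant:"))
```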
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Gorilla LLM's Gorilla 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Gorilla 7B GGML
These files are fp16 format model files for [Gorilla 7B](https://huggingface.co/gorilla-llm/gorilla-7b-hf-delta-v0).
They are the result of merging the deltas and then uploading in fp16 format.
**NOTE**: This is not a regular LLM. It is designed to allow LLMs to use tools by invoking APIs.
"Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. "
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/gorilla-7B-GGML)
* [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/gorilla-7B-fp16)
## Prompt template
```
###USER: find me an API to generate cute cat images
###ASSISTANT:
```
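As an illustrative sketch (assuming `transformers` and `accelerate` are installed), the template can be used with a plain `transformers` generation call against the merged fp16 repo listed above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: greedy generation with the prompt template above, using the merged fp16 repo.
tokenizer = AutoTokenizer.from_pretrained("TheBloke/gorilla-7B-fp16")
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/gorilla-7B-fp16", torch_dtype=torch.float16, device_map="auto"
)

prompt = "###USER: find me an API to generate cute cat images\n###ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```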
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Gorilla 7B
# Gorilla: Large Language Model Connected with Massive APIs
By Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez ([Project Website](https://shishirpatil.github.io/gorilla/))
[](https://arxiv.org/abs/2305.15334) [](https://discord.gg/3apqwwME) [](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, or open a PR, or email us if you would like to have your API incorporated as well.
## Model Details
Gorilla can be either trained via standard finetuning or using our novel retriever-aware training pipeline. We release `gorilla-7b-hf-delta-v0`, a 0-shot finetuned LLM that can reliably use Hugging Face APIs. It can be prompted simply through natural language (e.g., "I want to generate an image from text."). Check out our website, GitHub and paper for more information.
### Model Type
Gorilla is an open-source API caller trained by fine-tuning LLaMA weights. It is an auto-regressive language model, based on the transformer architecture.
### Model Date
05/27/2023
### Organization
Gorilla LLM (UC Berkeley)
---
license: apache-2.0
---
<!-- original-model-card end -->
|
TheBloke/genz-13B-v2-GGUF
|
TheBloke
| 2023-10-14T23:10:41Z | 382 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"base_model:budecosystem/genz-13b-v2",
"license:llama2",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-10-14T22:55:34Z |
---
base_model: budecosystem/genz-13b-v2
inference: false
language:
- en
library_name: transformers
license: llama2
model_creator: Bud
model_name: GenZ 13B v2
model_type: llama
pipeline_tag: text-generation
prompt_template: '### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# GenZ 13B v2 - GGUF
- Model creator: [Bud](https://huggingface.co/budecosystem)
- Original model: [GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Bud's GenZ 13B v2](https://huggingface.co/budecosystem/genz-13b-v2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/genz-13B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/genz-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/genz-13B-v2-GGUF)
* [Bud's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/budecosystem/genz-13b-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-Newlines
```
### User:
{prompt}
### Assistant:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
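As a rough, illustrative estimate, the effective bpw figures map to file sizes as roughly parameters × bpw / 8 bytes; the remaining gap versus the table comes from tensors stored at higher precision and file metadata:
```python
# Back-of-the-envelope size estimate from bits per weight (illustrative only).
params = 13_000_000_000   # roughly 13B parameters
bpw = 4.5                 # effective bits per weight for GGML_TYPE_Q4_K
size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.2f} GB")  # ~7.31 GB; the Q4_K_M file listed below is 7.87 GB because some tensors use higher precision
```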
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [genz-13b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [genz-13b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [genz-13b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [genz-13b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [genz-13b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [genz-13b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [genz-13b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [genz-13b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [genz-13b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [genz-13b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [genz-13b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [genz-13b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/genz-13B-v2-GGUF/blob/main/genz-13b-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/genz-13B-v2-GGUF and below it, a specific filename to download, such as: genz-13b-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/genz-13B-v2-GGUF genz-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/genz-13B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/genz-13B-v2-GGUF genz-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m genz-13b-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/genz-13B-v2-GGUF", model_file="genz-13b-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
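If you prefer llama-cpp-python, a minimal sketch looks like this (assuming `pip install llama-cpp-python` and the Q4_K_M file downloaded as shown earlier):
```python
from llama_cpp import Llama

# Minimal llama-cpp-python sketch using the prompt template from this card.
llm = Llama(model_path="genz-13b-v2.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=50)
output = llm("### User:\nWrite a short poem about autumn.\n\n### Assistant:\n", max_tokens=200)
print(output["choices"][0]["text"])
```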
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
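As a minimal sketch of the ctransformers route in LangChain (assuming `langchain` and `ctransformers` are installed; repo and file names as in the example above):
```python
from langchain.llms import CTransformers

# Minimal LangChain + ctransformers sketch; config mirrors the gpu_layers setting used above.
llm = CTransformers(
    model="TheBloke/genz-13B-v2-GGUF",
    model_file="genz-13b-v2.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50},
)
print(llm("### User:\nWhat is GGUF?\n\n### Assistant:\n"))
```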
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, ์ค๊ต ๊น, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjรคreholt, ้ฟๆ, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Bud's GenZ 13B v2
---
<div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div>
<p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p>
---
## Introduction
Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/MTBench_CompareChart_28July2023.png" width="500"></p>
The smaller, quantized versions of our models make them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology.
GenZ isn't just a powerful text generator; it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver.
And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey.
---
<h2>Milestone Releases</h2>
**[27 July 2023]**
[_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inference using only a CPU, without the need for a GPU. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml).
**[27 July 2023]**
[_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation, enabling inference with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit).
**[26 July 2023]**
[_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2).
**[20 July 2023]**
[_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b).
---
<img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/screenshot_genz13bv2.png" width="100%">
|  |  | 
|:--:|:--:|:--:|
| *Code Generation* | *Poem Generation* | *Email Generation* |
<!--
<p align="center"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Python.gif" width="33%" alt="Python Code"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Poem.gif" width="33%"><img src="https://raw.githubusercontent.com/adrot-dev/git-test/blob/main/assets/Email.gif" width="33%"></p>
-->
<h2>Getting Started on Hugging Face</h2>
Getting up and running with our models on Hugging Face is a breeze. Follow these steps:
<h3>1. Import necessary modules</h3>
Start by importing the necessary modules from the `transformers` library and `torch`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```
<h3>2. Load the tokenizer and the model</h3>
Next, load up the tokenizer and the model for `budecosystem/genz-13b-v2` from Hugging Face using the `from_pretrained` method.
```python
tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-13b-v2", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-13b-v2", torch_dtype=torch.bfloat16)
```
<h3>3. Generate responses</h3>
Now that you have the model and tokenizer, you're ready to generate responses. Here's how you can do it:
```python
inputs = tokenizer("The meaning of life is", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
In this example, "The meaning of life is" is the prompt used for inference. You can replace it with any string you like.
Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the `generate.py` script to try it out. Happy experimenting!
<h2>Fine-tuning</h2>
It's time to upgrade the model by fine-tuning it. You can do this using our provided finetune.py script. Here's an example command:
```bash
python finetune.py \
--model_name meta-llama/Llama-2-13b \
--data_path dataset.json \
--output_dir output \
--trust_remote_code \
--prompt_column instruction \
--response_column output \
--pad_token_id 50256
```
---
<h2>Bonus: Colab Notebooks <b><i>(WIP)</i></b></h2>
Looking for an even simpler way to get started with GenZ? We've got you covered. We've prepared a pair of detailed Colab notebooks - one for Inference and one for Fine-tuning. These notebooks come pre-filled with all the information and code you'll need. All you'll have to do is run them!
Keep an eye out for these notebooks. They'll be added to the repository soon!
---
<h2>Why Use GenZ?</h2>
You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models.
While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge.
What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there.
But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself.
Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models.
---
<h2>Model Card for GenZ 13B</h2>
Here's a quick overview of everything you need to know about GenZ 13B.
<h3>Model Details:</h3>
- Developed by: Bud Ecosystem
- Base pretrained model type: Llama V2 13B
- Model Architecture: GenZ 13B, fine-tuned on Llama V2 13B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 13B leveraged Supervised Fine-Tuning (SFT)
- License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
---
<h2>Intended Use</h2>
When we created GenZ 13B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 13B.
<h3>Direct Use</h3>
GenZ 13B is designed to be a powerful tool for research on large language models. It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as:
- Text summarization
- Text generation
- Chatbot creation
- And much more!
<h3>Out-of-Scope Use</h3>
While GenZ 13B is versatile, there are certain uses that are out of scope:
- Production use without adequate assessment of risks and mitigation
- Any use cases which may be considered irresponsible or harmful
- Use in any manner that violates applicable laws or regulations, including trade compliance laws
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2
Remember, GenZ 13B, like any large language model, is trained on a large-scale corpora representative of the web, and therefore, may carry the stereotypes and biases commonly encountered online.
<h3>Recommendations</h3>
We recommend users of GenZ 13B to consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 13B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.
---
<h2>Training Details</h2>
When fine-tuning GenZ 13B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 13B model in the most effective way. Here's a look at the key details of our training process:
<h3>Fine-Tuning Training Data</h3>
For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks.
<h3>Fine-Tuning Procedure</h3>
We performed a full-parameter fine-tuning using Supervised Fine-Tuning (SFT). This was carried out on 4 A100 80GB GPUs, and the process took under 100 hours. To make the process more efficient, we used DeepSpeed's ZeRO-3 optimization.
<h3>Tokenizer</h3>
We used the SentencePiece tokenizer during the fine-tuning process. This tokenizer is known for its capability to handle open-vocabulary language tasks efficiently.
<h3>Hyperparameters</h3>
Here are the hyperparameters we used for fine-tuning:
| Hyperparameter | Value |
| -------------- | ----- |
| Warmup Ratio | 0.04 |
| Learning Rate Scheduler Type | Cosine |
| Learning Rate | 2e-5 |
| Number of Training Epochs | 3 |
| Per Device Training Batch Size | 4 |
| Gradient Accumulation Steps | 4 |
| Precision | FP16 |
| Optimizer | AdamW |
---
<h2>Evaluations</h2>
Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 13B:
<h3>Benchmark Comparison</h3>
We've compared GenZ V1 with V2 to understand the improvements our fine-tuning has achieved.
| Model Name | MT Bench | Vicuna Bench | MMLU | Human Eval | Hellaswag | BBH |
|:----------:|:--------:|:------------:|:----:|:----------:|:---------:|:----:|
| Genz 13B | 6.12 | 86.1 | 53.62| 17.68 | 77.38 | 37.76|
| Genz 13B v2| 6.79 | 87.2 | 53.68| 21.95 | 77.48 | 38.1 |
<h3>MT Bench Score</h3>
A key evaluation metric we use is the MT Bench score. This score provides a comprehensive assessment of our model's performance across a range of tasks.
We're proud to say that our model performs at a level that's close to the Llama-70B-chat model on MT Bench, and tops the list among 13B models.
<p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p>
In the transition from GenZ V1 to V2, we noticed some fascinating performance shifts. While we saw a slight dip in coding performance, two other areas, Roleplay and Math, saw noticeable improvements.
---
<h2>Looking Ahead</h2>
We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together.
Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI.
---
Check out the GitHub repository for the code -> [GenZ](https://github.com/BudEcosystem/GenZ)
<!-- original-model-card end -->
|
AisingioroHao0/stable-diffusion-reference-only-automatic-coloring-0.1.2
|
AisingioroHao0
| 2023-12-01T02:55:21Z | 382 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-reference-only",
"image-to-image",
"license:apache-2.0",
"diffusers:StableDiffusionReferenceOnlyPipeline",
"region:us"
] |
image-to-image
| 2023-10-20T09:55:19Z |
---
license: apache-2.0
pipeline_tag: image-to-image
tags:
- stable-diffusion-reference-only
---
|
bartowski/OpenHermes-2.5-Mistral-7B-exl2
|
bartowski
| 2023-11-02T23:36:57Z | 382 | 12 | null |
[
"mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-11-02T22:16:31Z |
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: OpenHermes-2-Mistral-7B
results: []
license: apache-2.0
language:
- en
quantized_by: bartowski
---
## Exllama v2 Quantizations of OpenHermes-2.5-Mistral-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.7">turboderp's ExLlamaV2 v0.0.7</a> for quantization.
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
Original model: https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
<a href="https://huggingface.co/bartowski/OpenHermes-2.5-Mistral-7B-exl2/tree/4.0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/OpenHermes-2.5-Mistral-7B-exl2/tree/6.0">6.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/OpenHermes-2.5-Mistral-7B-exl2/tree/8.0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/OpenHermes-2.5-Mistral-7B-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `OpenHermes-2.5-Mistral-7B-exl2`:
```shell
mkdir OpenHermes-2.5-Mistral-7B-exl2
huggingface-cli download bartowski/OpenHermes-2.5-Mistral-7B-exl2 --local-dir OpenHermes-2.5-Mistral-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir OpenHermes-2.5-Mistral-7B-exl2
huggingface-cli download bartowski/OpenHermes-2.5-Mistral-7B-exl2 --revision 4.0 --local-dir OpenHermes-2.5-Mistral-7B-exl2 --local-dir-use-symlinks False
```
|
chatdb/natural-sql-7b-GGUF
|
chatdb
| 2024-02-04T20:48:54Z | 382 | 6 |
transformers
|
[
"transformers",
"gguf",
"instruct",
"finetune",
"text-generation",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T20:38:29Z |
---
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- instruct
- finetune
library_name: transformers
license: cc-by-sa-4.0
pipeline_tag: text-generation
---
# **Natural-SQL-7B by ChatDB**
## Natural-SQL-7B delivers very strong performance on Text-to-SQL instructions, has an excellent understanding of complex questions, and outperforms models of the same size in its space.
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/hafdsfrFCqrVbATIzV_EN.png" width="600">
[ChatDB.ai](https://chatdb.ai) | [Notebook](https://github.com/cfahlgren1/natural-sql/blob/main/natural-sql-7b.ipynb) | [Twitter](https://twitter.com/calebfahlgren)
# **Benchmarks**
### *Results on Novel Datasets not trained on via SQL-Eval*
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/5ynfoKPzI3_-WasQQt7qR.png" width="800">
<em>Big thanks to the [defog](https://huggingface.co/defog) team for open sourcing [sql-eval](https://github.com/defog-ai/sql-eval)</em>
Natural-SQL can also handle complex, compound questions that other models typically struggle with. There is a more detailed write-up, with a small comparison test, [here](https://chatdb.ai/post/naturalsql-vs-sqlcoder-for-text-to-sql).
# Usage
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### Loading the Model
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("chatdb/natural-sql-7b")
model = AutoModelForCausalLM.from_pretrained(
"chatdb/natural-sql-7b",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **License**
The model weights are licensed under `CC BY-SA 4.0`, with extra guidelines for responsible use expanded from the original model's [Deepseek](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) license.
You're free to use and adapt the model, even commercially.
If you alter the weights, such as through fine-tuning, you must publicly share your changes under the same `CC BY-SA 4.0` license.
### Generating SQL
```python
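# `prompt` should follow the Prompt Template section below
# (task line, the PostgreSQL schema DDL, then the SQL lead-in).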
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model.generate(
**inputs,
num_return_sequences=1,
eos_token_id=100001,
pad_token_id=100001,
max_new_tokens=400,
do_sample=False,
num_beams=1,
)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs[0].split("```sql")[-1])
```
# Prompt Template
```
# Task
Generate a SQL query to answer the following question: `{natural language question}`
### PostgreSQL Database Schema
The query will run on a database with the following schema:
<SQL Table DDL Statements>
# SQL
Here is the SQL query that answers the question: `{natural language question}`
'''sql
```
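For illustration, one way to assemble the `prompt` used in the generation snippet above from this template (the question and the schema file path are placeholders):
```python
# Hypothetical prompt assembly following the template above; replace the placeholders with your own.
question = "Show me the day with the most users joining"
schema_ddl = open("schema.sql").read()  # your CREATE TABLE statements

prompt = f"""# Task
Generate a SQL query to answer the following question: `{question}`

### PostgreSQL Database Schema
The query will run on a database with the following schema:
{schema_ddl}

# SQL
Here is the SQL query that answers the question: `{question}`
'''sql
"""
# The final line marks where the model should begin writing its SQL.
```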
# Example SQL Output
### Example Schemas
```sql
CREATE TABLE users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL,
password_hash TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE projects (
project_id SERIAL PRIMARY KEY,
project_name VARCHAR(100) NOT NULL,
description TEXT,
start_date DATE,
end_date DATE,
owner_id INTEGER REFERENCES users(user_id)
);
CREATE TABLE tasks (
task_id SERIAL PRIMARY KEY,
task_name VARCHAR(100) NOT NULL,
description TEXT,
due_date DATE,
status VARCHAR(50),
project_id INTEGER REFERENCES projects(project_id)
);
CREATE TABLE taskassignments (
assignment_id SERIAL PRIMARY KEY,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id),
assigned_date DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE comments (
comment_id SERIAL PRIMARY KEY,
content TEXT NOT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
task_id INTEGER REFERENCES tasks(task_id),
user_id INTEGER REFERENCES users(user_id)
);
```
### Example SQL Outputs
**Question**: **Show me the day with the most users joining**
```sql
SELECT created_at::DATE AS day, COUNT(*) AS user_count
FROM users
GROUP BY day
ORDER BY user_count DESC
LIMIT 1;
```
**Question**: **Show me the project that has a task with the most comments**
```sql
SELECT p.project_name, t.task_name, COUNT(c.comment_id) AS comment_count
FROM projects p
JOIN tasks t ON p.project_id = t.project_id
JOIN comments c ON t.task_id = c.task_id
GROUP BY p.project_name, t.task_name
ORDER BY comment_count DESC
LIMIT 1;
```
**Question**: **What is the ratio of users with gmail addresses vs without?**
```sql
SELECT
SUM(CASE WHEN email ILIKE '%@gmail.com%' THEN 1 ELSE 0 END)::FLOAT / NULLIF(SUM(CASE WHEN email NOT ILIKE '%@gmail.com%' THEN 1 ELSE 0 END), 0) AS gmail_ratio
FROM
users;
```
|
NbAiLab/nb-whisper-medium
|
NbAiLab
| 2024-02-13T12:29:58Z | 382 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"nb",
"nn",
"en",
"dataset:NbAiLab/ncc_speech",
"dataset:NbAiLab/NST",
"dataset:NbAiLab/NPSC",
"arxiv:2212.04356",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-13T10:07:32Z |
---
license: apache-2.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
base_model: openai/whisper-medium
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
example_title: FLEURS sample 2
---
# NB-Whisper Medium
Introducing the **_Norwegian NB-Whisper Medium model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above, and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny | 39M | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base | 74M | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small | 244M | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium | 769M | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large | 1550M | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmรฅl, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries.
$ pip install transformers>=4.35.2
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-medium")
#transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```json
{
{'text': ' Nordmenn er nordlendinger, trรธndere, sรธrlendinger og folk fra alle andre regioner. Nordmenn er ogsรฅ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sรฅ lett รฅ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the video is longer than 30 seconds. By passing the ```chunk_length_s``` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
{
{'text': ' Nordmenn er nordlendinger, trรธndere, sรธrlendinger og folk fra alle andre regioner. Nordmenn er ogsรฅ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sรฅ lett รฅ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhรธrer. Det vi kaller hjem, er der hjertet vรฅrt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pรฅ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stรธrste hรฅp for Norge er at vi skal klare รฅ ta vare pรฅ hverandre, at vi skal bygge dette landet videre pรฅ tillit, fellesskap og raushet.'}
}
```
Timestamps:
```json
{
{'text': ' Nordmenn er nordlendinger, trรธndere, sรธrlendinger og folk fra alle andre regioner. Nordmenn er ogsรฅ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sรฅ lett รฅ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhรธrer. Det vi kaller hjem, er der hjertet vรฅrt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pรฅ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stรธrste hรฅp for Norge er at vi skal klare รฅ ta vare pรฅ hverandre, at vi skal bygge dette landet videre pรฅ tillit, fellesskap og raushet.',
'chunks': [{'timestamp': (0.0, 5.46),
'text': ' Nordmenn er nordlendinger, trรธndere, sรธrlendinger'},
{'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
{'timestamp': (8.68, 16.64),
'text': ' Nordmenn er ogsรฅ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
{'timestamp': (16.64, 13.3),
'text': ' Det er ikke alltid sรฅ lett รฅ si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
{'timestamp': (13.32, 30.28),
'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhรธrer.'},
{'timestamp': (32.52, 39.16),
'text': ' Det vi kaller hjem, er der hjertet vรฅrt er, og det kan ikke alltid plasseres'},
{'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
{'timestamp': (42.0, 46.74),
'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
{'timestamp': (46.74, 51.12),
'text': ' og jenter og gutter som er glad i hverandre.'},
{'timestamp': (51.16, 57.42),
'text': ' Nordmenn trommer pรฅ Gud, Allah, Altet og ingenting.'},
{'timestamp': (57.42, 64.3),
'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
{'timestamp': (64.34, 71.24),
'text': ' Med andre ord, Norge er dere. Norge er oss.'},
{'timestamp': (71.24, 78.04),
'text': ' Mitt stรธrste hรฅp for Norge er at vi skal klare รฅ ta vare pรฅ hverandre,'},
{'timestamp': (78.12, 84.68),
'text': ' at vi skal bygge dette landet videre pรฅ tillit, fellesskap og raushet.'}]}
}
```
Word Level Timestamps:
```json
{
{"text": "Nordmenn er nordlendinger, trรธndere, sรธrlendinger og folk fra alle andre regioner. Nordmenn er ogsรฅ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sรฅ lett รฅ si hvor vi er fra, hvilken nasjonalitet vi tilhรธrer. Det vi kaller hjem, er der hjertet vรฅrt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pรฅ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stรธrste hรฅp for Norge er at vi skal klare รฅ ta vare pรฅ hverandre, at vi skal bygge dette landet videre pรฅ tillit, fellesskap og raushet.",
"chunks": [
{"text": "Nordmenn", "timestamp": [0.72, 1.42]},
{"text": "er", "timestamp": [1.42, 1.74]},
// ... more chunks ...
{"text": "raushet.", "timestamp": [83.1, 84.88]}
]
}
}
```
Nynorsk:
```json
{
{"text": "Nordmenn er nordlendingar, trรธndarar, sรธrlendingar og folk frรฅ alle andre regionar. Nordmenn er ogsรฅ innvandra frรฅ Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid sรฅ lett รฅ seie kvar vi er frรฅ, kva nasjonalitet vi tilhรธyrer. Det vi kallar heim, er der hjartet vรฅrt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar pรฅ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt stรธrste hรฅp for Noreg er at vi skal klare รฅ ta vare pรฅ kvarandre, at vi skal byggje dette landet vidare pรฅ tillit, fellesskap og raushet."}
}
```
English:
```json
{
{"text": "Norwegians are Norwegians, trรธnders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
}
```
</details>
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription.
We have converted this model to the ggml format used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav
# Lets download the two ggml-files from this site
wget -N https://huggingface.co/NbAiLab/nb-whisper-medium/resolve/main/ggml-model.bin -O models/nb-medium-ggml-model.bin
wget -N https://huggingface.co/NbAiLab/nb-whisper-medium/resolve/main/ggml-model-q5_0.bin -O models/nb-medium-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-medium-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-medium-ggml-model-q5_0.bin king.wav
```
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX is using phoneme-based Wav2Vec-models for improving the alignment of the timestamps. As of December 2023 it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict licence where you have to agree to user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-medium --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
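As a rough illustration only, a minimal Python sketch following the general WhisperX API is shown below. Exact argument names can change between WhisperX versions, and whether `load_model` accepts this Hugging Face identifier directly depends on the installed WhisperX/faster-whisper version, so treat this as an assumption and defer to the WhisperX homepage.
```python
import whisperx

device = "cuda"  # or "cpu"

# Load the audio and the NB-Whisper model through WhisperX (backed by faster-whisper)
audio = whisperx.load_audio("knuthamsun.mp3")
model = whisperx.load_model("NbAiLabBeta/nb-whisper-medium", device, language="no")
result = model.transcribe(audio, batch_size=16)

# Optional: tighten the timestamps with the phoneme-based wav2vec alignment model
align_model, metadata = whisperx.load_align_model(language_code="no", device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

print(result["segments"])
```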
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Medium model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and Hugging Face's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
|
pkadambi/wav2textgrid
|
pkadambi
| 2024-02-19T00:17:47Z | 382 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-18T08:20:31Z |
---
license: mit
---
|
recursechat/recursechat
|
recursechat
| 2024-04-20T07:21:40Z | 382 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-02-23T23:28:47Z |
## Models
- [NousResearch/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct-GGUF)
- License: [llama3](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct-GGUF/blob/main/LICENSE)
- [TheBloke/Mistral-7B-Instruct-v0.2-GGUF](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF)
- License: apache-2.0
- [cmp-nct/llava-1.6-gguf](https://huggingface.co/cmp-nct/llava-1.6-gguf)
- License: apache-2.0
- [ABX-AI/WizardLM-2-7B-GGUF-IQ-Imatrix](https://huggingface.co/ABX-AI/WizardLM-2-7B-GGUF-IQ-Imatrix/tree/main)
- License: apache-2.0
- [TheBloke/WizardCoder-Python-13B-V1.0-GGUF](https://huggingface.co/TheBloke/WizardCoder-Python-13B-V1.0-GGUF)
- License: llama2
- [nomic-ai/nomic-embed-text-v1.5-GGUF](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF)
- License: apache-2.0
- [CompendiumLabs/bge-base-en-v1.5-gguf](https://huggingface.co/CompendiumLabs/bge-base-en-v1.5-gguf)
- License: mit
|
sail/Sailor-4B-Chat-gguf
|
sail
| 2024-04-05T05:48:51Z | 382 | 2 | null |
[
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:cerebras/SlimPajama-627B",
"dataset:Skywork/SkyPile-150B",
"dataset:allenai/MADLAD-400",
"dataset:cc100",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"arxiv:2404.03608",
"base_model:sail/Sailor-4B",
"license:apache-2.0",
"region:us"
] | null | 2024-03-03T05:23:02Z |
---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
- gguf
license: apache-2.0
base_model: sail/Sailor-4B
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as Indonesian, Thai, Vietnamese, Malay, and Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 7B versions for different requirements.
We further fine-tune the base model with open-source datasets to get instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have great performance on SEA languages.
The pre-training corpus heavily leverages the publicly available corpus, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
### GGUF model list
| Name | Quant method | Bits | Size | Use case |
| ------------------------------------------------------------ | ------------ | ---- | ------- | ------------------------------------------------------------ |
| [ggml-model-Q2_K.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 1.62 GB | small, significant quality loss - not recommended for most purposes |
| [ggml-model-Q3_K_L.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_L.gguf) | Q3_K_L | 3 | 2.17 GB | medium, substantial quality loss |
| [ggml-model-Q3_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 2.03 GB | medium, balanced quality |
| [ggml-model-Q3_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 1.86 GB | small, high quality loss |
| [ggml-model-Q4_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 2.46 GB | medium, balanced quality |
| [ggml-model-Q4_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 2.34 GB | medium, greater quality loss |
| [ggml-model-Q5_K_M.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 2.84 GB | medium, balanced quality |
| [ggml-model-Q5_K_S.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 2.78 GB | medium, very low quality loss |
| [ggml-model-Q6_K.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 3.25 GB | medium, extremely low quality loss |
| [ggml-model-Q8_0.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 4.2 GB | large, extremely low quality loss |
| [ggml-model-f16.gguf](https://huggingface.co/sail/Sailor-4B-Chat-gguf/blob/main/ggml-model-f16.gguf) | f16 | 16 | 7.91 GB | very large, no quality loss |
### How to run with `llama.cpp`
```shell
# install llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
pip install -r requirements.txt
# generate with llama.cpp
./main -ngl 40 -m ggml-model-Q4_K_M.gguf -p "<|im_start|>question\nCara memanggang ikan?\n<|im_start|>answer\n" --temp 0.7 --repeat_penalty 1.1 -n 400 -e
```
> Change `-ngl 40` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
### How to run with `llama-cpp-python`
```shell
pip install llama-cpp-python
```
```python
import llama_cpp
import llama_cpp.llama_tokenizer
# load model
llama = llama_cpp.Llama.from_pretrained(
repo_id="sail/Sailor-4B-Chat-gguf",
filename="ggml-model-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
n_gpu_layers=40,
n_threads=8,
verbose=False,
)
system_role= 'system'
user_role = 'question'
assistant_role = "answer"
system_prompt= \
'You are an AI assistant named Sailor created by Sea AI Lab. \
Your answer should be friendly, unbiased, faithful, informative and detailed.'
system_prompt = f"<|im_start|>{system_role}\n{system_prompt}<|im_end|>"
# inference example
output = llama(
system_prompt + '\n' + f"<|im_start|>{user_role}\nCara memanggang ikan?\n<|im_start|>{assistant_role}\n",
max_tokens=256,
temperature=0.7,
top_p=0.75,
top_k=60,
stop=["<|im_end|>", "<|endoftext|>"]
)
print(output['choices'][0]['text'])
```
### How to build demo
Install `llama-cpp-python` and `gradio`, then run [script](https://github.com/sail-sg/sailor-llm/blob/main/demo/llamacpp_demo.py).
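For a rough idea of what such a demo can look like, here is a minimal sketch (not the official script linked above) that reuses the `llama-cpp-python` setup and prompt format from the previous section:
```python
import gradio as gr
import llama_cpp
import llama_cpp.llama_tokenizer

# Load the quantized chat model (same setup as in the llama-cpp-python section above)
llama = llama_cpp.Llama.from_pretrained(
    repo_id="sail/Sailor-4B-Chat-gguf",
    filename="ggml-model-Q4_K_M.gguf",
    tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
    n_gpu_layers=40,
    verbose=False,
)

SYSTEM_PROMPT = (
    "<|im_start|>system\nYou are an AI assistant named Sailor created by Sea AI Lab. "
    "Your answer should be friendly, unbiased, faithful, informative and detailed.<|im_end|>"
)

def chat(question: str) -> str:
    prompt = SYSTEM_PROMPT + "\n" + f"<|im_start|>question\n{question}\n<|im_start|>answer\n"
    output = llama(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>", "<|endoftext|>"])
    return output["choices"][0]["text"]

demo = gr.Interface(fn=chat, inputs="text", outputs="text", title="Sailor-4B-Chat demo")
demo.launch()
```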
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@misc{dou2024sailor,
title={Sailor: Open Language Models for South-East Asia},
author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin},
year={2024},
eprint={2404.03608},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
|
ven1228/5C8RFeGG3JtVDu9WUzQJjJFDuB3YfE2d4SVqi3Qj9vaXqvKm_vgg
|
ven1228
| 2024-03-11T12:49:51Z | 382 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2024-03-05T05:44:51Z |
Entry not found
|
mradermacher/NoromaidxOpenGPT4-2-i1-GGUF
|
mradermacher
| 2024-05-06T05:53:35Z | 382 | 3 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:NeverSleep/NoromaidxOpenGPT4-2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-27T03:56:37Z |
---
base_model: NeverSleep/NoromaidxOpenGPT4-2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
---
## About
weighted/imatrix quants of https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ1_M.gguf) | i1-IQ1_M | 11.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.3 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 26.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q4_0.gguf) | i1-Q4_0 | 26.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | |
| [GGUF](https://huggingface.co/mradermacher/NoromaidxOpenGPT4-2-i1-GGUF/resolve/main/NoromaidxOpenGPT4-2.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Mixtral_AI_PsycoTron-GGUF
|
mradermacher
| 2024-05-06T05:03:01Z | 382 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"sw",
"dataset:uonlp/CulturaX",
"base_model:LeroyDyer/Mixtral_AI_PsycoTron",
"endpoints_compatible",
"region:us"
] | null | 2024-04-10T22:34:38Z |
---
base_model: LeroyDyer/Mixtral_AI_PsycoTron
datasets:
- uonlp/CulturaX
language:
- en
- sw
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_PsycoTron
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_PsycoTron-GGUF/resolve/main/Mixtral_AI_PsycoTron.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Alpha-VLLM/Lumina-Next-T2I
|
Alpha-VLLM
| 2024-06-06T17:02:23Z | 382 | 62 |
transformers
|
[
"transformers",
"text-to-image",
"safetensors",
"arxiv:2405.05945",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-to-image
| 2024-05-04T04:48:11Z |
---
license: apache-2.0
tags:
- text-to-image
- safetensors
---
# Lumina-Next-T2I
The `Lumina-Next-T2I` model uses a 2B-parameter Next-DiT backbone together with [Gemma-2B](https://huggingface.co/google/gemma-2b) as the text encoder. Compared with `Lumina-T2I`, it offers faster inference, a richer range of generation styles, and broader multilingual support.
Our generative model has `Next-DiT` as the backbone, the text encoder is the `Gemma` 2B model, and the VAE uses a version of `sdxl` fine-tuned by stabilityai.
- Generation Model: Next-DiT
- Text Encoder: [Gemma-2B](https://huggingface.co/google/gemma-2b)
- VAE: [stabilityai/sdxl-vae](https://huggingface.co/stabilityai/sdxl-vae)
[paper](https://arxiv.org/abs/2405.05945)
## News
- [2024-5-28] We updated the `Lumina-Next-T2I` model to support 2K resolution image generation.
- [2024-5-16] We have converted the `.pth` weights to `.safetensors` weights. Please pull the latest code to use `demo.py` for inference.
- [2024-5-12] We released the next version of `Lumina-T2I`, called `Lumina-Next-T2I`, a faster image generation model with lower memory usage.
## Model Zoo
More checkpoints of our model will be released soon~
| Resolution | Next-DiT Parameter| Text Encoder | Prediction | Download URL |
| ---------- | ----------------------- | ------------ | -----------|-------------- |
| 1024 | 2B | [Gemma-2B](https://huggingface.co/google/gemma-2b) | Rectified Flow | [hugging face](https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I) |
## Installation
Before installation, ensure that you have a working ``nvcc``
```bash
# The command should work and show a version number (12.1 in our case).
nvcc --version
```
On some outdated distros (e.g., CentOS 7), you may also want to check that a late enough version of
``gcc`` is available
```bash
# The command should work and show a version of at least 6.0.
# If not, consult distro-specific tutorials to obtain a newer version or build manually.
gcc --version
```
Downloading Lumina-T2X repo from GitHub:
```bash
git clone https://github.com/Alpha-VLLM/Lumina-T2X
```
### 1. Create a conda environment and install PyTorch
Note: You may want to adjust the CUDA version [according to your driver version](https://docs.nvidia.com/deploy/cuda-compatibility/#default-to-minor-version).
```bash
conda create -n Lumina_T2X -y
conda activate Lumina_T2X
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
```
### 2. Install dependencies
```bash
pip install diffusers fairscale accelerate tensorboard transformers gradio torchdiffeq click
```
or you can use
```bash
cd lumina_next_t2i
pip install -r requirements.txt
```
### 3. Install ``flash-attn``
```bash
pip install flash-attn --no-build-isolation
```
### 4. Install [nvidia apex](https://github.com/nvidia/apex) (optional)
>[!Warning]
> While Apex can improve efficiency, it is *not* a must to make Lumina-T2X work.
>
> Note that Lumina-T2X works smoothly with either:
> + Apex not installed at all; OR
> + Apex successfully installed with CUDA and C++ extensions.
>
> However, it will fail when:
> + A Python-only build of Apex is installed.
>
> If the error `No module named 'fused_layer_norm_cuda'` appears, it typically means you are using a Python-only build of Apex. To resolve this, please run `pip uninstall apex`, and Lumina-T2X should then function correctly.
You can clone the repo and install following the official guidelines (note that we expect a full
build, i.e., with CUDA and C++ extensions)
```bash
pip install ninja
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1) which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
## Inference
To ensure that our generative model is ready to use right out of the box, we provide a user-friendly CLI program and a locally deployable Web Demo site.
### CLI
1. Install Lumina-Next-T2I
```bash
pip install -e .
```
2. Prepare the pre-trained model
(Recommended) You can use `huggingface-cli` to download our model:
```bash
huggingface-cli download --resume-download Alpha-VLLM/Lumina-Next-T2I --local-dir /path/to/ckpt
```
or using git for cloning the model you want to use:
```bash
git clone https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I
```
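As an alternative that is not part of the original instructions, the checkpoint can also be fetched from Python with `huggingface_hub` (assuming the package is installed):
```python
from huggingface_hub import snapshot_download

# Downloads consolidated*.pth and model_args.pth into /path/to/ckpt
snapshot_download(repo_id="Alpha-VLLM/Lumina-Next-T2I", local_dir="/path/to/ckpt")
```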
3. Set your personal inference configuration
Update your own personal inference settings to generate different styles of images, checking `config/infer/config.yaml` for detailed settings. Detailed config structure:
> `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
```yaml
- settings:
model:
ckpt: "/path/to/ckpt" # if ckpt is "", you should use `--ckpt` for passing model path when using `lumina` cli.
ckpt_lm: "" # if ckpt is "", you should use `--ckpt_lm` for passing model path when using `lumina` cli.
token: "" # if LLM is a huggingface gated repo, you should input your access token from huggingface and when token is "", you should `--token` for accessing the model.
transport:
path_type: "Linear" # option: ["Linear", "GVP", "VP"]
prediction: "velocity" # option: ["velocity", "score", "noise"]
loss_weight: "velocity" # option: [None, "velocity", "likelihood"]
sample_eps: 0.1
train_eps: 0.2
ode:
atol: 1e-6 # Absolute tolerance
rtol: 1e-3 # Relative tolerance
reverse: false # option: true or false
likelihood: false # option: true or false
infer:
resolution: "1024x1024" # option: ["1024x1024", "512x2048", "2048x512", "(Extrapolation) 1664x1664", "(Extrapolation) 1024x2048", "(Extrapolation) 2048x1024"]
num_sampling_steps: 60 # range: 1-1000
cfg_scale: 4. # range: 1-20
solver: "euler" # option: ["euler", "dopri5", "dopri8"]
t_shift: 4 # range: 1-20 (int only)
ntk_scaling: true # option: true or false
proportional_attn: true # option: true or false
seed: 0 # range: any number
```
- model:
- `ckpt`: lumina-next-t2i checkpoint path from [huggingface repo](https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I) containing `consolidated*.pth` and `model_args.pth`.
- `ckpt_lm`: LLM checkpoint.
- `token`: huggingface access token for accessing gated repo.
- transport:
- `path_type`: the type of path for transport: 'Linear', 'GVP' (Geodesic Vector Pursuit), or 'VP' (Vector Pursuit).
- `prediction`: the prediction model for the transport dynamics.
- `loss_weight`: the weighting of different components in the loss function, can be 'velocity' for dynamic modeling, 'likelihood' for statistical consistency, or None for no weighting
- `sample_eps`: sampling in the transport model.
- `train_eps`: training to stabilize the learning process.
- ode:
- `atol`: absolute tolerance for the ODE solver.
- `rtol`: relative tolerance for the ODE solver.
- `reverse`: whether to run the ODE solver in reverse. (option: true or false)
- `likelihood`: Enable calculation of likelihood during the ODE solving process.
- infer
- `resolution`: generated image resolution.
- `num_sampling_steps`: sampling step for generating image.
- `cfg_scale`: classifier-free guidance scale factor.
- `solver`: solver for image generation.
- `t_shift`: time shift factor.
- `ntk_scaling`: ntk rope scaling factor.
- `proportional_attn`: Whether to use proportional attention.
- `seed`: random initialization seeds.
4. Run with the CLI
inference command:
```bash
lumina_next infer -c <config_path> <caption_here> <output_dir>
```
e.g. Demo command:
```bash
cd lumina_next_t2i
lumina_next infer -c "config/infer/settings.yaml" "a snowman of ..." "./outputs"
```
### Web Demo
To host a local gradio demo for interactive inference, run the following command:
```bash
# `/path/to/ckpt` should be a directory containing `consolidated*.pth` and `model_args.pth`
# default
python -u demo.py --ckpt "/path/to/ckpt"
# the demo by default uses bf16 precision. to switch to fp32:
python -u demo.py --ckpt "/path/to/ckpt" --precision fp32
# use ema model
python -u demo.py --ckpt "/path/to/ckpt" --ema
```
|
SidXXD/dog_clean
|
SidXXD
| 2024-05-15T15:13:40Z | 382 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-05-15T14:18:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - SidXXD/dog_clean
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. You can find some example images below.
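A minimal `diffusers` sketch for trying these weights is shown below. It assumes the repository contains a standard learned embeddings file produced by the diffusers textual inversion script, and the `<dog>` placeholder token is only an assumption; replace it with the token actually used during training.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the embedding was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repository
pipe.load_textual_inversion("SidXXD/dog_clean")

# "<dog>" is an assumed placeholder token, not confirmed by this card
image = pipe("a photo of <dog> in the snow").images[0]
image.save("dog.png")
```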
|
mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF
|
mradermacher
| 2024-05-24T05:33:21Z | 382 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"jdqwoi/TooManyMixRolePlay-7B-Story",
"jdqwoi/02",
"en",
"base_model:jdqwoi/TooManyMixRolePlay-7B-Story_V1",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T04:46:05Z |
---
base_model: jdqwoi/TooManyMixRolePlay-7B-Story_V1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- jdqwoi/TooManyMixRolePlay-7B-Story
- jdqwoi/02
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jdqwoi/TooManyMixRolePlay-7B-Story_V1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TooManyMixRolePlay-7B-Story_V1-GGUF/resolve/main/TooManyMixRolePlay-7B-Story_V1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/anime-bulldozer-v2-sdxl
|
John6666
| 2024-05-26T11:25:26Z | 382 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-26T11:20:39Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---
Original model is [here](https://civitai.com/models/264323?modelVersionId=315336).
|
mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF
|
mradermacher
| 2024-06-15T14:10:58Z | 382 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:migtissera/Tess-v2.5.2-Qwen2-72B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-15T02:31:27Z |
---
base_model: migtissera/Tess-v2.5.2-Qwen2-72B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE
license_name: qwen2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/migtissera/Tess-v2.5.2-Qwen2-72B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
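For the split quants in the table below (Q5_K_S and larger), the parts only need to be concatenated in order into a single `.gguf` file before use. A minimal Python sketch, equivalent to `cat part1 part2 > file.gguf`, using the Q6_K part names from the table:
```python
import shutil

parts = [
    "Tess-v2.5.2-Qwen2-72B.Q6_K.gguf.part1of2",
    "Tess-v2.5.2-Qwen2-72B.Q6_K.gguf.part2of2",
]

with open("Tess-v2.5.2-Qwen2-72B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy each part in order
```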
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.IQ3_XS.gguf) | IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.IQ3_M.gguf) | IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-v2.5.2-Qwen2-72B-GGUF/resolve/main/Tess-v2.5.2-Qwen2-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
afrideva/dolphin-2.9.3-qwen2-0.5b-GGUF
|
afrideva
| 2024-06-17T02:06:09Z | 382 | 1 | null |
[
"gguf",
"generated_from_trainer",
"axolotl",
"ggml",
"quantized",
"text-generation",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"base_model:cognitivecomputations/dolphin-2.9.3-qwen2-0.5b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-06-17T01:30:26Z |
---
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
inference: true
license: apache-2.0
model_creator: cognitivecomputations
model_name: dolphin-2.9.3-qwen2-0.5b
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- axolotl
- gguf
- ggml
- quantized
---
# dolphin-2.9.3-qwen2-0.5b-GGUF
Quantized GGUF model files for [dolphin-2.9.3-qwen2-0.5b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-0.5b) from [cognitivecomputations](https://huggingface.co/cognitivecomputations)
## Original Model Card:
# Dolphin 2.9.3 Qwen2 0.5B 🐬
Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.3:
- [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node
This model is based on Qwen2-0.5b and is governed by the Apache-2.0 license.
The base model has 128k context, and the full-weight fine-tuning was with 16k sequence length.
Due to the complexities of fine-tuning smaller models on datasets created by/for larger models, we removed the coding, function calling and systemchat-multilingual datasets when tuning these models.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
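As an illustration that is not part of the original card, the template can be applied with `llama-cpp-python` roughly as follows; the GGUF file name is an assumption and should be replaced with an actual file from this repository:
```python
from llama_cpp import Llama

# Model path is an assumption; point it at a downloaded GGUF file from this repo
llm = Llama(model_path="dolphin-2.9.3-qwen2-0.5b.Q8_0.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```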
Dolphin-2.9.3 has a variety of instruction-following and conversational skills.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed under Apache-2.0. We grant permission for any use, including commercial, in accordance with said license. Dolphin was trained on data generated from GPT4, among other models.
Evals:
<img src="https://i.ibb.co/Ct0McfK/7872e88d-e9bf-4640-a9af-294e3127f87a.png" width="600" />
|
Emran/ClinicalBERT_ICD10_Categories
|
Emran
| 2021-10-12T17:42:10Z | 381 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
Entry not found
|
timm/resnetv2_50d_evos.ah_in1k
|
timm
| 2024-02-10T23:35:20Z | 381 | 1 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1603.05027",
"arxiv:2004.02967",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-03-22T20:55:32Z |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for resnetv2_50d_evos.ah_in1k
A ResNet-V2 (pre-activation ResNet) image classification model. Trained on ImageNet-1k by Ross Wightman in `timm` using ResNet strikes back (RSB) `A1` based recipe.
This model uses:
* A 3x3 3-layer stem, avg-pool in shortcut downsample.
* EvoNorm-S0 normalization-activation layers instead of Batch Normalization with ReLU activations.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.6
- GMACs: 4.3
- Activations (M): 11.9
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- Identity Mappings in Deep Residual Networks: https://arxiv.org/abs/1603.05027
- Evolving Normalization-Activation Layers: https://arxiv.org/abs/2004.02967
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetv2_50d_evos.ah_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_50d_evos.ah_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetv2_50d_evos.ah_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
```bibtex
@article{He2016,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Identity Mappings in Deep Residual Networks},
journal = {arXiv preprint arXiv:1603.05027},
year = {2016}
}
```
```bibtex
@article{liu2020evolving,
title={Evolving normalization-activation layers},
author={Liu, Hanxiao and Brock, Andy and Simonyan, Karen and Le, Quoc},
journal={Advances in Neural Information Processing Systems},
volume={33},
pages={13539--13550},
year={2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
timm/resnetrs152.tf_in1k
|
timm
| 2024-02-10T23:40:42Z | 381 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:2103.07579",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-04-05T18:46:38Z |
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnetrs152.tf_in1k
A ResNetRS-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k by paper authors in Tensorflow.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.6
- GMACs: 15.6
- Activations (M): 30.8
- Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
- Revisiting ResNets: Improved Training and Scaling Strategies: https://arxiv.org/abs/2103.07579
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/resnet
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('resnetrs152.tf_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs152.tf_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 128, 128])
# torch.Size([1, 256, 64, 64])
# torch.Size([1, 512, 32, 32])
# torch.Size([1, 1024, 16, 16])
# torch.Size([1, 2048, 8, 8])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'resnetrs152.tf_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 8, 8) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
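For instance, a minimal sketch for loading the published accuracy results behind this table into pandas (the file name and column names are assumptions based on the current layout of the timm results folder):

```python
import pandas as pd

# Assumed file name and columns; check the timm results folder for the current layout.
url = ("https://raw.githubusercontent.com/huggingface/pytorch-image-models/"
       "main/results/results-imagenet.csv")
df = pd.read_csv(url)

# Keep ResNet-RS checkpoints and rank them by ImageNet top-1 accuracy.
resnet_rs = df[df["model"].str.startswith("resnetrs")]
print(resnet_rs.sort_values("top1", ascending=False)[
    ["model", "img_size", "top1", "top5", "param_count"]
])
```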
## Citation
```bibtex
@article{bello2021revisiting,
title={Revisiting ResNets: Improved Training and Scaling Strategies},
author={Irwan Bello and William Fedus and Xianzhi Du and Ekin D. Cubuk and Aravind Srinivas and Tsung-Yi Lin and Jonathon Shlens and Barret Zoph},
journal={arXiv preprint arXiv:2103.07579},
year={2021}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Jzuluaga/accent-id-commonaccent_xlsr-en-english
|
Jzuluaga
| 2023-08-26T17:10:16Z | 381 | 6 |
speechbrain
|
[
"speechbrain",
"audio-classification",
"embeddings",
"Accent Identification",
"pytorch",
"wav2vec2",
"XLSR",
"CommonAccent",
"English",
"en",
"dataset:CommonVoice",
"arxiv:2305.18283",
"arxiv:2106.04624",
"license:mit",
"region:us"
] |
audio-classification
| 2023-08-04T22:28:51Z |
---
language:
- en
thumbnail: null
tags:
- audio-classification
- speechbrain
- embeddings
- Accent Identification
- pytorch
- wav2vec2
- XLSR
- CommonAccent
- English
license: mit
datasets:
- CommonVoice
metrics:
- Accuracy
widget:
- example_title: USA
src: >-
https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/us_1.wav
- example_title: Scotland
src: >-
https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/scotland_1.wav
- example_title: Malaysia
src: >-
https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/malaysia_1.wav
- example_title: Philippines
src: >-
https://huggingface.co/Jzuluaga/accent-id-commonaccent_xlsr-en-english/resolve/main/data/philippines_1.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on CommonVoice
**English Accent Classifier with XLSR model**
**Abstract**:
Despite the recent advancements in Automatic Speech Recognition (ASR), the recognition of accented speech remains a dominant problem. In order to create more inclusive ASR systems, research has shown that the integration of accent information, as part of a larger ASR framework, can lead to the mitigation of accented speech errors. We address multilingual accent classification through the ECAPA-TDNN and Wav2Vec 2.0/XLSR architectures, which have been proven to perform well on a variety of speech-related downstream tasks. We introduce a simple-to-follow recipe aligned to the SpeechBrain toolkit for accent classification based on Common Voice 7.0 (English) and Common Voice 11.0 (Italian, German, and Spanish). Furthermore, we establish a new state of the art for English accent classification with as high as 95% accuracy. We also study the internal categorization of the Wav2Vec 2.0 embeddings through t-SNE, noting that there is a level of clustering based on phonological similarity.
This repository provides all the necessary tools to perform accent identification from speech recordings with [SpeechBrain](https://github.com/speechbrain/speechbrain).
The system uses a model pretrained on the CommonAccent dataset in English (16 accents). This system is based on the CommonLanguage Recipe located here: https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
The provided system can recognize the following 16 accents from short speech recordings in English (EN):
```
- us
- england
- australia
- indian
- canada
- bermuda
- scotland
- african
- ireland
- newzealand
- wales
- malaysia
- philippines
- singapore
- hongkong
- southatlandtic
```
<a href="https://github.com/JuanPZuluaga/accent-recog-slt2022"> <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green"> </a> Github repository link: https://github.com/JuanPZuluaga/accent-recog-slt2022
**NOTE**: due to an incompatibility between the model and the current SpeechBrain interfaces, we cannot offer the Inference API. Please follow the steps in **"Perform Accent Identification from Speech Recordings"** to use this English Accent ID model.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
## Pipeline description
This system is composed of a fine-tuned XLSR model coupled with statistical pooling. A classifier, trained with NLL Loss, is applied on top of that.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
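For instance, a minimal sketch of doing that normalization by hand before calling *encode_batch* or *classify_batch* (`my_recording.wav` is a hypothetical local file; the classifier itself is loaded as shown further below):

```python
import torchaudio

# Bring an arbitrary recording to what the model expects:
# 16 kHz sampling rate, single (mono) channel.
signal, sr = torchaudio.load("my_recording.wav")  # hypothetical local file
if signal.shape[0] > 1:  # average stereo channels down to mono
    signal = signal.mean(dim=0, keepdim=True)
if sr != 16000:
    signal = torchaudio.functional.resample(signal, orig_freq=sr, new_freq=16000)

# `signal` (shape [1, time]) can now be passed to classifier.encode_batch(signal)
# or classifier.classify_batch(signal), with `classifier` loaded as shown below.
```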
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Accent Identification from Speech Recordings
```python
import torchaudio
from speechbrain.pretrained.interfaces import foreign_class
classifier = foreign_class(source="Jzuluaga/accent-id-commonaccent_xlsr-en-english", pymodule_file="custom_interface.py", classname="CustomEncoderWav2vec2Classifier")
# US Accent Example
out_prob, score, index, text_lab = classifier.classify_file('Jzuluaga/accent-id-commonaccent_xlsr-en-english/data/us.wav')
print(text_lab)
# Philippines Example
out_prob, score, index, text_lab = classifier.classify_file('Jzuluaga/accent-id-commonaccent_xlsr-en-english/data/philippines.wav')
print(text_lab)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when loading the model (i.e., in the `foreign_class` call shown above, or in `from_hparams` for standard interfaces).
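A minimal sketch (assuming, as with standard SpeechBrain interfaces, that `run_opts` is forwarded to the loaded class):

```python
from speechbrain.pretrained.interfaces import foreign_class

# Same loading call as above, with run_opts added so inference runs on the GPU.
classifier = foreign_class(
    source="Jzuluaga/accent-id-commonaccent_xlsr-en-english",
    pymodule_file="custom_interface.py",
    classname="CustomEncoderWav2vec2Classifier",
    run_opts={"device": "cuda"},
)
```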
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Clone our repository in https://github.com/JuanPZuluaga/accent-recog-slt2022:
```bash
git clone https://github.com/JuanPZuluaga/accent-recog-slt2022
cd CommonAccent/accent_id
python train_w2v2.py hparams/train_w2v2.yaml
```
You can find our training results (models, logs, etc) in this repository's `Files and versions` page.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Cite our work: CommonAccent
If you find this work useful, please cite it as:
```bibtex
@article{zuluaga2023commonaccent,
title={CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on Common Voice},
author={Zuluaga-Gomez, Juan and Ahmed, Sara and Visockas, Danielius and Subakan, Cem},
journal={Interspeech 2023},
url={https://arxiv.org/abs/2305.18283},
year={2023}
}
```
#### Cite XLSR model
```bibtex
@article{conneau2020unsupervised,
title={Unsupervised cross-lingual representation learning for speech recognition},
author={Conneau, Alexis and Baevski, Alexei and Collobert, Ronan and Mohamed, Abdelrahman and Auli, Michael},
journal={arXiv preprint arXiv:2006.13979},
year={2020}
}
```
# **Cite SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
Universal-NER/UniNER-7B-definition
|
Universal-NER
| 2023-08-11T18:13:24Z | 381 | 28 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2308.03279",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-07T04:43:42Z |
---
license: cc-by-nc-4.0
language:
- en
---
# UniNER-7B-definition
**Description**: A UniNER-7B model trained from LLaMA-7B using the [Pile-NER-definition data](https://huggingface.co/datasets/Universal-NER/Pile-NER-definition) without human-labeled data. The data was collected by prompting gpt-3.5-turbo-0301 to label entities from passages and provide short-sentence definitions. The data collection prompt is as follows:
<div style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Instruction:</strong><br/>
Given a paragraph, your task is to extract all entities and concepts,
and define their type using a short sentence. The output should be in the following format:
[("entity", "definition of entity type in a short sentence"), ... ]
</div>
Check our [paper](https://arxiv.org/abs/2308.03279) for more information, and see our [repo](https://github.com/universal-ner/universal-ner) for instructions on how to use the model.
## Comparison with [UniNER-7B-type](https://huggingface.co/Universal-NER/UniNER-7B-type)
The UniNER-7B-type model, trained on Pile-NER-type, excels in recognizing common and short NER tags (e.g., person, location) and performs better on NER datasets. On the other hand, UniNER-7B-definition demonstrates superior capabilities in understanding short-sentence definitions of entity types. Additionally, it exhibits enhanced robustness against variations in type paraphrasing.
## Inference
The template for inference instances is as follows:
<div style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Prompting template:</strong><br/>
A virtual assistant answers questions from a user based on the provided text.<br/>
USER: Text: <span style="color: #d73a49;">{Fill the input text here}</span><br/>
ASSISTANT: I've read this text.<br/>
USER: What describes <span style="color: #d73a49;">{Fill the entity type here}</span> in the text?<br/>
ASSISTANT: <span style="color: #0366d6;">(model's predictions in JSON format)</span><br/>
</div>
### Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
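A minimal sketch of building one such instance and querying the model with Hugging Face Transformers (the example sentence, entity-type definition, and generation settings are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Universal-NER/UniNER-7B-definition"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def build_prompt(text: str, entity_type: str) -> str:
    # One entity type per instance, following the template above.
    return (
        "A virtual assistant answers questions from a user based on the provided text.\n"
        f"USER: Text: {text}\n"
        "ASSISTANT: I've read this text.\n"
        f"USER: What describes {entity_type} in the text?\n"
        "ASSISTANT:"
    )

prompt = build_prompt(
    "Steve Jobs co-founded Apple in Cupertino in 1976.",  # illustrative input
    "a company or organization",                          # short-sentence type definition
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```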
## License
This model and its associated data are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. They are primarily used for research purposes.
## Citation
```bibtex
@article{zhou2023universalner,
title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition},
author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
year={2023},
eprint={2308.03279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
morph-labs/rift-coder-v0-7b-gguf
|
morph-labs
| 2023-10-04T17:27:42Z | 381 | 30 | null |
[
"gguf",
"code",
"code llama",
"text-generation",
"en",
"license:llama2",
"region:us"
] |
text-generation
| 2023-09-28T18:42:21Z |
---
license: llama2
language:
- en
pipeline_tag: text-generation
tags:
- code
- code llama
---
# **Rift Coder 7B**

## Table of Contents
1. **<a href="https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf#model-summary" target="_blank">Model Summary</a>**
2. **<a href="https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf#uses" target="_blank">Uses</a>**
3. **<a href="https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf#installation-and-getting-started" target="_blank">Installation and Getting Started</a>**
4. **<a href="https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf#contact" target="_blank">Contact</a>**
5. **<a href="https://huggingface.co/morph-labs/rift-coder-v0-7b-gguf#ethical-considerations-and-limitations" target="_blank">Ethical Considerations & Limitations</a>**
# **Model Summary**
**Rift Coder 7B** is a finetune of **<a href="https://glaive.ai/" target="_blank">Glaive AI's</a>** model **<a href="https://huggingface.co/glaiveai/glaive-coder-7b" target="_blank">glaive-coder-7b</a>**. We trained on a representative collection of indexed repositories for Python and TypeScript.
Rift Coder 7B is offered in 4-bit and 8-bit quantization.
This model is designed to excel at contextual code editing while maintaining the ability to run on your device. It works best in **<a href="https://www.morph.so" target="_blank">Morph Labs'</a>** VS Code extension, **<a href="https://marketplace.visualstudio.com/items?itemName=Morph.rift-vscode" target="_blank">Rift</a>**, with our Code Editing agent.
**<a href="https://github.com/morph-labs/rift" target="_blank">Rift (GitHub link)</a>** was built to be open-source, private, secure, and on-device. **Rift Coder 7B** helps make that possible, as it is designed to be run on-device.
- **Developed by:** **<a href="https://www.morph.so" target="_blank">Morph Labs</a>**
- **Language(s) (NLP):** English. Fine-Tuned on Python & TypeScript.
- **License:** **<a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a>**
- **Model Type:** Derivative of Fine-Tuned Model ultimately based off of **<a href="https://huggingface.co/codellama/CodeLlama-7b-hf" target="_blank">CodeLlama-7b-hf</a>**
## Uses
We suggest usage of the model when working with Python or TypeScript, as our fine-tuning occurred with those contexts in mind. The model may find utility in other domains.
## Installation and Getting Started
1) Download **<a href="https://marketplace.visualstudio.com/items?itemName=Morph.rift-vscode" target="_blank">Rift</a>**
2) Preferences: Open User Settings
3) Navigate to Rift in User Settings (under Extensions)
4) Change the Rift: Code Edit Model selection to llama:rift-coder-v0-7b-gguf
5) This will commence the download. When the download is complete, you're ready to use Rift Coder 7B!
## Contact
**<a href="https://forms.gle/fwZhARyzrGEz9t4Q6" target="_blank">Contact Form</a>**
## Ethical Considerations and Limitations
Rift Coder 7B, as with all Large Language Models, carries inherent risks with use. Testing has been conducted solely in English, and it has not been, and could not be, fully comprehensive of all use scenarios. The model may be prone to producing inaccurate, unsatisfactory, or otherwise undesirable outputs, and thus we encourage all developers to test and tune to their specific use case prior to deployment.
We encourage you to check out **[Meta's Responsible Use Guide for Llama 2](https://ai.meta.com/llama/responsible-user-guide)**.
|
Henk717/echidna-tiefigther-25-gguf
|
Henk717
| 2023-10-30T14:36:12Z | 381 | 3 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-10-30T13:09:43Z |
---
license: cc-by-nc-4.0
---
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: NeverSleep/Echidna-13b-v0.3
parameters:
weight: 1.0
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 0.25
dtype: float16
```
|
frankjoshua/juggernautXL_version6Rundiffusion
|
frankjoshua
| 2023-11-03T21:26:42Z | 381 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-11-03T20:37:51Z |
Entry not found
|
l3utterfly/mistral-7b-v0.1-layla-v4-gguf
|
l3utterfly
| 2024-03-01T11:33:59Z | 381 | 14 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-03-01T03:19:40Z |
---
license: apache-2.0
---
GGUF + quants for: https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4
|
bartowski/Yi-9B-Coder-GGUF
|
bartowski
| 2024-06-25T15:25:13Z | 381 | 1 |
transformers
|
[
"transformers",
"gguf",
"code",
"llama",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-13T13:33:43Z |
---
tags:
- code
- llama
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
license: apache-2.0
---
## Llamacpp Quantizations of Yi-9B-Coder
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2405">b2405</a> for quantization.
Original model: https://huggingface.co/TechxGenus/Yi-9B-Coder
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-9B-Coder-Q8_0.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q8_0.gguf) | Q8_0 | 9.38GB | Extremely high quality, generally unneeded but max available quant. |
| [Yi-9B-Coder-Q6_K.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q6_K.gguf) | Q6_K | 7.24GB | Very high quality, near perfect, *recommended*. |
| [Yi-9B-Coder-Q5_K_M.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q5_K_M.gguf) | Q5_K_M | 6.25GB | High quality, very usable. |
| [Yi-9B-Coder-Q5_K_S.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q5_K_S.gguf) | Q5_K_S | 6.10GB | High quality, very usable. |
| [Yi-9B-Coder-Q5_0.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q5_0.gguf) | Q5_0 | 6.10GB | High quality, older format, generally not recommended. |
| [Yi-9B-Coder-Q4_K_M.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, similar to 4.25 bpw. |
| [Yi-9B-Coder-Q4_K_S.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q4_K_S.gguf) | Q4_K_S | 5.07GB | Slightly lower quality with small space savings. |
| [Yi-9B-Coder-Q4_0.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q4_0.gguf) | Q4_0 | 5.03GB | Decent quality, older format, generally not recommended. |
| [Yi-9B-Coder-Q3_K_L.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q3_K_L.gguf) | Q3_K_L | 4.69GB | Lower quality but usable, good for low RAM availability. |
| [Yi-9B-Coder-Q3_K_M.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q3_K_M.gguf) | Q3_K_M | 4.32GB | Even lower quality. |
| [Yi-9B-Coder-Q3_K_S.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q3_K_S.gguf) | Q3_K_S | 3.89GB | Low quality, not recommended. |
| [Yi-9B-Coder-Q2_K.gguf](https://huggingface.co/bartowski/Yi-9B-Coder-GGUF/blob/main/Yi-9B-Coder-Q2_K.gguf) | Q2_K | 3.35GB | Extremely low quality, *not* recommended. |
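To fetch a single file rather than the whole repository, a minimal sketch with `huggingface_hub` (pick whichever filename from the table suits your hardware):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="bartowski/Yi-9B-Coder-GGUF",
    filename="Yi-9B-Coder-Q4_K_M.gguf",
)
print(path)
```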
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
TheDrummer/Moistral-11B-v2-Dried-GGUF
|
TheDrummer
| 2024-04-28T06:20:39Z | 381 | 7 | null |
[
"gguf",
"not-for-all-audiences",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-29T16:10:56Z |
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
# Original model
https://huggingface.co/TheDrummer/Moistral-11B-v2
### tldr;
I tried merging Moistral with its base model and got VERY good results with logic & writing. Try them out!
If you do try it out, please let me know your thoughts! Just tell me which percentages you tried and how they performed. The future of Moistral is in your hands!
## Moistral 11B v2, dried up
unmerged model: https://huggingface.co/TheDrummer/Moistral-11B-v2
### What is this?
GGUF merges of Moistral 11B v2 and Fimbulvetr v2.
### Why merge?
Simply put, the original Moistral v2 can get too moist. It works for some stories, but sometimes you want it to cook.
### Which one do I pick?
I'm releasing 5 versions of the merge. The % in the filename represents how much "Fimbulvetr v2" I merged into my finetuned model. The higher the percent, the less **moist** there is.
Assessment: TBD
Nutshell: All of them are very coherent. Lower percentages = More moist logic & writing.
Added observation: It seems like no matter how moist the situation is, the character reactions are often grounded / realistic. This applies even to 2.5%.
### "I found the perfect merge ratio!"
Great! Let me know which one and why. This WILL affect future development.
|
mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF
|
mradermacher
| 2024-05-06T04:34:55Z | 381 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"sv",
"da",
"no",
"is",
"en",
"base_model:timpal0l/dolphin-2.9-llama3-8b-flashback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-24T05:17:11Z |
---
base_model: timpal0l/dolphin-2.9-llama3-8b-flashback
language:
- sv
- da
- no
- is
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/timpal0l/dolphin-2.9-llama3-8b-flashback
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
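One common way to run a single-file quant from this repo is the `llama-cpp-python` bindings; a minimal sketch (the chosen file, context size, and prompt are illustrative assumptions):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="dolphin-2.9-llama3-8b-flashback.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)
out = llm("Write a short greeting in Swedish.", max_tokens=64)
print(out["choices"][0]["text"])
```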
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9-llama3-8b-flashback-GGUF/resolve/main/dolphin-2.9-llama3-8b-flashback.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
qwp4w3hyb/wizardcoder-33b-v1.1-mirror-iMat-GGUF
|
qwp4w3hyb
| 2024-05-07T17:17:47Z | 381 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-04T23:01:17Z |
Entry not found
|
DevWorld/Gemago-2b-Q4_K_M-GGUF
|
DevWorld
| 2024-05-20T04:40:17Z | 381 | 0 | null |
[
"gguf",
"gemma",
"llama-cpp",
"gguf-my-repo",
"translation",
"ko",
"en",
"dataset:traintogpb/aihub-koen-translation-integrated-base-10m",
"license:apache-2.0",
"license:gemma",
"region:us"
] |
translation
| 2024-05-05T13:22:10Z |
---
language:
- ko
- en
license:
- apache-2.0
- gemma
tags:
- gemma
- llama-cpp
- gguf-my-repo
datasets:
- traintogpb/aihub-koen-translation-integrated-base-10m
pipeline_tag: translation
widget:
- text: 'Korean:
나라의 말이 중국과 달라 문자와 서로 통하지 아니한다.
English:
'
example_title: K2E
- text: 'English:
Mr. and Mrs. Dursley were proud to say that they were perfectly normal.
Korean:
'
example_title: E2K
inference:
parameters:
max_length: 200
---
# DevWorld/Gemago-2b-Q4_K_M-GGUF
This model was converted to GGUF format from [`DevWorld/Gemago-2b`](https://huggingface.co/DevWorld/Gemago-2b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DevWorld/Gemago-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DevWorld/Gemago-2b-Q4_K_M-GGUF --model gemago-2b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DevWorld/Gemago-2b-Q4_K_M-GGUF --model gemago-2b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemago-2b.Q4_K_M.gguf -n 128
```
|
mradermacher/UnFimbulvetr-20B-GGUF
|
mradermacher
| 2024-05-08T04:18:08Z | 381 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:KaraKaraWitch/UnFimbulvetr-20B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-07T21:01:41Z |
---
base_model: KaraKaraWitch/UnFimbulvetr-20B
language:
- en
library_name: transformers
no_imatrix: nan
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/KaraKaraWitch/UnFimbulvetr-20B
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q2_K.gguf) | Q2_K | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.IQ3_XS.gguf) | IQ3_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q3_K_S.gguf) | Q3_K_S | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.IQ3_S.gguf) | IQ3_S | 8.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.IQ3_M.gguf) | IQ3_M | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q3_K_M.gguf) | Q3_K_M | 9.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q3_K_L.gguf) | Q3_K_L | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.IQ4_XS.gguf) | IQ4_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q4_K_S.gguf) | Q4_K_S | 11.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q5_K_S.gguf) | Q5_K_S | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q5_K_M.gguf) | Q5_K_M | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q6_K.gguf) | Q6_K | 16.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/UnFimbulvetr-20B-GGUF/resolve/main/UnFimbulvetr-20B.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality |
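To fetch a single file from the table above rather than cloning the whole repository, the `huggingface-cli` tool from the `huggingface_hub` package works; a minimal sketch, with Q4_K_M as an arbitrary pick from the table:
```bash
# Install the Hugging Face Hub CLI and download one quant file into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/UnFimbulvetr-20B-GGUF \
  UnFimbulvetr-20B.Q4_K_M.gguf --local-dir .
```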
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
netcat420/MFANNv0.10
|
netcat420
| 2024-05-21T22:05:10Z | 381 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"en",
"dataset:netcat420/MFANN",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-classification
| 2024-05-15T01:12:48Z |
---
library_name: transformers
license: llama3
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-classification
---
MFANN 8b version 0.10

Fine-tuned on the MFANN dataset as it stood on 5/14/24; the dataset is continually expanding.
|
ukung/Phi-3-mini-4k-instruct-GGUF
|
ukung
| 2024-05-20T01:55:58Z | 381 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T01:06:20Z |
---
license: apache-2.0
---
|
ukung/Nusantara-2.7b-Indo-Chat-v0.2-GGUF
|
ukung
| 2024-05-20T04:14:23Z | 381 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T03:38:55Z |
---
license: apache-2.0
---
|
mradermacher/L3-70B-daybreak-v0.4-i1-GGUF
|
mradermacher
| 2024-05-27T02:44:40Z | 381 | 0 |
transformers
|
[
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/L3-70B-daybreak-v0.4",
"endpoints_compatible",
"region:us"
] | null | 2024-05-26T10:11:56Z |
---
base_model: crestf411/L3-70B-daybreak-v0.4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/crestf411/L3-70B-daybreak-v0.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-daybreak-v0.4-i1-GGUF/resolve/main/L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
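The Q6_K quant above ships as two parts; as the READMEs linked under Usage explain, the parts simply need to be concatenated into one file before loading (a sketch, assuming both parts are already in the current directory):
```bash
# Join the two Q6_K parts into a single GGUF file, then drop the parts to free disk space.
cat L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part1of2 \
    L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part2of2 \
    > L3-70B-daybreak-v0.4.i1-Q6_K.gguf
rm L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part1of2 L3-70B-daybreak-v0.4.i1-Q6_K.gguf.part2of2
```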
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
davidhajdu/dab-detr-resnet-50
|
davidhajdu
| 2024-06-25T15:42:32Z | 381 | 0 |
transformers
|
[
"transformers",
"pytorch",
"dab-detr",
"endpoints_compatible",
"region:us"
] | null | 2024-05-29T10:17:26Z |
Entry not found
|
RichardErkhov/Weyaxi_-_Nebula-7B-gguf
|
RichardErkhov
| 2024-05-30T02:52:02Z | 381 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-05-29T23:45:50Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nebula-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/Nebula-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_Nebula-7B-gguf/blob/main/Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
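Any of the quants above can also be served over HTTP with llama.cpp's built-in server; a minimal sketch using the Q4_K_M file from the table (the port, context size, and prompt are illustrative assumptions):
```bash
# Start the llama.cpp HTTP server with the Q4_K_M quant...
llama-server -m Nebula-7B.Q4_K_M.gguf -c 2048 --port 8080 &
# ...then request a completion from its /completion endpoint.
curl http://localhost:8080/completion -H "Content-Type: application/json" \
  -d '{"prompt": "Briefly explain what a nebula is.", "n_predict": 128}'
```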
Original model description:
---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Nebula-7b
Original weights of Nebula-7B. Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Lora Weights
You can access original lora weights from here:
[PulsarAI/Nebula-7B-Lora](https://huggingface.co/PulsarAI/Nebula-7B-Lora)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__Nebula-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.93 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 83.46 |
| MMLU (5-shot) | 57.0 |
| TruthfulQA (0-shot) | 45.56 |
| Winogrande (5-shot) | 76.4 |
| GSM8K (5-shot) | 14.86 |
| DROP (3-shot) | 40.96 |
|
mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF
|
mradermacher
| 2024-05-30T05:30:30Z | 381 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DaveGergern/13B-Psyfighter2-Erebus3-Slerp",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-05-30T04:52:32Z |
---
base_model: DaveGergern/13B-Psyfighter2-Erebus3-Slerp
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DaveGergern/13B-Psyfighter2-Erebus3-Slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q2_K.gguf) | Q2_K | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.IQ3_XS.gguf) | IQ3_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.IQ3_S.gguf) | IQ3_S | 4.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q3_K_S.gguf) | Q3_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.IQ3_M.gguf) | IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q3_K_M.gguf) | Q3_K_M | 5.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q3_K_L.gguf) | Q3_K_L | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.IQ4_XS.gguf) | IQ4_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q4_K_S.gguf) | Q4_K_S | 6.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q4_K_M.gguf) | Q4_K_M | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q5_K_S.gguf) | Q5_K_S | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q5_K_M.gguf) | Q5_K_M | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q6_K.gguf) | Q6_K | 8.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q8_0.gguf) | Q8_0 | 11.2 | fast, best quality |
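Each link in the table above resolves to a direct file URL, so a plain `wget` (or `curl -L -O`) is enough to grab a single quant; a sketch using the recommended Q4_K_M file:
```bash
# Direct download of the Q4_K_M quant linked in the table above.
wget https://huggingface.co/mradermacher/13B-Psyfighter2-Erebus3-Slerp-GGUF/resolve/main/13B-Psyfighter2-Erebus3-Slerp.Q4_K_M.gguf
```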
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
datek/Qwen-Qwen1.5-1.8B-1717455654
|
datek
| 2024-06-03T23:02:36Z | 381 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-06-03T23:00:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|