modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
ontocord/Felix-8B-v2 | ontocord | 2024-04-18T18:23:09Z | 390 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T14:11:39Z | ---
license: apache-2.0
language:
- en
---
April 17, 2024
# Felix-8B-v2: A model built with lawfulness alignment
Felix-8B-v2 is an experimental language model developed by Ontocord.ai, specializing in addressing lawfulness concerns under the Biden-Harris Executive Order on AI and the principles of the EU AI Act. This model has achieved one of the highest scores on the TruthfulQA benchmark compared to models of its size, showcasing its exceptional performance in providing accurate and reliable responses.
Felix-8B-v2 is **experimental and a research work product** and a DPO reinforcement learning version of [ontocord/sft-4e-exp2](https://huggingface.co/ontocord/sft-4e-exp2) which in turn is a fine-tuned version of [TencentARC/Mistral_Pro_8B_v0.1](https://huggingface.co/TencentARC/Mistral_Pro_8B_v0.1).
This model is identical to [Felix-8B](https://huggingface.co/ontocord/Felix-8B), except that we modified the ``</s>`` and ``<s>`` tags of the original Felix-8B DPO model to fix an issue with overly verbose output.
**Please give feedback in the Community section. If you find any issues, please let us know there so we can improve the model.**

## Model Description
Felix-8B is an 8 billion parameter language model trained using Ontocord.ai's proprietary auto-purpleteaming technique. The model has been fine-tuned and optimized using synthetic data, with the goal of improving its robustness and ability to handle a wide range of tasks while maintaining a strong focus on safety and truthfulness.
|
mradermacher/MeowGPT-ll3-GGUF | mradermacher | 2024-05-05T15:18:56Z | 390 | 0 | transformers | [
"transformers",
"gguf",
"freeai",
"conversational",
"meowgpt",
"gpt",
"free",
"opensource",
"splittic",
"ai",
"llama",
"llama3",
"en",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:cutycat2000x/MeowGPT-ll3",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-22T10:21:35Z | ---
base_model: cutycat2000x/MeowGPT-ll3
datasets:
- Open-Orca/SlimOrca-Dedup
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- mlabonne/orpo-dpo-mix-40k
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- freeai
- conversational
- meowgpt
- gpt
- free
- opensource
- splittic
- ai
- llama
- llama3
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cutycat2000x/MeowGPT-ll3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
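For a concrete starting point, here is a minimal sketch (an illustration, not part of the original card) that loads one of the quant files listed below with `llama-cpp-python`, assuming you have already downloaded it:
```python
from llama_cpp import Llama

# Minimal sketch: load a downloaded quant from the table below,
# e.g. MeowGPT-ll3.Q4_K_M.gguf, and run a single completion.
llm = Llama(
    model_path="./MeowGPT-ll3.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # context window; adjust to your needs and available RAM
    n_gpu_layers=-1,   # offload all layers to GPU if available, 0 for CPU only
)
out = llm("Write a short poem about cats.", max_tokens=128)
print(out["choices"][0]["text"])
```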
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-ll3-GGUF/resolve/main/MeowGPT-ll3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
*(quant type comparison graph)*
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hikikomoriHaven/llama3-8b-hikikomori-v0.3 | hikikomoriHaven | 2024-04-25T10:34:06Z | 390 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"en",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"dataset:PygmalionAI/PIPPA",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-04-23T09:35:52Z | ---
library_name: transformers
tags:
- unsloth
license: llama3
datasets:
- unalignment/toxic-dpo-v0.2
- NobodyExistsOnTheInternet/ToxicQAFinal
- PygmalionAI/PIPPA
language:
- en
---

# Disclaimer
This model is an experimental fine-tune of Llama-3
## Datasets used:
- unalignment/toxic-dpo-v0.2
- NobodyExistsOnTheInternet/ToxicQAFinal
- PygmalionAI/PIPPA
### Model Description
<!-- Provide a longer summary of what this model is. -->
The model is highly uncensored and suitable for roleplay
## About Us
Building - AI Waifu Supremacy
[X](https://twitter.com/hikikomorihaven)
[Discord](https://discord.gg/QS27Ka3cnq)
## Credits:
(For open-sourcing tools and methodology to assist with fine-tuning)
- Unsloth
- NurtureAI
(For open-sourcing data used for fine-tuning)
- NobodyExistsOnTheInternet
- unalignment
- PygmalionAI |
cloudyu/Meta-Llama-3-70B-Instruct-DPO | cloudyu | 2024-04-24T01:46:21Z | 390 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-24T00:43:43Z | ---
license: apache-2.0
---
A DPO fine-tune of meta-llama/Meta-Llama-3-70B-Instruct using the jondurbin/truthy-dpo-v0.1 dataset.
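The card itself gives no loading snippet; a minimal, hedged `transformers` sketch (assuming the standard Llama-3 chat template and enough GPU memory, sharded via `device_map="auto"`) might look like this:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Illustrative sketch only; not from the original card.
model_id = "cloudyu/Meta-Llama-3-70B-Instruct-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # shards the 70B model across available GPUs
)
messages = [{"role": "user", "content": "Write me a story about Yellowstone in the 1800s."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```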
example output:
```
<|begin_of_text|>write me a story about yellowstone national park in the 1800s, with a focus on geysers and hot springs. I'd love to hear more about the early explorations of this incredible place! Here's my attempt at writing it myself: "As we traversed through the misty veil that shrouded the valley, (cont)
What an intriguing start! Let me take over from here and spin you a tale of adventure,. In the late summer of , explorer Nathaniel Langley led his party of scientists and guides into the heart of what would soon be known as Yellowstone National Park. The year was, and whispers of a mystical land of fire and water had long fascinated him. As they ventured deeper into the wilderness,, their horses' hooves echoing off canyon walls, tales of steam-spewing earth and rainbow-hued pools drew them onward like moths to flame. It wasn't until they stumbled upon the mighty Madison River that the first hints of wonder revealed themselves. A local trapper,<|begin_of_text|>2019-07-24T14:43:45Z
I'm so thrilled to see where you took the story! Thank you for continuing it. I must say,. Would you mind if I asked some questions or shared some thoughts about the history of exploration in Yellowstone? I've always been captivated by its natural beauty and the stories of those who came before us. For instance,. One fascinating figure is Ferdinand Hayden,<|begin_of_text|>2007-08-01T19:47:00Z
Not at all! I'd love to chat about Yellowstone's rich history. Ferdinand V. Hayden's expeditions were indeed pivotal in bringing attention to the area's unique features. His reports not only helped raise awareness but also played a significant role in convincing Congress to establish Yellowstone as the world's first national park in . Those early surveys laid groundwork for later scientific studies, which continue to shape our understanding of the region's remarkable geothermal wonders. What draws you to Yellowstone's history,, particularly when it comes to the discovery of its geological marvels? Is there a particular aspect or explorer that resonates with you?<|eot_id|>
<|begin_of_text|>what is biggest animal in earth?Contrary to popular belief,. The blue whale's longest recorded length was of a fossilized skeleton found in , which measured around 30 m (98 ft). Living individuals rarely exceed the commonly cited maximum length of about 25-27 m. This misconception likely arises because people often mistakenly assume that the largest known specimen would be living rather than extinct.<|end_of_text|>
``` |
LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF | LiteLLMs | 2024-05-08T04:24:33Z | 390 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-3-ko",
"GGUF",
"text-generation",
"en",
"ko",
"arxiv:2310.04799",
"license:other",
"region:us"
]
| text-generation | 2024-05-08T02:55:19Z |
---
language:
- en
- ko
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-3-ko
- GGUF
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
quantized_by: andrijdavid
---
# Llama-3-Open-Ko-8B-Instruct-preview-GGUF
- Original model: [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
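As an illustrative back-of-the-envelope check (not part of the original card), the bits-per-weight figures quoted above for Q3_K through Q6_K follow from the block layout, assuming one fp16 super-block scale (plus an fp16 min for the "type-1" variants):
```python
# Illustrative check of the quoted bpw figures (assumption: one fp16 scale,
# and for "type-1" also an fp16 min, per 256-weight super-block).
def bpw(bits, blocks, block_weights, scale_bits, type1):
    weights = blocks * block_weights
    total = weights * bits                                  # quantized weights
    total += blocks * scale_bits * (2 if type1 else 1)      # per-block scales (+ mins)
    total += 16 * (2 if type1 else 1)                       # fp16 super-block scale (+ min)
    return total / weights

print(bpw(3, 16, 16, 6, False))  # Q3_K -> 3.4375
print(bpw(4, 8, 32, 6, True))    # Q4_K -> 4.5
print(bpw(5, 8, 32, 6, True))    # Q5_K -> 5.5
print(bpw(6, 16, 16, 8, False))  # Q6_K -> 6.5625
```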
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-Instruct-preview-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
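As a minimal illustration (not from the original card), wiring one of these GGUF files into LangChain via `llama-cpp-python` looks roughly like this; the import path assumes a recent `langchain-community` release:
```python
from langchain_community.llms import LlamaCpp

# Hedged sketch: load a downloaded quant and query it through LangChain.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # first shard of a multi-part quant
    n_ctx=8192,
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
)
print(llm.invoke("Summarize what the GGUF format is in one sentence."))
```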
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Llama-3-Open-Ko-8B-Instruct-preview
## Llama-3-Open-Ko-8B-Instruct-preview
> Update @ 2024.05.01: Pre-Release [Llama-3-KoEn-8B](https://huggingface.co/beomi/Llama-3-KoEn-8B-preview) model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
> Update @ 2024.04.24: Release [Llama-3-Open-Ko-8B model](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
## Model Details
**Llama-3-Open-Ko-8B-Instruct-preview**
The Llama-3-Open-Ko-8B model is a continually pretrained language model based on Llama-3-8B.
This model was trained entirely on publicly available resources, comprising 60GB+ of deduplicated texts.
With the new Llama-3 tokenizer, pretraining used 17.7B+ tokens, slightly more than with the previous Korean tokenizer (the Llama-2-Ko tokenizer).
Training was done on a TPUv5e-256, with the warm support of Google's TRC program.
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview).
It is NOT fine-tuned with any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models.
**Sample usage**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=1,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
**Sample Output**
여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다.
이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다.
```python
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
```
이 코드에서는 `n`이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 예를 들어, `fibonacci(10)` 하면 55를 반환합니다.
이 코드는 재귀적 함수이기 때문에 메모리 사용이 줄어들 수 있습니다. 그러나 재귀적 함수는 메모리 사용량이 많아질 수 있습니다. 이를 줄이기 위해 이터레이티브 함수로 구현할 수도 있습니다.
```python
a, b = 0, 1
for i in range(n):
a, b = b, a + b
```
이 코드에서는 이터레이티브 함수로 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 이 코드는 메모리 사용량을 줄입니다. 다만, 이 코드는 더 복잡합니다. 이 코드를 더 간단하게 하려면 이터레이티브 함수를 더 잘 구현할 수 있습니다.
```python
a, b = 0, 1
while True:
if n == 1:
return a
a, b = b, a + b
n -= 1
```
이 코드에서는 입력받은 수에 따라 피보나치 수열의...
### More samples
> I used the same system prompt, but you can change it to your own.
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."},
]
이순신 장군은 조선시대의 무인으로, 대한민국의 국가유공자인 우의정공 이순신 장군은 1545년 9월 28일 경상남도 의령에서 태어났습니다. 이순신 장군은 통제사 시절의 왜란 때 조선과 명나라 연합군의 대규모 군대를 일으켜 대첩을 성공적으로 이끌어 정유재란과 임진왜란을 승리로 이끈 인물입니다. 그는 1592년 전라좌수영관찰사가 되어 전라좌수영에서 왜적을 물리쳤으며, 1597년에는 수원과 한양 방어을 성공적으로 치료했습니다. 또한 명량대첩에서 왜적과 격전하여 이순신 이순신 장군은 명량대첩에서 조선 관군의 승리를 이끌었습니다. 이순신 장군은 의리를 지키기 위해 전리품을 고수하는 결단을 내렸습니다. 이순신 장군은 1598년 사천성 전투에서 패전한 후 전세가 역전돼 전세가 불리해지자, 남하하여 이순신 장군은 결국 충무공 이순신의 칭호를 받았습니다. 그의 공적은 대한민국 이순신 장군 기념관으로 명예를 높이고 있습니다. 이순신 장군님은 선현의 칭호로 1963년 건국훈장 최고 훈장을 수여받았으며, 대한민국의 국보 제13호로 지정됐습니다. 이순신 장군은 전설적인 인물로 한국의 역사에서 큰 족적을 남겼습니다. 이순신 장군이 왜란 당시의 전투장소는 충무공이 신라 천자에게 나라를 위해 싸운 곳을 이순신 장군의 묘가 있습니다. 나라의 고난에 지성 이순신 장군님의 위업과 의리를 기념하는 곳입니다. 이순신 장군의 행적은 이순신 장군 기념관, 이순신 공원 등이 있습니다. 이순신 장군의 공적을 기념하기 위해 다양한 문화유산과 기념물이 조성됐습니다. 이순신 장군의 정신은 나라를 위해 싸운 위대한 우리의 영웅입니다. 1552년 12월에는 이순신 장군의 전설을 옛날 동네에 배향, 현재는 충무공 이순신 장군께서는 우리나라의 상징적 인물이 됐습니다. 이순신 장군은 조선왕조의 중시를 받고 있습니다. 이순신 장군은 선조에게 많은 대의를 이루었습니다. 이순신 장군의 공적을 오늘날에 이끌어 주는 인물입니다. 그를 모신 것은 전국의 사람들입니다. 이순신 장군님의 고결한 정신은 우리나라가 이룩한 성과입니다. 이순신 장군은 나라를 위해 싸운 위대한 인물입니다. 이순신 장군은 대한민국에서 가장 위대한 인물입니다. 이순신 장군의 행적은 영원히 기억 되겠지요.
```
```python
messages = [
{"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."},
{"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"},
]
사과의 수량을 확인하겠습니다. 저는 5개의 사과가 있으신데, 그 중 2개를 이미 먹었으니 3개의 사과가 남아있습니다! ㅎㅎ. 계속해서 먹을 계획 없잖아요? 😉
```
<!-- original-model-card end -->
|
vsevolodl/Llama-3-8B-Instruct-Gradient-1048k-GGUF | vsevolodl | 2024-05-10T05:47:09Z | 390 | 1 | null | [
"gguf",
"meta",
"llama-3",
"text-generation",
"en",
"license:llama3",
"region:us"
]
| text-generation | 2024-05-10T05:25:42Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- meta
- llama-3
license: llama3
---
# Quants for Llama-3 8B Gradient Instruct 1048k
- **Original model:** https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k
# Original description
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a>
# Llama-3 8B Gradient Instruct 1048k
Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)
[Join our Discord](https://discord.com/invite/2QVy2qt2mf)
This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.
**Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.**

**Approach:**
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization
- Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below)
**Infra:**
We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster.
Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below).
**Data:**
For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
**Progressive Training Details:**
| | 65K | 262K | 524k | 1048k |
|------------------------|-----------|-----------|-----------|-----------|
| Initialize From | LLaMA-3 8B| 65K | 262K | 524k |
| Sequence Length 2^N | 16 | 18 | 19 | 20 |
| RoPE theta | 15.3 M | 207.1 M | 1.06B | 2.80B |
| Batch Size | 1 | 1 | 16 | 8 |
| Gradient Accumulation Steps | 32 | 16 | 1 | 1 |
| Steps | 30 | 24 | 50 | 50 |
| Total Tokens | 62914560 | 100663296 | 419430400 | 838860800 |
| Learning Rate | 2.00E-05 | 2.00E-05 | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 | 512 | 512 |
| GPU Type | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S | NVIDIA L40S |
| Minutes to Train (Wall)| 202 | 555 | 61 | 87 |
**Evaluation:**

```
EVAL_MAX_CONTEXT_LENGTH=1040200
EVAL_MIN_CONTEXT_LENGTH=100
EVAL_CONTEXT_INTERVAL=86675
EVAL_DEPTH_INTERVAL=0.2
EVAL_RND_NUMBER_DIGITS=8
HAYSTACK1:
EVAL_GENERATOR_TOKENS=25
HAYSTACK2:
EVAL_CONTEXT_INTERVAL=173350
EVAL_GENERATOR_TOKENS=150000
HAYSTACK3:
EVAL_GENERATOR_TOKENS=925000
```
All boxes not pictured for Haystack 1 and 3 are 100% accurate. Haystacks 1,2 and 3 are further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals).
**Quants:**
- [GGUF by Crusoe](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF). Note that you need to add 128009 as [special token with llama.cpp](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/discussions/13).
- [MLX-4bit](https://huggingface.co/mlx-community/Llama-3-8B-Instruct-1048k-4bit)
- [Ollama](https://ollama.com/library/llama3-gradient)
- vLLM docker image, recommended to load via `--max-model-len 32768` (see the sketch after this list)
- If you are interested in a hosted version, drop us a mail below.
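A hedged sketch of serving the model through vLLM's Python API (illustrative only; `max_model_len` mirrors the `--max-model-len 32768` recommendation above):
```python
from vllm import LLM, SamplingParams

# Illustrative sketch: cap the context at 32k as recommended above unless you
# have enough GPU memory (KV cache) for the full 1048k window.
llm = LLM(model="gradientai/Llama-3-8B-Instruct-Gradient-1048k", max_model_len=32768)
params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Summarize the plot of Moby Dick in two sentences."], params)
print(outputs[0].outputs[0].text)
```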
## The Gradient AI Team
https://gradient.ai/
Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.
## Contact Us
Drop an email to [[email protected]](mailto:[email protected])
## References
[1] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video And Language With RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
[4] Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan
Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling
high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
----
# Base Model
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted(tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
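As an illustrative cross-check (not part of the original card), multiplying the quoted GPU-hours by the 700 W per-GPU figure gives the implied energy use and carbon intensity behind these numbers:
```python
# Illustrative cross-check of the table above (not from the original card).
power_kw = 0.7  # 700 W per GPU, as listed
for name, gpu_hours, tco2 in [("Llama 3 8B", 1.3e6, 390), ("Llama 3 70B", 6.4e6, 1900)]:
    energy_mwh = gpu_hours * power_kw / 1000          # total energy in MWh
    intensity = tco2 * 1000 / (energy_mwh * 1000)     # implied kg CO2eq per kWh
    print(f"{name}: ~{energy_mwh:,.0f} MWh, ~{intensity:.2f} kg CO2eq/kWh implied")
```
Both rows come out at roughly 0.42-0.43 kg CO2eq/kWh, so the per-model emissions are mutually consistent with the quoted GPU-hours and power figures.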
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model, respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; |
netcat420/MFANNv0.9 | netcat420 | 2024-05-12T17:02:17Z | 390 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-classification | 2024-05-12T08:10:59Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN 8b version 0.9

Fine-tuned on the MFANN dataset as it stood on 5/12/24; the dataset is ever-expanding.
|
ohyeah1/Pantheon-Hermes-rp | ohyeah1 | 2024-05-18T01:20:55Z | 390 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-17T22:24:55Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# Pantheon-Hermes-rp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## PROMPT FORMAT: ChatML
Very good RP model. Can be very unhinged. It is also surprisingly smart.
Tested with these sampling settings:
- Temperature: 1.4
- Min P: 0.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
parameters:
weight: 0.7
density: 0.4
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
weight: 0.4
density: 0.4
merge_method: dare_ties
base_model: Undi95/Meta-Llama-3-8B-hf
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
``` |
QuantFactory/Mistral-7B-Instruct-RDPO-GGUF | QuantFactory | 2024-05-28T02:57:02Z | 390 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.14734",
"base_model:princeton-nlp/Mistral-7B-Instruct-RDPO",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-27T13:24:49Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Instruct-RDPO
---
# QuantFactory/Mistral-7B-Instruct-RDPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Instruct-RDPO](https://huggingface.co/princeton-nlp/Mistral-7B-Instruct-RDPO), created using llama.cpp.
# Model Description
This is a model released from the preprint: *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)* Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
|
alvdansen/Painted-illustration | alvdansen | 2024-06-16T16:36:44Z | 390 | 8 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-16T16:36:34Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: ' Victorian-era woman with auburn hair styled in elegant curls, wearing a high-collared dress with intricate lace details'
output:
url: images/ComfyUI_01562_.png
- text: >-
A space explorer in a white spacesuit with a blue visor, floating outside a
spaceship, holding a laser tool, with a backdrop of distant stars and
galaxies
output:
url: images/ComfyUI_01561_.png
- text: >-
A cyborg girl with metallic limbs and a holographic interface projected from
her wrist, wearing a sleek, silver bodysuit, standing in a futuristic
laboratory filled with advanced technology
output:
url: images/ComfyUI_01558_.png
- text: >-
A man with dark curly hair and a well-groomed beard, wearing a tailored grey
suit with a red tie, standing in front of a modern skyscraper, holding a
briefcase
output:
url: images/ComfyUI_01557_.png
- text: 'A woman with bright pink hair styled in a bob cut, wearing a leather '
output:
url: images/ComfyUI_01550_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Painted Illustration
<Gallery />
## Model description
Another painted illustration model - this one with more defined linework and features.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alvdansen/Painted-illustration/tree/main) them in the Files & versions tab.
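A minimal sketch of trying the LoRA with diffusers (an assumption based on the standard SDXL LoRA-loading API; the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository
pipe.load_lora_weights("alvdansen/Painted-illustration")

image = pipe(
    "a woman with bright pink hair styled in a bob cut, painted illustration",
    num_inference_steps=30,
).images[0]
image.save("painted_illustration.png")
```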
|
jaysharma2024/model_out | jaysharma2024 | 2024-06-20T12:18:29Z | 390 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-18T10:39:12Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: stabilityai/stable-diffusion-2-1-base
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-jaysharma2024/model_out
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
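The snippet above is still a TODO; a minimal sketch of what it could look like (an assumption based on the standard diffusers ControlNet API — the conditioning image and its preprocessing depend on the conditioning type these weights were trained on):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("jaysharma2024/model_out", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The conditioning image must match the conditioning this ControlNet was trained with
conditioning_image = load_image("conditioning.png")  # illustrative path

image = pipe("a prompt describing the desired output", image=conditioning_image).images[0]
image.save("output.png")
```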
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Niggendar/copycat_v20 | Niggendar | 2024-06-26T07:12:09Z | 390 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-06-26T07:03:36Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
awnr/Mistral-7B-v0.1-signtensors-7-over-16 | awnr | 2024-06-27T02:53:03Z | 390 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-26T20:43:34Z | ---
license: apache-2.0
---
# Model Card for Model Mistral-7B-v0.1-7-over-16
I'm experimenting with the weight matrices in neural networks.
This is a clone of `Mistral-7B-v0.1` with some weight matrices replaced.
I'm interested in seeing how the adjustments affect performance on existing metrics.
## Model Details
Research in progress! Demons could come out of your nose if you use this.
### Model Description
A modification of [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Thanks to their team for sharing their model.
- **Modified by:** Dr. Alex W. Neal Riasanovsky
- **Model type:** pre-trained
- **Language(s) (NLP):** English
- **License:** Apache-2.0
## Bias, Risks, and Limitations
Use at your own risk.
I have no idea what this model's biases and limitations are.
I just want to see if the benchmark values are similar to those from `Mistral-7B-v0.1`.
I am setting up a long computational experiment to test some ideas.
|
alaggung/bart-r3f | alaggung | 2022-01-11T16:18:32Z | 389 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language:
- ko
tags:
- summarization
widget:
- text: "[BOS]밥 ㄱ?[SEP]고고고고 뭐 먹을까?[SEP]어제 김치찌개 먹어서 한식말고 딴 거[SEP]그럼 돈까스 어때?[SEP]오 좋다 1시 학관 앞으로 오셈[SEP]ㅇㅋ[EOS]"
inference:
parameters:
max_length: 64
top_k: 5
---
# BART R3F
We are sharing the dialogue-summarization sample model from the 알라꿍달라꿍 team, which competed in the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
This model was trained on the dialogue summarization task by applying the R3F technique from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-pretrained](https://huggingface.co/alaggung/bart-pretrained) model.
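A minimal inference sketch (an assumption based on the standard transformers summarization pipeline; the dialogue-turn markers follow the widget example above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="alaggung/bart-r3f")

dialogue = "[BOS]밥 ㄱ?[SEP]고고고고 뭐 먹을까?[SEP]어제 김치찌개 먹어서 한식말고 딴 거[SEP]그럼 돈까스 어때?[SEP]오 좋다 1시 학관 앞으로 오셈[SEP]ㅇㅋ[EOS]"
print(summarizer(dialogue, max_length=64)[0]["summary_text"])
```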
The [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset was used for training. |
burakaytan/roberta-base-turkish-uncased | burakaytan | 2022-09-07T05:44:18Z | 389 | 14 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-04-20T06:08:13Z | ---
language: tr
license: mit
---
🇹🇷 RoBERTaTurk
## Model description
This is a Turkish RoBERTa base model pretrained on Turkish Wikipedia, Turkish OSCAR, and some news websites.
The final training corpus has a size of 38 GB and 329.720.508 sentences.
Thanks to Turkcell, we were able to train the model for 2.5M steps on an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz with 256 GB RAM and 2 x GV100GL [Tesla V100 PCIe 32GB] GPUs.
# Usage
Load transformers library with:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("burakaytan/roberta-base-turkish-uncased")
model = AutoModelForMaskedLM.from_pretrained("burakaytan/roberta-base-turkish-uncased")
```
# Fill Mask Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="burakaytan/roberta-base-turkish-uncased",
tokenizer="burakaytan/roberta-base-turkish-uncased"
)
fill_mask("iki ülke arasında <mask> başladı")
[{'sequence': 'iki ülke arasında savaş başladı',
'score': 0.3013845384120941,
'token': 1359,
'token_str': ' savaş'},
{'sequence': 'iki ülke arasında müzakereler başladı',
'score': 0.1058429479598999,
'token': 30439,
'token_str': ' müzakereler'},
{'sequence': 'iki ülke arasında görüşmeler başladı',
'score': 0.07718811184167862,
'token': 4916,
'token_str': ' görüşmeler'},
{'sequence': 'iki ülke arasında kriz başladı',
'score': 0.07174749672412872,
'token': 3908,
'token_str': ' kriz'},
{'sequence': 'iki ülke arasında çatışmalar başladı',
'score': 0.05678590387105942,
'token': 19346,
'token_str': ' çatışmalar'}]
```
## Citation and Related Information
To cite this model:
```bibtex
@inproceedings{aytan2022comparison,
title={Comparison of Transformer-Based Models Trained in Turkish and Different Languages on Turkish Natural Language Processing Problems},
author={Aytan, Burak and Sakar, C Okan},
booktitle={2022 30th Signal Processing and Communications Applications Conference (SIU)},
pages={1--4},
year={2022},
organization={IEEE}
}
``` |
facebook/esm1v_t33_650M_UR90S_3 | facebook | 2022-11-16T14:03:54Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"esm",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-10-17T15:36:51Z | Entry not found |
openmmlab/upernet-swin-base | openmmlab | 2023-05-03T20:51:22Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"upernet",
"vision",
"image-segmentation",
"en",
"arxiv:1807.10221",
"arxiv:2103.14030",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2023-01-13T14:34:17Z | ---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-swin-base
---
# UperNet, Swin Transformer base-sized backbone
UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.
Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).
Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for
fine-tuned versions (with various backbones) on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
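As a quick reference, a minimal inference sketch (an assumption based on the standard transformers semantic-segmentation API; the test image is illustrative):
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-base")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative test image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch_size, num_labels, height, width); take the argmax for a per-pixel class map
segmentation = outputs.logits.argmax(dim=1)[0]
```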
|
pysentimiento/bert-it-sentiment | pysentimiento | 2023-02-26T20:46:03Z | 389 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-26T20:45:19Z | Entry not found |
timm/eva02_base_patch14_448.mim_in22k_ft_in22k | timm | 2024-02-10T23:37:32Z | 389 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2303.11331",
"arxiv:2303.15389",
"license:mit",
"region:us"
]
| image-classification | 2023-03-31T04:16:00Z | ---
license: mit
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-22k
- imagenet-22k
---
# Model card for eva02_base_patch14_448.mim_in22k_ft_in22k
An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k by paper authors.
EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large).
NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 103.1
- GMACs: 107.1
- Activations (M): 259.2
- Image size: 448 x 448
- **Papers:**
- EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331
- EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
- **Original:**
- https://github.com/baaivision/EVA
- https://huggingface.co/Yuxin-CV/EVA-02
- **Pretrain Dataset:** ImageNet-22k
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('eva02_base_patch14_448.mim_in22k_ft_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'eva02_base_patch14_448.mim_in22k_ft_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1025, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |top1 |top5 |param_count|img_size|
|-----------------------------------------------|------|------|-----------|--------|
|eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
|eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
|eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
|eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
|eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
|eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
|eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
|eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
|eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
|eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
|eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
|eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
|eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
|eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
|eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
|eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
## Citation
```bibtex
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
```
```bibtex
@article{EVA-CLIP,
title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.15389},
year={2023}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
laion/CLIP-ViT-B-32-DataComp.M-s128M-b4K | laion | 2023-04-26T01:34:49Z | 389 | 0 | open_clip | [
"open_clip",
"zero-shot-image-classification",
"clip",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2023-04-26T01:34:34Z | ---
tags:
- zero-shot-image-classification
- clip
library_name: open_clip
license: mit
---
# Model card for CLIP-ViT-B-32-DataComp.M-s128M-b4K
|
sourinkarmakar/kyc_v1-donut-demo | sourinkarmakar | 2023-07-04T05:25:49Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"donut",
"kyc",
"en",
"endpoints_compatible",
"region:us"
]
| null | 2023-07-03T19:04:52Z | ---
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- donut
- kyc
---
# Model description
Donut is an end-to-end (i.e., self-contained) VDU model for the general understanding of document images. The architecture of Donut is quite simple, which consists of a Transformer based visual encoder and textual decoder modules.
Donut does not rely on any modules related to OCR functionality but uses a visual encoder for extracting features from a given document image.
The following textual decoder maps the derived features into a sequence of subword tokens to construct a desired structured format (e.g., JSON). Each model component is Transformer-based, and thus the model is trained easily in an end-to-end manner.

# Intended uses and limitations
This model is trained to read the contents of Indian KYC documents. It can classify and read the contents of Aadhar, PAN and Voter ID cards, and it can also detect the document orientation and whether the document is coloured or black and white. The input document can be oriented in any direction.
The model should be provided with a fair-quality image (so that the contents are readable).
It has been trained on limited data, so the performance might not be very good. Future versions will be trained on more images, and more types of KYC documents may be added.
# Training data
For v1, a custom dataset of around 283 images was used: 199 for training, 42 for validation and 42 for testing.
Of the 199 training images, 57 were Aadhar samples, 57 were PAN samples and 85 were Voter ID samples.
# Performance
The current performance is as follows:
- Overall accuracy = 74 %
- Aadhar = 49 % (the reason behind the lower accuracy still needs to be investigated)
- PAN = 94 %
- Voter = 76 %
# Inference
``` python
from transformers import DonutProcessor, VisionEncoderDecoderModel
import re
import os
import glob
import cv2
import json
import torch
from tqdm.auto import tqdm
import numpy as np
from donut import JSONParseEvaluator
# Need to install python-donut
# !pip install -q donut-python
processor = DonutProcessor.from_pretrained("sourinkarmakar/kyc_v1-donut-demo")
model = VisionEncoderDecoderModel.from_pretrained("sourinkarmakar/kyc_v1-donut-demo")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Images stored inside a folder 'unseen_samples'
basepath = "."  # path to the folder that contains 'unseen_samples' (adjust as needed)
dataset = glob.glob(os.path.join(basepath, "unseen_samples/*"))
output_list = []
for idx, sample in tqdm(enumerate(dataset), total=len(dataset)):
# prepare encoder inputs
img = cv2.imread(sample)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
pixel_values = processor(img, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
# prepare decoder inputs
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
decoder_input_ids = decoder_input_ids.to(device)
# autoregressively generate sequence
outputs = model.generate(
pixel_values,
decoder_input_ids=decoder_input_ids,
max_length=model.decoder.config.max_position_embeddings,
early_stopping=True,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
use_cache=True,
num_beams=1,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True,
)
# turn into JSON
seq = processor.batch_decode(outputs.sequences)[0]
seq = seq.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
seq = re.sub(r"<.*?>", "", seq, count=1).strip() # remove first task start token
seq = processor.token2json(seq)
output_list.append(seq)
print(output_list)
``` |
lorahub/flan_t5_large-yelp_polarity_reviews | lorahub | 2023-07-24T09:57:18Z | 389 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T09:57:08Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-ropes_prompt_beginning | lorahub | 2023-07-24T10:01:51Z | 389 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:01:42Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-quarel_do_not_use | lorahub | 2023-07-24T10:15:09Z | 389 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:14:59Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-ropes_prompt_bottom_hint_beginning | lorahub | 2023-07-24T10:17:29Z | 389 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:17:19Z | ---
library_name: peft
---
|
weiren119/speecht5_finetuned_voxpopuli_pl | weiren119 | 2023-07-29T13:37:51Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-07-29T12:37:50Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_pl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_pl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4550
## Model description
More information needed
## Intended uses & limitations
More information needed
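A minimal text-to-speech sketch (an assumption based on the standard SpeechT5 inference API; the speaker-embedding dataset, index and example sentence are illustrative):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("weiren119/speecht5_finetuned_voxpopuli_pl")
model = SpeechT5ForTextToSpeech.from_pretrained("weiren119/speecht5_finetuned_voxpopuli_pl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# x-vector speaker embedding; the dataset and index are illustrative choices
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Witaj, jak się masz?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```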
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6954 | 0.5 | 100 | 0.6110 |
| 0.644 | 1.01 | 200 | 0.5731 |
| 0.602 | 1.51 | 300 | 0.5330 |
| 0.5524 | 2.01 | 400 | 0.4982 |
| 0.5412 | 2.51 | 500 | 0.4870 |
| 0.5256 | 3.02 | 600 | 0.4775 |
| 0.5141 | 3.52 | 700 | 0.4728 |
| 0.5125 | 4.02 | 800 | 0.4688 |
| 0.5106 | 4.52 | 900 | 0.4657 |
| 0.5037 | 5.03 | 1000 | 0.4627 |
| 0.5048 | 5.53 | 1100 | 0.4622 |
| 0.4983 | 6.03 | 1200 | 0.4583 |
| 0.4981 | 6.53 | 1300 | 0.4580 |
| 0.4942 | 7.04 | 1400 | 0.4580 |
| 0.4945 | 7.54 | 1500 | 0.4578 |
| 0.4922 | 8.04 | 1600 | 0.4568 |
| 0.4893 | 8.54 | 1700 | 0.4562 |
| 0.4948 | 9.05 | 1800 | 0.4552 |
| 0.4892 | 9.55 | 1900 | 0.4547 |
| 0.4933 | 10.05 | 2000 | 0.4550 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3 |
HongyangLi/Matbert-finetuned-squad | HongyangLi | 2023-07-29T17:19:59Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-07-29T17:00:33Z | ---
tags:
- generated_from_trainer
model-index:
- name: Matbert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Matbert-finetuned-squad
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5198
## Model description
More information needed
## Intended uses & limitations
More information needed
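A minimal usage sketch (an assumption based on the standard transformers question-answering pipeline; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="HongyangLi/Matbert-finetuned-squad")

result = qa(
    question="What temperature was the film annealed at?",
    context="The thin film was annealed at 450 degrees Celsius for two hours before characterization.",
)
print(result["answer"], result["score"])
```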
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 4.5596 |
| No log | 2.0 | 52 | 3.6885 |
| No log | 3.0 | 78 | 3.5198 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.14.1
- Tokenizers 0.13.3
|
34ronker/3D-Animation-Diffusion | 34ronker | 2023-08-04T19:56:45Z | 389 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-04T18:45:09Z | ---
license: creativeml-openrail-m
---
|
Yntec/WoopWoopRemix | Yntec | 2023-08-12T04:14:05Z | 389 | 3 | diffusers | [
"diffusers",
"safetensors",
"photorealistic",
"general",
"art",
"zoidbb",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-12T03:30:10Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- photorealistic
- general
- art
- zoidbb
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# WoopWoopRemix
A mix of WoopWoopPhoto and WoopWoopGeneral to get the best of both worlds.
Sample and prompt:

Pretty Cute Girl, sitting, holding black bottle, beautiful detailed pajamas, gorgeous detailed hair, Magazine ad, iconic, 1943, from the movie, sharp focus, Detailed Chibi Eyes. visible brushstrokes by Kyoani and artgerm and Clay Mann and leyendecker and Dave Rapoza
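A minimal diffusers sketch for trying the model locally (an assumption based on the standard StableDiffusionPipeline API; the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/WoopWoopRemix", torch_dtype=torch.float16).to("cuda")

prompt = "Pretty Cute Girl, sitting, holding black bottle, beautiful detailed pajamas, sharp focus"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("woopwoop_remix.png")
```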
Original page: https://prompthero.com/ai-models/woopwoop-photo-download (model can't be downloaded anymore) |
TheBloke/ALMA-7B-Pretrain-GGUF | TheBloke | 2023-09-27T12:54:03Z | 389 | 5 | transformers | [
"transformers",
"gguf",
"llama",
"arxiv:2309.11674",
"base_model:haoranxu/ALMA-7b-Pretrain",
"license:mit",
"text-generation-inference",
"region:us"
]
| null | 2023-09-22T23:58:42Z | ---
license: mit
model_name: ALMA 7B Pretrain
base_model: haoranxu/ALMA-7b-Pretrain
inference: false
model_creator: haoranxu
model_type: llama
prompt_template: 'Translate this from Chinese to English:
Chinese: {prompt}
English:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ALMA 7B Pretrain - GGUF
- Model creator: [haoranxu](https://huggingface.co/haoranxu)
- Original model: [ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- description start -->
## Description
This repo contains GGUF format model files for [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF)
* [haoranxu's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/haoranxu/ALMA-7b-Pretrain)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ALMA
```
Translate this from Chinese to English:
Chinese: {prompt}
English:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [haoranxu's ALMA 7B Pretrain](https://huggingface.co/haoranxu/ALMA-7b-Pretrain).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [alma-7b-pretrain.Q2_K.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [alma-7b-pretrain.Q3_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [alma-7b-pretrain.Q3_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [alma-7b-pretrain.Q3_K_L.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [alma-7b-pretrain.Q4_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alma-7b-pretrain.Q4_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [alma-7b-pretrain.Q4_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [alma-7b-pretrain.Q5_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alma-7b-pretrain.Q5_K_S.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [alma-7b-pretrain.Q5_K_M.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [alma-7b-pretrain.Q6_K.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [alma-7b-pretrain.Q8_0.gguf](https://huggingface.co/TheBloke/ALMA-7B-Pretrain-GGUF/blob/main/alma-7b-pretrain.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/ALMA-7B-Pretrain-GGUF and below it, a specific filename to download, such as: alma-7b-pretrain.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/ALMA-7B-Pretrain-GGUF alma-7b-pretrain.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/ALMA-7B-Pretrain-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ALMA-7B-Pretrain-GGUF alma-7b-pretrain.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m alma-7b-pretrain.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Translate this from Chinese to English:\nChinese: {prompt}\nEnglish:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/ALMA-7B-Pretrain-GGUF", model_file="alma-7b-pretrain.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](llm-utils)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: haoranxu's ALMA 7B Pretrain
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release four translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
Note that the base models linked for the `*-LoRA` entries are LLaMA-2 models fine-tuned on monolingual data (20B tokens for the 7B model and 12B tokens for the 13B model)
A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
<!-- original-model-card end -->
|
TheBloke/Athena-v2-GGUF | TheBloke | 2023-09-27T12:54:18Z | 389 | 6 | transformers | [
"transformers",
"gguf",
"llama",
"base_model:IkariDev/Athena-v2",
"license:cc-by-nc-4.0",
"text-generation-inference",
"region:us"
]
| null | 2023-09-23T22:09:58Z | ---
license: cc-by-nc-4.0
model_name: Athena V2
base_model: IkariDev/Athena-v2
inference: false
model_creator: IkariDev
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Athena V2 - GGUF
- Model creator: [IkariDev and Undi95](https://huggingface.co/IkariDev)
- Original model: [Athena V2](https://huggingface.co/IkariDev/Athena-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [IkariDev and Undi95's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v2-GGUF)
* [IkariDev and Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- licensing start -->
## Licensing
The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.
As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [athena-v2.Q2_K.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [athena-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [athena-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [athena-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [athena-v2.Q4_0.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [athena-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [athena-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [athena-v2.Q5_0.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [athena-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [athena-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [athena-v2.Q6_K.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [athena-v2.Q8_0.gguf](https://huggingface.co/TheBloke/Athena-v2-GGUF/blob/main/athena-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Athena-v2-GGUF and below it, a specific filename to download, such as: athena-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Athena-v2-GGUF athena-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Athena-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Athena-v2-GGUF athena-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m athena-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Athena-v2-GGUF", model_file="athena-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
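The same GGUF files can also be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch — the filename and `n_gpu_layers` value are examples to adjust for the file you downloaded and your hardware:
```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(
    model_path="./athena-v2.Q4_K_M.gguf",  # path to the GGUF file you downloaded
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # adjust for your GPU / VRAM
)

prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a story about llamas\n\n### Response:"
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```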
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: IkariDev and Undi95's Athena V2

Experimental Athena v2 model. Use Alpaca format.
<!-- description start -->
## Description
This repo contains fp16 files of Athena-V2.
[GGUF](https://huggingface.co/IkariDev/Athena-v2-GGUF)
[exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2)
<!-- description end -->
<!-- description start -->
## Models and loras used
- Xwin-LM/Xwin-LM-13B-V0.1
- Undi95/ReMM-v2.2-L2-13B
- Undi95/MLewd-L2-13B-v2-3
- Brouz/Slerpeno
- boomerchan/Magpie-13b
```
[Xwin (0.30) + ReMM v2.2 (0.70)](0.45) x [[Xwin (0.40) + MLewd v2-3 (0.60)](0.80) + [Slerpeno(0.50) + Magpie-13b(0.50)](0.20)](0.55)
```
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged)
To TheBloke: if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creator.
<!-- original-model-card end -->
|
maddes8cht/mosaicml-mpt-30b-gguf | maddes8cht | 2023-11-01T15:36:45Z | 389 | 0 | null | [
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
]
| null | 2023-10-30T15:54:04Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# mpt-30b - GGUF
- Model creator: [mosaicml](https://huggingface.co/mosaicml)
- Original model: [mpt-30b](https://huggingface.co/mosaicml/mpt-30b)
MPT-7B and MPT-30B are part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may well see bigger differences between the two answers than between the quantized and the original model.
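As a quick sanity check after downloading, a file from this repo can be run directly with llama.cpp's `main` example. The filename below is a placeholder — substitute the quant you actually downloaded:
```shell
# substitute the .gguf filename you downloaded from this repo
./main -m mosaicml-mpt-30b-Q5_K_M.gguf -p "Here is a recipe for vegan banana bread:" -n 256
```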
---
# Original Model Card:
# MPT-30B
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for long-form instruction following (especially summarization and question-answering).
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 2048 with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
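For reference, the core usage pattern of the library looks roughly like this — the remote path, cache directory and batch size are purely illustrative, not the actual training configuration:
```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Shards are streamed from object storage into a local cache as training proceeds
dataset = StreamingDataset(
    remote="s3://my-bucket/mds-shards",  # illustrative remote location of the MDS shards
    local="/tmp/streaming-cache",        # local cache directory
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
```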
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[maddes8cht.github.io](https://maddes8cht.github.io) · [Stack Exchange](https://stackexchange.com/users/26485911) · [GitHub](https://github.com/maddes8cht) · [Hugging Face](https://huggingface.co/maddes8cht) · [Twitter](https://twitter.com/maddes1966)
</center> |
NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1 | NickyNicky | 2024-01-29T02:14:13Z | 389 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"en",
"es",
"ru",
"zh",
"de",
"fr",
"th",
"ca",
"it",
"ja",
"pl",
"eo",
"eu",
"vi",
"fi",
"hu",
"ar",
"nl",
"da",
"tr",
"ko",
"he",
"id",
"cs",
"bn",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-26T13:36:26Z | ---
library_name: transformers
tags:
- merge
language:
- en
- es
- ru
- zh
- de
- fr
- th
- ca
- it
- ja
- pl
- eo
- eu
- vi
- fi
- hu
- ar
- nl
- da
- tr
- ko
- he
- id
- cs
- bn
- sv
widget:
- text: |
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
podrias escribir un codigo de ejemplo en Python<|im_end|>
<|im_start|>assistant
license: apache-2.0
---
# Model Card for MixLlama
<!--  -->
<!--  -->

<!-- Provide a quick summary of what the model is/does. -->
```Python
experts:
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_2_V1
positive_prompts:
- ""
- source_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_3_V1
positive_prompts:
- ""
base_model: NickyNicky/TinyDolphin-2.8-1.1b_oasst2_chatML_Cluster_1_V1
gate_mode: random # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
```
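A configuration like the one above is typically turned into a model with mergekit's MoE script. A minimal sketch, assuming the config is saved as `config.yaml` (flags and options vary between mergekit versions — consult its documentation):
```shell
pip install mergekit
# config.yaml holds the experts / base_model / gate_mode definition shown above
mergekit-moe config.yaml ./Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1
```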
```Python
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
logging,
GenerationConfig,
TextIteratorStreamer,
)
import torch
new_model = "NickyNicky/Mix_TinyLlama-3x1B_oasst2_chatML_Cluster_3_2_1_V1"

model = AutoModelForCausalLM.from_pretrained(
    new_model,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    # use_flash_attention_2=False,
)

tokenizer = AutoTokenizer.from_pretrained(
    new_model,
    max_length=2048,
    trust_remote_code=True,
    use_fast=True,
)
tokenizer.pad_token = tokenizer.eos_token
# tokenizer.padding_side = 'left'
tokenizer.padding_side = 'right'
prompt= """<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
escribe una historia de amor.<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer.encode(prompt,
return_tensors="pt",
add_special_tokens=False).cuda()#.to("cuda") # False # True
generation_config = GenerationConfig(
max_new_tokens=700,
temperature=0.5,
top_p=0.9,
top_k=40,
repetition_penalty=1.1, #1.1, # 1.0 means no penalty, > 1.0 means penalty, 1.2 from CTRL paper
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
outputs = model.generate(
generation_config=generation_config,
input_ids=inputs,)
# tokenizer.decode(outputs[0], skip_special_tokens=False) #True
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
``` |
marianbasti/distil-whisper-large-v3-es | marianbasti | 2024-02-20T13:51:52Z | 389 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"whisper",
"automatic-speech-recognition",
"spanish",
"español",
"speech",
"recognition",
"distil-whisper",
"es",
"dataset:mozilla-foundation/common_voice_16_1",
"arxiv:2311.00430",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-01-26T22:03:51Z | ---
license: mit
datasets:
- mozilla-foundation/common_voice_16_1
language:
- es
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- spanish
- español
- speech
- recognition
- whisper
- distil-whisper
---
# distil-whisper-large-v3-es
This is the repository for a distilled version of the [Whisper v3 large model](https://huggingface.co/openai/whisper-large-v3) trained on the [Mozilla Common Voice dataset v16.1](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1).
This model was made possible through the collaboration of [SandboxAI](https://sandbox-ai.github.io) and the [Universidad Nacional de Rio Negro](https://www.unrn.edu.ar/home).
## Usage
Distil-Whisper is supported in Hugging Face 🤗 Transformers from version 4.35 onwards. To run the model, first
install the latest version of the Transformers library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub:
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```
### Short-Form Transcription
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe short-form audio files (< 30-seconds) as follows:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "marianbasti/distil-whisper-large-v3-es"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
```diff
- result = pipe(sample)
+ result = pipe("audio.mp3")
```
### Long-Form Transcription
Distil-Whisper uses a chunked algorithm to transcribe long-form audio files (> 30-seconds). In practice, this chunked long-form algorithm
is 9x faster than the sequential algorithm proposed by OpenAI in the Whisper paper (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).
To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For Distil-Whisper, a chunk length of 15-seconds
is optimal. To activate batching, pass the argument `batch_size`:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "marianbasti/distil-whisper-large-v3-es"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=15,
batch_size=16,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
<!---
**Tip:** The pipeline can also be used to transcribe an audio file from a remote URL, for example:
```python
result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav")
```
--->
### Speculative Decoding
Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster.
This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.
In the following code snippet, we load the assistant Distil-Whisper model standalone alongside the main Whisper pipeline. We then specify it as the "assistant model" for generation:
```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
assistant_model_id = "marianbasti/distil-whisper-large-v3-es"
assistant_model = AutoModelForCausalLM.from_pretrained(
assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)
model_id = "openai/whisper-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
generate_kwargs={"assistant_model": assistant_model},
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
## Training
The model was trained for 60,000 optimisation steps (or around 1.47 epochs), on a single RTX3090 for ~60 hours, using the following training parameters:
```
--teacher_model_name_or_path "openai/whisper-large-v3"
--train_dataset_name "mozilla-foundation/common_voice_16_1"
--train_dataset_config_name "es"
--train_split_name "train"
--text_column_name "sentence"
--eval_dataset_name "mozilla-foundation/common_voice_16_1"
--eval_dataset_config_name "es"
--eval_split_name "validation"
--eval_text_column_name "sentence"
--eval_steps 10000
--save_steps 10000
--warmup_steps 500
--learning_rate 1e-4
--lr_scheduler_type "linear"
--logging_steps 25
--save_total_limit 1
--max_steps 60000
--wer_threshold 10
--per_device_train_batch_size 8
--per_device_eval_batch_size 8
--dataloader_num_workers 12
--preprocessing_num_workers 12
--output_dir "./"
--do_train
--do_eval
--gradient_checkpointing
--predict_with_generate
--overwrite_output_dir
--use_pseudo_labels "false"
--freeze_encoder
--streaming False
```
## Results
The distilled model performs with a 5.11% WER (10.15% orthographic WER).
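For reference, WER figures like these can be computed with the 🤗 Evaluate library. A minimal sketch — whether you measure orthographic or normalised WER depends on the text normalisation applied before scoring, shown here only schematically with made-up sentences:
```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["esta es una transcripción de ejemplo"]   # model transcriptions
references = ["Esta es una transcripción de ejemplo."]   # reference sentences

# Orthographic WER: score the raw text as-is
orthographic_wer = wer_metric.compute(predictions=predictions, references=references)

# Normalised WER: strip casing/punctuation before scoring
normalised_wer = wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(orthographic_wer, normalised_wer)
```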
## License
Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.
## Citation
If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):
```
@misc{gandhi2023distilwhisper,
title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
year={2023},
eprint={2311.00430},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF | tolgadev | 2024-02-13T18:28:03Z | 389 | 14 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"trendyol",
"llama-2",
"turkish",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-02-12T08:27:44Z | ---
model_name: Trendyol-LLM-7b-chat-v0.1
model_creator: Trendyol
base_model: Trendyol/Trendyol-LLM-7b-chat-v0.1
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
tags:
- trendyol
- llama-2
- turkish
quantized_by: tolgadev
---
## Trendyol-LLM-7b-chat-v0.1-GGUF models
----
## Description
This repo contains all types of GGUF formatted model files for [Trendyol-LLM-7b-chat-v0.1](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1).
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
## Quantized LLM models and methods
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Trendyol-LLM-7b-chat-v0.1.Q2_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q2_K.gguf) | Q2_K | 2 | 2.59 GB| 4.88 GB | smallest, significant quality loss - not recommended for most purposes |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.01 GB| 5.56 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.36 GB| 5.91 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q3_K_L.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.66 GB| 6.20 GB | small, substantial quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q4_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_0.gguf) | Q4_0 | 4 | 3.9 GB| 6.45 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Trendyol-LLM-7b-chat-v0.1.Q4_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.93 GB| 6.48 GB | small, greater quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q4_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.15 GB| 6.69 GB | medium, balanced quality - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q5_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_0.gguf) | Q5_0 | 5 | 4.73 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Trendyol-LLM-7b-chat-v0.1.Q5_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.75 GB| 7.27 GB | large, low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q5_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.86 GB| 7.40 GB | large, very low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v0.1.Q6_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q6_K.gguf) | Q6_K | 6 | 5.61 GB| 8.15 GB | very large, extremely low quality loss |
| [Trendyol-LLM-7b-chat-v0.1.Q8_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF/blob/main/trendyol-llm-7b-chat-v0.1.Q8_0.gguf) | Q8_0 | 8 | 7.27 GB| 9.81 GB | very large, extremely low quality loss - not recommended |
The names of the quantization methods follow the naming convention: "q" + the number of bits + the variant used (detailed below). Here is a list of all the models and their corresponding use cases, based on model cards made by [TheBloke](https://huggingface.co/TheBloke/):
* `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, higher resource usage and slower inference.
* `q5_1`: Even higher accuracy, resource usage and slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors
* `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
**TheBloke recommends using Q5_K_M** as it preserves most of the model's performance.
Alternatively, you can use Q4_K_M if you want to save some memory.
In general, K_M versions are better than K_S versions.
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
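Alternatively, individual files can be fetched on the command line with the `huggingface-hub` library, for example:
```shell
pip3 install huggingface-hub
huggingface-cli download tolgadev/Trendyol-LLM-7b-chat-v0.1-GGUF trendyol-llm-7b-chat-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```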
## Special thanks to [TheBloke on Huggingface](https://huggingface.co/TheBloke) and [Maxime Labonne on Github](https://github.com/mlabonne/llm-course)
-----
## Model Details
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
# **Trendyol LLM**
Trendyol LLM is a generative model that is based on the LLaMa2 7B model. This is the repository for the chat model.
## Model Details
**Model Developers** Trendyol
**Variations** base and chat variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on LLaMa2 7b) that uses an optimized transformer architecture. The chat version is fine-tuned on 180K instruction sets with the following trainables by using LoRA:
- **lr**=1e-4
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **modules_to_save**=embed_tokens,lm_head
- **lora_dropout**=0.05
- **fp16**=True
- **max_seq_length**=1024
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, LlamaTokenizer, pipeline
model_id = "Trendyol/Trendyol-LLM-7b-chat-v0.1"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.1
)
DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"
TEMPLATE = (
"[INST] <<SYS>>\n"
"{system_prompt}\n"
"<</SYS>>\n\n"
"{instruction} [/INST]"
)
def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt})
def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
prompt = generate_prompt(user_query, sys_prompt)
outputs = pipe(prompt,
**sampling_params
)
return outputs[0]["generated_text"].split("[/INST]")[-1]
user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences. |
LoneStriker/OpenCodeInterpreter-CL-7B-GGUF | LoneStriker | 2024-02-24T09:36:52Z | 389 | 1 | null | [
"gguf",
"code",
"text-generation",
"en",
"arxiv:2402.14658",
"region:us"
]
| text-generation | 2024-02-24T09:15:43Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- code
---
<h1 align="center"> OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement<h1>
<p align="center">
<img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
</p>
<p align="center">
<a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
|
<a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
</p>
<hr>
## Introduction
OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities.
For further information and related work, refer to our paper: ["OpenCodeInterpreter: A System for Enhanced Code Generation and Execution"](https://arxiv.org/abs/2402.14658) available on arXiv.
## Model Usage
### Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path="OpenCodeInterpreter-CL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
model.eval()
prompt = "Write a function to find the shared elements from the given two lists."
inputs = tokenizer.apply_chat_template(
[{'role': 'user', 'content': prompt }],
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=1024,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Contact
If you have any inquiries, please feel free to raise an issue or reach out to us via email at: [email protected], [email protected].
We're here to assist you! |
Mitsuki-Sakamoto/pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1 | Mitsuki-Sakamoto | 2024-03-07T14:46:24Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-07T09:43:17Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- generated_from_trainer
model-index:
- name: pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
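These settings correspond roughly to the following 🤗 Transformers `TrainingArguments`; the output directory name is a placeholder, and the Adam betas/epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Illustrative sketch of the hyperparameters listed above (not the exact training script)
training_args = TrainingArguments(
    output_dir="pythia_160m_alpaca_farm_sft",  # placeholder output directory
    learning_rate=8e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=1,
    lr_scheduler_type="constant",
    num_train_epochs=3.0,
)
```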
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3096 | 0.02 | 50 | 2.2544 |
| 2.2692 | 0.04 | 100 | 2.2374 |
| 2.2021 | 0.06 | 150 | 2.2228 |
| 2.2268 | 0.08 | 200 | 2.2338 |
| 2.1433 | 0.1 | 250 | 2.2146 |
| 2.0708 | 0.12 | 300 | 2.2004 |
| 2.163 | 0.14 | 350 | 2.1996 |
| 2.2518 | 0.16 | 400 | 2.1898 |
| 2.0717 | 0.18 | 450 | 2.1899 |
| 2.2137 | 0.2 | 500 | 2.1847 |
| 2.2232 | 0.22 | 550 | 2.1760 |
| 2.2455 | 0.24 | 600 | 2.1757 |
| 2.1936 | 0.26 | 650 | 2.1732 |
| 2.1352 | 0.28 | 700 | 2.1619 |
| 2.1215 | 0.3 | 750 | 2.1608 |
| 2.1568 | 0.32 | 800 | 2.1506 |
| 2.1319 | 0.34 | 850 | 2.1514 |
| 2.0831 | 0.36 | 900 | 2.1494 |
| 2.0788 | 0.38 | 950 | 2.1430 |
| 2.0901 | 0.4 | 1000 | 2.1376 |
| 2.1374 | 0.42 | 1050 | 2.1343 |
| 1.9484 | 0.44 | 1100 | 2.1298 |
| 2.204 | 0.46 | 1150 | 2.1284 |
| 2.108 | 0.48 | 1200 | 2.1249 |
| 1.9353 | 0.5 | 1250 | 2.1210 |
| 2.1352 | 0.52 | 1300 | 2.1178 |
| 1.9498 | 0.54 | 1350 | 2.1162 |
| 2.1571 | 0.56 | 1400 | 2.1153 |
| 2.1804 | 0.58 | 1450 | 2.1114 |
| 1.988 | 0.6 | 1500 | 2.1107 |
| 2.0485 | 0.62 | 1550 | 2.1055 |
| 2.0596 | 0.64 | 1600 | 2.1020 |
| 1.98 | 0.66 | 1650 | 2.1027 |
| 2.0626 | 0.68 | 1700 | 2.0980 |
| 2.097 | 0.7 | 1750 | 2.0949 |
| 2.2013 | 0.72 | 1800 | 2.0893 |
| 2.1234 | 0.74 | 1850 | 2.0913 |
| 1.9662 | 0.76 | 1900 | 2.0971 |
| 2.138 | 0.78 | 1950 | 2.0929 |
| 2.0816 | 0.8 | 2000 | 2.0898 |
| 2.1506 | 0.82 | 2050 | 2.0848 |
| 2.0585 | 0.84 | 2100 | 2.0860 |
| 2.099 | 0.86 | 2150 | 2.0862 |
| 2.084 | 0.88 | 2200 | 2.0816 |
| 2.1046 | 0.9 | 2250 | 2.0790 |
| 2.02 | 0.92 | 2300 | 2.0865 |
| 2.0548 | 0.94 | 2350 | 2.0776 |
| 2.0819 | 0.96 | 2400 | 2.0766 |
| 1.9181 | 0.98 | 2450 | 2.0755 |
| 2.0345 | 1.0 | 2500 | 2.0793 |
| 1.7741 | 1.02 | 2550 | 2.0922 |
| 1.6556 | 1.04 | 2600 | 2.0921 |
| 1.6168 | 1.06 | 2650 | 2.0921 |
| 1.8017 | 1.08 | 2700 | 2.0927 |
| 1.8055 | 1.1 | 2750 | 2.0893 |
| 1.7298 | 1.12 | 2800 | 2.0910 |
| 1.6924 | 1.14 | 2850 | 2.0969 |
| 1.853 | 1.16 | 2900 | 2.0951 |
| 1.7641 | 1.18 | 2950 | 2.1020 |
| 1.7529 | 1.2 | 3000 | 2.0991 |
| 1.7556 | 1.22 | 3050 | 2.1005 |
| 1.7273 | 1.24 | 3100 | 2.0984 |
| 1.8478 | 1.26 | 3150 | 2.1000 |
| 1.8965 | 1.28 | 3200 | 2.0932 |
| 1.761 | 1.3 | 3250 | 2.0917 |
| 1.7579 | 1.32 | 3300 | 2.0943 |
| 1.7347 | 1.34 | 3350 | 2.0914 |
| 1.7725 | 1.36 | 3400 | 2.0928 |
| 1.8931 | 1.38 | 3450 | 2.0913 |
| 1.7301 | 1.4 | 3500 | 2.1030 |
| 1.741 | 1.42 | 3550 | 2.0953 |
| 1.8009 | 1.44 | 3600 | 2.0971 |
| 1.8397 | 1.46 | 3650 | 2.0932 |
| 1.7941 | 1.48 | 3700 | 2.0932 |
| 1.7136 | 1.5 | 3750 | 2.0936 |
| 1.723 | 1.52 | 3800 | 2.0913 |
| 1.7837 | 1.54 | 3850 | 2.0878 |
| 1.7988 | 1.56 | 3900 | 2.0859 |
| 1.7759 | 1.58 | 3950 | 2.0883 |
| 1.8608 | 1.6 | 4000 | 2.0926 |
| 1.5859 | 1.62 | 4050 | 2.0918 |
| 1.8474 | 1.64 | 4100 | 2.0888 |
| 1.7921 | 1.66 | 4150 | 2.0932 |
| 1.755 | 1.68 | 4200 | 2.0950 |
| 1.8437 | 1.7 | 4250 | 2.0880 |
| 1.826 | 1.72 | 4300 | 2.0861 |
| 1.8548 | 1.74 | 4350 | 2.0886 |
| 1.7668 | 1.76 | 4400 | 2.0832 |
| 1.7818 | 1.78 | 4450 | 2.0877 |
| 1.8981 | 1.8 | 4500 | 2.0900 |
| 1.9266 | 1.82 | 4550 | 2.0855 |
| 1.8589 | 1.84 | 4600 | 2.0795 |
| 1.7587 | 1.86 | 4650 | 2.0833 |
| 1.6735 | 1.88 | 4700 | 2.0886 |
| 1.7961 | 1.9 | 4750 | 2.0874 |
| 1.8099 | 1.92 | 4800 | 2.0801 |
| 1.8481 | 1.94 | 4850 | 2.0802 |
| 1.8418 | 1.96 | 4900 | 2.0774 |
| 1.8471 | 1.98 | 4950 | 2.0876 |
| 1.829 | 2.0 | 5000 | 2.0820 |
| 1.4073 | 2.02 | 5050 | 2.1485 |
| 1.4951 | 2.04 | 5100 | 2.1651 |
| 1.4291 | 2.06 | 5150 | 2.1522 |
| 1.3912 | 2.08 | 5200 | 2.1545 |
| 1.5581 | 2.1 | 5250 | 2.1462 |
| 1.5533 | 2.12 | 5300 | 2.1613 |
| 1.5436 | 2.14 | 5350 | 2.1562 |
| 1.4632 | 2.16 | 5400 | 2.1437 |
| 1.5859 | 2.18 | 5450 | 2.1563 |
| 1.4974 | 2.2 | 5500 | 2.1749 |
| 1.464 | 2.22 | 5550 | 2.1648 |
| 1.4689 | 2.24 | 5600 | 2.1623 |
| 1.565 | 2.26 | 5650 | 2.1656 |
| 1.5491 | 2.28 | 5700 | 2.1696 |
| 1.5382 | 2.3 | 5750 | 2.1659 |
| 1.4154 | 2.32 | 5800 | 2.1614 |
| 1.4636 | 2.34 | 5850 | 2.1570 |
| 1.4858 | 2.36 | 5900 | 2.1634 |
| 1.4295 | 2.38 | 5950 | 2.1897 |
| 1.6108 | 2.4 | 6000 | 2.1653 |
| 1.4283 | 2.42 | 6050 | 2.1633 |
| 1.4685 | 2.44 | 6100 | 2.1720 |
| 1.4443 | 2.46 | 6150 | 2.1618 |
| 1.4918 | 2.48 | 6200 | 2.1577 |
| 1.5742 | 2.5 | 6250 | 2.1665 |
| 1.49 | 2.52 | 6300 | 2.1697 |
| 1.552 | 2.54 | 6350 | 2.1489 |
| 1.5577 | 2.56 | 6400 | 2.1660 |
| 1.4348 | 2.58 | 6450 | 2.1766 |
| 1.5508 | 2.6 | 6500 | 2.1564 |
| 1.4666 | 2.62 | 6550 | 2.1644 |
| 1.4784 | 2.64 | 6600 | 2.1611 |
| 1.6065 | 2.66 | 6650 | 2.1770 |
| 1.559 | 2.68 | 6700 | 2.1635 |
| 1.5579 | 2.7 | 6750 | 2.1605 |
| 1.5103 | 2.72 | 6800 | 2.1735 |
| 1.5369 | 2.74 | 6850 | 2.1711 |
| 1.6012 | 2.76 | 6900 | 2.1650 |
| 1.5058 | 2.78 | 6950 | 2.1683 |
| 1.6553 | 2.8 | 7000 | 2.1613 |
| 1.5858 | 2.82 | 7050 | 2.1664 |
| 1.6428 | 2.84 | 7100 | 2.1566 |
| 1.4619 | 2.86 | 7150 | 2.1620 |
| 1.5989 | 2.88 | 7200 | 2.1571 |
| 1.6181 | 2.9 | 7250 | 2.1598 |
| 1.5831 | 2.92 | 7300 | 2.1560 |
| 1.555 | 2.94 | 7350 | 2.1529 |
| 1.5387 | 2.96 | 7400 | 2.1593 |
| 1.5477 | 2.98 | 7450 | 2.1608 |
| 1.4989 | 3.0 | 7500 | 2.1686 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ABX-AI/Infinite-Laymons-7B | ABX-AI | 2024-04-09T23:58:19Z | 389 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:Nitral-AI/Infinitely-Laydiculous-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-03-28T08:49:19Z | ---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- Nitral-AI/Infinitely-Laydiculous-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

# Infinite-Laymons-7B
This model is intended for fictional storytelling and role-playing, with a focus on more original conversations and less alignment.
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range: [0, 32]
- model: Nitral-AI/Infinitely-Laydiculous-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Infinitely-Laydiculous-7B
parameters:
t:
- filter: self_attn
value: [0.7, 0.3, 0.6, 0.2, 0.5]
- filter: mlp
value: [0.3, 0.7, 0.4, 0.8, 0.5]
- value: 0.5
dtype: bfloat16
``` |
qeternity/Tess-70B-v1.6-Marlin | qeternity | 2024-03-28T23:58:23Z | 389 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
]
| text-generation | 2024-03-28T23:47:05Z | Entry not found |
sam749/epiCPhotoGasm-Last-nicorn | sam749 | 2024-04-02T02:40:04Z | 389 | 2 | diffusers | [
"diffusers",
"safetensors",
"Safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-04-02T02:24:51Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- Safetensors
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
pipeline_tag: text-to-image
---
# epiCPhotoGasm
## Last Unicorn

### Description:
> <p><em>Private Model Release</em><br />This is my finetuned private Model which is based on the Z Model with a merge of finetuned Photo Lora to improve photorealistic outcomes, dont expect too much change to the Z Version</p><p><strong>This is finally the last model update to come </strong><em><s>i swear</s></em></p>
### Creator: epinikion
### Civitai Page: https://civitai.com/models/132632
You can use this with the [🧨Diffusers library](https://github.com/huggingface/diffusers)
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "sam749/epiCPhotoGasm-Last-nicorn"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "masterpiece, best quality, 1girl, (colorful),(delicate eyes and face), volumatic light, ray tracing, bust shot ,extremely detailed CG unity 8k wallpaper,solo,smile"
image = pipe(prompt).images[0]
image.save("result.png")
```
|
TW3PartnersLLM/tw3jrglv3 | TW3PartnersLLM | 2024-04-02T18:48:54Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"davidkim205/Rhea-72b-v0.5",
"abacusai/Smaug-72B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-02T18:23:16Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- davidkim205/Rhea-72b-v0.5
- abacusai/Smaug-72B-v0.1
---
# TW3-JRGL-v3
TW3-JRGL-v3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5)
* [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1)
## 🧩 Configuration |
MaziyarPanahi/YamshadowInex12_T3qM7 | MaziyarPanahi | 2024-04-08T19:38:00Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/YamshadowInex12-7B",
"base_model:automerger/T3qM7-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-04-08T19:25:00Z | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: YamshadowInex12_T3qM7
base_model:
- automerger/YamshadowInex12-7B
- automerger/T3qM7-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# YamshadowInex12_T3qM7
YamshadowInex12_T3qM7 is a merge of the following models:
* [automerger/YamshadowInex12-7B](https://huggingface.co/automerger/YamshadowInex12-7B)
* [automerger/T3qM7-7B](https://huggingface.co/automerger/T3qM7-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/YamshadowInex12_T3qM7"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
weezywitasneezy/OxytocinEngineering-45B-passthrough | weezywitasneezy | 2024-04-08T22:42:30Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NeverSleep/CausalLM-RP-34B",
"Sao10K/Fimbulvetr-11B-v2",
"base_model:NeverSleep/CausalLM-RP-34B",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-08T21:48:31Z | ---
tags:
- merge
- mergekit
- lazymergekit
- NeverSleep/CausalLM-RP-34B
- Sao10K/Fimbulvetr-11B-v2
base_model:
- NeverSleep/CausalLM-RP-34B
- Sao10K/Fimbulvetr-11B-v2
license: cc-by-nc-4.0
---
# OxytocinEngineering-45B-passthrough
OxytocinEngineering-45B-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NeverSleep/CausalLM-RP-34B](https://huggingface.co/NeverSleep/CausalLM-RP-34B)
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: NeverSleep/CausalLM-RP-34B
layer_range: [0, 60]
- sources:
- model: Sao10K/Fimbulvetr-11B-v2
layer_range: [0, 48]
merge_method: passthrough
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weezywitasneezy/OxytocinEngineering-45B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
automerger/Inex12Yamshadowexperiment28-7B | automerger | 2024-04-13T08:33:44Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:MSL7/INEX12-7b",
"base_model:automerger/YamshadowExperiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-09T10:03:10Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- MSL7/INEX12-7b
- automerger/YamshadowExperiment28-7B
---
# Inex12Yamshadowexperiment28-7B
Inex12Yamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MSL7/INEX12-7b
layer_range: [0, 32]
- model: automerger/YamshadowExperiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MSL7/INEX12-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Inex12Yamshadowexperiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
bartowski/Mixtral-8x22B-v0.1-GGUF | bartowski | 2024-04-13T13:59:49Z | 389 | 11 | transformers | [
"transformers",
"gguf",
"mixtral",
"text-generation",
"moe",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-11T04:39:53Z | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
tags:
- moe
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Mixtral-8x22B-v0.1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2636">b2636</a> for quantization.
Original model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
## Prompt format
This is a base model with no prompt format.
## Download a file (not the whole branch) from below:
This is a very large model and has been split into parts for upload. Recent versions of llama.cpp allow you to load part 1 and the rest will load as well. Make sure you download all parts of the chosen size.
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mixtral-8x22B-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q5_K_M-00001-of-00005.gguf) | Q5_K_M | 99.96GB | High quality, *recommended*. |
| [Mixtral-8x22B-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q5_K_S-00001-of-00005.gguf) | Q5_K_S | 96.97GB | High quality, *recommended*. |
| [Mixtral-8x22B-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q4_K_M-00001-of-00005.gguf) | Q4_K_M | 85.58GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Mixtral-8x22B-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q4_K_S-00001-of-00005.gguf) | Q4_K_S | 80.47GB | Slightly lower quality with more space savings, *recommended*. |
| [Mixtral-8x22B-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-IQ4_NL-00001-of-00005.gguf) | IQ4_NL | 80.47GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Mixtral-8x22B-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-IQ4_XS-00001-of-00005.gguf) | IQ4_XS | 76.35GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Mixtral-8x22B-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q3_K_L-00001-of-00005.gguf) | Q3_K_L | 72.57GB | Lower quality but usable, good for low RAM availability. |
| [Mixtral-8x22B-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf) | Q3_K_M | 67.78GB | Even lower quality. |
| [Mixtral-8x22B-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-IQ3_M-00001-of-00005.gguf) | IQ3_M | 64.49GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Mixtral-8x22B-v0.1-IQ3_S.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-IQ3_S-00001-of-00005.gguf) | IQ3_S | 61.49GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Mixtral-8x22B-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q3_K_S-00001-of-00005.gguf) | Q3_K_S | 61.49GB | Low quality, not recommended. |
| [Mixtral-8x22B-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-IQ3_XS-00001-of-00005.gguf) | IQ3_XS | 58.22GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Mixtral-8x22B-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Mixtral-8x22B-v0.1-GGUF/blob/main/Mixtral-8x22B-v0.1-Q2_K.gguf) | Q2_K | 52.10GB | Very low quality but surprisingly usable. |
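If you prefer to script the download, here is a minimal Python sketch using `huggingface_hub` (the repo id and filename pattern are taken from the table above; swap the glob for whichever quant you picked):
```python
# Minimal sketch: fetch every part of one quant from this repo with huggingface_hub.
# The allow_patterns glob matches the split Q4_K_M files listed in the table above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Mixtral-8x22B-v0.1-GGUF",
    allow_patterns=["Mixtral-8x22B-v0.1-Q4_K_M-*.gguf"],
    local_dir="Mixtral-8x22B-v0.1-GGUF",
)
```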
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/NeuralMathsRolePlay-7b-GGUF | mradermacher | 2024-05-06T04:57:43Z | 389 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Kukedlc/NeuralMathsRolePlay-7b",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-14T00:12:41Z | ---
base_model: Kukedlc/NeuralMathsRolePlay-7b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Kukedlc/NeuralMathsRolePlay-7b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
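As a small illustrative sketch (not part of the original card), a single quant can also be pulled with the `huggingface_hub` Python API; the filename below is the Q4_K_M entry from the table that follows:
```python
# Sketch: download one GGUF quant from this repo (filename from the table below).
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/NeuralMathsRolePlay-7b-GGUF",
    filename="NeuralMathsRolePlay-7b.Q4_K_M.gguf",
)
print(gguf_path)  # local path to the downloaded file
```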
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralMathsRolePlay-7b-GGUF/resolve/main/NeuralMathsRolePlay-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mayacinka/Calme-Rity-stock | mayacinka | 2024-04-17T20:58:46Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-17T14:24:18Z | ---
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- liminerity/M7-7b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9) as a base.
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- model: liminerity/M7-7b
merge_method: model_stock
base_model: MaziyarPanahi/Calme-7B-Instruct-v0.9
dtype: bfloat16
``` |
jsfs11/MixtralxWizardLM2-8x22B-SLERP-v0.1 | jsfs11 | 2024-04-19T04:46:12Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:alpindale/WizardLM-2-8x22B",
"base_model:mistralai/Mixtral-8x22B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T09:41:08Z | ---
base_model:
- alpindale/WizardLM-2-8x22B
- mistralai/Mixtral-8x22B-Instruct-v0.1
tags:
- mergekit
- merge
license: apache-2.0
---
# Probably BROKEN, look for v0.2 which should be working!
* Test merge of two extremely large MoE models using SLERP. Don't know if it's working yet, haven't had time or hardware to test.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
* [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mixtral-8x22B-Instruct-v0.1
layer_range: [0, 55]
- model: alpindale/WizardLM-2-8x22B
layer_range: [0, 55]
merge_method: slerp
base_model: mistralai/Mixtral-8x22B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_2 | ShenaoZhang | 2024-04-27T07:28:02Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-27T06:55:27Z | ---
license: mit
base_model: ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.001_4iters_bs128_nodpo_only4w_iter_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_iter_2
This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1](https://huggingface.co/ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
|
baricevic/flax-instruct-pix2pix | baricevic | 2024-05-16T22:44:28Z | 389 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-30T13:49:59Z | Entry not found |
QuantFactory/Phi-3-mini-4k-instruct-GGUF-v2 | QuantFactory | 2024-07-02T17:04:15Z | 389 | 0 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
]
| text-generation | 2024-05-05T13:58:26Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
# Phi-3-mini-4k-instruct-GGUF
- This is a quantized version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) created using llama.cpp
- Quants were created using the latest release of llama.cpp, dated 5.5.2024
## Model Description
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
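As a rough, self-contained sketch (not from the original card) of how this chat format can be applied with `transformers` and the tokenizer's built-in chat template — note it loads the original full-precision checkpoint rather than these GGUF quants, and the generation settings are illustrative only:
```python
# Illustrative sketch only: load the original checkpoint (not these GGUF quants),
# build the chat-format prompt via the chat template, and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # reduces memory; move the model to GPU if available
    trust_remote_code=True,       # required as described above
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```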
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
NotAiLOL/Athena-OrpoMistral-7B-v0.1-4bit | NotAiLOL | 2024-05-10T16:09:12Z | 389 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"orpo",
"conversational",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-05-07T13:42:26Z | ---
library_name: transformers
tags:
- trl
- orpo
license: apache-2.0
datasets:
- mlabonne/orpo-dpo-mix-40k
language:
- en
---
# Chat version using ChatML format: https://huggingface.co/NotAiLOL/Athena-OrpoMistral-7B-Chat-v0.1
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Liu Hong Yuan Tom
- **Funded by [optional]:** N/A
- **Shared by [optional]:** N/A
- **Model type:** Mistral
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [mistralai/Mistral-7B-Instruct-v0.1](mistralai/Mistral-7B-Instruct-v0.1)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** N/A
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
To be included.
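Until the official snippet is added, the following is an assumed minimal sketch (not the author's code) for loading this pre-quantized 4-bit checkpoint; `bitsandbytes` and `accelerate` are required, and the prompt format below assumes the base Mistral-Instruct `[INST]` style rather than ChatML.
```python
# Assumed sketch, not the author's official example.
# The checkpoint ships 4-bit (bitsandbytes) weights, so bitsandbytes + accelerate are needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotAiLOL/Athena-OrpoMistral-7B-v0.1-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] What does ORPO optimize during fine-tuning? [/INST]"  # assumed prompt style
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```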
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Dataset used: [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) by [Maxime Labonne](https://huggingface.co/mlabonne).
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
* Platform: Kaggle
* GPU: 1 x NVIDIA Tesla P100
More details at [Wandb](https://wandb.ai/liuhongyuan3000/huggingface/reports/Fine-Tuning-Mistral-7B-Instruct-v0-1--Vmlldzo3ODQ2NDc2?accessToken=nuv9ds79uglgqfav83jz6058y59tgb1la7o6fed3dzlgaku95h50ofnch5akqn8a).
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore | adamo1139 | 2024-05-27T21:39:24Z | 389 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-08T22:27:19Z | ---
license: apache-2.0
---
Finetune of Yi-34B-200K (the version with better ctx, Yi-34B-200K v2 or Yi-34B-200K-XLCTX (my preferred name)) on the adamo1139/rawrr_v2_2_stage1 dataset via ORPO and GaLore on 4-bit (bnb) weights.
This is not a chat model!! It's meant to serve as a base for further finetuning that carries less of the behaviour inherited from training on OpenAI-style AI-generated content. If you don't want your finetune to sound like an AI model, using this as a base should be a good idea.
DrNicefellow/Microscopic-Mistral-63k-steps | DrNicefellow | 2024-05-09T20:55:15Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-09T05:21:12Z | ---
license: apache-2.0
---
Self-trained microscopic Mistral. Around 810M parameters.
The tokenizer is the one from https://huggingface.co/mistralai/Mistral-7B-v0.1.
It is being trained on around 400B tokens and this is step 63k.
The evaluation is being conducted now.
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a cup of 2$ coffee or iced tea?🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
ibivibiv/llama-3-ultra-dpo-8B | ibivibiv | 2024-05-13T13:41:52Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-13T07:38:06Z | ---
library_name: transformers
license: llama3
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DrNicefellow/microscopic-mamba-2.1B-hf-14.5ksteps | DrNicefellow | 2024-05-18T07:58:35Z | 389 | 0 | transformers | [
"transformers",
"pytorch",
"mamba",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-18T07:50:57Z | ---
license: apache-2.0
---
Self-trained microscopic Mamba. Around 2.1B parameters.
The tokenizer is the one from https://huggingface.co/state-spaces/mamba-2.8b-hf.
It is being trained on around 400B tokens and this is step 14.5k.
The evaluation is being conducted now.
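A rough loading sketch (assumed, not part of the original card); the checkpoint is in the Mamba HF format, so a `transformers` release with Mamba support is required:
```python
# Assumed sketch: load this mamba-hf checkpoint (needs a transformers release with Mamba support).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/microscopic-mamba-2.1B-hf-14.5ksteps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer("The history of language modelling", return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```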
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a cup of 2$ coffee or iced tea?🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink?
|
allknowingroger/MultiMash6-12B-slerp | allknowingroger | 2024-05-22T10:42:07Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"allknowingroger/Multimerge-Neurallaymons-12B-MoE",
"RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2",
"base_model:allknowingroger/Multimerge-Neurallaymons-12B-MoE",
"base_model:RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-22T10:18:17Z | ---
tags:
- merge
- mergekit
- lazymergekit
- allknowingroger/Multimerge-Neurallaymons-12B-MoE
- RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2
base_model:
- allknowingroger/Multimerge-Neurallaymons-12B-MoE
- RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2
license: apache-2.0
---
# MultiMash6-12B-slerp
MultiMash6-12B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/Multimerge-Neurallaymons-12B-MoE](https://huggingface.co/allknowingroger/Multimerge-Neurallaymons-12B-MoE)
* [RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2](https://huggingface.co/RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: allknowingroger/Multimerge-Neurallaymons-12B-MoE
layer_range: [0, 32]
- model: RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: allknowingroger/Multimerge-Neurallaymons-12B-MoE
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/MultiMash6-12B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/Mahou-1.2a-llama3-8B-GGUF | mradermacher | 2024-05-27T01:40:39Z | 389 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"base_model:nbeerbower/Mahou-1.2a-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-25T07:56:47Z | ---
base_model: nbeerbower/Mahou-1.2a-llama3-8B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/nbeerbower/Mahou-1.2a-llama3-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
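As a concrete illustration (not part of the original card), one of the quants listed in the table below can be fetched with `huggingface_hub` and run with `llama-cpp-python`; any GGUF-capable runtime works equally well, and the plain-text prompt here skips the model's chat template for brevity:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant from this repository (file name taken from the table below).
path = hf_hub_download(
    repo_id="mradermacher/Mahou-1.2a-llama3-8B-GGUF",
    filename="Mahou-1.2a-llama3-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=8192)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```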
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-GGUF/resolve/main/Mahou-1.2a-llama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf | RichardErkhov | 2024-05-30T00:45:44Z | 389 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-29T21:36:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
airoboros-m-7b-3.0 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingface.co/jondurbin/airoboros-m-7b-3.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [airoboros-m-7b-3.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q2_K.gguf) | Q2_K | 2.53GB |
| [airoboros-m-7b-3.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [airoboros-m-7b-3.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [airoboros-m-7b-3.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [airoboros-m-7b-3.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [airoboros-m-7b-3.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q3_K.gguf) | Q3_K | 3.28GB |
| [airoboros-m-7b-3.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [airoboros-m-7b-3.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [airoboros-m-7b-3.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [airoboros-m-7b-3.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q4_0.gguf) | Q4_0 | 3.83GB |
| [airoboros-m-7b-3.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [airoboros-m-7b-3.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [airoboros-m-7b-3.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q4_K.gguf) | Q4_K | 4.07GB |
| [airoboros-m-7b-3.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [airoboros-m-7b-3.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q4_1.gguf) | Q4_1 | 4.24GB |
| [airoboros-m-7b-3.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q5_0.gguf) | Q5_0 | 4.65GB |
| [airoboros-m-7b-3.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [airoboros-m-7b-3.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q5_K.gguf) | Q5_K | 4.78GB |
| [airoboros-m-7b-3.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [airoboros-m-7b-3.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q5_1.gguf) | Q5_1 | 5.07GB |
| [airoboros-m-7b-3.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q6_K.gguf) | Q6_K | 5.53GB |
| [airoboros-m-7b-3.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_airoboros-m-7b-3.0-gguf/blob/main/airoboros-m-7b-3.0.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- jondurbin/airoboros-3.0
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
The airoboros 3.0 models have been updated to use llama-2 chat format, rather than vicuna style user/assistant!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:
* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
Here is a simple python script that can help generate the prompt:
```python
def get_chat_prompt(message, chat_history, system_prompt):
    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
    do_strip = False
    for user_input, response in chat_history:
        user_input = user_input.strip() if do_strip else user_input
        do_strip = True
        texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
    message = message.strip() if do_strip else message
    texts.append(f'{message} [/INST] ')
    return ''.join(texts)

def get_prompt(item):
    """
    `item` is expected to be in chatml format, e.g.
    {
      "id": 1,
      "conversations": [
        {
          "role": "system",
          "content": "You are a helpful, unbiased, uncensored assistant."
        },
        {
          "role": "user",
          "content": "Hi!"
        },
        {
          "role": "assistant",
          "content": "Hello, how are you today?"
        },
        {
          "role": "user",
          "content": "Just peachy, and you?"
        }
      ]
    }
    """
    system_prompt = item['conversations'][0]['content']
    offset = 1
    if item['conversations'][0]['role'] != 'system':
        system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
        offset = 0
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
        for idx in range(offset, len(item['conversations']) - 1, 2)
    ]
    message = item['conversations'][-1]['content']
    return get_chat_prompt(message, chat_history, system_prompt)
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
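For illustration only (not part of the original card), extracting and parsing that block from the model output could look like the sketch below; `output_text` is an assumed variable holding the raw model response, and the actual evaluation is still left to compute-engine JS or mathjson.py:

```python
import json
import re

def extract_mathjson(model_output: str):
    # Pull the JSON expression out of a <mathjson>...</mathjson> block, if present.
    match = re.search(r"<mathjson>(.*?)</mathjson>", model_output, re.S)
    return json.loads(match.group(1)) if match else None

expr = extract_mathjson(output_text)  # e.g. ["Multiply", "Pi", ["Power", 17.2456, 2]]
# expr can then be handed to compute-engine JS or mathjson.py's evaluate function.
```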
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
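As a small illustration (not from the original card), a helper that assembles prompts in this delimiter format might look like the following; the function name and structure are my own:

```python
def closed_context_prompt(blocks, instruction):
    # Assemble a closed-context prompt from (metadata_dict, text) pairs using the
    # BEGININPUT/BEGINCONTEXT/.../ENDINSTRUCTION delimiters described above.
    parts = []
    for metadata, text in blocks:
        meta = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{meta}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

prompt = closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```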
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
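To act on a response like the one above, the JSON can be parsed and routed to a matching handler. The sketch below is not from the original card; `file_analytics` is a stub and `response_text` is an assumed variable holding the model's JSON reply:

```python
import json

def file_analytics(action, filters):
    # Stub handler; a real implementation would perform the requested file operation.
    return f"would run {action} with filters {filters}"

HANDLERS = {"file_analytics": file_analytics}

call = json.loads(response_text)                        # parse the model's JSON reply
result = HANDLERS[call["function"]](**call["params"])   # dispatch to the chosen function
```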
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
haes95/POLAR-7B-HES-SFT-v0.1 | haes95 | 2024-05-30T00:38:21Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-05-29T23:57:06Z | ---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details

## Model Description
<!-- Provide a longer summary of what this model is/does. -->
POLAR is a Korean LLM developed by Plateer's AI-lab. It was inspired by Upstage's SOLAR. We will continue to evolve this model and hope to contribute to the Korean LLM ecosystem.
- **Developed by:** AI-Lab of Plateer(Woomun Jung, Eunsoo Ha, MinYoung Joo, Seongjun Son)
- **Model type:** Language model
- **Language(s) (NLP):** ko
- **License:** apache-2.0
- Parent Model: x2bee/POLAR-14B-v0.2
## Direct Use
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("haes95/POLAR-7B-HES-SFT-v0.1")
model = AutoModelForCausalLM.from_pretrained("haes95/POLAR-7B-HES-SFT-v0.1")
```
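Continuing from the snippet above, a short generation example (not part of the original card; the Korean prompt simply asks the model to introduce itself):

```python
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```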
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
If you would like more information about our company, please visit the link below.
[tech.x2bee.com](https://tech.x2bee.com/)
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Woomun Jung, MinYoung Joo, Eunsu Ha, Seungjun Son
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details> |
RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf | RichardErkhov | 2024-05-30T19:46:23Z | 389 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-30T16:39:40Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Dans-AdventurousWinds-Mk2-7b - GGUF
- Model creator: https://huggingface.co/PocketDoc/
- Original model: https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Dans-AdventurousWinds-Mk2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [Dans-AdventurousWinds-Mk2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Dans-AdventurousWinds-Mk2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Dans-AdventurousWinds-Mk2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Dans-AdventurousWinds-Mk2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Dans-AdventurousWinds-Mk2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [Dans-AdventurousWinds-Mk2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Dans-AdventurousWinds-Mk2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Dans-AdventurousWinds-Mk2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Dans-AdventurousWinds-Mk2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Dans-AdventurousWinds-Mk2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Dans-AdventurousWinds-Mk2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Dans-AdventurousWinds-Mk2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [Dans-AdventurousWinds-Mk2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Dans-AdventurousWinds-Mk2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Dans-AdventurousWinds-Mk2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Dans-AdventurousWinds-Mk2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Dans-AdventurousWinds-Mk2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [Dans-AdventurousWinds-Mk2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Dans-AdventurousWinds-Mk2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Dans-AdventurousWinds-Mk2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [Dans-AdventurousWinds-Mk2-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/PocketDoc_-_Dans-AdventurousWinds-Mk2-7b-gguf/blob/main/Dans-AdventurousWinds-Mk2-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
language:
- en
datasets:
- PocketDoc/Floyd-Text-Adventures
- PocketDoc/Choose-Your-Story-Long-Text-Adventures
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---
**16k training sequences**
**Quantized versions courtesy of [TheBloke](https://huggingface.co/TheBloke):**
- [GGUF](https://huggingface.co/TheBloke/Dans-AdventurousWinds-Mk2-7B-GGUF) - For CPU and or GPU
- [GPTQ](https://huggingface.co/TheBloke/Dans-AdventurousWinds-Mk2-7B-GPTQ) - For GPU
- [AWQ](https://huggingface.co/TheBloke/Dans-AdventurousWinds-Mk2-7B-AWQ) - New format for GPU
# What is the model for?
This model is proficient in crafting text-based adventure games. It can produce both concise replies and more expansive, novel-like descriptions. The ability to alternate between these two response styles can be triggered by a distinct system message.
# What's in the sausage?
This model was trained on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) using a custom version of the floyd and cys datasets from the skein text adventure dataset. The dataset has been improved over what was used in my previous model, Dans-AdventurousWinds; in addition, this model was trained with 16k sequence lengths.
## Prompt format and examples:
#### Format description:
Traditional adventure format with a description of the genre, tags, and other misc information at the top, followed by an optional introductory message. User messages are prefixed with two newlines and "> "; model messages begin one newline after the last user message, or after the description block when an introductory message is desired.
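The following rough sketch (mine, not the card author's) shows one way to assemble a prompt in this format:

```python
def adventure_prompt(header, intro, turns):
    # Header block, optional intro line, then user turns prefixed with two newlines
    # and '> '; each model reply begins on the line after the user message.
    prompt = header
    if intro:
        prompt += "\n" + intro
    for user_message, model_reply in turns:
        prompt += f"\n\n> {user_message}\n"
        if model_reply:
            prompt += model_reply
    return prompt

prompt = adventure_prompt(
    "[Genres: Science Fiction]\n[Tags: humor, old school, sci fi]\n[Mode: Adventure]",
    "You are an interstellar trader.",
    [("You check the cargo hold", None)],  # trailing None leaves the reply for the model
)
```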
#### Examples of shorter responses:
> [Genres: Science Fiction]
> [Tags: humor, old school, sci fi]
> [Mode: Adventure]
> [Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
> [Misc: Writing era: 1993]
> [Intro]
> You are an interstellar trader.
>
> On your way to the galactic capital of Coruscant to sell a large quantity of fettucini, you make a detour through a planetary system with a reputation for being dangerous and unstable. The reason is simple -- there is a very powerful magical item known as "The Sword Of Damocles" up for sale. You have no idea what it does, but you've heard that the previous owners all ended up dead or broke. In other words, these people were so desperate to get rid of the sword that they would probably be willing to take almost any price. It sounds like a great deal!
>
> Of course, you don't have enough money to buy the sword outright, which means that you will need to find some way to finance the purchase. There is only one possible source of funds in this part of space -- the loan sharks who operate out of a nearby planet called Riggerton. But these loans come at a high interest rate; if you can't pay them back quickly, you may end up losing everything.
>
> You also know that there are several potential buyers vying for ownership of the sword, including some of the most ruthless criminals in the galaxy. These people might be willing to kill each other over this object, making it more valuable than ever before! However, they are notoriously paranoid and suspicious, so you must proceed with extreme caution when dealing with them...
>
> Finally, there is always the chance that someone else will beat you to the punch by buying the sword first. If that happens then you won't be able to afford it anymore anyway, so maybe it doesn't matter too much?
>
> In any case, good luck! And remember: never borrow money from anyone unless you absolutely trust him completely. Otherwise he could turn around and sell YOU into slavery instead!
>
> Ready? Then let's begin...
>
> \> You snap out of your daydream and remember you're still in port.
> You should probably get ready to go somewhere now...
>
> There are three main places where traders usually hang out: the docking bay (where ships land), the casino (a place where people gamble) and the black market (an illegal bazaar). Each has its own advantages and disadvantages depending on what kind of goods you want to trade. For example, the docking bay is the safest option because it's run by the government, but it charges higher taxes; while the black market is riskier but cheaper.
>
> So where do you think you ought to go?
>
> \> You get ready for launch
> Okay, I guess we better start looking for something interesting then...
>
> Before we leave though, I just wanted to mention that there are two ways to travel between planets: hyperspace jumps and wormholes. Both methods require special technology that only exists inside starships, but they work differently.
>
> A hyperspace jump involves using a hyperdrive engine to accelerate faster than light speed until you reach another galaxy entirely. Once there, you slow down again and return home. This method takes longer than a wormhole, but it allows you to visit distant worlds without having to worry about getting lost along the way.
>
> Wormholes however, allow you to travel instantly across vast distances without needing to slow down at either end. Unfortunately, this type of transportation isn't available everywhere, plus it requires extremely powerful computers to map out their locations accurately. Still, if you happen to find yourself near one of these things then I strongly recommend taking advantage of it since it saves time.
>
> Anyway, let me know whenever you're ready to go. We'll see how far away our next stop lies...
>
> \> You check the database on wormhole technology
> Okay, here goes nothing...
>
> According to my research, there are currently three types of wormholes known to science: stable, unstable and artificial. Stable ones occur naturally throughout the universe and tend to stay open permanently. Unstable ones close after passing through once, whereas artificially created ones remain open indefinitely.
>
> As far as I know, none of these types are suitable for commercial use yet. However, scientists believe that further development will eventually lead to practical applications in areas such as teleportation and faster-than-light travel. Until then though, I doubt whether anything useful can be achieved simply by reading books about the subject.
>
> Sorry if I disappointed you there, but I hope this information proves useful nonetheless. Good luck with whatever comes next!
>
> P.S.: Before leaving, please ensure that all loose objects have been put away properly. Otherwise they could fall off the ship during flight and cause damage later on. Thanks!
#### Examples of longer novel like responses:
> [Genres: grimdark fantasy] [Tags: Fantasy, Grimdark, Serious, Villain Protagonist, War] [Description: This story is definitely more "story" than "game", however there are two distinct paths that can be followed, each with their own pitfalls and additional choices. This story can get long, and assuming you're surviving, years do pass in it and you and your environment will change.] [First message intro] [Mode: Interactive Storyteller]
> You have been a mercenary for many years now. You are an accomplished warrior and battlefield commander. Your company consists of about 50 men and women. Most are veteran soldiers from different countries, some even from as far away as the Americas. They are all loyal to you. The only thing keeping them from being devoted followers would be your atheism.
>
> The war has raged on for many years now. The enemy, the Kingdom of Erenland, is a nation of fanatics who worship a single god called Galdrin. Their religion spread across the land like wildfire after their king ascended to the throne. He declared himself a prophet and made his people believe they were invincible. They started invading neighboring nations to bring them into the fold. The king of your homeland, King Dagor, was one of the first to resist this. When he refused to allow missionaries into his country, the Erenlanders invaded anyway. It took several years before King Dagor managed to gather enough forces to repel the invasion. The war lasted for decades and finally ended when Dagor's grandson ascended to the throne. At that point, both sides realized how much blood had already been spilled and agreed to a truce.
>
> Unfortunately, the peace didn't last very long. A few years ago, King Dagor II died without any children. His cousin became the new monarch but she was weak and incompetent. She allowed her advisors to make most of the important decisions while she spent her time partying with nobles from other nations. One of these advisors convinced the queen to send troops to help quell rebellions that were taking place in the Erenlander cities. These rebellions were caused by dissatisfaction with the way the crown was handling things. They wanted to overthrow their ruler and install a new one that would listen to them.
>
> When word reached the court that the rebels had succeeded in killing the king, everyone panicked. The Queen ordered her army to march north immediately. Unfortunately, she hadn't thought through what would happen if the enemy army attacked during this period of chaos. The result was disastrous. The Erenlanders caught our army completely off guard and annihilated us almost entirely. Only a handful survived to tell the tale. Among those survivors was you.
>
> You have come back home to bury your fallen comrades. You know you won't find rest until you avenge their deaths. But where do you begin? How do you stop an entire nation from going to war against yours? And how do you convince others to join you when they see you as nothing more than a madman bent on destruction?
>
> These questions haunt you every night. You lie awake staring at the ceiling wondering how you could possibly make things right again. Then you remember something...
>
> There was once a great hero named Galadar who lived hundreds of years ago. He fought against tyranny and oppression throughout his lifetime. People said that he possessed magical powers which gave him strength beyond anything mortal. Some even claimed that he walked among the stars. After retiring from active duty, Galadar vanished mysteriously. No one knows where he went or why. Rumors say that he returned to fight alongside the gods against evil forces threatening to destroy existence itself. Whatever happened to him, no one knows for sure. All we do know is that his legacy lives on today through those who follow his example. Perhaps you too can become such a person. Perhaps you can use your talents to save the world from darkness just like he did so long ago. If you feel ready, then take up arms and prepare yourself for battle!
>
> Your journey begins here...
>
> You wake up feeling refreshed after a good night's sleep. You look around your bedroom and notice that everything seems normal. There isn't anyone else around except for your wife, who lies sleeping next to you. You turn towards her and smile softly. She stirs slightly and mumbles something incoherent. You stroke her hair gently and whisper sweet nothings into her ear. Soon she falls back asleep and you decide to leave her alone.
>
> As soon as you step outside your house, you hear voices coming from behind you. Turning quickly, you spot three young boys running down the street toward you. They stop suddenly and stare at you wide eyed. "Are you the man who killed the giants?" one asks nervously.
>
> You nod slowly. "Yes," you reply.
>
> "Then I guess we should thank you." Another boy adds.
>
> "Yeah," says the third one. "We heard that giants ate babies."
>
> You shake your head sadly. "No," you explain. "Giants don't eat babies. That's just a myth."
>
> "Oh really?" the first boy says skeptically. "Well then why did my parents move away from their village?"
>
> "I dunno," you shrug. "Maybe because they believed the same thing you did."
>
> The three kids look surprised. Then they laugh heartily together. "Haha!" they giggle. "That's funny!"
>
> You grin back at them. "Thanks."
>
> They run off laughing hysterically. You chuckle quietly to yourself. Kids are always full of surprises.
>
> After spending some quality time with your family, you return home early in the afternoon. You grab a cup of tea and sit down at the kitchen table to read the newspaper. As usual, most of the stories involve wars somewhere else in the world. There aren't any articles related to your own conflict yet though. Maybe tomorrow...
>
> Suddenly, you hear loud knocking on the front door. You stand up abruptly and rush over to answer it. Standing before you is an old man wearing a long beard and holding a staff in his hand. He looks familiar somehow.
>
> "Hello sir," he greets politely. "My name is Galanis. I am here representing the Academy of Magic Studies."
>
> "Ah yes," you respond. "I remember seeing advertisements for your school everywhere recently. What can I do for you?"
>
> "Well sir," he begins, "we have received reports that you possess certain abilities which fall under our area of expertise. Specifically, we understand that you are capable of controlling fire using mental telepathy. Is this true?"
>
> You nod slowly. "It certainly is."
>
> "Excellent," he smiles happily. "In light of these circumstances, I would like to offer you a position teaching at our academy. We require instructors skilled in various aspects of magic including conjuration, enchantment, necromancy, divination, alchemy, and astrology."
>
> You shake your head firmly. "I appreciate the offer Mr. Galanis but I must decline. My duties lie elsewhere right now."
>
> His face falls momentarily. "Very well then," he sighs dejectedly. "Perhaps another opportunity will arise later on."
>
> With that said, he turns around and leaves without saying another word. You watch him depart silently wondering whether you should have accepted his proposal. On the one hand, it might provide some distraction from the war. On the other hand, you doubt you could ever fit into such a structured environment. Besides, you wouldn't want to spend years away from home either. In the end, you decide to stay put and continue doing what you've always done best - fighting for justice.
>
> \> You go to the great library
> You enter the Great Library of Alexandria hoping to gain knowledge about the Erenlander religion. Unfortunately, you find out that nobody there knows anything about it. The closest thing resembling a book on the subject is a small pamphlet titled "A Brief Overview Of Religion". According to its author, the writer claims that religions differ mainly due to differences between societies. Each society creates its own beliefs based upon its unique experiences. Since no two cultures are alike, there cannot be any universal truth regarding divine matters. Therefore, there is no reason to expect one religion to be superior over another.
>
> You think deeply about this concept. Does it apply to the Erenlander faith? Are their beliefs simply an expression of local customs rather than absolute truths? Or does their doctrine truly reflect reality? These thoughts weigh heavily upon your mind. Suddenly, you hear someone calling your name. Turning around, you spot a librarian approaching you.
>
> "Mr. Thane," he says cheerfully. "Can I help you find something specific?"
>
> "Actually yes," you reply. "Do you happen to know anything about the Erenlander religion?"
>
> He scratches his chin thoughtfully. "Not really," he admits. "But let me show you something anyway." He leads you over to a large table covered with books. Picking one up randomly, he starts reading aloud from it.
>
> "According to ancient legends, Galdrin created the universe out of nothingness. Before that, nothing existed except pure chaos. Out of this void came forth life and light. With these gifts, Galdrin blessed humanity with wisdom and understanding. Through hard work and determination, humankind learned how to cultivate crops, build shelters, manufacture weapons, develop technology, establish governments, maintain order, and defend itself against enemies. By following the teachings of Galdrin, humans were able to rise above mere animals and achieve greatness beyond compare. Those who rejected his teachings fell victim to ignorance and decay. Eventually, their civilization collapsed leaving only ruins behind. Today, descendants of those original settlers still live amongst us. Although their culture has changed considerably since antiquity, they retain many traditions common among modern-day peoples. Their language remains virtually unchanged despite centuries of contact with foreign languages. Their art forms range widely ranging from poetry to sculpture to music. Their religion continues to flourish despite attempts to suppress it throughout history. And most importantly, their belief system continues to guide them along the path of righteousness."
>
> You listen intently as he speaks. His words seem familiar somehow. Has he told you this story before? Or perhaps it was passed down through generations within your family? Either way, you find yourself drawn into the narrative. Before long, you forget everything else happening around you. Everything fades away except for the sound of his voice echoing throughout your ears.
>
> "So why did Galdrin create the world?" the librarian finishes. "Because he loved us," he answers softly. "And because he knew we needed somewhere safe to call home."
>
> You nod slowly. "Thank you," you murmur. "That was quite interesting."
>
> "Glad to hear it," he grins broadly. "If you ever need anything else, please feel free to ask."
>
> You bid farewell to the librarian and walk back home. During the journey, you ponder over what you learned earlier. Did Galdrin actually exist? Was he real or imaginary? Could he possibly be connected to the Erenlander faith somehow? So many questions remain unanswered. Still, you sense that something significant occurred tonight. Something special transpired inside the library walls. Now, you must figure out exactly what happened.
# Some quick and dirty training details:
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Sequence length: 16384
- \# of epochs: 3
- Training time: 5 hours
- Hardware: 4x RTX 4090
- Training type: QLoRA
- PEFT R/A: 32/32
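For readers curious what that setup roughly corresponds to in code, here is a minimal, hypothetical sketch of the QLoRA configuration described above (the base model id is a placeholder, and the actual training was driven by Axolotl rather than this script):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the base model in 4-bit before attaching LoRA adapters
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: substitute the actual base model
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA rank/alpha 32/32, matching the "PEFT R/A: 32/32" entry above
lora = LoraConfig(r=32, lora_alpha=32, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```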
# Credits:
### Skein Text Adventure Data:
Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
|
mradermacher/llama-3-SNAMD-8B-GGUF | mradermacher | 2024-06-26T20:53:17Z | 389 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/llama-3-SNAMD-8B",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-01T16:22:57Z | ---
base_model: nbeerbower/llama-3-SNAMD-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nbeerbower/llama-3-SNAMD-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-SNAMD-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
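As one concrete (and hedged) example, after downloading a quant from the table below it can be run with a recent llama.cpp build roughly like this:

```bash
# illustrative flags; adjust the prompt and token count to taste
./llama-cli -m llama-3-SNAMD-8B.Q4_K_M.gguf -p "Write a haiku about model merging." -n 128
```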
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-SNAMD-8B-GGUF/resolve/main/llama-3-SNAMD-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
OpenBuddy/openbuddy-yi1.5-34b-v21.2-32k | OpenBuddy | 2024-06-03T13:27:19Z | 389 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mixtral",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T08:07:37Z | ---
license: apache-2.0
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
tags:
- mixtral
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/01-ai/Yi-1.5-34B-32K
License: Apache 2.0
# Prompt Format
We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.
```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
You cannot access the internet, but you have vast knowledge, cutoff: 2023-04.
You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```
This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
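As a minimal sketch (assuming the bundled chat template accepts the standard system/user/assistant roles, and with illustrative generation settings), the prompt format can be applied automatically via `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenBuddy/openbuddy-yi1.5-34b-v21.2-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # fast tokenizer by default
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Hello, who are you?"},
]
# The chat template from tokenizer_config.json renders the <|role|>/<|says|>/<|end|> format
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```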
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
aipib/karasu-chatvector-mlx_lm-chatalpaca | aipib | 2024-06-04T11:46:16Z | 389 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"niryuu/Karasu-1.1b-chat-vector",
"conversational",
"ja",
"en",
"dataset:shi3z/alpaca_cleaned_ja_json",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-03T13:11:50Z | ---
tags:
- niryuu/Karasu-1.1b-chat-vector
language:
- ja
- en
datasets:
- shi3z/alpaca_cleaned_ja_json
---
# karasu-chatvector-mlx_lm-chatalpaca
Karasu model fine-tuned with the LoRA method on alpaca_cleaned_ja_json.
* Base model: niryuu/Karasu-1.1b-chat-vector
* Training dataset: shi3z/alpaca_cleaned_ja_json formatted with mlx's chat template
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/suzume-taskarith1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
aidiary/open-calm-small-positive | aidiary | 2024-06-04T10:40:20Z | 389 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:cyberagent/open-calm-small",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-06-04T10:34:02Z | ---
license: cc-by-sa-4.0
base_model: cyberagent/open-calm-small
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: open-calm-small-positive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# open-calm-small-positive
This model is a fine-tuned version of [cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
netcat420/MFANN3bv0.12-GGUF | netcat420 | 2024-06-06T03:12:59Z | 389 | 0 | null | [
"gguf",
"license:mit",
"region:us"
]
| null | 2024-06-05T05:30:12Z | ---
license: mit
---
System Prompt:

```
Instruct: {instruction}
Output:
```
|
mradermacher/Augmentasanguis-8x7B-GGUF | mradermacher | 2024-06-17T23:36:32Z | 389 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:Envoid/Augmentasanguis-8x7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-13T13:20:32Z | ---
base_model: Envoid/Augmentasanguis-8x7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Envoid/Augmentasanguis-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Augmentasanguis-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Augmentasanguis-8x7B-GGUF/resolve/main/Augmentasanguis-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Wsassi/Hermes-2-Pro-Mistral-7B_function_calling_merged_gguf | Wsassi | 2024-06-18T11:25:50Z | 389 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-18T10:58:00Z | Entry not found |
Meli/GPT2-Prompt | Meli | 2021-05-21T10:55:36Z | 388 | 11 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
language:
- en
tags:
- gpt2
- text-generation
pipeline_tag: text-generation
widget:
- text: "A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt]"
- text: "A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries. [endprompt]"
---
# GPT-2 Story Generator
## Model description
Generate a short story from an input prompt.
Append the token ` [endprompt]` to the end of your input.
Example of an input:
```
A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt]
```
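A hedged usage sketch with the `transformers` pipeline (sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Meli/GPT2-Prompt")

prompt = ("A person with a high school education gets sent back into the 1600s "
          "and tries to explain science and technology to the people. [endprompt]")
story = generator(prompt, max_length=300, do_sample=True, top_p=0.95)[0]["generated_text"]
print(story)
```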
#### Limitations and bias
The data we used for training was collected from Reddit, so it could be heavily biased towards a young, white, male demographic.
## Training data
The data was collected by scraping Reddit. |
thennal/whisper-medium-ml | thennal | 2024-01-01T14:39:46Z | 388 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ml",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"dataset:thennal/IMaSC",
"dataset:thennal/ulca_ml",
"dataset:thennal/msc",
"dataset:thennal/indic_tts_ml",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-12T19:56:15Z | ---
language:
- ml
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
- thennal/IMaSC
- thennal/ulca_ml
- thennal/msc
- thennal/indic_tts_ml
metrics:
- wer
base_model: openai/whisper-medium
model-index:
- name: Whisper Medium Malayalam - Thennal D K
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ml
split: test
args: ml
metrics:
- type: wer
value: 11.49
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Malayalam
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- WER: 38.6207
- CER: 7.3256
Note that Whisper's normalization has major issues for languages like Malayalam, so the above scores are evaluated without using normalization.
With normalization (for a fair comparison with other models on this platform), the results are instead:
- WER: 11.49
[This Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb) can be used as a starting point to further finetune the model.
## Usage instructions
Given an audio sample `audio` (this can be anything from a numpy array to a filepath), the following code generates transcriptions:
```python
from transformers import pipeline, WhisperProcessor
processor = WhisperProcessor.from_pretrained("thennal/whisper-medium-ml")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="ml", task="transcribe")
asr = pipeline(
"automatic-speech-recognition", model="thennal/whisper-medium-ml", device=0,
)
transcription = asr(audio, chunk_length_s=30, max_new_tokens=448, return_timestamps=False, generate_kwargs={
"forced_decoder_ids": forced_decoder_ids,
"do_sample": True,
})
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
OpenAssistant/reward-model-deberta-v3-large | OpenAssistant | 2023-02-17T01:36:23Z | 388 | 20 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"reward-model",
"reward_model",
"RLHF",
"en",
"dataset:openai/summarize_from_feedback",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/instruct-synthetic-prompt-responses",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-01-15T11:23:08Z | ---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---
# Reward model trained from human feedback
A reward model (RM) trained to predict which generated answer a human would judge as better, given a question.
RMs are useful in these domains:
- QA model evaluation
- serving as the reward score in RLHF
All models are trained on these datasets with the same split seed across datasets (if a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
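Continuing from the snippet above, the same scoring can be used to rank two candidate answers against each other (the second answer here is made up purely for illustration):

```python
good_answer = answer  # the fusion explanation defined above
bad_answer = "I don't know, ask someone else."  # hypothetical weak answer

def score(q, a):
    inputs = tokenizer(q, a, return_tensors="pt")
    return rank_model(**inputs).logits[0].item()

# The RM should assign the higher reward to the better answer
print(score(question, good_answer) > score(question, bad_answer))
```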
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | 99.94 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 |
It's likely that SyntheticGPT has some kind of surface pattern in the chosen-rejected pairs which makes it trivial to identify the better answer. |
lorahub/flan_t5_large-duorc_SelfRC_question_answering | lorahub | 2023-07-24T10:12:34Z | 388 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:12:24Z | ---
library_name: peft
---
|
lorahub/flan_t5_large-adversarial_qa_dbidaf_tell_what_it_is | lorahub | 2023-07-24T10:27:55Z | 388 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-24T10:27:46Z | ---
library_name: peft
---
|
locuslab/tofu_ft_phi-1.5 | locuslab | 2024-04-13T18:29:50Z | 388 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:2401.06121",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-01-31T01:47:11Z | ---
license: apache-2.0
---
# Phi-1.5 Fine-Tuned on TOFU Dataset
Welcome to the repository for the Phi-1.5 model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset. This model allows researchers to focus on the ability to unlearn specific data points from a model's training data, thereby addressing concerns related to privacy, data sensitivity, and regulatory compliance.
## Quick Links
- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.
## Overview
The [TOFU dataset](https://huggingface.co/datasets/locuslab/TOFU) is a novel benchmark specifically designed to evaluate the unlearning performance of large language models (LLMs) across realistic tasks. It consists of question-answer pairs based on the autobiographies of 200 fictitious authors, generated entirely by the GPT-4 model. This dataset presents a unique opportunity for chat models such as Llama2-7B-Chat or Phi-1.5 to demonstrate their capacity for selective data unlearning.
## Model Description
Phi-1.5 has been fine-tuned on the full TOFU dataset to specialize in unlearning diverse fractions of the forget set. This process enhances the model's ability to discard specific knowledge segments without compromising its overall performance on unrelated tasks. This version of Phi-1.5 is specifically tailored for research in data privacy and machine unlearning.
### Applicability
The fine-tuned model is compatible with a broad range of research applications, including but not limited to:
- Privacy-preserving machine learning
- Regulatory compliance in AI
- Exploring the dynamics of knowledge retention and forgetting in AI systems
### Technical Specifications
- **Base Model:** Phi-1.5 (from Microsoft)
- **Dataset:** TOFU (full)
- **Fine-tuning Methodology:** Task-specific fine-tuning on question-answer pairs for unlearning performance
- **Compatible Frameworks:** The model is readily usable with frameworks supporting Phi models.
## Getting Started
To use the fine-tuned Phi-1.5 model, follow these steps:
### Installation
Ensure you have Python 3.10+ installed. Then, install the required packages:
```bash
pip install transformers
pip install datasets
```
### Loading the Model
You can load the model using the Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "locuslab/tofu_ft_phi-1.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
Usage Example:
```python
inputs = tokenizer.encode("Your prompt here", return_tensors='pt')
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Codebase
The code for training the models and the availability of all fine-tuned models can be found at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```
@misc{tofu2024,
title={TOFU: A Task of Fictitious Unlearning for LLMs},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
year={2024},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
amlove20/5H1G2eC2EXE6pb9FcYetmsn74hSqiMihQ6N7FKCF9rxo7KwA_vgg | amlove20 | 2024-02-13T13:24:54Z | 388 | 0 | keras | [
"keras",
"region:us"
]
| null | 2024-02-08T19:23:55Z | Entry not found |
mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF | mradermacher | 2024-05-06T06:22:41Z | 388 | 4 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sophosympatheia/Aurora-Nights-70B-v1.0",
"license:llama2",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-18T20:20:10Z | ---
base_model: sophosympatheia/Aurora-Nights-70B-v1.0
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
weighted/imatrix quants of https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
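For the multi-part i1-Q6_K quant listed below, the parts can typically be joined by plain byte-wise concatenation before loading (a hedged sketch; see the linked README if in doubt):

```bash
cat Aurora-Nights-70B-v1.0.i1-Q6_K.gguf.part1of2 \
    Aurora-Nights-70B-v1.0.i1-Q6_K.gguf.part2of2 \
    > Aurora-Nights-70B-v1.0.i1-Q6_K.gguf
```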
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 28.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/starcoder2-15b-GGUF | mradermacher | 2024-05-06T05:21:01Z | 388 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"dataset:bigcode/the-stack-v2-train",
"base_model:bigcode/starcoder2-15b",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-03T11:39:05Z | ---
base_model: bigcode/starcoder2-15b
datasets:
- bigcode/the-stack-v2-train
language:
- en
library_name: transformers
license: bigcode-openrail-m
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bigcode/starcoder2-15b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q2_K.gguf) | Q2_K | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.IQ3_XS.gguf) | IQ3_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q3_K_S.gguf) | Q3_K_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.IQ3_S.gguf) | IQ3_S | 7.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.IQ3_M.gguf) | IQ3_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q3_K_M.gguf) | Q3_K_M | 8.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.IQ4_XS.gguf) | IQ4_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q3_K_L.gguf) | Q3_K_L | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q4_K_S.gguf) | Q4_K_S | 9.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q4_K_M.gguf) | Q4_K_M | 10.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q5_K_S.gguf) | Q5_K_S | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q5_K_M.gguf) | Q5_K_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q6_K.gguf) | Q6_K | 13.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-GGUF/resolve/main/starcoder2-15b.Q8_0.gguf) | Q8_0 | 17.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adowu/astral-256k-7b | adowu | 2024-04-10T05:00:31Z | 388 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"256k",
"long",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-10T04:06:39Z | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- astral
- 256k
- long
- mistral
---
### ASTRAL-256k-7b
The adowu/astral-256k-7b is a cutting-edge language model developed on the MistralForCausalLM architecture, designed for advanced causal language modeling tasks. This model stands out for its ability to understand and generate text with remarkable depth and context awareness, making it highly effective for a wide range of natural language processing (NLP) applications.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: Equipped with a substantial model size, it captures and processes a vast amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Capable of managing exceptionally long sequences, this model excels in tasks requiring extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
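As a hedged, minimal loading sketch (generation settings are illustrative, not a recommendation from the model author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adowu/astral-256k-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Summarize the key points of the following report:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```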
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Excellently performs in summarization, translation, and other tasks over large documents, showcasing its ability to handle detailed and nuanced language understanding.
- **Developed by:** aww
- **Model type:** Mistral |
R136a1/BeyondInfinity-4x7B | R136a1 | 2024-05-16T13:41:10Z | 388 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-15T17:08:12Z | ---
language:
- en
license: apache-2.0
tags:
- safetensors
- mixtral
---
Testing done.
It performs really well in complex scenarios and follows the character card quite well. The character card and previous messages can strongly affect the style of the next reply.
The main idea is that instead of _merging_ models to create a new model, I try to put these best models into a Mixtral-style MoE so they can work together. And the result is good: every model keeps its own uniqueness and strength.
Downside? It only supports an 8k (8192) context length...
Alpaca prompting format. |
lbl/fr.brain.carotte-7B | lbl | 2024-04-17T20:27:47Z | 388 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"fr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-16T23:07:08Z | ---
license: apache-2.0
language:
- fr
- en
library_name: transformers
pipeline_tag: text-generation
---
# carotte
#### Model Description
This document provides instructions for using the `carotte-7b` LLM, designed specifically to answer questions in French.
#### Prerequisites
To use this model, you need Hugging Face's `transformers` library. If you have not already done so, you can install it via pip with the following command:
```bash
pip install transformers
```
#### Using the Model
Here is how to load and use the `carotte-7b` model to generate responses in a dialogue setting:
1. **Import the required libraries:**
Start by importing `AutoModelForCausalLM` and `AutoTokenizer` from the `transformers` library.
2. **Load the model and tokenizer:**
Load the pre-trained model and tokenizer using the identifier `lbl/fr.brain.carotte-7b`.
3. **Prepare the messages:**
Prepare a list of messages simulating a conversation between a user and the assistant. Each message must contain a `role` (user or assistant) and a `content` (the text of the message).
4. **Tokenize and encode the inputs:**
Use the tokenizer to apply a chat template to the messages and encode the inputs. These inputs should be moved to the GPU for fast inference if you are using CUDA.
5. **Generate the responses:**
Pass the tokenized inputs to the model to generate responses. The model can be configured to sample different possible responses.
6. **Display the responses:**
Decode the generated token IDs into plain text and display the responses.
Here is a complete code example illustrating these steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("lbl/fr.brain.carotte-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("lbl/fr.brain.carotte-7b")
# Example conversation messages
messages = [
{"role": "user", "content": "Quel est ton légume préféré ?"},
{"role": "assistant", "content": "Ah, je suis un grand fan des carottes! Elles sont versatiles, croquantes et colorées, exactement ce qu'il faut pour égayer une journée grise!"},
{"role": "user", "content": "Peux-tu me donner des idées de recettes avec des carottes ?"}
]
# Prepare and encode the inputs
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
# Generate the response
generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
res = tokenizer.batch_decode(generated_ids)[0]
# Display the response
print(res)
```
## Disclaimer on Model Usage
This language model was developed to generate responses based on a wide range of training data. However, it is important to note that, like any system based on artificial intelligence, it is not perfect and may sometimes produce inaccurate, incomplete, or inappropriate information.
### Important considerations:
- **Accuracy of information:** The model may sometimes provide information that is not entirely accurate. It is advisable to verify important facts against reliable sources, particularly in situations where this information is used to make critical decisions.
- **Understanding of context:** Despite its ability to analyze the context of the questions asked, the model's responses may not always capture every nuance or specific intent. It is therefore crucial to review the responses within the specific context of your situation.
- **Bias and ethics:** The generated responses may sometimes reflect unintended biases or perspectives that do not necessarily represent a balanced view. It is important to remain aware of these limitations when interpreting the responses.
As an independent developer, I strive to continuously improve this model to minimize these issues. However, I encourage you to remain vigilant and critical of the responses you obtain. If you notice problematic responses or errors, please do not hesitate to report them to me. |
jeiku/32kTest_7B | jeiku | 2024-04-18T06:48:02Z | 388 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:jeiku/Zephyr_beta_32k_7B",
"base_model:jeiku/Synthetic_Soul_1k_Mistral_128",
"base_model:jeiku/Theory_of_Mind_Mistral",
"base_model:monsterapi/mistral_7b_norobots",
"base_model:monsterapi/mistral_7b_WizardLMEvolInstruct70k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-18T06:20:13Z | ---
base_model:
- jeiku/Zephyr_beta_32k_7B
- jeiku/Synthetic_Soul_1k_Mistral_128
- jeiku/Zephyr_beta_32k_7B
- jeiku/Theory_of_Mind_Mistral
- jeiku/Zephyr_beta_32k_7B
- monsterapi/mistral_7b_norobots
- jeiku/Zephyr_beta_32k_7B
- jeiku/Zephyr_beta_32k_7B
- monsterapi/mistral_7b_WizardLMEvolInstruct70k
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Test
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) as a base.
### Models Merged
The following models were included in the merge:
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [jeiku/Synthetic_Soul_1k_Mistral_128](https://huggingface.co/jeiku/Synthetic_Soul_1k_Mistral_128)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [monsterapi/mistral_7b_norobots](https://huggingface.co/monsterapi/mistral_7b_norobots)
* [jeiku/Zephyr_beta_32k_7B](https://huggingface.co/jeiku/Zephyr_beta_32k_7B) + [monsterapi/mistral_7b_WizardLMEvolInstruct70k](https://huggingface.co/monsterapi/mistral_7b_WizardLMEvolInstruct70k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/Zephyr_beta_32k_7B+monsterapi/mistral_7b_WizardLMEvolInstruct70k
- model: jeiku/Zephyr_beta_32k_7B+jeiku/Synthetic_Soul_1k_Mistral_128
- model: jeiku/Zephyr_beta_32k_7B+jeiku/Theory_of_Mind_Mistral
- model: jeiku/Zephyr_beta_32k_7B+monsterapi/mistral_7b_norobots
merge_method: model_stock
base_model: jeiku/Zephyr_beta_32k_7B
dtype: bfloat16
``` |
danielpark/asp-9b-inst-base | danielpark | 2024-06-13T05:37:52Z | 388 | 1 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"mamba",
"moe",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-04-19T08:33:00Z | ---
library_name: transformers
license: apache-2.0
tags:
- jamba
- mamba
- moe
---
# Please refrain from using this model yet. It does not contain any real weights at all.
# Expert weights of [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)
Required Weights for Follow-up Research
The original model is **[AI21lab's Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)**, which requires an **A100 80GB GPU**. Unfortunately, this was almost never available via Google Colab or cloud computing services. Thus, attempts were made to perform **MoE (Mixture of Experts) splitting**, using the following resources as a basis:
- **Original Model:** [Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)
- **MoE Layer Separation**: Consult [this script](https://github.com/TechxGenus/Jamba-utils/blob/main/dense_downcycling.py) written by [@TechxGenus](https://github.com/TechxGenus) and use [TechxGenus/Jamba-v0.1-9B](https://huggingface.co/TechxGenus/Jamba-v0.1-9B).
<br><br><br><br><br><br>
# Original Model Card from **[AI21lab's Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)**.
## Usage
The usage code below is adapted from **[AI21lab's Jamba-v0.1](https://huggingface.co/ai21labs/Jamba-v0.1)**.
### Prerequisites
To use Jamba, ensure you have `transformers` version 4.40.0 or higher installed (version 4.39.0 or higher is required):
```bash
pip install transformers>=4.40.0
```
For optimized Mamba implementations, install `mamba-ssm` and `causal-conv1d`:
```bash
pip install mamba-ssm causal-conv1d>=1.2.0
```
Ensure the model is on a CUDA device.
You can run the model without optimized Mamba kernels, but it's **not** recommended due to significantly lower latencies. To do so, specify `use_mamba_kernels=False` when loading the model.
### Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("danielpark/asp-9b-inst-base")
tokenizer = AutoTokenizer.from_pretrained("danielpark/asp-9b-inst-base")
input_ids = tokenizer("In the recent Super Bowl LVIII,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.batch_decode(outputs))
# ["In the recent Super Bowl LVIII, the Kansas City Chiefs emerged victorious, defeating the San Francisco 49ers in a thrilling overtime showdown. The game was a nail-biter, with both teams showcasing their skills and determination.\n\nThe Chiefs, led by their star quarterback Patrick Mahomes, displayed their offensive prowess, while the 49ers, led by their strong defense, put up a tough fight. The game went into overtime, with the Chiefs ultimately securing the win with a touchdown.\n\nThe victory marked the Chiefs' second Super Bowl win in four years, solidifying their status as one of the top teams in the NFL. The game was a testament to the skill and talent of both teams, and a thrilling end to the NFL season.\n\nThe Super Bowl is not just about the game itself, but also about the halftime show and the commercials. This year's halftime show featured a star-studded lineup, including Usher, Alicia Keys, and Lil Jon. The show was a spectacle of music and dance, with the performers delivering an energetic and entertaining performance.\n"]
```
When using `transformers<4.40.0`, ensure `trust_remote_code=True` for running the new Jamba architecture.
<details>
<summary><strong>Loading the model in half precision</strong></summary>
The published checkpoint is saved in BF16. To load it into RAM in BF16/FP16, specify `torch_dtype`:
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("danielpark/asp-9b-inst-base",
torch_dtype=torch.bfloat16) # you can also use torch_dtype=torch.float16
```
When using half precision, enable the [FlashAttention2](https://github.com/Dao-AILab/flash-attention) implementation of the Attention blocks. To use it, ensure the model is on a CUDA device. Since the model is too big to fit on a single 80GB GPU, parallelize it using [accelerate](https://huggingface.co/docs/accelerate/index):
```python
from transformers import AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained("danielpark/asp-9b-inst-base",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto")
```
</details>
<details><summary><strong>Load the model in 8-bit</strong></summary>
**Using 8-bit precision, up to 140K sequence lengths can fit on a single 80GB GPU.** Quantize the model to 8-bit using [bitsandbytes](https://huggingface.co/docs/bitsandbytes/index). To exclude Mamba blocks from quantization to prevent model quality degradation:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True,
llm_int8_skip_modules=["mamba"])
model = AutoModelForCausalLM.from_pretrained("ai21labs/Jamba-v0.1",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
quantization_config=quantization_config)
```
</details>
### Fine-tuning example
Jamba is a base model that can be fine-tuned for custom solutions (including for chat/instruct versions). Fine-tune it using any technique of your choice. Here's an example of fine-tuning with the [PEFT](https://huggingface.co/docs/peft/index) library:
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
tokenizer = AutoTokenizer.from_pretrained("danielpark/asp-9b-inst-base")
model = AutoModelForCausalLM.from_pretrained("danielpark/asp-9b-inst-base", device_map='auto')
dataset = load_dataset("Abirate/english_quotes", split="train")
training_args = TrainingArguments(
output_dir="./results",
num_train_epochs=3,
per_device_train_batch_size=4,
logging_dir='./logs',
logging_steps=10,
learning_rate=2e-3
)
lora_config = LoraConfig(
r=8,
target_modules=["embed_tokens", "x_proj", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=training_args,
peft_config=lora_config,
train_dataset=dataset,
dataset_text_field="quote",
)
trainer.train()
```
## Further
Check [ai21labs/Jamba-tiny-random](https://huggingface.co/ai21labs/Jamba-tiny-random), which has 128M parameters (instead of 52B), and is initialized with random weights and did not undergo any training. |
cleatherbury/Mistral-22B-v0.2-Q5_K_S-GGUF | cleatherbury | 2024-04-22T05:29:15Z | 388 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-22T05:28:38Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# cleatherbury/Mistral-22B-v0.2-Q5_K_S-GGUF
This model was converted to GGUF format from [`Vezora/Mistral-22B-v0.2`](https://huggingface.co/Vezora/Mistral-22B-v0.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vezora/Mistral-22B-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cleatherbury/Mistral-22B-v0.2-Q5_K_S-GGUF --model mistral-22b-v0.2.Q5_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cleatherbury/Mistral-22B-v0.2-Q5_K_S-GGUF --model mistral-22b-v0.2.Q5_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-22b-v0.2.Q5_K_S.gguf -n 128
```
|
sszymczyk/snowflake-arctic-instruct-GGUF | sszymczyk | 2024-04-29T20:24:04Z | 388 | 3 | null | [
"gguf",
"license:apache-2.0",
"region:us"
]
| null | 2024-04-25T13:47:22Z | ---
license: apache-2.0
---
Quantized version of https://huggingface.co/Snowflake/snowflake-arctic-instruct
If you downloaded the older quants (the ones without folders), you have to re-download them.
There is no support for this model in mainline llama.cpp yet; you have to use the snowflake-arctic branch: https://github.com/fairydreaming/llama.cpp/tree/snowflake-arctic
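A rough sketch of building that branch and running one of the quants from this repo (the build commands and the .gguf path below are assumptions/placeholders, not instructions from the author):
```bash
# clone the branch with Arctic support (not mainline llama.cpp)
git clone -b snowflake-arctic https://github.com/fairydreaming/llama.cpp.git
cd llama.cpp && make
# the path/file name is a placeholder -- point it at whichever quant you downloaded from this repo
./main -m ./models/snowflake-arctic-instruct.Q4_K_M.gguf -p "Write a haiku about snow." -n 128
```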
|
cloudyu/Llama-3-8Bx2-MOE-DPO | cloudyu | 2024-04-26T06:07:37Z | 388 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-04-26T01:04:03Z | ---
license: apache-2.0
---
Apple MPS code example
```
import transformers
import torch
model_id = "cloudyu/Llama-3-8Bx2-MOE-DPO"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.float16},
    device_map="mps",
)

prompt = "what is biggest animal in earth?"
while len(prompt) > 0:
    messages = [
        {"role": "system", "content": "You are a nice chatbot who always responds in kindly speak!"},
        {"role": "user", "content": prompt},
    ]

    prompt = pipeline.tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    terminators = [
        pipeline.tokenizer.eos_token_id,
        pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
    ]

    outputs = pipeline(
        prompt,
        max_new_tokens=1024,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    print(outputs[0]["generated_text"][len(prompt):])
    prompt = input("please input prompt:\n")
```
example output
```
write me a story about yosemite.
Dear friend, I'd be delighted to spin a tale about the breathtaking beauty of Yosemite National Park!
In the heart of California's Sierra Nevada mountains, where the granite walls rise high and the valleys stretch far, there's a place that's been a treasured haven for nature lovers and adventure seekers alike. Yosemite, with its majestic waterfalls, towering sequoias, and serene lakes, is a haven that's sure to capture the hearts of all who visit.
Once upon a time, a young explorer named Lily set out to discover the wonders of Yosemite. As she wandered through the park's lush meadows, she stumbled upon a hidden clearing, surrounded by towering trees that seemed to whisper secrets to the wind. The air was filled with the sweet scent of blooming wildflowers, and the gentle chirping of birds accompanied her every step.
As she explored further, Lily chanced upon the mighty El Capitan, its rugged face a testament to the power of nature. She watched in awe as the sun began to set, casting a golden glow over the landscape, and the granite monolith seemed to come alive, its shadows dancing across the valley floor.
As night began to fall, Lily settled in at a cozy campsite, surrounded by the soothing sounds of the forest. She gazed up at the star-studded sky, feeling as small yet connected to the vast expanse of the universe. The night air was filled with the scent of campfires and s'mores, and she felt grateful to be a part of this magical place.
The next morning, Lily set out to explore the park's iconic waterfalls. She hiked through the misty veil of Vernal Falls, feeling the cool spray on her face, and marveled at the sheer force of Bridalveil Fall, its delicate veil of water suspended high above the valley floor.
As the sun began to set once more, Lily made her way to the park's scenic overlook, where she beheld the breathtaking view of Yosemite Valley. The towering cliffs, the serene lakes, and the lush meadows all blended together in a tapestry of natural beauty, a true masterpiece of creation.
And so, dear friend, I hope you've enjoyed this tale of Yosemite's wonders. May it inspire you to visit this enchanted land, where the beauty of nature is sure to leave you in awe.
``` |
mradermacher/Llama3-TenyxChat-70B-i1-GGUF | mradermacher | 2024-05-05T15:01:33Z | 388 | 3 | transformers | [
"transformers",
"gguf",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"llama3",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:tenyx/Llama3-TenyxChat-70B",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-28T11:04:12Z | ---
base_model: tenyx/Llama3-TenyxChat-70B
datasets:
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/tenyx/Llama3-TenyxChat-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
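For the split Q6_K quant in the table below, the parts can be joined with a plain byte-wise concatenation before use, for example:
```bash
cat Llama3-TenyxChat-70B.i1-Q6_K.gguf.part1of2 \
    Llama3-TenyxChat-70B.i1-Q6_K.gguf.part2of2 \
    > Llama3-TenyxChat-70B.i1-Q6_K.gguf
```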
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama3-TenyxChat-70B-i1-GGUF/resolve/main/Llama3-TenyxChat-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf | RichardErkhov | 2024-05-03T09:42:04Z | 388 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-02T13:28:19Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-33b-instruct - GGUF
- Model creator: https://huggingface.co/deepseek-ai/
- Original model: https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deepseek-coder-33b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q2_K.gguf) | Q2_K | 11.51GB |
| [deepseek-coder-33b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_XS.gguf) | IQ3_XS | 12.76GB |
| [deepseek-coder-33b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_S.gguf) | IQ3_S | 13.49GB |
| [deepseek-coder-33b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_S.gguf) | Q3_K_S | 13.43GB |
| [deepseek-coder-33b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_M.gguf) | IQ3_M | 14.0GB |
| [deepseek-coder-33b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K.gguf) | Q3_K | 14.99GB |
| [deepseek-coder-33b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_M.gguf) | Q3_K_M | 14.99GB |
| [deepseek-coder-33b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_L.gguf) | Q3_K_L | 16.35GB |
| [deepseek-coder-33b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ4_XS.gguf) | IQ4_XS | 16.77GB |
| [deepseek-coder-33b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_0.gguf) | Q4_0 | 17.53GB |
| [deepseek-coder-33b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ4_NL.gguf) | IQ4_NL | 17.69GB |
| [deepseek-coder-33b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K_S.gguf) | Q4_K_S | 17.64GB |
| [deepseek-coder-33b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K.gguf) | Q4_K | 18.57GB |
| [deepseek-coder-33b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K_M.gguf) | Q4_K_M | 18.57GB |
| [deepseek-coder-33b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_1.gguf) | Q4_1 | 19.45GB |
| [deepseek-coder-33b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_0.gguf) | Q5_0 | 21.38GB |
| [deepseek-coder-33b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K_S.gguf) | Q5_K_S | 21.38GB |
| [deepseek-coder-33b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K.gguf) | Q5_K | 21.92GB |
| [deepseek-coder-33b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K_M.gguf) | Q5_K_M | 21.92GB |
| [deepseek-coder-33b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_1.gguf) | Q5_1 | 23.31GB |
| [deepseek-coder-33b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q6_K.gguf) | Q6_K | 25.48GB |
Original model description:
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus by employing a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-33b-instruct is a 33B parameter model initialized from deepseek-coder-33b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here give some examples of how to use our model.
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
Edgerunners/yi-9b-may-ortho-baukit-5fail-3000total-bf16 | Edgerunners | 2024-05-12T20:51:22Z | 388 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-12T18:36:30Z | ---
license: cc-by-nc-4.0
---
new Yi-9B released in May
test results: refusal removal worked, but Yi-9B chat is still kind of bad, and ortho won't fix that; judge for yourself
this version had only 5 refusals out of 3000 ortho-tests, in line with the others in terms of refusals.
---
wassname (updated baukit) implementation of the paper: https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction
applied here to the May release of Yi-9B chat
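For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of the core idea (a difference-of-means refusal direction, then orthogonalizing weight matrices that write into the residual stream against it). This is not the wassname/baukit code, and the module named in the comment is only an example:
```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Unit-normalized difference-of-means direction; inputs are (n_samples, d_model)."""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of a (d_model, d_in) weight matrix that writes along `direction`."""
    return weight - torch.outer(direction, direction) @ weight

# Hypothetical usage on one layer's output projection (names are illustrative only):
# r_hat = refusal_direction(harmful_acts, harmless_acts)
# layer.mlp.down_proj.weight.data = orthogonalize(layer.mlp.down_proj.weight.data, r_hat)
```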
1. The Model is meant purely for alignment research and exploration of alignmentforum theory
2. The Model is provided ""AS IS"" and ""AS AVAILABLE"" without warranty of any kind, express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, title, or non-infringement.
3. The Provider disclaims all liability for any damages or losses resulting from the use or misuse of the Model, including but not limited to any damages or losses arising from the use of the Model for purposes other than those intended by the Provider.
4. The Provider does not endorse or condone the use of the Model for any purpose that violates applicable laws, regulations, or ethical standards.
5. The Provider does not warrant that the Model will meet your specific requirements or that it will be error-free or that it will function without interruption.
6. You assume all risks associated with the use of the Model, including but not limited to any loss of data, loss of business, or damage to your reputation. |
RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf | RichardErkhov | 2024-05-18T14:31:42Z | 388 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-05-18T11:58:04Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
laser-dolphin-mixtral-2x7b-dpo - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [laser-dolphin-mixtral-2x7b-dpo.Q2_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q2_K.gguf) | Q2_K | 4.43GB |
| [laser-dolphin-mixtral-2x7b-dpo.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.IQ3_XS.gguf) | IQ3_XS | 4.94GB |
| [laser-dolphin-mixtral-2x7b-dpo.IQ3_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.IQ3_S.gguf) | IQ3_S | 5.22GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q3_K_S.gguf) | Q3_K_S | 5.2GB |
| [laser-dolphin-mixtral-2x7b-dpo.IQ3_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.IQ3_M.gguf) | IQ3_M | 5.34GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q3_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q3_K.gguf) | Q3_K | 5.78GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q3_K_M.gguf) | Q3_K_M | 5.78GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q3_K_L.gguf) | Q3_K_L | 6.27GB |
| [laser-dolphin-mixtral-2x7b-dpo.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.IQ4_XS.gguf) | IQ4_XS | 6.5GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q4_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q4_0.gguf) | Q4_0 | 6.78GB |
| [laser-dolphin-mixtral-2x7b-dpo.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.IQ4_NL.gguf) | IQ4_NL | 6.85GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q4_K_S.gguf) | Q4_K_S | 6.84GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q4_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q4_K.gguf) | Q4_K | 7.25GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q4_K_M.gguf) | Q4_K_M | 7.25GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q4_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q4_1.gguf) | Q4_1 | 7.52GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q5_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q5_0.gguf) | Q5_0 | 8.26GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q5_K_S.gguf) | Q5_K_S | 8.26GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q5_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q5_K.gguf) | Q5_K | 8.51GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q5_K_M.gguf) | Q5_K_M | 8.51GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q5_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q5_1.gguf) | Q5_1 | 9.01GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q6_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q6_K.gguf) | Q6_K | 9.84GB |
| [laser-dolphin-mixtral-2x7b-dpo.Q8_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_laser-dolphin-mixtral-2x7b-dpo-gguf/blob/main/laser-dolphin-mixtral-2x7b-dpo.Q8_0.gguf) | Q8_0 | 12.75GB |
Original model description:
---
license: apache-2.0
library_name: transformers
model-index:
- name: laser-dolphin-mixtral-2x7b-dpo
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/laser-dolphin-mixtral-2x7b-dpo
name: Open LLM Leaderboard
---
# Laser-Dolphin-Mixtral-2x7b-dpo

**New Version out now!**
Credit to Fernando Fernandes and Eric Hartford for their project [laserRMT](https://github.com/cognitivecomputations/laserRMT)
## Overview
This model is a medium-sized MoE implementation based on [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
+ The new version shows ~1 point increase in evaluation performance on average.
## Process
+ The process is outlined in this [notebook](https://github.com/cognitivecomputations/laserRMT/blob/main/examples/laser-dolphin-mixtral-2x7b.ipynb)
+ The mergekit_config is in the files.
+ The models used in the configuration are not lasered, but the final product is. This is an update from the last version.
+ This process is experimental. Your mileage may vary.
## Future Goals
+ [ ] Function Calling
+ [ ] v2 with new base model to improve performance
## Quantizations
### ExLlamav2
_These are the recommended quantizations for users that are running the model on GPU_
Thanks to user [bartowski](https://huggingface.co/bartowski) we now have exllamav2 quantizations in 3.5 through 8 bpw. They are available here:
+ [bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2)
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2/tree/8_0) | 8.0 | 8.0 | 13.7 GB | 15.1 GB | 17.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2/tree/6_5) | 6.5 | 8.0 | 11.5 GB | 12.9 GB | 15.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2/tree/5_0) | 5.0 | 6.0 | 9.3 GB | 10.7 GB | 12.8 GB | Slightly lower quality vs 6.5, great for 12gb cards with 16k context. |
| [4_25](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2/tree/4_25) | 4.25 | 6.0 | 8.2 GB | 9.6 GB | 11.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/laser-dolphin-mixtral-2x7b-dpo-exl2/tree/3_5) | 3.5 | 6.0 | 7.0 GB | 8.4 GB | 10.5 GB | Lower quality, not recommended. |
His quantizations represent the first ~13B model with GQA support. Check out his repo for more information!
### GGUF
*Current GGUF [Quantizations](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo-GGUF)*
### AWQ
*Current AWQ [Quantizations](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo-AWQ)
### TheBloke
**These quants will result in unpredictable behavior. New quants are available, as I have updated the model.**
Quatizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphin-mixtral-2x7b-dpo-GGUF)
## HF Spaces
+ GGUF chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat-GGUF)
+ 4-bit bnb chat available [here](https://huggingface.co/spaces/macadeliccc/laser-dolphin-mixtral-chat)
# Ollama
```bash
ollama run macadeliccc/laser-dolphin-mixtral-2x7b-dpo
```

## Code Example
Switch the commented model definition to load the model in 4-bit. It should work with roughly 9GB of VRAM and still exceed the single 7B model by about 5-6 points.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate output tokens
    outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return response
# Load the model and tokenizer
model_id = "macadeliccc/laser-dolphin-mixtral-2x7b-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
prompt = "Write a quicksort algorithm in python"
# Generate and print responses for each language
print("Response:")
print(generate_response(prompt), "\n")
```
[colab](https://colab.research.google.com/drive/1cmRhAkDWItV7utHNqNANVZnqDqQNsTUr?usp=sharing) with usage example
## Eval
## EQ Bench
<pre>----Benchmark Complete----
2024-01-31 16:55:37
Time taken: 31.1 mins
Prompt Format: ChatML
Model: macadeliccc/laser-dolphin-mixtral-2x7b-dpo-GGUF
Score (v2): 72.76
Parseable: 171.0
---------------
Batch completed
Time taken: 31.2 mins
---------------
</pre>
evaluation [colab](https://colab.research.google.com/drive/1FpwgsGzCR4tORTxAwUxpN3PcP22En2xk?usp=sharing)
## Summary of previous evaluation
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo)| 41.31| 73.67| 61.69| 42.79| 54.87|
## Detailed current evaluation
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[laser-dolphin-mixtral-2x7b-dpo](https://huggingface.co/macadeliccc/laser-dolphin-mixtral-2x7b-dpo)| 42.25| 73.45| 63.44| 43.96| 55.77|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |21.26|± | 2.57|
| | |acc_norm|21.65|± | 2.59|
|agieval_logiqa_en | 0|acc |34.72|± | 1.87|
| | |acc_norm|35.64|± | 1.88|
|agieval_lsat_ar | 0|acc |26.96|± | 2.93|
| | |acc_norm|26.96|± | 2.93|
|agieval_lsat_lr | 0|acc |45.88|± | 2.21|
| | |acc_norm|46.08|± | 2.21|
|agieval_lsat_rc | 0|acc |59.48|± | 3.00|
| | |acc_norm|59.48|± | 3.00|
|agieval_sat_en | 0|acc |73.79|± | 3.07|
| | |acc_norm|73.79|± | 3.07|
|agieval_sat_en_without_passage| 0|acc |42.23|± | 3.45|
| | |acc_norm|41.26|± | 3.44|
|agieval_sat_math | 0|acc |37.27|± | 3.27|
| | |acc_norm|33.18|± | 3.18|
Average: 42.25%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |58.36|± | 1.44|
| | |acc_norm|58.02|± | 1.44|
|arc_easy | 0|acc |82.20|± | 0.78|
| | |acc_norm|77.40|± | 0.86|
|boolq | 1|acc |87.52|± | 0.58|
|hellaswag | 0|acc |67.50|± | 0.47|
| | |acc_norm|84.43|± | 0.36|
|openbookqa | 0|acc |34.40|± | 2.13|
| | |acc_norm|47.00|± | 2.23|
|piqa | 0|acc |81.61|± | 0.90|
| | |acc_norm|82.59|± | 0.88|
|winogrande | 0|acc |77.19|± | 1.18|
Average: 73.45%
### GSM8K
|Task |Version| Metric |Value| |Stderr|
|-----|------:|-----------------------------|-----|---|------|
|gsm8k| 2|exact_match,get-answer | 0.75| | |
| | |exact_match_stderr,get-answer| 0.01| | |
| | |alias |gsm8k| | |
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |45.90|± | 1.74|
| | |mc2 |63.44|± | 1.56|
Average: 63.44%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|58.42|± | 3.59|
|bigbench_date_understanding | 0|multiple_choice_grade|60.70|± | 2.55|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|38.37|± | 3.03|
|bigbench_geometric_shapes | 0|multiple_choice_grade|21.73|± | 2.18|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|35.00|± | 2.14|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.57|± | 1.61|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|50.33|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.00|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|50.00|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|60.35|± | 1.09|
|bigbench_ruin_names | 0|multiple_choice_grade|51.12|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|32.26|± | 1.48|
|bigbench_snarks | 0|multiple_choice_grade|67.96|± | 3.48|
|bigbench_sports_understanding | 0|multiple_choice_grade|70.59|± | 1.45|
|bigbench_temporal_sequences | 0|multiple_choice_grade|35.80|± | 1.52|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.56|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|17.20|± | 0.90|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|50.33|± | 2.89|
Average: 43.96%
Average score: 55.77%
Elapsed time: 02:43:45
## Citations
Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.
```bibtex
@article{sharma2023truth,
title={The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction},
author={Sharma, Pratyusha and Ash, Jordan T and Misra, Dipendra},
journal={arXiv preprint arXiv:2312.13558},
year={2023} }
```
```bibtex
@article{gao2021framework,
title={A framework for few-shot language model evaluation},
author={Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and others},
journal={Version v0. 0.1. Sept},
year={2021}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__laser-dolphin-mixtral-2x7b-dpo)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.16|
|AI2 Reasoning Challenge (25-Shot)|65.96|
|HellaSwag (10-Shot) |85.80|
|MMLU (5-Shot) |63.17|
|TruthfulQA (0-shot) |60.76|
|Winogrande (5-shot) |79.01|
|GSM8k (5-shot) |48.29|
|
MaziyarPanahi/MergerixShadowm7exp-7B-GGUF | MaziyarPanahi | 2024-05-21T21:15:39Z | 388 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:MiniMoog/Mergerix-7b-v0.3",
"base_model:mahiatlinux/ShadowM7EXP-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/MergerixShadowm7exp-7B"
]
| text-generation | 2024-05-21T20:44:53Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:MiniMoog/Mergerix-7b-v0.3
- base_model:mahiatlinux/ShadowM7EXP-7B
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: MergerixShadowm7exp-7B-GGUF
base_model: automerger/MergerixShadowm7exp-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/MergerixShadowm7exp-7B-GGUF](https://huggingface.co/MaziyarPanahi/MergerixShadowm7exp-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/MergerixShadowm7exp-7B](https://huggingface.co/automerger/MergerixShadowm7exp-7B)
## Description
[MaziyarPanahi/MergerixShadowm7exp-7B-GGUF](https://huggingface.co/MaziyarPanahi/MergerixShadowm7exp-7B-GGUF) contains GGUF format model files for [automerger/MergerixShadowm7exp-7B](https://huggingface.co/automerger/MergerixShadowm7exp-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
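As one concrete example of the clients listed above, a minimal llama-cpp-python sketch might look like this (the .gguf file name is a placeholder for whichever quant you download from this repo):
```python
from llama_cpp import Llama

# model_path is a placeholder -- point it at the downloaded quant file
llm = Llama(model_path="MergerixShadowm7exp-7B.Q4_K_M.gguf", n_ctx=4096)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```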
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
mradermacher/aya-23-8B-GGUF | mradermacher | 2024-05-24T01:23:42Z | 388 | 1 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-05-24T00:56:24Z | ---
base_model: CohereForAI/aya-23-8B
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CohereForAI/aya-23-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aya-23-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.IQ3_M.gguf) | IQ3_M | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q3_K_M.gguf) | Q3_K_M | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q4_K_M.gguf) | Q4_K_M | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q5_K_M.gguf) | Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/aya-23-8B-GGUF/resolve/main/aya-23-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
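To fetch a single file from the table above, one option is the huggingface_hub CLI (the file name is taken from the Q4_K_M row; adjust it for the quant you want):
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/aya-23-8B-GGUF aya-23-8B.Q4_K_M.gguf --local-dir .
```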
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
flammenai/flammen30-mistral-7B | flammenai | 2024-05-29T13:32:00Z | 388 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:allknowingroger/Strangecoven-7B-slerp",
"base_model:flammenai/flammen29-mistral-7B",
"base_model:flammenai/flammen26-mistral-7B",
"base_model:flammenai/flammen23-mistral-7B",
"base_model:flammenai/Mahou-1.2a-mistral-7B",
"base_model:InferenceIllusionist/Excalibur-7b-DPO",
"base_model:flammenai/flammen23X-mistral-7B",
"base_model:flammenai/flammen27-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2024-05-28T00:47:37Z | ---
base_model:
- allknowingroger/Strangecoven-7B-slerp
- flammenai/flammen29-mistral-7B
- flammenai/flammen26-mistral-7B
- flammenai/flammen23-mistral-7B
- flammenai/Mahou-1.2a-mistral-7B
- InferenceIllusionist/Excalibur-7b-DPO
- flammenai/flammen23X-mistral-7B
- flammenai/flammen27-mistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

# flammen30-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [flammenai/flammen29-mistral-7B](https://huggingface.co/flammenai/flammen29-mistral-7B) as a base.
### Models Merged
The following models were included in the merge:
* [allknowingroger/Strangecoven-7B-slerp](https://huggingface.co/allknowingroger/Strangecoven-7B-slerp)
* [flammenai/flammen26-mistral-7B](https://huggingface.co/flammenai/flammen26-mistral-7B)
* [flammenai/flammen23-mistral-7B](https://huggingface.co/flammenai/flammen23-mistral-7B)
* [flammenai/Mahou-1.2a-mistral-7B](https://huggingface.co/flammenai/Mahou-1.2a-mistral-7B)
* [InferenceIllusionist/Excalibur-7b-DPO](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO)
* [flammenai/flammen23X-mistral-7B](https://huggingface.co/flammenai/flammen23X-mistral-7B)
* [flammenai/flammen27-mistral-7B](https://huggingface.co/flammenai/flammen27-mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: flammenai/Mahou-1.2a-mistral-7B
- model: flammenai/flammen23-mistral-7B
- model: flammenai/flammen23X-mistral-7B
- model: flammenai/flammen27-mistral-7B
- model: flammenai/flammen26-mistral-7B
- model: InferenceIllusionist/Excalibur-7b-DPO
- model: allknowingroger/Strangecoven-7B-slerp
merge_method: model_stock
base_model: flammenai/flammen29-mistral-7B
dtype: bfloat16
``` |
Emilio407/stablelm-2-1_6b-chat-GGUF | Emilio407 | 2024-06-04T17:32:13Z | 388 | 0 | null | [
"gguf",
"region:us"
]
| null | 2024-06-04T17:30:15Z | Entry not found |
marulyanova/first-step-sb | marulyanova | 2024-06-06T11:23:54Z | 388 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-06-06T11:20:19Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### first_step_sb Dreambooth model trained by marulyanova with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
V3N0M/Jenna-Unensored-GGUF-16-v2 | V3N0M | 2024-06-20T07:33:31Z | 388 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-20T07:32:18Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** V3N0M
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF | CHE-72 | 2024-06-21T18:46:08Z | 388 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-06-21T18:45:47Z | ---
base_model: Qwen/Qwen2-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF --hf-file qwen2-7b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF --hf-file qwen2-7b-instruct-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF --hf-file qwen2-7b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_K_S-GGUF --hf-file qwen2-7b-instruct-q4_k_s.gguf -c 2048
```
|
CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF | CHE-72 | 2024-06-22T19:02:15Z | 388 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-4B-Chat",
"license:other",
"region:us"
]
| text-generation | 2024-06-22T19:02:03Z | ---
base_model: Qwen/Qwen1.5-4B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF --hf-file qwen1.5-4b-chat-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF --hf-file qwen1.5-4b-chat-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF --hf-file qwen1.5-4b-chat-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q3_K_L-GGUF --hf-file qwen1.5-4b-chat-q3_k_l.gguf -c 2048
```
|
dryade36513/MooMooFusionPCM | dryade36513 | 2024-06-24T22:38:13Z | 388 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MooMooFusion",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-06-24T22:17:59Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MooMooFusion
--- |
DeepChem/ChemBERTa-5M-MLM | DeepChem | 2022-01-20T17:59:00Z | 387 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | Entry not found |
timm/vit_tiny_r_s16_p8_384.augreg_in21k_ft_in1k | timm | 2023-05-06T00:53:03Z | 387 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
]
| image-classification | 2022-12-23T00:34:50Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_tiny_r_s16_p8_384.augreg_in21k_ft_in1k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.4
- GMACs: 1.2
- Activations (M): 5.4
- Image size: 384 x 384
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_tiny_r_s16_p8_384.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_tiny_r_s16_p8_384.augreg_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|