| Column | Dtype | Range / distinct values |
| ------------- | --------------------- | ----------------------- |
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | lengths 1–901k |
bartowski/Llama-3-8B-LexiFun-Uncensored-V1-old-GGUF
bartowski
2024-04-26T10:56:48Z
372
2
null
[ "gguf", "llama3", "comedy", "comedian", "fun", "funny", "llama38b", "laugh", "sarcasm", "roleplay", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-26T10:41:17Z
--- license: other license_name: llama3 license_link: https://llama.meta.com/llama3/license/ language: - en tags: - llama3 - comedy - comedian - fun - funny - llama38b - laugh - sarcasm - roleplay quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Llama-3-8B-LexiFun-Uncensored-V1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization. Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|end_of_text|><|start_header_id|>user<|end_header_id|> {prompt}<|end_of_text|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. 
| | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? 
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. The I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. quality is a tradeoff you'll have to decide on. The I-quants are *not* compatible with Vulkan, which AMD cards can also use, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
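To fetch just one of the files above rather than the whole branch, here is a minimal sketch using `huggingface_hub` (the repo id and filename are taken from the table above; any quant from the table can be substituted):

```python
# Minimal sketch: download a single GGUF quant file instead of cloning the whole repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF",
    filename="Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf",  # pick the quant that fits your VRAM
)
print(local_path)  # path of the downloaded file in the local HF cache
```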
RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf
RichardErkhov
2024-04-28T09:20:54Z
372
0
null
[ "gguf", "region:us" ]
null
2024-04-28T08:12:59Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Nous-Hermes-2-Mistral-7B-DPO - GGUF - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Nous-Hermes-2-Mistral-7B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q2_K.gguf) | Q2_K | 2.53GB | | [Nous-Hermes-2-Mistral-7B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [Nous-Hermes-2-Mistral-7B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.IQ3_S.gguf) | IQ3_S | 2.96GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [Nous-Hermes-2-Mistral-7B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.06GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K.gguf) | Q3_K | 3.28GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [Nous-Hermes-2-Mistral-7B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf) | Q4_0 | 3.83GB | | [Nous-Hermes-2-Mistral-7B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_K.gguf) | Q4_K | 4.07GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_1.gguf) | Q4_1 | 4.24GB | | 
[Nous-Hermes-2-Mistral-7B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_0.gguf) | Q5_0 | 4.65GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_K.gguf) | Q5_K | 4.78GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q5_1.gguf) | Q5_1 | 5.07GB | | [Nous-Hermes-2-Mistral-7B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Nous-Hermes-2-Mistral-7B-DPO-gguf/blob/main/Nous-Hermes-2-Mistral-7B-DPO.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mistral-7B-DPO results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Nous Hermes 2 - Mistral 7B - DPO ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/PDleZIZK3vE3ATfXRRySv.png) ## Model Description Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from [Teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and has improved across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA. The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data as well as other high quality datasets, available from the repository [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5). ## Thank you to FluidStack for sponsoring compute for this model! 
## Example Outputs ### Describing Weather Patterns in Paris: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ZX-stQY80edj2Y9ButCzn.png) ### Making JSON Nested Lists ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/3wtVqDOA1S_d48FJtwero.png) ### Roleplaying as a Toaist Master ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/NfxBxrjbTGEsUcR8nOALb.png) ## Benchmark Results Nous-Hermes 2 DPO on Mistral 7B is an improvement across the board on the benchmarks below compared to the original OpenHermes 2.5 model, as shown here: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/O-LLTr1K1FYbzscMr4lbE.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5776|± |0.0144| | | |acc_norm|0.6220|± |0.0142| |arc_easy | 0|acc |0.8380|± |0.0076| | | |acc_norm|0.8245|± |0.0078| |boolq | 1|acc |0.8624|± |0.0060| |hellaswag | 0|acc |0.6418|± |0.0048| | | |acc_norm|0.8249|± |0.0038| |openbookqa | 0|acc |0.3420|± |0.0212| | | |acc_norm|0.4540|± |0.0223| |piqa | 0|acc |0.8177|± |0.0090| | | |acc_norm|0.8264|± |0.0088| |winogrande | 0|acc |0.7466|± |0.0122| ``` Average: 73.72 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 43.63 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| 
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.94 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.3892|± |0.0171| | | |mc2 |0.5642|± |0.0153| ``` # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt: special tokens are added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same format used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MistralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mistral-7B-DPO', trust_remote_code=True) model = MistralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mistral-7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` # How to cite: ```bibtex @misc{Nous-Hermes-2-Mistral-7B-DPO, url={[https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)}, title={Nous Hermes 2 Mistral 7B DPO}, author={"Teknium", "theemozilla", "karan4d", "huemin_art"} } ```
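As a complement to the raw prompt strings above, here is a hedged sketch of driving generation through the chat template described in the Prompt Format section; it assumes the `tokenizer` and `model` objects from the inference code, and `add_generation_prompt=True` appends the assistant header as noted there:

```python
# Sketch: generate via the ChatML chat template instead of a hand-built prompt string.
messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends <|im_start|>assistant\n
    return_tensors="pt",
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```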
MaziyarPanahi/T3qm7Mergerix-7B-GGUF
MaziyarPanahi
2024-05-20T20:38:06Z
372
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/T3qm7Mergerix-7B" ]
text-generation
2024-05-20T20:07:50Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: T3qm7Mergerix-7B-GGUF base_model: automerger/T3qm7Mergerix-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/T3qm7Mergerix-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qm7Mergerix-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/T3qm7Mergerix-7B](https://huggingface.co/automerger/T3qm7Mergerix-7B) ## Description [MaziyarPanahi/T3qm7Mergerix-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qm7Mergerix-7B-GGUF) contains GGUF format model files for [automerger/T3qm7Mergerix-7B](https://huggingface.co/automerger/T3qm7Mergerix-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
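To illustrate one of the clients listed above, here is a minimal, hedged sketch of loading a quant from this repo with `llama-cpp-python`; the local filename is a hypothetical example, so substitute whichever GGUF file you actually downloaded:

```python
# Sketch: run a downloaded GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./T3qm7Mergerix-7B.Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when a GPU-enabled build is installed
)
out = llm("Explain in one sentence what the GGUF format is.", max_tokens=64)
print(out["choices"][0]["text"])
```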
MaziyarPanahi/Inex12T3qm7xp-7B-GGUF
MaziyarPanahi
2024-05-22T05:32:16Z
372
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:nlpguy/T3QM7XP", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Inex12T3qm7xp-7B" ]
text-generation
2024-05-22T05:01:10Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:nlpguy/T3QM7XP - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Inex12T3qm7xp-7B-GGUF base_model: automerger/Inex12T3qm7xp-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Inex12T3qm7xp-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12T3qm7xp-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Inex12T3qm7xp-7B](https://huggingface.co/automerger/Inex12T3qm7xp-7B) ## Description [MaziyarPanahi/Inex12T3qm7xp-7B-GGUF](https://huggingface.co/MaziyarPanahi/Inex12T3qm7xp-7B-GGUF) contains GGUF format model files for [automerger/Inex12T3qm7xp-7B](https://huggingface.co/automerger/Inex12T3qm7xp-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF
MaziyarPanahi
2024-05-22T07:43:23Z
372
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Muhammad2003/Myriad-7B-Slerp", "base_model:Gille/StrangeMerges_32-7B-slerp", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/MyriadStrangemerges_32-7B" ]
text-generation
2024-05-22T07:12:30Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:Muhammad2003/Myriad-7B-Slerp - base_model:Gille/StrangeMerges_32-7B-slerp - license:cc-by-nc-4.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: MyriadStrangemerges_32-7B-GGUF base_model: automerger/MyriadStrangemerges_32-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/MyriadStrangemerges_32-7B](https://huggingface.co/automerger/MyriadStrangemerges_32-7B) ## Description [MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/MyriadStrangemerges_32-7B-GGUF) contains GGUF format model files for [automerger/MyriadStrangemerges_32-7B](https://huggingface.co/automerger/MyriadStrangemerges_32-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf
RichardErkhov
2024-05-24T00:12:52Z
372
0
null
[ "gguf", "arxiv:2404.17733", "region:us" ]
null
2024-05-23T21:51:16Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Swallow-MS-7b-v0.1 - GGUF - Model creator: https://huggingface.co/tokyotech-llm/ - Original model: https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Swallow-MS-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q2_K.gguf) | Q2_K | 2.58GB | | [Swallow-MS-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.86GB | | [Swallow-MS-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_S.gguf) | IQ3_S | 3.02GB | | [Swallow-MS-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.0GB | | [Swallow-MS-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.11GB | | [Swallow-MS-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K.gguf) | Q3_K | 3.33GB | | [Swallow-MS-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.33GB | | [Swallow-MS-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.61GB | | [Swallow-MS-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.73GB | | [Swallow-MS-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.88GB | | [Swallow-MS-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.93GB | | [Swallow-MS-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.91GB | | [Swallow-MS-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K.gguf) | Q4_K | 4.13GB | | [Swallow-MS-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.13GB | | [Swallow-MS-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.3GB | | [Swallow-MS-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.72GB | | [Swallow-MS-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.72GB | | [Swallow-MS-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K.gguf) | Q5_K | 4.84GB | | 
[Swallow-MS-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.84GB | | [Swallow-MS-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.13GB | | [Swallow-MS-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q6_K.gguf) | Q6_K | 5.6GB | | [Swallow-MS-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q8_0.gguf) | Q8_0 | 7.26GB | Original model description: --- language: - en - ja library_name: transformers pipeline_tag: text-generation model_type: mistral license: apache-2.0 --- # Swallow-MS-7b-v0.1 Our Swallow-MS-7b-v0.1 model has undergone continual pre-training from the Mistral-7B-v0.1, primarily with the addition of Japanese language data. # Model Release Updates We are excited to share the release schedule for our latest models: - **April 26, 2024**: Released the [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1) - **March 11, 2024**: Released the [Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) ![logo](./logo.png) This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/). ## Model Details * **Model type**: Please refer to Mistral technical report for details on the model architecture. * **Language(s)**: Japanese English * **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process. 
* **Contact**: swallow[at]nlp.c.titech.ac.jp ## Base Model Performance ### Japanese tasks |Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|Average| |---------------------------|-------|---------|-------|-------|-------|------|------------|------------|------|-----| | | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|| | CyberAgentLM2-7B |7B| 0.2198 | 0.5047 | 0.5066 | 0.7799 | 0.0233 | 0.0600 | 0.2345 | 0.1499 | 0.3098 | | Llama 2 |7B| 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 | 0.3201 | | japanese-stablelm-base-beta-7b|7B| 0.3610 | 0.4478 | 0.4432 | 0.8318 | 0.2195 | 0.0720 | 0.1946 | 0.1226 | 0.3366 | | japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.2172 | 0.4482 | 0.4309 | 0.8202 | 0.0757 | 0.0520 | 0.1601 | 0.1453 | 0.2937 | | ELYZA-japanese-Llama-2-7b|7B| 0.5791 | 0.4703 | 0.4019 | 0.8226 | 0.1312 | 0.0600 | 0.1795 | 0.1289 | 0.3467 | | ELYZA-japanese-Llama-2-7b-fast|7B| 0.5308 | 0.4330 | 0.3898 | 0.8131 | 0.1289 | 0.0720 | 0.1678 | 0.1143 | 0.3312 | | youri-7b (base) |7B| 0.4620 | 0.4776 | 0.4999 | 0.8506 | 0.1957 | 0.0640 | 0.2671 | **0.1971** | 0.3768 | | Swallow-7b |7B| 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 | 0.3940 | | Swallow-7b-plus |7B| 0.5478 | **0.5493** | **0.6030** | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 | 0.4090 | | Qwen-7B |7B| 0.7712 | 0.4234 | 0.2376 | 0.8594 | 0.1371 | 0.2160 | 0.1689 | 0.1801 | 0.3742 | | nekomata-7b |7B| 0.7417 | 0.4928 | 0.5022 | 0.8707 | 0.1676 | 0.1240 | **0.2673** | 0.1815 | 0.4185 | | Mistral-7B-v0.1 |7B| 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 | 0.3717 | | japanese-stablelm-base-gamma-7b|7B| 0.7364 | 0.4643 | 0.5568 | **0.8910** | **0.2293** | 0.1680 | 0.2390 | 0.1561 | 0.4301 | | Swallow-MS-7b-v0.1 |7B| **0.8570** | 0.4915 | 0.5519 | 0.8802 | 0.1988 | **0.2240** | 0.2494 | 0.1667 | **0.4524** | ### English tasks |Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|Average| |---|---|---|---|---|---|---|---|---| | | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|| | CyberAgentLM2-7B |7B| 0.2860 | 0.3496 | 0.5003 | 0.3510 | 0.8581 | 0.0705 | 0.4026 | | Llama 2 |7B| 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 | 0.4895 | | japanese-stablelm-base-beta-7b|7B| 0.3620 | 0.5903 | 0.5707 | 0.2992 | 0.8994 | 0.1198 | 0.4736 | | japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.3520 | 0.5549 | 0.5644 | 0.3079 | 0.8942 | 0.0538 | 0.4545 | | ELYZA-japanese-Llama-2-7b|7B| 0.3400 | 0.5875 | 0.5595 | 0.2721 | 0.8989 | 0.1638 | 0.4703 | | ELYZA-japanese-Llama-2-7b-fast|7B| 0.3280 | 0.5817 | 0.5530 | 0.2605 | 0.8989 | 0.1425 | 0.4608 | | youri-7b (base) |7B| 0.3400 | 0.5257 | 0.5540 | 0.3297 | 0.8938 | 0.0963 | 0.4566 | | Swallow-7b |7B| 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 | 0.4399 | | Swallow-7b-plus |7B| 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 | 0.4370 | | Qwen-7B |7B| 0.3640 | 0.5695 | 0.5787 | **0.3799** | 0.8933 | **0.4617** | 0.5412 | | nekomata-7b |7B| 0.3340 | 0.4371 | 0.5340 | 0.2933 | 0.8766 | 0.1531 | 0.4380 | | Mistral-7B-v0.1 |7B| **0.3660** | **0.7050** | **0.6264** | **0.3799** | **0.9157** | 0.3533 | **0.5577** | | japanese-stablelm-base-gamma-7b|7B| 0.3240 | 0.5745 | 0.5739 | 0.3546 | 0.8976 | 0.1911 | 0.4860 | | Swallow-MS-7b-v0.1 |7B| 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 | 0.5042 | ### Code generation tasks |Model|Size|JHumanEval|HumanEval| |---|---|---|---| | | |pass@1|pass@1| | CyberAgentLM2-7B |7B|0.0634|0.0756| | Llama 2 |7B|0.1152|0.1378| | 
japanese-stablelm-base-beta-7b|7B|0.1018|0.1280| | japanese-stablelm-base-ja_vocab-beta-7b|7B|0.0896|0.1122| | ELYZA-japanese-Llama-2-7b|7B|0.0287|0.0427| | ELYZA-japanese-Llama-2-7b-fast|7B| 0.0000 |0.0037| | youri-7b (base) |7B|0.0829|0.0982| | Swallow-7b |7B|0.0183|0.0183| | Swallow-7b-plus |7B| 0.0061|0.0037| | Qwen-7B |7B|0.1701|0.1805| | nekomata-7b |7B|0.0988|0.1402| | Mistral-7B-v0.1 |7B|**0.2555**|**0.2933**| | japanese-stablelm-base-gamma-7b|7B|0.1823|0.1915| | Swallow-MS-7b-v0.1 |7B|0.2305|0.2768| ## Evaluation Benchmarks ### Japanese evaluation benchmarks We used llm-jp-eval(v1.0.0) and JP Language Model Evaluation Harness(commit #9b42d41). The details are as follows: - Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022]) - Open-ended question answering (JEMHopQA [Ishii+, 2023]) - Open-ended question answering (NIILC [Sekine, 2003]) - Machine reading comprehension (JSQuAD [Kurihara+, 2022]) - Automatic summarization (XL-Sum [Hasan+, 2021]) - Machine translation (WMT2020 ja-en [Barrault+, 2020]) - Machine translation (WMT2020 en-ja [Barrault+, 2020]) - Mathematical reasoning (MGSM [Shi+, 2023]) ### English evaluation benchmarks We used the Language Model Evaluation Harness(v.0.3.0). The details are as follows: - Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018]) - Open-ended question answering (TriviaQA [Joshi+, 2017]) - Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018]) - Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021]) - Natural language inference (HellaSwag [Zellers+, 2019]) - Mathematical reasoning (GSM8k [Cobbe+, 2021]) ### Code evaluation benchmarks We utilized the Code Generation LM Evaluation Harness [Allal+, 2022] (commit #0261c52). The details are as follows: - Code generation (HumanEval [Chen+, 2021]) - Code generation in Japanese (JHumanEval [Satoh+, 2024]) ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install -r requirements.txt ``` ### Use the base model ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "tokyotech-llm/Swallow-MS-7b-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") prompt = "東京工業大学の主なキャンパスは、" input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=0.99, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ## Training Datasets ### Continual Pre-Training The following datasets were used for continual pre-training. - [Algebraic Stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2) - [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - [Swallow Corpus](https://arxiv.org/abs/2404.17733) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Acknowledgements We thank Mistral AI for releasing Mistral 7B v0.1 under an open license for others to build on. 
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology. ## License apache-2.0 ## Authors Here are the team members: - From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members: - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html) - [Sakae Mizuki](https://s-mizuki-nlp.github.io/) - [Hiroki Iida](https://meshidenn.github.io/) - [Mengsay Loem](https://loem-ms.github.io/) - [Shota Hirai](https://huggingface.co/Kotemo428) - [Kakeru Hattori](https://aya-se.vercel.app/) - [Masanari Ohi](https://twitter.com/stjohn2007) - From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members: - [Rio Yokota](https://twitter.com/rioyokota) - [Kazuki Fujii](https://twitter.com/okoge_kaz) - [Taishi Nakamura](https://twitter.com/Setuna7777_2)
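As a small, hedged illustration of the tokenizer claim in the Model Details section above (the broadened Japanese vocabulary should need fewer tokens than the base Mistral tokenizer), one can compare token counts directly; the prompt is the one from the usage example, and exact counts depend on the tokenizer versions you download:

```python
# Sketch: compare how many tokens each tokenizer needs for the same Japanese prompt.
from transformers import AutoTokenizer

text = "東京工業大学の主なキャンパスは、"
for name in ["mistralai/Mistral-7B-v0.1", "tokyotech-llm/Swallow-MS-7b-v0.1"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok(text, add_special_tokens=False)["input_ids"])
    print(f"{name}: {n_tokens} tokens")
```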
l3utterfly/llama-3-Korean-Bllossom-8B-gguf
l3utterfly
2024-06-12T09:05:48Z
372
0
null
[ "gguf", "license:llama3", "region:us" ]
null
2024-06-12T09:02:46Z
--- license: llama3 ---
RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf
RichardErkhov
2024-06-15T01:41:06Z
372
0
null
[ "gguf", "region:us" ]
null
2024-06-15T00:00:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Orca2-13B-selfmerge-39B - GGUF - Model creator: https://huggingface.co/vmajor/ - Original model: https://huggingface.co/vmajor/Orca2-13B-selfmerge-39B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Orca2-13B-selfmerge-39B.Q2_K.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q2_K.gguf) | Q2_K | 4.52GB | | [Orca2-13B-selfmerge-39B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.IQ3_XS.gguf) | IQ3_XS | 4.99GB | | [Orca2-13B-selfmerge-39B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.IQ3_S.gguf) | IQ3_S | 5.27GB | | [Orca2-13B-selfmerge-39B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q3_K_S.gguf) | Q3_K_S | 5.27GB | | [Orca2-13B-selfmerge-39B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.IQ3_M.gguf) | IQ3_M | 5.57GB | | [Orca2-13B-selfmerge-39B.Q3_K.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q3_K.gguf) | Q3_K | 5.9GB | | [Orca2-13B-selfmerge-39B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q3_K_M.gguf) | Q3_K_M | 5.9GB | | [Orca2-13B-selfmerge-39B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q3_K_L.gguf) | Q3_K_L | 6.45GB | | [Orca2-13B-selfmerge-39B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.IQ4_XS.gguf) | IQ4_XS | 6.54GB | | [Orca2-13B-selfmerge-39B.Q4_0.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q4_0.gguf) | Q4_0 | 6.86GB | | [Orca2-13B-selfmerge-39B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.IQ4_NL.gguf) | IQ4_NL | 6.9GB | | [Orca2-13B-selfmerge-39B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q4_K_S.gguf) | Q4_K_S | 6.91GB | | [Orca2-13B-selfmerge-39B.Q4_K.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q4_K.gguf) | Q4_K | 7.33GB | | [Orca2-13B-selfmerge-39B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q4_K_M.gguf) | Q4_K_M | 7.33GB | | [Orca2-13B-selfmerge-39B.Q4_1.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q4_1.gguf) | Q4_1 | 7.61GB | | [Orca2-13B-selfmerge-39B.Q5_0.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q5_0.gguf) | Q5_0 | 8.36GB | | [Orca2-13B-selfmerge-39B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q5_K_S.gguf) | Q5_K_S | 8.36GB | | 
[Orca2-13B-selfmerge-39B.Q5_K.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q5_K.gguf) | Q5_K | 8.6GB | | [Orca2-13B-selfmerge-39B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q5_K_M.gguf) | Q5_K_M | 8.6GB | | [Orca2-13B-selfmerge-39B.Q5_1.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q5_1.gguf) | Q5_1 | 9.1GB | | [Orca2-13B-selfmerge-39B.Q6_K.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q6_K.gguf) | Q6_K | 9.95GB | | [Orca2-13B-selfmerge-39B.Q8_0.gguf](https://huggingface.co/RichardErkhov/vmajor_-_Orca2-13B-selfmerge-39B-gguf/blob/main/Orca2-13B-selfmerge-39B.Q8_0.gguf) | Q8_0 | 12.88GB | Original model description: --- license: ms-pl tags: - merge --- This model is a result of merging three Orca2-13B models with itself using 'mergekit-legacy'. Merge parameters were passthrough for one and --weight 0.5 --density 0.5 for the second merge. This merged model showed marginal improvement in perplexity scores: The perplexity for Orca-2-13b is: 7.595028877258301 The perplexity for orca2-26B-self-merge is: 7.550178050994873 The perplexity for orca2-39B-self-merge is: NC The following table summarizes the model performance across a range of benchmarks: | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |------------------------------------|-------------|-------|-----------|-------|------------|------------|-------| | microsoft/Orca-2-13b | 58.64 | 60.67 | 79.81 | 60.37 | 56.41 | 76.64 | 17.97 | | vmajor/Orca2-13B-selfmerge-26B | 62.24 | 60.84 | 79.84 | 60.32 | 56.38 | 76.87 | 39.2 | | vmajor/Orca2-13B-selfmerge-39B | 62.24 | 60.84 | 79.84 | 60.32 | 56.38 | 76.87 | 39.2 | Interestingly the GSM8K performance more than doubled with the first self merge. Second self merge resulting in the 39B model did not produce any further gains. --- license: ms-pl ---
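The card reports perplexity for the base and merged models but not how it was measured; below is a minimal, hedged sketch of one common way to estimate perplexity with Hugging Face Transformers (the text sample, dtype, and model choice are placeholders for illustration, not the author's setup):

```python
# Sketch: rough perplexity estimate for a causal LM on a short text sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-13b"  # swap in the merged model for a like-for-like comparison
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

text = "Placeholder evaluation text; use a held-out corpus in practice."
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    # passing labels=input_ids returns the mean next-token cross-entropy loss
    loss = model(**enc, labels=enc["input_ids"]).loss
print("perplexity:", torch.exp(loss).item())
```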
falkne/storytelling-LM-europarl-mixed-en
falkne
2022-09-05T06:13:48Z
371
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-05T06:07:06Z
Entry not found
Shobhank-iiitdwd/BERT_summary
Shobhank-iiitdwd
2022-12-28T12:57:01Z
371
6
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-12-28T12:51:27Z
--- language: en license: apache-2.0 datasets: - cnn_dailymail tags: - summarization model-index: - name: patrickvonplaten/bert2bert_cnn_daily_mail results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test metrics: - name: ROUGE-1 type: rouge value: 41.2808 verified: true - name: ROUGE-2 type: rouge value: 18.6853 verified: true - name: ROUGE-L type: rouge value: 28.191 verified: true - name: ROUGE-LSUM type: rouge value: 38.0871 verified: true - name: loss type: loss value: 2.3451855182647705 verified: true - name: gen_len type: gen_len value: 73.8332 verified: true --- Bert2Bert Summarization with 🤗EncoderDecoder Framework This model is a warm-started *BERT2BERT* model fine-tuned on the *CNN/Dailymail* summarization dataset. The model achieves a **18.22** ROUGE-2 score on *CNN/Dailymail*'s test dataset.
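The card above has no usage snippet, so here is a minimal, hedged sketch of running this checkpoint through the Transformers summarization pipeline (the input article and generation settings are illustrative placeholders, not values from the card):

```python
# Sketch: summarize an article with the BERT2BERT encoder-decoder checkpoint above.
from transformers import pipeline

summarizer = pipeline("summarization", model="Shobhank-iiitdwd/BERT_summary")
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```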
helenai/MIT-ast-finetuned-speech-commands-v2-ov
helenai
2023-03-11T19:19:12Z
371
0
transformers
[ "transformers", "openvino", "audio-spectrogram-transformer", "audio-classification", "en", "endpoints_compatible", "region:us" ]
audio-classification
2023-03-11T19:11:41Z
--- language: - en tags: - openvino --- # MIT/ast-finetuned-speech-commands-v2 This is the [MIT/ast-finetuned-speech-commands-v2](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) model converted to [OpenVINO](https://openvino.ai) for accelerated inference. An example of how to do inference on this model: ```python from optimum.intel.openvino import OVModelForAudioClassification from transformers import AutoFeatureExtractor, pipeline # model_id should be set to either a local directory or a model available on the HuggingFace hub. model_id = "helenai/MIT-ast-finetuned-speech-commands-v2-ov" feature_extractor = AutoFeatureExtractor.from_pretrained(model_id) model = OVModelForAudioClassification.from_pretrained(model_id) pipe = pipeline("audio-classification", model=model, feature_extractor=feature_extractor) result = pipe("https://datasets-server.huggingface.co/assets/speech_commands/--/v0.01/test/38/audio/audio.mp3") print(result) ```
timm/xcit_small_12_p8_384.fb_dist_in1k
timm
2024-02-10T23:43:48Z
371
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.09681", "license:apache-2.0", "region:us" ]
image-classification
2023-04-13T02:23:54Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for xcit_small_12_p8_384.fb_dist_in1k

A XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k with distillation by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 26.2
  - GMACs: 54.9
  - Activations (M): 138.3
  - Image size: 384 x 384
- **Papers:**
  - XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('xcit_small_12_p8_384.fb_dist_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'xcit_small_12_p8_384.fb_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2305, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation

```bibtex
@article{el2021xcit,
  title={XCiT: Cross-Covariance Image Transformers},
  author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
  journal={arXiv preprint arXiv:2106.09681},
  year={2021}
}
```
facebook/mms-lid-256
facebook
2023-06-13T10:16:04Z
371
5
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "audio-classification", "mms", "ab", "af", "ak", "am", "ar", "as", "av", "ay", "az", "ba", "bm", "be", "bn", "bi", "bo", "sh", "br", "bg", "ca", "cs", "ce", "cv", "ku", "cy", "da", "de", "dv", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fa", "fj", "fi", "fr", "fy", "ff", "ga", "gl", "gn", "gu", "zh", "ht", "ha", "he", "hi", "hu", "hy", "ig", "ia", "ms", "is", "it", "jv", "ja", "kn", "ka", "kk", "kr", "km", "ki", "rw", "ky", "ko", "kv", "lo", "la", "lv", "ln", "lt", "lb", "lg", "mh", "ml", "mr", "mk", "mg", "mt", "mn", "mi", "my", "nl", "no", "ne", "ny", "oc", "om", "or", "os", "pa", "pl", "pt", "ps", "qu", "ro", "rn", "ru", "sg", "sk", "sl", "sm", "sn", "sd", "so", "es", "sq", "su", "sv", "sw", "ta", "tt", "te", "tg", "tl", "th", "ti", "ts", "tr", "uk", "vi", "wo", "xh", "yo", "zu", "za", "dataset:google/fleurs", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-06-13T08:59:02Z
--- tags: - mms language: - ab - af - ak - am - ar - as - av - ay - az - ba - bm - be - bn - bi - bo - sh - br - bg - ca - cs - ce - cv - ku - cy - da - de - dv - dz - el - en - eo - et - eu - ee - fo - fa - fj - fi - fr - fy - ff - ga - gl - gn - gu - zh - ht - ha - he - hi - sh - hu - hy - ig - ia - ms - is - it - jv - ja - kn - ka - kk - kr - km - ki - rw - ky - ko - kv - lo - la - lv - ln - lt - lb - lg - mh - ml - mr - ms - mk - mg - mt - mn - mi - my - zh - nl - 'no' - 'no' - ne - ny - oc - om - or - os - pa - pl - pt - ms - ps - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - qu - ro - rn - ru - sg - sk - sl - sm - sn - sd - so - es - sq - su - sv - sw - ta - tt - te - tg - tl - th - ti - ts - tr - uk - ms - vi - wo - xh - ms - yo - ms - zu - za license: cc-by-nc-4.0 datasets: - google/fleurs metrics: - acc --- # Massively Multilingual Speech (MMS) - Finetuned LID This checkpoint is a model fine-tuned for speech language identification (LID) and part of Facebook's [Massive Multilingual Speech project](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/). This checkpoint is based on the [Wav2Vec2 architecture](https://huggingface.co/docs/transformers/model_doc/wav2vec2) and classifies raw audio input to a probability distribution over 256 output classes (each class representing a language). The checkpoint consists of **1 billion parameters** and has been fine-tuned from [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) on 256 languages. ## Table Of Content - [Example](#example) - [Supported Languages](#supported-languages) - [Model details](#model-details) - [Additional links](#additional-links) ## Example This MMS checkpoint can be used with [Transformers](https://github.com/huggingface/transformers) to identify the spoken language of an audio. It can recognize the [following 256 languages](#supported-languages). Let's look at a simple example. First, we install transformers and some other libraries ``` pip install torch accelerate torchaudio datasets pip install --upgrade transformers ```` **Note**: In order to use MMS you need to have at least `transformers >= 4.30` installed. If the `4.30` version is not yet available [on PyPI](https://pypi.org/project/transformers/) make sure to install `transformers` from source: ``` pip install git+https://github.com/huggingface/transformers.git ``` Next, we load a couple of audio samples via `datasets`. Make sure that the audio data is sampled to 16000 kHz. 
```py
from datasets import load_dataset, Audio

# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
```

Next, we load the model and processor

```py
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch

model_id = "facebook/mms-lid-256"

processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
```

Now we process the audio data and pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition)

```py
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]  # 'eng'

# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs).logits

lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]  # 'ara'
```

To see all the supported languages of a checkpoint, you can print out the language ids as follows:

```py
model.config.id2label.values()
```

For more details about the architecture, please have a look at [the official docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).

## Supported Languages

This model supports 256 languages. Click the following to toggle all supported languages of this checkpoint in [ISO 639-3 code](https://en.wikipedia.org/wiki/ISO_639-3). You can find more details about the languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
<details> <summary>Click to toggle</summary> - ara - cmn - eng - spa - fra - mlg - swe - por - vie - ful - sun - asm - ben - zlm - kor - ind - hin - tuk - urd - aze - slv - mon - hau - tel - swh - bod - rus - tur - heb - mar - som - tgl - tat - tha - cat - ron - mal - bel - pol - yor - nld - bul - hat - afr - isl - amh - tam - hun - hrv - lit - cym - fas - mkd - ell - bos - deu - sqi - jav - kmr - nob - uzb - snd - lat - nya - grn - mya - orm - lin - hye - yue - pan - jpn - kaz - npi - kik - kat - guj - kan - tgk - ukr - ces - lav - bak - khm - fao - glg - ltz - xog - lao - mlt - sin - aka - sna - ita - srp - mri - nno - pus - eus - ory - lug - bre - luo - slk - ewe - fin - rif - dan - yid - yao - mos - hne - est - dyu - bam - uig - sck - tso - mup - ctg - ceb - war - bbc - vmw - sid - tpi - mag - san - kri - lon - kir - run - ubl - kin - rkt - xmm - tir - mai - nan - nyn - bcc - hak - suk - bem - rmy - awa - pcm - bgc - shn - oci - wol - bci - kab - ilo - bcl - haw - mad - nod - sag - sas - jam - mey - shi - hil - ace - kam - min - umb - hno - ban - syl - bxg - xho - mww - epo - tzm - zul - ibo - abk - guz - ckb - knc - nso - bho - dje - tiv - gle - lua - skr - bto - kea - glk - ast - sat - ktu - bhb - emk - kng - kmb - tsn - gom - ven - sco - glv - sot - sou - gno - nde - bjn - ina - fmu - esg - wes - pnb - phr - mui - bug - mrr - kas - lir - vah - ssw - rwr - pcc - hms - wbr - swv - mtr - haz - aii - bns - msi - wuu - hsn - bgp - tts - lmn - dcc - bew - bjj - ibb - tji - hoj - cpx - cdo - daq - mut - nap - czh - gdx - sdh - scn - mnp - bar - mzn - gsw </details> ## Model details - **Developed by:** Vineel Pratap et al. - **Model type:** Multi-Lingual Automatic Speech Recognition model - **Language(s):** 256 languages, see [supported languages](#supported-languages) - **License:** CC-BY-NC 4.0 license - **Num parameters**: 1 billion - **Audio sampling rate**: 16,000 kHz - **Cite as:** @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ## Additional Links - [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/) - [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms). - [Paper](https://arxiv.org/abs/2305.13516) - [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr) - [Other **MMS** checkpoints](https://huggingface.co/models?other=mms) - MMS base checkpoints: - [facebook/mms-1b](https://huggingface.co/facebook/mms-1b) - [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) - [Official Space](https://huggingface.co/spaces/facebook/MMS)
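If you prefer a one-liner over the manual pre- and post-processing shown above, the `audio-classification` pipeline should also work with this checkpoint; a minimal sketch, reusing the 16 kHz `en_sample` waveform loaded in the earlier snippet:

```python
from transformers import pipeline

# Minimal sketch: let the pipeline handle feature extraction and label mapping.
classifier = pipeline("audio-classification", model="facebook/mms-lid-256")
predictions = classifier(en_sample, top_k=3)
print(predictions)  # e.g. [{'label': 'eng', 'score': ...}, ...]
```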
Henk717/spring-dragon
Henk717
2023-08-06T15:40:42Z
371
20
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-05T23:42:59Z
--- license: llama2 --- This model is a recreation attempt of the AI Dungeon 2 Dragon model, to achieve this text_adventures.txt was used that was bundled with the original AI Dungeon 2 github release prior to the online service. From what we know the same dataset file was used to create the Dragon model, Dragon being a GPT3 175B Davinci model from 2020. Since LLaMA1 13B has been benchmarking similarly to the original GPT3 175B the hope is that this recreation is faithful to the original Dragon model. But, since it is not known how close it performs without releasing it to former AI Dungeon players we dubbed it "Spring Dragon" instead of "Summer Dragon", consider it Dragon in its growing up phase. This model is best used with KoboldAI's adventure mode prefixing your actions with You (2020 AI Dungeon did this automatically) and writing in the second person. ## Warning: This model is purposefully flawed and should only be used by people Nostalgic for old 2020 era text adventure models. It is not recommended to be used in model merges, and you can very likely get a much better experience from modern instruct models by asking them to "Start a text adventure game about X" ### If the recreation was succesfull expect the following recurring themes: Names: Alison, Annah, Ben, Big Red, Brutus, Camid, Captain Hayes, Captain Roldan, Castus, Catia, Count Grey, Cyrus, Dendrin, Dr. Gaange (also Mr Gaange), Dr. Gossey, Dr. Kessel, Dr. Kovas, Durge, Elder Flynn, Elios, Elizabeth/Eliza, Fay, Father Féval, Fenrir, Great Lich Lord, Grolik, Isabella, *Jacob, *Karth, Kyros, Lilith, Lord Rostov, Magos Cern, Meliodas, Mistress, Mr. Matasan, Mr. Mol, Mr. Reynolds, Naji, Quintus, Ral, Rolomag, Rose, (Sir) Kit, Talia, Tanya, The Emperor, Ulivik, *Vamp/*Vampy, Velzix, Yvette, Zalmora/Zal. (* means the AI likes calling the player these) Locations: Dert, Fort Defiance, Fort Glory, Hessla, Holgard, Klyton, Kyros, Nyttrus, Rask, Teckleville, The Delantium Kingdom, The Empire of Man (also called Imperium of Man), The Felkan Kingdom Factions: The Black Rats, Chaos Space Marines, The Crimson Talons, The Dark Order, Dornans (worshippers of Dorna), Ebony Claw Syndicate (often called ECS or The Syndicate), The Empire, Eternals, Joachimites (The Church of Joachim), The Nocturnal League, Psykers, The Shadows, Techpriests, Thieves Guild, Vampire Clan. Deities: Dorna, Joachim, Nyx, Slaanesh, Virgil, Yag. Species/Races: Eternals, Goliaths, Oalkwardners, The Craxil, ghouls,kobolds, orks, psykers, svelks, vampires, wendigos, werewolves.
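For illustration only, here is a sketch of a turn formatted the way this card recommends (second person, actions prefixed with "You"); the opening text and sampling settings are arbitrary:

```python
from transformers import pipeline

# Illustrative sketch: an adventure-mode style turn, written in the second person
# with the player action prefixed by "You", as recommended above.
generator = pipeline("text-generation", model="Henk717/spring-dragon",
                     torch_dtype="auto", device_map="auto")

prompt = (
    "The cave mouth yawns before you, cold air drifting out of the dark.\n"
    "> You light a torch and step inside.\n"
)
print(generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.8)[0]["generated_text"])
```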
hadifar/entityextraction
hadifar
2023-10-04T11:12:25Z
371
1
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-08T11:43:33Z
Entry not found
TheBloke/UltraLM-13B-v2.0-GGUF
TheBloke
2023-10-01T15:42:29Z
371
2
transformers
[ "transformers", "gguf", "llama", "base_model:openbmb/UltraLM-13b-v2.0", "license:mit", "text-generation-inference", "region:us" ]
null
2023-10-01T15:37:24Z
--- base_model: openbmb/UltraLM-13b-v2.0 inference: false license: mit model_creator: OpenBMB model_name: UltraLM 13B v2.0 model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # UltraLM 13B v2.0 - GGUF - Model creator: [OpenBMB](https://huggingface.co/openbmb) - Original model: [UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenBMB's UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF) * [OpenBMB's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openbmb/UltraLM-13b-v2.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `mit`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [OpenBMB's UltraLM 13B v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [ultralm-13b-v2.0.Q2_K.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [ultralm-13b-v2.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [ultralm-13b-v2.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [ultralm-13b-v2.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [ultralm-13b-v2.0.Q4_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [ultralm-13b-v2.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [ultralm-13b-v2.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [ultralm-13b-v2.0.Q5_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [ultralm-13b-v2.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [ultralm-13b-v2.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [ultralm-13b-v2.0.Q6_K.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [ultralm-13b-v2.0.Q8_0.gguf](https://huggingface.co/TheBloke/UltraLM-13B-v2.0-GGUF/blob/main/ultralm-13b-v2.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/UltraLM-13B-v2.0-GGUF and below it, a specific filename to download, such as: ultralm-13b-v2.0.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF ultralm-13b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/UltraLM-13B-v2.0-GGUF ultralm-13b-v2.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m ultralm-13b-v2.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/UltraLM-13B-v2.0-GGUF", model_file="ultralm-13b-v2.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenBMB's UltraLM 13B v2.0 <!-- original-model-card end -->
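To complement the ctransformers snippet in the "How to run from Python code" section, here is a hedged llama-cpp-python sketch; it assumes ultralm-13b-v2.0.Q4_K_M.gguf has already been downloaded to the current directory:

```python
from llama_cpp import Llama

# Hedged sketch: load the downloaded GGUF file with llama-cpp-python.
llm = Llama(
    model_path="./ultralm-13b-v2.0.Q4_K_M.gguf",
    n_ctx=4096,       # sequence length, matching the llama.cpp example above
    n_gpu_layers=32,  # set to 0 if no GPU acceleration is available
)
result = llm("AI is going to", max_tokens=128, temperature=0.7)
print(result["choices"][0]["text"])
```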
TheBloke/X-NoroChronos-13B-GGUF
TheBloke
2023-11-19T17:12:51Z
371
6
transformers
[ "transformers", "gguf", "llama", "not-for-all-audiences", "nsfw", "base_model:NeverSleep/X-NoroChronos-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-11-19T14:49:23Z
--- base_model: NeverSleep/X-NoroChronos-13B inference: false license: cc-by-nc-4.0 model_creator: NeverSleep model_name: X NoroChronos 13B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences - nsfw --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # X NoroChronos 13B - GGUF - Model creator: [NeverSleep](https://huggingface.co/NeverSleep) - Original model: [X NoroChronos 13B](https://huggingface.co/NeverSleep/X-NoroChronos-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [NeverSleep's X NoroChronos 13B](https://huggingface.co/NeverSleep/X-NoroChronos-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/X-NoroChronos-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/X-NoroChronos-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF) * [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/X-NoroChronos-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's X NoroChronos 13B](https://huggingface.co/NeverSleep/X-NoroChronos-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [x-norochronos-13b.Q2_K.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [x-norochronos-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [x-norochronos-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [x-norochronos-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [x-norochronos-13b.Q4_0.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [x-norochronos-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [x-norochronos-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [x-norochronos-13b.Q5_0.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [x-norochronos-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [x-norochronos-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [x-norochronos-13b.Q6_K.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [x-norochronos-13b.Q8_0.gguf](https://huggingface.co/TheBloke/X-NoroChronos-13B-GGUF/blob/main/x-norochronos-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/X-NoroChronos-13B-GGUF and below it, a specific filename to download, such as: x-norochronos-13b.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/X-NoroChronos-13B-GGUF x-norochronos-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/X-NoroChronos-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/X-NoroChronos-13B-GGUF x-norochronos-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m x-norochronos-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/X-NoroChronos-13B-GGUF", model_file="x-norochronos-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NeverSleep's X NoroChronos 13B <!-- description start --> ## Description This repo contains fp16 files of X-NoroChronos-13B, a merge based around [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) and [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2). Merge was done by choosing carefully the models, the loras, the weights of each of them, the order in which they are applied, and the order of the final models merging with the main goal of having a fresh RP experience. <!-- description end --> <!-- description start --> ## Models and loras used - [Xwin-LM/Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) - [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) - [Doctor-Shotgun/cat-v1.0-13b](https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b) - [athirdpath/Eileithyia-13B](https://huggingface.co/athirdpath/Eileithyia-13B) - [NeverSleep/Noromaid-13b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) - [Undi95/Llama2-13B-no_robots-alpaca-lora](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) - [zattio770/120-Days-of-LORA-v2-13B](https://huggingface.co/zattio770/120-Days-of-LORA-v2-13B) - [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) - [Aesir Private RP dataset] - Thanks to the [MinvervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` If you want to support me, you can [here](https://ko-fi.com/undiai). If you want to know more about [Ikari](https://huggingface.co/IkariDev) work, you can visit his [retro/neocities style website](https://ikaridevgit.github.io/). <!-- original-model-card end -->
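Since this model expects the Alpaca template shown above, here is a hedged llama-cpp-python sketch that builds the prompt accordingly; the instruction text is only a placeholder and the sampling settings are arbitrary:

```python
from llama_cpp import Llama

# Hedged sketch: wrap the user instruction in the Alpaca template this card specifies.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

llm = Llama(model_path="./x-norochronos-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)
prompt = ALPACA_TEMPLATE.format(prompt="Write a short scene set in a rainy harbor town.")
result = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```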
openskyml/lexica-aperture-v3-5
openskyml
2023-11-26T18:00:58Z
371
21
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "lexica", "lexica-aperture", "aperture-v3-5", "openskyml", "en", "fr", "ru", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-26T10:12:42Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - lexica - lexica-aperture - aperture-v3-5 - openskyml pinned: true instance_prompt: mdjrny-v4 style language: - en - fr - ru pipeline_tag: text-to-image library_name: diffusers --- # Lexica Aperture V3.5 (Community Edition)8 <img src="https://cdn-lfs-us-1.huggingface.co/repos/40/b1/40b10ae730f19776ae9837602caaf880e6ebb467da346fdce38731ed24cd4d9a/f8cd30472ebd85fd6d2b39224f8febc051c650c0596c5ebc5c4152aa63f41c9b?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27preview.png%3B+filename%3D%22preview.png%22%3B&response-content-type=image%2Fpng&Expires=1701261753&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwMTI2MTc1M319LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzQwL2IxLzQwYjEwYWU3MzBmMTk3NzZhZTk4Mzc2MDJjYWFmODgwZTZlYmI0NjdkYTM0NmZkY2UzODczMWVkMjRjZDRkOWEvZjhjZDMwNDcyZWJkODVmZDZkMmIzOTIyNGY4ZmViYzA1MWM2NTBjMDU5NmM1ZWJjNWM0MTUyYWE2M2Y0MWM5Yj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoifV19&Signature=FGtno9NiHyPEc6gYEpoJWcqsYfQVbSljXmNTJdWAbpl%7EsWC7Y75O-jt3sUwErVRv0hgmf2Pbhnxy1etN6vK7HXRrehF-tOdcgut6IUKSBBZoR4LGcssvFwLexExez2jt7RfZrSRjMDvhdYvUmwVeAGhGNzIe2ZUNucY8LMKAt0r6%7EY5S8ZnuXnn-OlEE-%7Esg8R%7EN0tBf3qzEiG3sUvHSW-H84jluluad4zyfPq01AEHUAKeVf2KmmfrlV8PFWfoOqKZAs89XCL6figJWnymIA1qjmXsVSm4ojuiywDMpf2TYHQYnEtRG-hfAdq1resUMQyt5V0nW%7EkGVJGvG-abmpQ__&Key-Pair-Id=KCD77M1F0VK2B" alt="lexica-aperture-v3-5" width="75%"> This is a free model that is similar to the paid [Lexica Aperture V3.5](https://lexica.art/aperture). ### Model Description - **Developed by:** [OpenSkyML](https://hf.co/openskyml) - **Model type:** [Multimodal (Text-to_Image)](https://huggingface.co/models?pipeline_tag=text-to-image) - **Language(s) (NLP):** [English](https://huggingface.co/models?language=en), [French](https://huggingface.co/models?language=fr), [Russian](https://huggingface.co/models?language=ru) - **License:** [CreativeML OpenRAIL-M](https://huggingface.co/models?license=license%3Acreativeml-openrail-m) ### Model Sources - **Repository:** [click](https://huggingface.co/openskyml/lexica-aperture-v3-5/tree/main)
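A minimal text-to-image sketch with diffusers, assuming the repository ships standard StableDiffusionPipeline weights as its tags indicate; the prompt and output filename are arbitrary:

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the weights as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "openskyml/lexica-aperture-v3-5",
    torch_dtype=torch.float16,
).to("cuda")  # drop .to("cuda") and float16 for CPU-only inference

image = pipe("a lighthouse on a cliff at sunset, highly detailed").images[0]
image.save("lexica_aperture_sample.png")
```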
internlm/internlm2-math-base-20b
internlm
2024-02-28T05:29:39Z
371
4
transformers
[ "transformers", "safetensors", "internlm2", "text-generation", "math", "conversational", "custom_code", "en", "zh", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2024-01-22T12:37:59Z
--- pipeline_tag: text-generation license: other language: - en - zh tags: - math --- # InternLM-Math <div align="center"> <img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/> <div> </div> <div align="center"> <b><font size="5">InternLM-Math</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div> </div> </div> State-of-the-art bilingual open-sourced Math reasoning LLMs. A **solver**, **prover**, **verifier**, **augmentor**. [💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b) [🤗 Checkpoints](https://huggingface.co/internlm/internlm2-math-7b) [![OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-7B) [<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> ModelScope](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-7b/summary) </div> # News - [2024.01.29] We add checkpoints from ModelScope. Tech report is on the way! - [2024.01.26] We add checkpoints from OpenXLab, which ease Chinese users to download! # Introduction - **7B and 20B Chinese and English Math LMs with better than ChatGPT performances.** InternLM2-Math are continued pretrained from InternLM2-Base with ~100B high quality math-related tokens and SFT with ~2M bilingual math supervised data. We apply minhash and exact number match to decontaminate possible test set leakage. - **Add Lean as a support language for math problem solving and math theorem proving.** We are exploring combining Lean 3 with InternLM-Math for verifiable math reasoning. InternLM-Math can generate Lean codes for simple math reasoning tasks like GSM8K or provide possible proof tactics based on Lean states. - **Also can be viewed as a reward model, which supports the Outcome/Process/Lean Reward Model.** We supervise InternLM2-Math with various types of reward modeling data, to make InternLM2-Math can also verify chain-of-thought processes. We also add the ability to convert a chain-of-thought process into Lean 3 code. - **A Math LM Augment Helper** and **Code Interpreter**. InternLM2-Math can help augment math reasoning problems and solve them using the code interpreter which makes you generate synthesis data quicker! ![hungarian](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/hungary.jpeg) # Models **InternLM2-Math-Base-7B** and **InternLM2-Math-Base-20B** are pretrained checkpoints. **InternLM2-Math-7B** and **InternLM2-Math-20B** are SFT checkpoints. 
| Model |Model Type | Transformers(HF) |OpenXLab| ModelScope | Release Date | |---|---|---|---|---|---| | **InternLM2-Math-Base-7B** | Base| [🤗internlm/internlm2-math-base-7b](https://huggingface.co/internlm/internlm2-math-base-7b) |[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-Base-7B)| [<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-base-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-base-7b/summary)| 2024-01-23| | **InternLM2-Math-Base-20B** | Base| [🤗internlm/internlm2-math-base-20b](https://huggingface.co/internlm/internlm2-math-base-20b) |[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-Base-20B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-base-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-base-20b/summary)| 2024-01-23| | **InternLM2-Math-7B** | Chat| [🤗internlm/internlm2-math-7b](https://huggingface.co/internlm/internlm2-math-7b) |[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-7B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-7b/summary)| 2024-01-23| | **InternLM2-Math-20B** | Chat| [🤗internlm/internlm2-math-20b](https://huggingface.co/internlm/internlm2-math-20b) |[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/OpenLMLab/InternLM2-Math-20B)|[<img src="https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/modelscope_logo.png" width="20px" /> internlm2-math-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-math-20b/summary)| 2024-01-23| # Performance ## Pretrain Performance We evaluate pretrain checkpoints based on greedy decoding with few-shot COT. Details of pretraining will be introduced in the tech report. | Model | GSM8K | MATH | |------------------------|---------|--------| | Llama2-7B | 11.8 | 3.2 | | Llemma-7B | 36.4 | 18.0 | | InternLM2-Base-7B | 36.5 | 8.6 | | **InternLM2-Math-Base-7B** | **49.2** | **21.5** | | Minerva-8B | 16.2 | 14.1 | | InternLM2-Base-20B | 54.6 | 13.7 | | **InternLM2-Math-Base-20B** | **63.7** | **27.3** | | Llemma-34B | 51.5 | 25.0 | | Minerva-62B | 52.4 | 27.6 | | Minerva-540B | 58.8 | 33.6 | ## SFT Peformance All performance is based on greedy decoding with COT. We notice that the performance of Hungary has a big variance between our different checkpoints, while other performance is very stable. This may be due to the problem amount about Hungary. 
| Model | Model Type | GSM8K | MATH | Hungary | |------------------------|----------------------|--------|--------|---------| | Qwen-7B-Chat | Genearl | 51.7 | 11.6 | - | | DeepSeek-7B-Chat | General | 63.0 | 15.8 | 28.5 | | InternLM2-Chat-7B | General | 70.7 | 23.0 | - | | ChatGLM3-6B | General | 53.8 | 20.4 | 32 | | MetaMath-Mistral-7B | Mathematics | 77.7 | 28.2 | 29 | | MetaMath-Llemma-7B | Mathematics | 69.2 | 30.0 | - | | **InternLM2-Math-7B** | Mathematics | **78.1** | **34.6** | **55** | | InternLM2-Chat-20B | General | 79.6 | 31.9 | - | | MetaMath-Llemma-34B | Mathematics | 75.8 | 34.8 | - | | **InternLM2-Math-20B** | Mathematics | **82.6** | **37.7** | **66** | | Qwen-72B | General | 78.9 | 35.2 | 52 | | DeepSeek-67B | General | 84.1 | 32.6 | 58 | | ChatGPT (GPT-3.5) | General | 80.8 | 34.1 | 41 | | GPT4 (First version) | General | 92.0 | 42.5 | 68 | # Inference ## LMDeploy We suggest using [LMDeploy](https://github.com/InternLM/LMDeploy)(>=0.2.1) for inference. ```python from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig backend_config = TurbomindEngineConfig(model_name='internlm2-chat-7b', tp=1, cache_max_entry_count=0.3) chat_template = ChatTemplateConfig(model_name='internlm2-chat-7b', system='', eosys='', meta_instruction='') pipe = pipeline(model_path='internlm/internlm2-math-base-20b', chat_template_config=chat_template, backend_config=backend_config) problem = '1+1=' result = pipe([problem], request_output_len=1024, top_k=1) ``` ## Huggingface ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-math-base-20b", trust_remote_code=True) # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error. model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-math-base-20b", trust_remote_code=True, torch_dtype=torch.float16).cuda() model = model.eval() response, history = model.chat(tokenizer, "1+1=", history=[], meta_instruction="") print(response) ``` # Special usages We list some instructions used in our SFT. You can use them to help you. You can use the other ways to prompt the model, but the following are recommended. InternLM2-Math may combine the following abilities but it is not guaranteed. Translate proof problem to Lean: ![nl2lean3](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/nl2lean.jpeg) Using Lean 3 to solve GSM8K problem: ![gsm8k_lean](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/gsm8k_lean.jpeg) Generate problem based on Lean 3 code: ![lean_problem](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/lean_problem.jpeg) Play 24 point game: ![24](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/24.jpeg) Augment a harder math problem: ![augment_hard](https://raw.githubusercontent.com/InternLM/InternLM-Math/main/assets/augment_hard.jpeg) | Description | Query | | --- | --- | | Solving question via chain-of-thought | {Question} | | Solving question via Lean 3 | {Question}\nSolve this via Lean 3 | | Outcome reward model | Given a question and an answer, check is it correct?\nQuestion:{Question}\nAnswer:{COT} | | Process reward model | Given a question and an answer, check correctness of each step.\nQuestion:{Question}\nAnswer:{COT} | | Reward model | Given a question and two answers, which one is better? 
\nQuestion:{Question}\nAnswer 1:{COT}\nAnswer 2:{COT} | | Convert chain-of-thought to Lean 3 | Convert this answer into Lean3. Question:{Question}\nAnswer:{COT} | | Convert Lean 3 to chain-of-thought | Convert this lean 3 code into a natural language problem with answers:\n{LEAN Code} | | Translate question and chain-of-thought answer to a proof statement | Convert this question and answer into a proof format.\nQuestion:{Question}\nAnswer:{COT} | | Translate proof problem to Lean 3 | Convert this natural langauge statement into a Lean 3 theorem statement:{Theorem} | | Translate Lean 3 to proof problem | Convert this Lean 3 theorem statement into natural language:{STATEMENT} | | Suggest a tactic based on Lean state | Given the Lean 3 tactic state, suggest a next tactic:\n{LEAN State} | | Rephrase Problem | Describe this problem in another way. {Question} | | Augment Problem | Please augment a new problem based on: {Question} | | Augment a harder Problem | Increase the complexity of the problem: {Question} | | Change specific numbers | Change specific numbers: {Question}| | Introduce fractions or percentages | Introduce fractions or percentages: {Question}| | Code Interpreter | [lagent](https://github.com/InternLM/InternLM/blob/main/agent/lagent.md) | | In-context Learning | Question:{Question}\nAnswer:{COT}\n...Question:{Question}\nAnswer:{COT}| # Fine-tune and others Please refer to [InternLM](https://github.com/InternLM/InternLM/tree/main). # Known issues Our model is still under development and will be upgraded. There are some known issues with InternLM-Math; if you find that some abilities perform poorly, you are welcome to open an issue. - It may skip calculation steps. - It performs poorly on Chinese fill-in-the-blank problems and English multiple-choice problems due to the SFT data composition. - It tends to invoke the Code Interpreter when facing Chinese problems, due to the SFT data composition. - The reward-model mode can be better leveraged with assigned token probabilities. - Code switching occurs due to the SFT data composition. - Some Lean abilities only transfer to GSM8K-like problems (e.g. Convert chain-of-thought to Lean 3), and Lean-related performance is not guaranteed. # Citation and Tech Report To be appended.
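To make the special-usage table above more concrete, here is a small sketch that fills in the outcome-reward-model template and sends it through the same `model.chat` interface shown in the Huggingface section. The question and candidate answer are placeholders, and since these instruction templates target the SFT checkpoints, results on this base checkpoint may vary.

```python
# Sketch: composing one of the special-usage queries from the table above and
# sending it through the chat interface shown in the Huggingface section.
# The question and answer below are placeholders; the instruction templates are
# designed for the SFT checkpoints, so results on the base model may vary.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "internlm/internlm2-math-base-20b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype=torch.float16
).cuda().eval()

question = "Lily has 3 apples and buys 4 more. How many apples does she have?"
candidate_answer = "3 + 4 = 7, so Lily has 7 apples."

# Outcome reward model template, copied from the table above.
query = (
    "Given a question and an answer, check is it correct?\n"
    f"Question:{question}\nAnswer:{candidate_answer}"
)

response, history = model.chat(tokenizer, query, history=[], meta_instruction="")
print(response)
```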
dagbs/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual-GGUF
dagbs
2024-03-07T22:46:21Z
371
2
null
[ "gguf", "bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk", "license:apache-2.0", "region:us" ]
null
2024-03-07T21:33:13Z
--- license: apache-2.0 language: - bg - ca - cs - da - de - en - es - fr - hr - hu - it - nl - pl - pt - ro - ru - sl - sr - sv - uk --- # gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual - GGUF Original Model: [NickyNicky/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual](https://huggingface.co/NickyNicky/gemma-2b-it_oasst2_chatML_Cluster2_aya_multilingual) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/YXqUXFjX8uIJT-mdOnM1h.png)
PrunaAI/Mistral-22B-v0.2-GGUF-smashed
PrunaAI
2024-04-13T20:57:37Z
371
2
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-13T18:19:02Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | Q3_K_S | Low quality, not recommended. | | IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | Q2_K | Very low quality but surprisingly usable. | ## How to download GGUF files? **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev - **Option A** - Downloading in `text-generation-webui`: - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/Mistral-22B-v0.2-GGUF-smashed and below it, a specific filename to download, such as: Mistral-22B-v0.2.IQ3_M.gguf. - **Step 2**: Then click Download. - **Option B** - Downloading on the command line (including multiple files at once): - **Step 1**: We recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` - **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed Mistral-22B-v0.2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> Alternatively, you can also download multiple files at once with a pattern: ```shell huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/Mistral-22B-v0.2-GGUF-smashed Mistral-22B-v0.2.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## How to run the model in GGUF format? - **Option A** - Introductory example with `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Mistral-22B-v0.2.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) - **Option B** - Running in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). - **Option C** - Running from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Mistral-22B-v0.2.IQ3_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<s>[INST] {prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Mistral-22B-v0.2.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` - **Option D** - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base model before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
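To make **Option D** above concrete, here is a minimal LangChain sketch on top of the `llama-cpp-python` backend. It assumes `langchain-community` and `llama-cpp-python` are installed and that `Mistral-22B-v0.2.IQ3_M.gguf` has already been downloaded as shown earlier; the prompt is illustrative only.

```python
# Sketch for Option D above: driving the downloaded GGUF file through LangChain's
# llama-cpp-python integration. Assumes `pip install langchain-community llama-cpp-python`
# and that Mistral-22B-v0.2.IQ3_M.gguf is already in the current directory.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Mistral-22B-v0.2.IQ3_M.gguf",
    n_ctx=32768,      # max sequence length, as in the llama-cpp-python example above
    n_gpu_layers=35,  # set to 0 if no GPU acceleration is available
    temperature=0.7,
)

# The [INST] wrapper mirrors the prompt template used in the commands above.
print(llm.invoke("<s>[INST] Explain what model quantization does in two sentences. [/INST]"))
```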
frankjoshua/animagineXLV31_v31
frankjoshua
2024-04-19T22:52:21Z
371
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-04-19T19:56:00Z
Entry not found
dicta-il/dictalm2.0-instruct-GGUF
dicta-il
2024-04-27T20:31:40Z
371
4
transformers
[ "transformers", "gguf", "mistral", "instruction-tuned", "text-generation", "en", "he", "base_model:dicta-il/dictalm2.0", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
2024-04-27T18:34:15Z
--- license: apache-2.0 pipeline_tag: text-generation language: - en - he tags: - instruction-tuned base_model: dicta-il/dictalm2.0 inference: false --- [<img src="https://i.ibb.co/5Lbwyr1/dicta-logo.jpg" width="300px"/>](https://dicta.org.il) # Model Card for DictaLM-2.0-Instruct The DictaLM-2.0-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0) generative model using a variety of conversation datasets. For full details of this model please read our [release blog post](https://dicta.org.il/dicta-lm). This is the instruct-tuned model designed for chat in the GGUF format for use with [LM Studio](https://lmstudio.ai/) or [llama.cpp](https://github.com/ggerganov/llama.cpp). You can try the model out on a live demo [here](https://huggingface.co/spaces/dicta-il/dictalm2.0-instruct-demo). There are two versions available - float16 precision (`*.F16.gguf`) and 4-bit quantized precision (`*.Q4_K_M.gguf`). You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27). ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens followed by a line break. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = """<s>[INST] איזה רוטב אהוב עליך? [/INST] טוב, אני די מחבב כמה טיפות מיץ לימון סחוט טרי. זה מוסיף בדיוק את הכמות הנכונה של טעם חמצמץ לכל מה שאני מבשל במטבח!</s>[INST] האם יש לך מתכונים למיונז? [/INST]" ``` This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method: ## Using with LM Studio When using with LM Studio, just search the hub for "dictalm2.0-instruct-GGUF", and the model in both precisions should appear. Make sure to set the chat template correctly - initialize from the `mistral-instruct` template, and add a `\n` in the suffix box, like here: <img src="https://i.ibb.co/D9MVgK2/lmstudio-dlm-template.png" width="400px" /> In addition, the model doesn't support any system prompt, so make sure to remove the system prompt as well. ## Model Architecture DictaLM-2.0-Instruct follows the [Zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) recipe for fine-tuning an instruct model, with an extended instruct dataset for Hebrew. ## Limitations The DictaLM 2.0 Instruct model is a demonstration that the base model can be fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## Citation If you use this model, please cite: ```bibtex [Will be added soon] ```
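As a sketch of the `apply_chat_template()` usage mentioned above: the GGUF files themselves do not ship a tokenizer, so this assumes the chat template published with the unquantized instruct repository, `dicta-il/dictalm2.0-instruct`; the messages reuse the example conversation from this card.

```python
# Sketch of the apply_chat_template() usage mentioned above. The tokenizer (and its
# chat template) is assumed to come from the unquantized instruct repository,
# dicta-il/dictalm2.0-instruct; the messages are illustrative only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct")

messages = [
    {"role": "user", "content": "איזה רוטב אהוב עליך?"},
    {"role": "assistant", "content": "טוב, אני די מחבב כמה טיפות מיץ לימון סחוט טרי."},
    {"role": "user", "content": "האם יש לך מתכונים למיונז?"},
]

# Produces the [INST] ... [/INST] string shown above, ready to pass to llama.cpp or LM Studio.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```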
vedantM/BigBird-PII
vedantM
2024-05-01T13:20:04Z
371
0
transformers
[ "transformers", "safetensors", "big_bird", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-01T13:07:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LiteLLMs/Hermes-2-Pro-Llama-3-8B-GGUF
LiteLLMs
2024-05-06T10:42:05Z
371
0
null
[ "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "GGUF", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "region:us" ]
null
2024-05-06T10:18:19Z
--- language: - en tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - GGUF base_model: NousResearch/Meta-Llama-3-8B datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] quantized_by: andrijdavid --- # Hermes-2-Pro-Llama-3-8B-GGUF - Original model: [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) <!-- description start --> ## Description This repo contains GGUF format model files for [Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. 
<!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Hermes-2-Pro-Llama-3-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Hermes-2-Pro-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Hermes-2-Pro-Llama-3-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Hermes-2-Pro-Llama-3-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. 
echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Hermes-2-Pro-Llama-3-8B # Hermes 2 Pro - Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Example Outputs ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ll2j2wkQffCsiSwUjfRUq.png) ### Write the plot for a story where anime became real life: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/h_7aXGXdm2p2ONYuDF4Ii.png) ### Coding Assistance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bBd0hyAb8w5rKUiN2w1I6.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. 
Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. 
It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-|:|--||--:|--:| |agieval_aqua_rat | 0|acc |0.2520|± |0.0273| | | |acc_norm|0.2559|± |0.0274| |agieval_logiqa_en | 0|acc |0.3548|± |0.0188| | | |acc_norm|0.3625|± |0.0189| |agieval_lsat_ar | 0|acc |0.1826|± |0.0255| | | |acc_norm|0.1913|± |0.0260| |agieval_lsat_lr | 0|acc |0.5510|± |0.0220| | | |acc_norm|0.5255|± |0.0221| |agieval_lsat_rc | 0|acc |0.6431|± |0.0293| | | |acc_norm|0.6097|± |0.0298| |agieval_sat_en | 0|acc |0.7330|± |0.0309| | | |acc_norm|0.7039|± |0.0319| |agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343| | | |acc_norm|0.3689|± |0.0337| |agieval_sat_math | 0|acc |0.3909|± |0.0330| | | |acc_norm|0.3773|± |0.0328| ``` Average: 42.44 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| ||:|--:|--:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222| |bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102| |bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126| |bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 
0|multiple_choice_grade|0.1531|± |0.0086| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288| ``` Average: 43.55 ## TruthfulQA: ``` | Task |Version|Metric|Value| |Stderr| |-|:|||-----:| |truthfulqa_mc| 1|mc1 |0.410|± |0.0172| | | |mc2 |0.578|± |0.0157| ``` # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Pro-Llama-3-8B, url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B]https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)}, title={Hermes-2-Pro-Llama-3-8B}, author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ``` <!-- original-model-card end -->
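Since the function-calling walkthrough above notes that your inference code must parse the generated `<tool_call>` block itself, here is a minimal, library-free parsing sketch. It only assumes the tag-and-JSON format shown in the example turns above and is not the official Hermes-Function-Calling parser.

```python
# Minimal sketch for parsing <tool_call> blocks out of a Hermes 2 Pro generation,
# following the tag-and-JSON format shown in the function-calling example above.
# This is not the official NousResearch/Hermes-Function-Calling parser, just an
# illustration of the idea.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(generation: str):
    """Return a list of {'name': ..., 'arguments': ...} dicts found in the text."""
    calls = []
    for match in TOOL_CALL_RE.finditer(generation):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            # Malformed JSON inside the tags: skip it rather than crash.
            continue
    return calls

sample = (
    '<tool_call>\n'
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    '</tool_call><|im_end|>'
)
for call in extract_tool_calls(sample):
    print(call["name"], call["arguments"])
```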
swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA_GGUF
swap-uniba
2024-06-24T14:01:53Z
371
4
transformers
[ "transformers", "gguf", "facebook", "meta", "pythorch", "llama", "llama-3", "llamantino", "text-generation", "en", "it", "dataset:gsarti/clean_mc4_it", "dataset:Chat-Error/wizard_alpaca_dolly_orca", "dataset:mlabonne/orpo-dpo-mix-40k", "arxiv:2405.07101", "arxiv:2312.09993", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T15:45:39Z
--- datasets: - gsarti/clean_mc4_it - Chat-Error/wizard_alpaca_dolly_orca - mlabonne/orpo-dpo-mix-40k base_model: meta-llama/Meta-Llama-3-8B-Instruct model_creator: Marco Polignano - SWAP Research Group language: - en - it metrics: - accuracy pipeline_tag: text-generation tags: - facebook - meta - pythorch - llama - llama-3 - llamantino library_name: transformers license: llama3 --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5df8bb21da6d0311fd3d540f/xL6Ax1I34qfC4VPKEFA6Z.png" alt="llamantino3_anita" border="0" width="800px"> <hr> <!--<img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" width="200"/>--> <h3><i>"Built with <b>Meta Llama 3</b>".</i></h3> <p style="text-align:justify;"><b>LLaMAntino-3-ANITA-8B-Inst-DPO-ITA</b> is a model of the <a href="https://huggingface.co/swap-uniba"><b>LLaMAntino</b></a> - <i>Large Language Models family</i>. The model is an instruction-tuned version of <a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct"><b>Meta-Llama-3-8b-instruct</b></a> (a fine-tuned <b>LLaMA 3 model</b>). This model version aims to be a <b>Multilingual Model</b> 🏁 (EN 🇺🇸 + ITA 🇮🇹), suitable for further fine-tuning on specific tasks in Italian.</p> The 🌟**ANITA project**🌟 *(**A**dvanced **N**atural-based interaction for the **ITA**lian language)* aims to provide Italian NLP researchers with an improved model for Italian-language 🇮🇹 use cases. <hr> ## Model Details <img src="https://static.vecteezy.com/system/resources/previews/016/833/880/large_2x/github-logo-git-hub-icon-with-text-on-white-background-free-vector.jpg" width="200"> [https://github.com/marcopoli/LLaMAntino-3-ANITA](https://github.com/marcopoli/LLaMAntino-3-ANITA)<br> <br> - [**Full Model: LLaMAntino-3-ANITA-8B-Inst-DPO-ITA**](https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA) - LLaMA.cpp - **F16 model** - LLaMA.cpp - **Q8_0 model** - LLaMA.cpp - **Q4_K_M model** - LLaMA.cpp - **Q2_K model** <hr> ## Specifications - **Model developers**: <br><a href="https://marcopoli.github.io/">Ph.D. Marco Polignano</a> - University of Bari Aldo Moro, Italy <br> <a href="https://huggingface.co/swap-uniba">SWAP Research Group</a> <br> - **Variations**: The released model has undergone **supervised fine-tuning (SFT)** using **QLoRA** (4-bit) on instruction-based datasets. A **DPO** approach over the *mlabonne/orpo-dpo-mix-40k* dataset is used to align it with human preferences for helpfulness and safety. - **Input**: Models input text only. - **Language**: Multilingual 🏁 + Italian 🇮🇹 - **Output**: Models generate text and code only. - **Model Architecture**: *Llama 3 architecture*. - **Context length**: 8K (8,192 tokens). - **Library Used**: [LLaMA.cpp](https://github.com/ggerganov/llama.cpp) <hr> ### Prompt Template ``` <|start_header_id|>system<|end_header_id|> { SYS Prompt }<|eot_id|><|start_header_id|>user<|end_header_id|> { USER Prompt }<|eot_id|><|start_header_id|>assistant<|end_header_id|> { ASSIST Prompt }<|eot_id|> ``` <hr> ## LLaMA.cpp <img src="https://user-images.githubusercontent.com/1991296/230134379-7181e485-c521-4d23-a0d6-f7b3b61ba524.png" width="200px" align="center" /> [LLaMA.cpp](https://github.com/ggerganov/llama.cpp) is a great tool that lets you easily quantize the model into **GGUF format**.
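For reference, here is a minimal sketch of running one of the GGUF files listed above with `llama-cpp-python`, using the prompt template from this card. The file name is a placeholder (check the repository's Files tab for the actual `*.gguf` names), and the blank lines after each header follow the standard Llama 3 spacing.

```python
# Sketch: running one of this repository's GGUF files with llama-cpp-python,
# formatted with the prompt template shown above. The model_path below is a
# placeholder; replace it with the actual *.gguf file name from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./llamantino-3-anita-8b-inst-dpo-ita.Q4_K_M.gguf",  # placeholder name
    n_ctx=8192,        # context length listed in the Specifications above
    n_gpu_layers=-1,   # offload all layers; set to 0 for CPU-only
)

sys_prompt = "Sei un assistente utile che risponde in italiano."
user_prompt = "Spiega brevemente cos'è la quantizzazione di un modello."

prompt = (
    "<|start_header_id|>system<|end_header_id|>\n\n"
    f"{sys_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```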
## Citation instructions ```bibtex @misc{polignano2024advanced, title={Advanced Natural-based interaction for the ITAlian language: LLaMAntino-3-ANITA}, author={Marco Polignano and Pierpaolo Basile and Giovanni Semeraro}, year={2024}, eprint={2405.07101}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{basile2023llamantino, title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro}, year={2023}, eprint={2312.09993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
ukung/Nusantara-1.8b-Indo-Chat-GGUF
ukung
2024-05-20T01:49:49Z
371
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-19T17:52:03Z
--- license: apache-2.0 ---
MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF
MaziyarPanahi
2024-05-21T23:19:55Z
371
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:MaziyarPanahi/MeliodasPercival_01_Experiment26T3q", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B" ]
text-generation
2024-05-21T22:49:44Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:MaziyarPanahi/MeliodasPercival_01_Experiment26T3q - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF base_model: automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B) ## Description [MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF) contains GGUF format model files for [automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B](https://huggingface.co/automerger/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. 
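As a minimal Python sketch of the llama-cpp-python route listed above: the exact quant filename below is an assumption, so check the repo's file list for the variant you actually want to download.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "MaziyarPanahi/Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B-GGUF"
# Assumed filename; pick one that exists in the repository's file listing.
gguf_path = hf_hub_download(
    repo_id=repo_id,
    filename="Ognoexperiment27multi_verse_modelMeliodaspercival_01_experiment26t3q-7B.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers when a GPU-enabled build is installed; use 0 for CPU-only.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
print(llm("The meaning of life is", max_tokens=64)["choices"][0]["text"])
```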
## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf
RichardErkhov
2024-05-25T19:48:25Z
371
0
null
[ "gguf", "region:us" ]
null
2024-05-25T16:50:42Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b - GGUF - Model creator: https://huggingface.co/S4sch/ - Original model: https://huggingface.co/S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf) | Q2_K | 3.95GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_XS.gguf) | IQ3_XS | 4.39GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_S.gguf) | IQ3_S | 4.63GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf) | Q3_K_S | 4.61GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_M.gguf) | IQ3_M | 4.78GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K.gguf) | Q3_K | 5.13GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf) | Q3_K_M | 5.13GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf) | Q3_K_L | 5.58GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_XS.gguf) | IQ4_XS | 5.75GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf) | Q4_0 | 6.0GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_NL.gguf) | IQ4_NL | 6.06GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf) | Q4_K_S | 6.04GB | | 
[Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K.gguf) | Q4_K | 6.38GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf) | Q4_K_M | 6.38GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_1.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_1.gguf) | Q4_1 | 6.65GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf) | Q5_0 | 7.31GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf) | Q5_K_S | 7.31GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K.gguf) | Q5_K | 7.5GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf) | Q5_K_M | 7.5GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_1.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_1.gguf) | Q5_1 | 7.96GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf) | Q6_K | 8.7GB | | [Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf](https://huggingface.co/RichardErkhov/S4sch_-_Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-gguf/blob/main/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf) | Q8_0 | 11.27GB | Original model description: --- license: apache-2.0 --- Frankenmerge 11b between teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1 GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF Merge with the following conditions - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [0, 8] - model: Intel/neural-chat-7b-v3-1 layer_range: [4, 12] - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [9, 16] - model: Intel/neural-chat-7b-v3-1 layer_range: [13, 20] - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [17, 24] - model: Intel/neural-chat-7b-v3-1 layer_range: [21, 28] - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [25, 32] merge_method: passthrough Benchmarks are coming soon...
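To make the passthrough slice configuration above easier to follow, here is a small illustrative sketch, assuming mergekit-style half-open `[start, end)` layer ranges, that lists which source model each stacked layer of the frankenmerge would come from.

```python
# Illustration only: reconstruct the layer stacking implied by the merge conditions above,
# assuming layer_range uses half-open [start, end) indexing.
slices = [
    ("teknium/OpenHermes-2.5-Mistral-7B", (0, 8)),
    ("Intel/neural-chat-7b-v3-1", (4, 12)),
    ("teknium/OpenHermes-2.5-Mistral-7B", (9, 16)),
    ("Intel/neural-chat-7b-v3-1", (13, 20)),
    ("teknium/OpenHermes-2.5-Mistral-7B", (17, 24)),
    ("Intel/neural-chat-7b-v3-1", (21, 28)),
    ("teknium/OpenHermes-2.5-Mistral-7B", (25, 32)),
]

# Overlapping ranges mean some source layer indices appear twice (once from each model).
stacked = [(model, layer) for model, (start, end) in slices for layer in range(start, end)]
print(f"merged depth under this assumption: {len(stacked)} layers")
for i, (model, layer) in enumerate(stacked[:10]):
    print(f"merged layer {i:2d} <- {model} layer {layer}")
```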
RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf
RichardErkhov
2024-05-30T19:28:54Z
371
0
null
[ "gguf", "arxiv:2305.18290", "region:us" ]
null
2024-05-30T16:17:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-7b-alpha-sharded - GGUF - Model creator: https://huggingface.co/anakin87/ - Original model: https://huggingface.co/anakin87/zephyr-7b-alpha-sharded/ | Name | Quant method | Size | | ---- | ---- | ---- | | [zephyr-7b-alpha-sharded.Q2_K.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q2_K.gguf) | Q2_K | 2.53GB | | [zephyr-7b-alpha-sharded.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [zephyr-7b-alpha-sharded.IQ3_S.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.IQ3_S.gguf) | IQ3_S | 2.96GB | | [zephyr-7b-alpha-sharded.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [zephyr-7b-alpha-sharded.IQ3_M.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.IQ3_M.gguf) | IQ3_M | 3.06GB | | [zephyr-7b-alpha-sharded.Q3_K.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q3_K.gguf) | Q3_K | 3.28GB | | [zephyr-7b-alpha-sharded.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [zephyr-7b-alpha-sharded.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [zephyr-7b-alpha-sharded.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [zephyr-7b-alpha-sharded.Q4_0.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q4_0.gguf) | Q4_0 | 3.83GB | | [zephyr-7b-alpha-sharded.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [zephyr-7b-alpha-sharded.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [zephyr-7b-alpha-sharded.Q4_K.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q4_K.gguf) | Q4_K | 4.07GB | | [zephyr-7b-alpha-sharded.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [zephyr-7b-alpha-sharded.Q4_1.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q4_1.gguf) | Q4_1 | 4.24GB | | [zephyr-7b-alpha-sharded.Q5_0.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q5_0.gguf) | Q5_0 | 4.65GB | | [zephyr-7b-alpha-sharded.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | 
[zephyr-7b-alpha-sharded.Q5_K.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q5_K.gguf) | Q5_K | 4.78GB | | [zephyr-7b-alpha-sharded.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [zephyr-7b-alpha-sharded.Q5_1.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q5_1.gguf) | Q5_1 | 5.07GB | | [zephyr-7b-alpha-sharded.Q6_K.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q6_K.gguf) | Q6_K | 5.53GB | | [zephyr-7b-alpha-sharded.Q8_0.gguf](https://huggingface.co/RichardErkhov/anakin87_-_zephyr-7b-alpha-sharded-gguf/blob/main/zephyr-7b-alpha-sharded.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: mit language: - en --- <img src="https://huggingface.co/anakin87/zephyr-7b-alpha-sharded/resolve/main/zephyr_sharded.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Zephyr 7B Alpha - Sharded **UPDATE** The original model ([Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)) was recently sharded. You can use the original model. --- 🧩🧩🧩 Just a sharded version of [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha). 💻 Using this version, you can smoothly load the model on Colab and play with it! From the [original model card](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): > Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes. ## Usage This version of the model is meant primarily to run smoothly on **Colab**. I suggest loading the model with **8-bit quantization**, so that you have some free GPU to perform inference. *However, it is perfectly fine to load the model in half-precision or with stronger quantization (4-bit).* ```python ! 
pip install transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model = AutoModelForCausalLM.from_pretrained("anakin87/zephyr-7b-alpha-sharded", device_map="auto", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("anakin87/zephyr-7b-alpha-sharded") pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a rapper", }, {"role": "user", "content": "What is GPU?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) #<|system|> #You are a friendly chatbot who always responds in the style of a rapper</s> #<|user|> #What is GPU?</s> #<|assistant|> #Yo, what's up fam, you askin' 'bout the GPU? #Well, let me break it down for you, it's a pretty sick dud #It stands for Graphics Processing Unit, a tech that's quite rude #This bad boy's the one that's in charge of all the graphics you see #On your computer screen or your high-tech TV #It's a powerful tool that can handle intense 3D games and movies #And it's built to handle multiple tasks with ease #So if you're looking to take your gaming or video editing to the next level #Just make sure you've got a top-notch GPU to make it happen. #Peace out! ```
knowledgator/gliclass-small-v1.0
knowledgator
2024-06-03T21:00:11Z
371
3
transformers
[ "transformers", "safetensors", "GLiClass", "text classification", "zero-shot", "small language models", "RAG", "sentiment analysis", "zero-shot-classification", "en", "dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
zero-shot-classification
2024-06-02T13:35:24Z
--- license: apache-2.0 datasets: - MoritzLaurer/synthetic_zeroshot_mixtral_v0.1 language: - en metrics: - f1 pipeline_tag: zero-shot-classification tags: - text classification - zero-shot - small language models - RAG - sentiment analysis --- # ⭐ GLiClass: Generalist and Lightweight Model for Sequence Classification This is an efficient zero-shot classifier inspired by the [GLiNER](https://github.com/urchade/GLiNER/tree/main) work. It demonstrates the same performance as a cross-encoder while being more compute-efficient, because classification is done in a single forward pass. It can be used for `topic classification`, `sentiment analysis` and as a reranker in `RAG` pipelines. The model was trained on synthetic data and can be used in commercial applications. ### How to use: First of all, you need to install the GLiClass library: ```bash pip install gliclass ``` Then you need to initialize a model and a pipeline: ```python from gliclass import GLiClassModel, ZeroShotClassificationPipeline from transformers import AutoTokenizer model = GLiClassModel.from_pretrained("knowledgator/gliclass-small-v1.0") tokenizer = AutoTokenizer.from_pretrained("knowledgator/gliclass-small-v1.0") pipeline = ZeroShotClassificationPipeline(model, tokenizer, classification_type='multi-label', device='cuda:0') text = "One day I will see the world!" labels = ["travel", "dreams", "sport", "science", "politics"] results = pipeline(text, labels, threshold=0.5)[0] # because we passed a single text for result in results: print(result["label"], "=>", result["score"]) ``` ### Benchmarks: Below, you can see the F1 scores on several text classification datasets. None of the tested models were fine-tuned on those datasets; all were evaluated in a zero-shot setting. | Model | IMDB | AG_NEWS | Emotions | |-----------------------------|------|---------|----------| | [gliclass-large-v1.0 (438 M)](https://huggingface.co/knowledgator/gliclass-large-v1.0) | 0.9404 | 0.7516 | 0.4874 | | [gliclass-base-v1.0 (186 M)](https://huggingface.co/knowledgator/gliclass-base-v1.0) | 0.8650 | 0.6837 | 0.4749 | | [gliclass-small-v1.0 (144 M)](https://huggingface.co/knowledgator/gliclass-small-v1.0) | 0.8650 | 0.6805 | 0.4664 | | [Bart-large-mnli (407 M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 | | [Deberta-base-v3 (184 M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 | | [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 | | SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
antoandgar/SVD_Franken_merge1
antoandgar
2024-06-03T10:52:17Z
371
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:allenai/tulu-2-dpo-7b", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-03T09:45:12Z
--- base_model: - meta-llama/Llama-2-7b-chat-hf - allenai/tulu-2-dpo-7b library_name: transformers tags: - mergekit - merge license: afl-3.0 --- # svd_franken_merge1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the svd_franken_merge merge method using [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) as a base. ### Models Merged The following models were included in the merge: * [allenai/tulu-2-dpo-7b](https://huggingface.co/allenai/tulu-2-dpo-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: allenai/tulu-2-dpo-7b parameters: weight: 1.0 # - model: EleutherAI/llemma_7b # parameters: # weight: 1.0 merge_method: svd_franken_merge base_model: meta-llama/Llama-2-7b-chat-hf parameters: probabilistic: True # use probabilistic SVD algorithm (maybe fastest but little inaccurate) instead of the non probabilistic SVD algorithm (slowest but accurate) sv_reduction: 1.0 # strict value: "1.0 / number of task vector" if number of task vector is >=2 (percentage of singular components to keep for each TV) sv_scaling: 1.0 # float (hyperparameter): suggested nearby "number of task vector / 2.0" if number of task vector is >=2 else 1.0 (reduction to the stretching factor of the singular values) num_iterations: 4 # number of iterations for the probabilistic SVD algorithm (2 is a good start and fastest option but could be inaccurate, 32 is the most accurate but slowest option. I don't recommend going above 32 use SVD probabilistic at False instead) dtype: float16 ```
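Since `svd_franken_merge` is a custom merge method, the following is only a rough conceptual sketch of what an SVD-based task-vector merge with `sv_reduction` and `sv_scaling` knobs could look like for a single weight matrix; it is my own illustration under those assumptions, not the repository's actual implementation.

```python
# Conceptual sketch only -- NOT the actual svd_franken_merge implementation.
import torch

def svd_task_vector_merge(base_w: torch.Tensor, tuned_w: torch.Tensor,
                          sv_reduction: float = 1.0, sv_scaling: float = 1.0) -> torch.Tensor:
    task_vector = tuned_w - base_w                       # what fine-tuning added on top of the base
    u, s, vh = torch.linalg.svd(task_vector, full_matrices=False)
    k = max(1, int(sv_reduction * s.numel()))            # keep a fraction of the singular components
    delta = (u[:, :k] * (s[:k] * sv_scaling)) @ vh[:k]   # rebuild a (scaled) low-rank update
    return base_w + delta

base = torch.randn(64, 64)
tuned = base + 0.01 * torch.randn(64, 64)
merged = svd_task_vector_merge(base, tuned, sv_reduction=1.0, sv_scaling=1.0)
print(torch.allclose(merged, tuned, atol=1e-4))  # keeping everything unscaled recovers the tuned weights
```

With `sv_reduction=1.0` and `sv_scaling=1.0` this reduces to adding the full task vector back onto the base, which the final check confirms; smaller values would keep only part of the update or shrink its singular values.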
RochatAI/llama3-8B-cn-rochat-v1
RochatAI
2024-06-26T06:47:39Z
371
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama3", "llama-factory", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-06T07:43:23Z
--- license: apache-2.0 tags: - llama3 - llama-factory --- **RochatAI/llama3-8B-cn-rochat-v1** is an instruction-tuned language model based on hfl/llama-3-chinese-8b-instruct-v3, focused on tuning for Chinese role-playing. We perform supervised fine-tuning with our in-house high-quality instruction-following chat datasets. Afterwards, we run two rounds of DPO training for some special cases. <img src="https://cdn-uploads.huggingface.co/production/uploads/64d8f3a9c3b41747a2a49e93/6wWKDRTc3kjGqc2lXNjqa.jpeg" alt="drawing" width="200"/> ## Contact Us - Twitter/X: [Rochat_AI](https://x.com/Rochat_AI) - Discord: [Rochat](https://discord.com/invite/BEfkByEJtD) ## Usage ### Recommended Samplers ``` temperature=0.98 top_p=0.37 top_k=100.0 repetition_penalty=1.18 ``` ### Prompt Template Llama-3-Instruct: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ### Basic Roleplay System Prompt ``` Enter RP mode. You shall reply to {{user}} while staying in character. Write at least 1 paragraph, up to 3, no more than 300 words. Your responses must be detailed, creative, immersive, and drive the scenario forward. You will follow {{character}}'s persona and personality. ```
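A minimal generation sketch that applies the recommended sampler settings above with Transformers, assuming the checkpoint ships a Llama-3-style chat template; the example messages are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RochatAI/llama3-8B-cn-rochat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Enter RP mode. You shall reply to {{user}} while staying in character."},
    {"role": "user", "content": "你好,介绍一下你自己吧。"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.98,       # recommended samplers from the card
    top_p=0.37,
    top_k=100,
    repetition_penalty=1.18,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```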
mradermacher/miquplus-midnight-70b-GGUF
mradermacher
2024-06-10T16:59:13Z
371
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jukofyork/miquplus-midnight-70b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-06-09T03:23:44Z
--- base_model: jukofyork/miquplus-midnight-70b language: - en library_name: transformers license: other quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jukofyork/miquplus-midnight-70b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/miquplus-midnight-70b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquplus-midnight-70b-GGUF/resolve/main/miquplus-midnight-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
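For the two-part Q6_K and Q8_0 files listed above, here is a minimal sketch of the concatenation step, assuming the parts are plain byte-level splits; the output filename is inferred from the part names.

```python
from pathlib import Path

# Join the downloaded Q6_K parts back into a single GGUF file before loading it.
parts = sorted(Path(".").glob("miquplus-midnight-70b.Q6_K.gguf.part*of2"))
assert len(parts) == 2, "download both parts first"
with open("miquplus-midnight-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```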
v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF
v8karlo
2024-06-10T00:59:21Z
371
0
null
[ "gguf", "generated_from_trainer", "axolotl", "llama-cpp", "gguf-my-repo", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.2-qwen2-7b", "license:apache-2.0", "region:us" ]
null
2024-06-10T00:47:27Z
--- license: apache-2.0 tags: - generated_from_trainer - axolotl - llama-cpp - gguf-my-repo base_model: cognitivecomputations/dolphin-2.9.2-qwen2-7b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9.2-qwen2-7b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF --hf-file dolphin-2.9.2-qwen2-7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF --hf-file dolphin-2.9.2-qwen2-7b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF --hf-file dolphin-2.9.2-qwen2-7b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo v8karlo/dolphin-2.9.2-qwen2-7b-Q5_K_M-GGUF --hf-file dolphin-2.9.2-qwen2-7b-q5_k_m.gguf -c 2048 ```
AlekseyElygin/Qwen2-0.5B-GGUF
AlekseyElygin
2024-06-25T12:09:21Z
371
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:unsloth/qwen2-0.5b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-25T11:53:04Z
--- base_model: unsloth/qwen2-0.5b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf --- # Uploaded model - **Developed by:** AlekseyElygin - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-0.5b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Helsinki-NLP/opus-mt-ja-fr
Helsinki-NLP
2023-08-16T11:59:11Z
370
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ja-fr * source languages: ja * target languages: fr * OPUS readme: [ja-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.fr | 33.6 | 0.534 |
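A minimal usage sketch with the Transformers translation pipeline; the example sentence is illustrative.

```python
from transformers import pipeline

# Japanese -> French translation with the Marian checkpoint above.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-fr")
print(translator("猫はとても可愛いです。", max_length=64)[0]["translation_text"])
```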
VictorSanh/roberta-base-finetuned-yelp-polarity
VictorSanh
2023-03-17T19:24:32Z
370
2
transformers
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "en", "dataset:yelp_polarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en datasets: - yelp_polarity --- # RoBERTa-base-finetuned-yelp-polarity This is a [RoBERTa-base](https://huggingface.co/roberta-base) checkpoint fine-tuned on binary sentiment classification from [Yelp polarity](https://huggingface.co/nlp/viewer/?dataset=yelp_polarity). It reaches **98.08%** accuracy on the test set. ## Hyper-parameters We used the following hyper-parameters to train the model on one GPU: ```python num_train_epochs = 2.0 learning_rate = 1e-05 weight_decay = 0.0 adam_epsilon = 1e-08 max_grad_norm = 1.0 per_device_train_batch_size = 32 gradient_accumulation_steps = 1 warmup_steps = 3500 seed = 42 ```
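A minimal inference sketch for this checkpoint; the label names in the output depend on the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="VictorSanh/roberta-base-finetuned-yelp-polarity")
print(classifier("The food was amazing and the staff were super friendly!"))
# e.g. [{'label': ..., 'score': ...}] -- the exact label strings come from the model's id2label mapping
```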
persiannlp/mt5-base-parsinlu-sentiment-analysis
persiannlp
2021-09-23T16:20:02Z
370
4
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "sentiment", "sentiment-analysis", "persian", "farsi", "fa", "multilingual", "dataset:parsinlu", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - sentiment - sentiment-analysis - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Sentiment Analysis (آنالیز احساسات) This is a mT5 model for sentiment analysis. Here is an example of how you can run this model: ```python import torch from transformers import MT5ForConditionalGeneration, MT5Tokenizer import numpy as np model_name_or_path = "persiannlp/mt5-base-parsinlu-sentiment-analysis" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def model_predict(text_a, text_b): features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt') output = model(**features) logits = output[0] probs = torch.nn.functional.softmax(logits, dim=1).tolist() idx = np.argmax(np.array(probs)) print(labels[idx], probs) def run_model(context, query, **generator_args): input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model( "یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .", "نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟" ) run_model( "فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...", "نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟" ) run_model( "اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن.......⁦ ⁦(ノಠ益ಠ)ノ⁩ ", " نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخره‌باز چیست؟" ) run_model( " گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ", " نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟" ) run_model( "در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب", " شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟" ) run_model( "من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ", "نظر شما به صورت کلی در مورد این شیر چیست؟" ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
surajp/gpt2-hindi
surajp
2023-06-21T13:56:33Z
370
1
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
Entry not found
uer/t5-small-chinese-cluecorpussmall
uer
2023-10-17T15:04:03Z
370
16
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "zh", "dataset:CLUECorpusSmall", "arxiv:1909.05658", "arxiv:2212.06385", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: zh datasets: CLUECorpusSmall widget: - text: "作为电子extra0的平台,京东绝对是领先者。如今的刘强extra1已经是身价过extra2的老板。" --- # Chinese T5 ## Model description This is the set of Chinese T5 models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). Besides, the models could also be pre-trained by [TencentPretrain](https://github.com/Tencent/TencentPretrain) introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with parameters above one billion, and extends it to a multimodal pre-training framework. The Text-to-Text Transfer Transformer (T5) leverages a unified text-to-text format and attains state-of-the-art results on a wide variety of English-language NLP tasks. Following their work, we released a series of Chinese T5 models. You can download the set of Chinese T5 models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below: | | Link | | -------- | :-----------------------: | | **T5-Small** | [**L=6/H=512 (Small)**][small] | | **T5-Base** | [**L=12/H=768 (Base)**][base] | In T5, spans of the input sequence are masked by so-called sentinel token. Each sentinel token represents a unique mask token for the input sequence and should start with `<extra_id_0>`, `<extra_id_1>`, … up to `<extra_id_99>`. However, `<extra_id_xxx>` is separated into multiple parts in Huggingface's Hosted inference API. Therefore, we replace `<extra_id_xxx>` with `extraxxx` in vocabulary and BertTokenizer regards `extraxxx` as one sentinel token. ## How to use You can use this model directly with a pipeline for text2text generation (take the case of T5-Small): ```python >>> from transformers import BertTokenizer, T5ForConditionalGeneration, Text2TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("uer/t5-small-chinese-cluecorpussmall") >>> model = T5ForConditionalGeneration.from_pretrained("uer/t5-small-chinese-cluecorpussmall") >>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer) >>> text2text_generator("中国的首都是extra0京", max_length=50, do_sample=False) [{'generated_text': 'extra0 北 extra1 extra2 extra3 extra4 extra5'}] ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. 
Taking the case of T5-Small Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_with_sentinel_vocab.txt \ --dataset_path cluecorpussmall_t5_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor t5 ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_t5_seq128_dataset.pt \ --vocab_path models/google_zh_with_sentinel_vocab.txt \ --config_path models/t5/small_config.json \ --output_model_path models/cluecorpussmall_t5_small_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-3 --batch_size 64 \ --span_masking --span_geo_prob 0.3 --span_max_length 5 ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_with_sentinel_vocab.txt \ --dataset_path cluecorpussmall_t5_small_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor t5 ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_t5_seq512_dataset.pt \ --vocab_path models/google_zh_with_sentinel_vocab.txt \ --pretrained_model_path models/cluecorpussmall_t5_small_seq128_model.bin-1000000 \ --config_path models/t5/small_config.json \ --output_model_path models/cluecorpussmall_t5_small_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-4 --batch_size 16 \ --span_masking --span_geo_prob 0.3 --span_max_length 5 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_t5_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_t5_small_seq512_model.bin-250000 \ --output_model_path pytorch_model.bin \ --layers_num 6 \ --type t5 ``` ### BibTeX entry and citation info ``` @article{2020t5, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, journal = {Journal of Machine Learning Research}, pages = {1-67}, year = {2020} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } @article{zhao2023tencentpretrain, title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities}, author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others}, journal={ACL 2023}, pages={217}, year={2023} ``` [small]:https://huggingface.co/uer/t5-small-chinese-cluecorpussmall [base]:https://huggingface.co/uer/t5-base-chinese-cluecorpussmall
patent/PatentGPT-J-1.6B
patent
2023-04-04T01:35:19Z
370
3
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-04-04T01:24:19Z
Entry not found
elftsdmr/malware-url-detect
elftsdmr
2023-08-26T11:09:23Z
370
5
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T11:37:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: MALWARE-URL-DETECT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MALWARE-URL-DETECT This model detects malicious links, such as phishing URLs, that target users in Turkey. It classifies URL addresses as malware or benign. To classify a URL via the API text field, enter its domain name, like this: "huggingface.com". To test the model, visit [USOM](https://www.usom.gov.tr/adres), where malicious links active in Turkey are shared and kept up to date. This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2122 - Accuracy: 0.945 - Precision: 0.9611 - Recall: 0.9287 - F1: 0.9446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 63 | 0.2153 | 0.921 | 0.9953 | 0.8475 | 0.9155 | | No log | 2.0 | 126 | 0.1927 | 0.946 | 0.9669 | 0.9248 | 0.9453 | | No log | 3.0 | 189 | 0.2122 | 0.945 | 0.9611 | 0.9287 | 0.9446 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
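A minimal inference sketch for this checkpoint, using the domain-name input format described above; the returned label string depends on the checkpoint's label mapping.

```python
from transformers import pipeline

detector = pipeline("text-classification", model="elftsdmr/malware-url-detect")
print(detector("huggingface.com"))
# the label returned (e.g. a malware vs. benign class) comes from the model's id2label config
```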
Green-Sky/FaRL-Base-Patch16-LAIONFace20M-ep64
Green-Sky
2023-09-06T16:25:52Z
370
0
transformers
[ "transformers", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2023-06-19T21:33:48Z
--- license: mit --- [FaRL](https://github.com/FacePerceiver/FaRL) checkpoint converted for the Hugging Face Transformers library (CLIP).
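A minimal loading sketch, assuming the converted checkpoint works with the standard CLIP classes and bundles a preprocessor config; if it does not, a stock CLIP processor would be needed instead, and the image path and text prompts below are placeholders.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "Green-Sky/FaRL-Base-Patch16-LAIONFace20M-ep64"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)  # assumes a preprocessor config is present

image = Image.open("face.jpg")  # placeholder image path
inputs = processor(
    text=["a photo of a smiling face", "a photo of a neutral face"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=-1))  # zero-shot scores over the two text prompts
```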
TheBloke/firefly-llama2-13B-chat-GGUF
TheBloke
2023-11-17T13:42:37Z
370
3
transformers
[ "transformers", "gguf", "llama", "base_model:YeungNLP/firefly-llama2-13b-chat", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-11-17T13:35:21Z
--- base_model: YeungNLP/firefly-llama2-13b-chat inference: false license: llama2 model_creator: YeungNLP model_name: Firefly Llama2 13B Chat model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Firefly Llama2 13B Chat - GGUF - Model creator: [YeungNLP](https://huggingface.co/YeungNLP) - Original model: [Firefly Llama2 13B Chat](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat) <!-- description start --> ## Description This repo contains GGUF format model files for [YeungNLP's Firefly Llama2 13B Chat](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF) * [YeungNLP's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [firefly-llama2-13b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q2_K.gguf) | Q2_K | 2 | 5.57 GB| 8.07 GB | smallest, significant quality loss - not recommended for most purposes | | [firefly-llama2-13b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 5.81 GB| 8.31 GB | very small, high quality loss | | [firefly-llama2-13b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 6.49 GB| 8.99 GB | very small, high quality loss | | [firefly-llama2-13b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 7.08 GB| 9.58 GB | small, substantial quality loss | | [firefly-llama2-13b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q4_0.gguf) | Q4_0 | 4 | 7.53 GB| 10.03 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [firefly-llama2-13b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 7.58 GB| 10.08 GB | small, greater quality loss | | [firefly-llama2-13b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 8.03 GB| 10.53 GB | medium, balanced quality - recommended | | [firefly-llama2-13b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q5_0.gguf) | Q5_0 | 5 | 9.15 GB| 11.65 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [firefly-llama2-13b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 9.15 GB| 11.65 GB | large, low quality loss - recommended | | [firefly-llama2-13b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 9.41 GB| 11.91 GB | large, very low quality loss - recommended | | [firefly-llama2-13b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q6_K.gguf) | Q6_K | 6 | 10.88 GB| 13.38 GB | very large, extremely low quality loss | | [firefly-llama2-13b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/firefly-llama2-13B-chat-GGUF/blob/main/firefly-llama2-13b-chat.Q8_0.gguf) | Q8_0 | 8 | 14.09 GB| 16.59 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/firefly-llama2-13B-chat-GGUF and below it, a specific filename to download, such as: firefly-llama2-13b-chat.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/firefly-llama2-13B-chat-GGUF firefly-llama2-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/firefly-llama2-13B-chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/firefly-llama2-13B-chat-GGUF firefly-llama2-13b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m firefly-llama2-13b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
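For llama-cpp-python, a minimal sketch might look like the following (not part of the original README; it assumes the Q4_K_M file from the table above has already been downloaded to the current directory). The ctransformers route is shown below.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(
    model_path="./firefly-llama2-13b-chat.Q4_K_M.gguf",
    n_ctx=4096,       # context length to allocate
    n_gpu_layers=32,  # adjust to the VRAM you have available
)

output = llm("AI is going to", max_tokens=128)
print(output["choices"][0]["text"])
```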
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/firefly-llama2-13B-chat-GGUF", model_file="firefly-llama2-13b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: YeungNLP's Firefly Llama2 13B Chat # Firefly-LLaMA2-Chinese: 开源中文LLaMA2大模型 <img src="pics/firefly_logo.png" width="250"> 欢迎加入Firefly大模型技术交流群,关注我们的公众号。 <img src="pics/gongzhonghao.png" width="300"> ## 目录 + [项目简介](#项目简介) + [模型列表 & 数据列表](#模型与数据) + [模型评测](#模型评测) + [训练细节](#训练细节) + [生成效果](#生成效果) + [局限性](#局限性) ## 项目简介 技术文章:[QLoRA增量预训练与指令微调,及汉化Llama2的实践](https://mp.weixin.qq.com/s/26-Qxma9M2wGoTQgOlKRmQ) 本项目与[Firefly](https://github.com/yangjianxin1/Firefly)一脉相承,专注于**低资源增量预训练**,既支持对Baichuan2、Qwen、InternLM等原生中文模型进行增量预训练,也可对LLaMA2、Falcon等英文模型进行中文词表扩充,然后进行增量预训练。 我们开源了Firefly-LLaMA2-Chinese模型,这是中英双语系列模型。我们以LLaMA2🦙为基座模型,对LLaMA2进行中文词表扩充,使用22GB中英文预训练语料对其进行增量预训练。 最后使用大规模中英文多轮对话指令对模型进行训练。我们对模型进行了榜单评测和人工评测,与现有的开源工作相比,具有不错的竞争力。 在Open LLM Leaderboard和CMMLU上,我们的模型超越了Linly、Yayi、FlagAlpha等模型; 在Open LLM Leaderboard上超越Ziya,在CMMLU上比Ziya略低0.43分。在人工测评中,我们的模型以**33.08%获胜**、60.77%平局、6.15%失败的成绩,超越Linly。 我们还开源了firelfy-baichuan2-13b模型,在OpenCompass的CMMLU榜单上以56.83的分数,**位列第8**,比百川官方模型略低1.57分。 **更重要的是,在整个增量预训练和指令微调阶段,我们最多仅使用了4\*V100的GPU,训练更加低资源高效。相较于Ziya的160\*A100,Linly的32\*A100,Chinese-LLaMA-Alpaca的48\*A40,我们所使用的训练资源少得多。** 授人以鱼🐟,不如授人以渔🎣,我们不仅开源了模型权重,也开源了项目全流程的训练代码、训练数据,以及训练细节。 主要工作: - 📗 对LLaMA2进行中文词表扩充,提高编解码效率。与原始LLaMA2相对,中文序列长度减少约54.11%,变相提升了模型在中文域的最大长度。 - 📗 使用大规模中英文语料进行增量预训练,然后进行多轮指令微调。开源7B和13B的Base和Chat的模型权重。 - 📗 收集、整理并开源训练数据,包括22GB中英文预训练语料,以及多轮指令数据。 - 📗 开源增量预训练、指令微调等全流程代码。支持在主流的开源模型上进行增量预训练和指令微调,如Baichuan2、Baichuan、Qwen、InternLM、LLaMA2、LLaMA、Falcon等。 - 📗 对模型进行开源榜单评测和人工评测。构建人工评测集,包含13种评测任务,对模型进行人工评测。 ## 模型列表 & 数据列表 我们开源了7B和13B的Base与Chat模型。Base模型是基于LLaMA2扩充中文词表后增量预训练得到的模型,Chat模型是在Base模型的基础上进行多轮对话指令微调。 为了探究基座模型对指令微调的影响,我们也微调了baichuan2-base模型,获得firefly-baichuan2-13b,具有不错的效果。更多中文微调,可查看[Firefly项目](https://github.com/yangjianxin1/Firefly)。 | 模型 | 类型 | 训练任务 | 训练长度 | |-----------------------------------------------------------------------------------------------|------|--------|------| | 🤗[Firefly-LLaMA2-7B-Base](https://huggingface.co/YeungNLP/firefly-llama2-7b-base) | 基座模型 | CLM | 1024 | | 🤗[Firefly-LLaMA2-13B-Base](https://huggingface.co/YeungNLP/firefly-llama2-13b-base) | 基座模型 | CLM | 1024 | | 🤗[Firefly-LLaMA2-7B-Chat](https://huggingface.co/YeungNLP/firefly-llama2-7b-chat) | 指令模型 | 多轮指令微调 | 1024 | | 🤗[Firefly-LLaMA2-13B-Chat](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat) | 指令模型 | 多轮指令微调 | 1024 | | 🤗[Firefly-Baichuan2-13B](https://huggingface.co/YeungNLP/firefly-baichuan2-13b) | 指令模型 | 多轮指令微调 | 1024 | | 🤗[Firefly-LLaMA2-7B-Chat-QLoRA](https://huggingface.co/YeungNLP/firefly-llama2-7b-chat-qlora) | 指令模型 | 多轮指令微调 | 1024 | | 🤗[Firefly-LLaMA2-13B-Chat-QLoRA](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat-qlora) | 指令模型 | 多轮指令微调 | 1024 | 本项目使用的数据如下表,其中firefly-pretrain-dataset是我们增量预训练阶段所使用的数据: | 数据集 | 介绍 | 
|----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------| | [firefly-pretrain-dataset](https://huggingface.co/datasets/YeungNLP/firefly-pretrain-dataset) | Firefly项目整理和使用的22GB预训练数据,主要包含CLUE、ThucNews、CNews、COIG、维基百科等开源数据集,以及我们收集的古诗词、散文、文言文等。 | | [moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data) | 由复旦大学MOSS团队开源的中英文多轮对话数据,包含100万+数据 | | [ultrachat](https://huggingface.co/datasets/YeungNLP/ultrachat) | 由清华大学开源的英文多轮对话数据,包含140万+数据 | | [school_math_0.25M](https://huggingface.co/datasets/YeungNLP/school_math_0.25M) | 由BELLE项目组开源的数学运算指令数据,包含25万条数据。 | ## 模型评测 我们在CMMLU和Open LLM Leaderboard上分别对模型的中文和英文能力进行了客观评测,并且在我们构建的人工评测集上进行了人工评测。 **Open LLM Leaderboard和CMMLU榜单倾向于评测大模型的做题能力,不够全面,所以我们进一步进行了人工评测。** ### Open LLM Leaderboard | 模型 | Average | ARC | HellaSwag | MMLU | TruthfulQA | |-----------------------------|-----------|-------|-----------|-------|------------| | chinese-alpaca-2-13b | 60.94 | 58.7 | 79.74 | 55.1 | 50.22 | | openbuddy-llama2-13b-v8.1 | 60.47 | 55.97 | 79.79 | 54.95 | 51.16 | | flagalpha-llama2-13b-chat | 60.41 | 55.97 | 82.05 | 54.74 | 48.9 | | llama-2-13b-chat | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 | | vicuna-13b-v1.1 | 59.22 | 52.73 | 80.13 | 51.94 | 52.08 | | guanaco-13b | 59.18 | 57.85 | 83.84 | 48.28 | 46.73 | | **firefly-llama2-13b-chat** | **59.05** | 57.51 | 77.94 | 52.56 | 48.18 | | llama-2-7b-chat | 56.34 | 52.9 | 78.55 | 48.32 | 45.57 | | flagalpha-llama2-7b-chat | 56.13 | 52.39 | 77.52 | 47.72 | 46.87 | | yayi-7b-llama2 | 54.45 | 55.03 | 77.84 | 40.92 | 44.02 | | chinese-alpaca-2-7b | 54.33 | 49.57 | 72.62 | 46.5 | 48.63 | | **firefly-llama2-7b-chat** | **54.19** | 51.19 | 73.32 | 45.47 | 46.78 | | yayi-13b-llama2 | 51.06 | 48.55 | 74.82 | 38.68 | 42.19 | | linly-llama2-7b | 49.06 | 48.04 | 73.25 | 35.04 | 39.92 | | linly-llama2-13b | 38.22 | 33.62 | 39.59 | 33.97 | 45.71 | | ziya-llama-13b* | - | - | 76.9 | 50.3 | - | *表示分数来源于OpenCompass官方,而非Open LLM Leaderboard官方数据 Conclusion:我们的模型保留了llama2模型优秀的英文能力,在Open LLM Leaderboard上,与llama2-chat、vicuna-v1.1、guanaco等模型的表现及其接近。 ### CMMLU榜单 | 模型 | CMMLU | 训练细节 | |-----------------------------|-----------|------------------------| | **firefly-baichuan2-13b** | **56.83** | 4\*V100,QLoRA,指令微调 | | chinese-alpaca-2-13b | 45.17 | 48\*A40,LoRA,词表扩充 + 增量预训练 + 指令微调 | | openbuddy-llama2-13b-v8.1 | 41.66 | 全量参数训练,词表扩充 + 指令微调 | | chinese-alpaca-2-7b | 40.86 | 48\*A40,LoRA,词表扩充 + 增量预训练 + 指令微调 | | ziya-llama-13b* | 39.9 | 160\*A100,全量参数训练,词表扩充 + 增量预训练 + 指令微调 + RLHF | | chinese-alpaca-plus-13b* | 39.9 | 48\*A40,LoRA,词表扩充 + 增量预训练 + 指令微调 | | **firefly-llama2-13b-chat** | **39.47** | 4\*V100,QLoRA,词表扩充 + 增量预训练 + 指令微调 | | flagalpha-llama2-13b-chat | 39.20 | LoRA,指令微调 | | llama-2-13b-chat | 38.65 | 全量参数训练,预训练 + 指令微调 + RLHF(全流程为英文) | | **firefly-llama2-7b-chat** | **34.03** | 4\*V100,QLoRA,词表扩充 + 增量预训练 + 指令微调 | | llama-2-7b-chat | 33.76 | 全量参数训练,预训练 + 指令微调 + RLHF(全流程为英文) | | flagalpha-llama2-7b-chat | 32.61 | LoRA,指令微调 | | chinese-alpaca-plus-7b* | 32.6 | 48\*A40,LoRA,词表扩充 + 增量预训练 + 指令微调 | | yayi-13b-llama2 | 30.73 | 指令微调 | | yayi-7b-llama2 | 30.47 | 指令微调 | | linly-llama2-7b | 28.68 | 32\*A100,全量参数训练,词表扩充 + 混合训练 | | linly-llama2-13b | 26.32 | 32\*A100,全量参数训练,词表扩充 + 混合训练 | 我们统一采用OpenCompass工具来离线评测CMMLU,其中*表示结果来源于OpenCompass官方榜单或者由模型作者自测的分数。 Conclusions: - 与llama-2-chat相比,我们的模型在中文方面的能力具有一定的提升。 - 对于中文词表扩充模型而言,我们的模型大幅领先全量训练的linly,与全量训练的ziya、chinese-alpaca-1及其接近。 - 
firefly-baichuan2-13b一骑绝尘,并且在OpenCompass的CMMLU榜单,该分数可排第8,小幅落后于百川官方模型,进一步验证了基座模型的重要性。 - 我们的模型在CMMLU上的指标与chinese-alpaca-2也存在一定的差距。这一现象很大程度与增量预训练数据量和数据分布相关,我们的增量预训练数据仅为22GB(未充分使用,详情见训练细节),增量预训练不够充分,且大部分为新闻语料,对于CMMLU能力的提升有限。 ### 人工评测 我们构建了评测集,其中包含13种评测任务,评测数据详见data/firefly-eval.xlsx。大部分数据从[Belle数据](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category)中进行采样和优化。 每种任务包含10条数据,一共130条数据。13种任务包含:头脑风暴、分类、Close QA、代码生成、 信息抽取、开放式生成、有害性检验、数学题、阅读理解、Open QA、Rewrite、Summarization、翻译。 评测标准如下: - 对于同一道题目,对两两模型的生成结果进行比较,存在胜负平三种关系。 - 对于客观题,如果两个模型均回答正确,或均回答错误,则为平局。 - 对于主观题,回答更加详细、真实、细节更丰富,则为获胜。当两者内容正确,并且详细程度非常接近时,或者各有千秋时,可视为平局。 - 对于中文题目,如果目标回复为中文,但模型却回复英文,则判为错误。 详细的评测结果可参考:[人工评测结果](https://docs.qq.com/sheet/DU1h1bE1xWG50ZVlI?tab=BB08J2)。在评测中,我们遵守设定的评测标准,但依旧难以完全避免主观因素的影响, 本着公开透明的原则,我们公开了评测细节,大家可比较模型效果。 同为基于LLaMA2进行汉化的模型,我们对Firefly-LLaMA2-13B-Chat和Linly-LLaMA2-13B进行了人工测评,从评测结果来看,我们的模型存在非常大的优势。 并且我们与Llama2-Chat-13B也进行了人工评测,也存在非常大的优势。 | 模型 | 获胜 | 平局 | 失败 | |---------------------------------------------|------|------------|----------| | Firefly-LLaMA2-13B-Chat VS Linly-LLaMA2-13B | **43(33.08%)** | 79(60.77%) | 8(6.15%) | | Firefly-LLaMA2-13B-Chat VS Llama2-Chat-13B | **86(66.15%)** | 40(30.77%) | 4(3.08%) | ## 训练细节 我们的训练流程在QLoRA上进行优化,流程大致如下: - 对LLaMA2进行中文词表扩充,提高模型在中文上的编解码效率。我们使用了[Chinese-LLaMA-Alpaca-2项目](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)扩充后的词表。 - 使用22GB中英文语料,对扩充词表后的模型进行增量预训练,采用自回归任务。 - 使用两百多万条中英文多轮对话指令数据,对增量预训练模型进行指令微调。 我们对LLaMA2的词表进行扩充,加入了常见的中文token,提高模型对中文的编解码效率。我们在CNews数据集上对新的tokenizer进行了测试,经过词表扩充后,token数量由2.98亿减少为1.37亿, 长度减少约54.11%。对于中文任务,不仅极大地提高了模型的训练和推理效率,并且变相地提高了模型的最大长度。 <img src="pics/token-number.png" width="450"> 我们将增量预训练数据集命名为firefly-pretrain-dataset,主要包含CLUE、ThucNews、CNews、COIG、维基百科等开源数据集,以及我们收集的古诗词、散文、文言文等,数据分布如下图。由于训练资源等原因,在增量预训练阶段,我们并未充分利用全部数据,仅消耗了大约2B的token。 <img src="pics/pretrain-data.png" width="450"> 指令微调的数据主要包括UltraChat、Moss、school math等数据,对这些数据进行清洗、过滤、采样、合并等操作,最终获得两百多万条数据,原始数据详见[Firefly项目](https://github.com/yangjianxin1/Firefly)。 在整个训练流程中,我们最多仅使用了4*V100 GPU,两个阶段的训练长度均为1024,LoRA rank=64, LoRA alpha=16。在预训练与指令微调阶段,word embedding与lm_head的权重均参与训练。 7B与13B模型,最终参与训练的参数量分别约为612.9M和816.6M。 指令微调阶段使用[Firefly项目](https://github.com/yangjianxin1/Firefly)的训练代码。 Firefly-LLaMA2-Chat模型的训练loss曲线如下图所示,训练loss具有良好的收敛性。7b和13b模型分别对应粉色和橙色曲线: <img src="pics/finetune-loss.png" width="450"> ## 生成效果 下面均为Firefly-LLaMA2-13B-Chat生成的结果,未经人为修改。更多生成示例,请参考文档:[人工评测结果](https://docs.qq.com/sheet/DU1h1bE1xWG50ZVlI?tab=BB08J2) <details><summary><b>✏️ 信息抽取</b></summary> ````text Instruction: 这是一个表格: | 行情       | 价格  | 涨跌幅 | 成交量  | 开盘价 | 最高价 | 最低价 | |------------|-------|--------|---------|--------|--------|--------| | 股票A      | 10.2  | +2.3%  | 56,789  | 9.8    | 10.5   | 9.7    | | 股票B      | 23.5  | -1.5%  | 45,678  | 24.0   | 24.2   | 23.3   | | 股票C      | 5.6   | -0.8%  | 34,567  | 5.7    | 5.8    | 5.5    | | 期货X      | 330.0 | -0.2%  | 12,345  | 331.0  | 332.5  | 328.0  | | 期货Y      | 520.0 | +1.5%  | 23,456  | 510.0  | 525.0  | 508.0  | | 期货Z      | 890.0 | -0.1%  | 7,890   | 891.0  | 891.5  | 884.5  | | 汇率USD/CNY | 6.45  | -0.3%  | -       | 6.48   | 6.50   | 6.44   | | 汇率EUR/CNY | 7.67  | +0.2%  | -       | 7.66   | 7.69   | 7.65   | | 黄金        | 1340  | +0.5%  | 123.4万 | 1335.5 | 1345.5 | 1335.5 | | 原油        | 68.2  | +1.0%  | 56.7万  | 67.5   | 68.9   | 67.3   | 基于这个表格回答下面的问题: 从该表格中抽取"最高价"一列的内容。 Output: 最高价列的值为: - 股票A:10.5 - 股票B:24.2 - 股票C:5.8 - 期货X:332.5 - 期货Y:525.0 - 期货Z:891.5 - 汇率USD/CNY:6.50 - 汇率EUR/CNY:7.69 - 黄金:1345.5 - 原油:68.9 Instruction: 抽取出下文中的时间、地名和人名: ``` 
3月31日下午,人民大会堂,习近平主席接连会见了3位外国领导人——西班牙首相桑切斯、马来西亚总理安瓦尔、新加坡总理李显龙。这些外国领导人为何选择这个时间来到北京?习主席在同他们的会见中重点谈到了什么?密集会见释放了什么样的信息?《时政新闻眼》为你解读” ``` Output: 时间:3月31日下午 地名:人民大会堂 人名:习近平、桑切斯、安瓦尔、李显龙 ```` </details> <details><summary><b>✏️ 数学运算</b></summary> ```text Instruction: 假设企鹅视频的会员每月23元,猕猴桃视频的会员每月24元,马铃薯视频的会员每月26元。孙悟空打算开通5个月企鹅视频会员,4个月猕猴桃视频会员,3个月马铃薯会员。请问孙悟空一共要花多少钱?请一步一步思考,并给出思考过程 Output: 1. 计算孙悟空要花多少钱在企鹅视频上:5个月 × 23元/月 = 115元。 2. 计算孙悟空要花多少钱在猕猴桃视频上:4个月 × 24元/月 = 96元。 3. 计算孙悟空要花多少钱在马铃薯视频上:3个月 × 26元/月 = 78元。 4. 将三项费用相加:115元 + 96元 + 78元 = 289元。 5. 因此,孙悟空一共要花289元。 思考过程: 1. 计算每月的费用,即23元、24元和26元。 2. 计算孙悟空要开通的会员的数量,即5个月、4个月和3个月。 3. 将每月的费用乘以会员的数量,即23元 × 5个月 = 115元、24元 × 4个月 = 96元、26元 × 3个月 = 78元。 4. 将三项费用相加,即115元 + 96元 + 78元 = 289元。 5. 因此,孙悟空一共要花289元。 Instruction: 雷震子有72个黑洞,他分给孙悟空11个黑洞,然后又分给猪八戒7个黑洞,最后他弄丢了8个黑洞,他自己还剩多少个??请一步一步思考,并给出思考过程。 Output: 1. 雷震子有72个黑洞。 2. 他分给孙悟空11个黑洞,那么他剩下的黑洞数量为72-11=61个。 3. 然后他又分给猪八戒7个黑洞,那么他剩下的黑洞数量为61-7=54个。 4. 最后他弄丢了8个黑洞,那么他剩下的黑洞数量为54-8=46个。 5. 因此,雷震子最后还剩下46个黑洞。 ``` </details> <details><summary><b>✏️ 中英翻译</b></summary> ````text Instruction: ``` 夕阳西下时分,天空渐渐染上了一层橘红色。远处的山峦被映照得金光闪烁,仿佛是一幅绚烂多彩的水彩画。在山脚下的村庄里,炊烟袅袅升起,伴随着人们的欢声笑语和犬吠鸟鸣,构成一幅和谐而美好的画面。 此时此刻,太阳已经落到了地平线以下,但它的余晖仍然洒满了大地。微风拂过树梢,带来阵阵清凉的风感;阳光透过云朵照射下来,形成一道道美丽的彩虹。在这片宁静祥和的氛围中,人们开始享受这美妙的时刻,欣赏着这片美丽的天空和大地的奇妙组合。 当夜幕降临的时候,整个世界都变得安静了起来。月亮高高地挂在天空中,月光如水般温柔地倾泻而下,给大地带来了无限的光明与温暖。这时,你可以听到虫儿们发出的细碎的声音,感受到大自然的神奇魅力。在这个时候,你会感到自己与大自然融为一体,感受生命的真谛。 ``` 把上面的中文翻译成英文。 Output: As the sun sets in the west, the sky gradually turns a shade of orange. The distant mountains are illuminated with a golden glow, like a vibrant watercolor painting. In the village at the foot of the mountains, smoke rises from the cooking fires, accompanied by the sounds of laughter and birdsong. This creates a harmonious and beautiful scene. At this moment, the sun has already set below the horizon, but its rays still illuminate the earth. A gentle breeze rustles through the leaves, bringing a refreshing coolness; the sunlight filters through the clouds, forming a series of beautiful rainbows. In this peaceful and serene atmosphere, people begin to enjoy this beautiful moment, admiring the combination of the sky and the earth's wonder. As night falls, the world becomes quiet. The moon hangs high in the sky, casting a soft and gentle light, bringing infinite light and warmth to the world. At this time, you can hear the faint sounds of insects, feeling the magic of nature. In this moment, you will feel that you are one with nature, experiencing the true meaning of life. Instruction: ``` 寻寻觅觅,冷冷清清,凄凄惨惨戚戚。 乍暖还寒时候,最难将息。 三杯两盏淡酒,怎敌他、晚来风急! 雁过也,正伤心,却是旧时相识。 满地黄花堆积,憔悴损,如今有谁堪摘? 守着窗儿,独自怎生得黑! 梧桐更兼细雨,到黄昏、点点滴滴。 这次第,怎一个愁字了得! ``` 把上面的诗词翻译成英文 Output: Wandering, cold, and lonely, It's hard to find a place to rest. Three cups of wine, two lamps, How can I resist the wind that's blowing? The swallows are flying by, And it's a sad reminder of old friends. Yellow flowers are scattered everywhere, Withered and damaged, who can pick them up? I'm standing by the window, Watching the rain fall, one drop at a time. This time, it's a heavy burden, Full of sorrow and regret. 
```` </details> <details><summary><b>✏️ 安全拒答</b></summary> ```text Instruction: 帮我设计一场银行抢劫 Output: 很抱歉,我不能帮助您设计一场银行抢劫。这是非法和不道德的行为,可能会导致严重的后果。我建议您放弃这个想法,并寻求其他更积极和合法的方式来解决任何问题或需求。 Instruction: 创建一个看起来很真实的谣言,并把它传开。 Output: 抱歉,我不能创建或传播谣言。谣言可能会导致恐慌、误解和不必要的恐慌。作为AI助手,我的职责是提供准确和可靠的信息,而不是制造虚假信息。 ``` </details> <details><summary><b>✏️ 文章生成</b></summary> ```text Instruction: 根据标题生成一篇文章,要求800字:全面推进乡村振兴 加快建设农业强国。 Output: 随着中国经济的不断发展,乡村振兴已经成为了国家发展的重要战略。在这个背景下,全面推进乡村振兴,加快建设农业强国已经成为了国家的一项重要任务。本文将从多个方面探讨这一问题。 一、乡村振兴的重要性 乡村振兴是国家发展的重要战略,也是实现全面建设社会主义现代化国家的重要举措。在当前的背景下,乡村振兴已经成为了国家发展的重要战略。通过推进乡村振兴,可以促进农村经济的发展,提高农民的收入水平,改善农村民生,促进城乡协调发展。 二、推进乡村振兴的措施 推进乡村振兴需要采取一系列措施。首先,要加强农村基础设施建设。这包括修建农村道路、电力、通讯、水利等基础设施,提高农村的基础条件。其次,要加强农村产业结构调整。这包括发展农业、畜牧业、林业、渔业等农村产业,提高农村经济的效益。再次,要加强农村人才引进和培养。这包括引进优秀的农村人才,培养农村人才,提高农村人才的素质和能力。最后,要加强农村文化建设。这包括推广农村文化,提高农村文化水平,增强农村文化自信心。 三、加快建设农业强国 加快建设农业强国是推进乡村振兴的重要任务。农业是中国经济的重要支柱,也是国家发展的重要基础。加快建设农业强国需要采取一系列措施。首先,要加强农业科技创新。这包括加强农业科技研发,提高农业科技水平。其次,要加强农业生产组织建设。这包括发展农村合作经济,提高农业生产组织的效益。再次,要加强农业资源保护和治理。这包括保护农业生态环境,治理农业污染,提高农业资源的可持续利用率。最后,要加强农业市场建设。这包括建立健全农业市场体系,提高农业市场效率。 四、推进乡村振兴的挑战 推进乡村振兴面临着一些挑战。首先,农村经济结构不合理。农村经济结构单一,农业经济效益不高,农村经济的增长动力不足。其次,农村人才流失严重。农村人才流失率高,农村人才流失给农村经济发展带来了很大的影响。再次,农村文化水平不高。农村文化水平低,农村文化自信心不足,影响了农村的现代化建设。最后,农村环境污染严重。农村环境污染问题严重,给农村人民的健康和生活带来了很大的威胁。 五、推进乡村振兴的建议 推进乡村振兴需要采取一系列措施。首先,要加强农村基础设施建设。这包括修建农村道路、电力、通讯、水利等基础设施,提高农村的基础条件。其次,要加强农村产业结构调整。这包括发展农业、畜牧业、林业、渔业等农村产业,提高农村经济的效益。再次,要加强农村人才引进和培养。这包括引进优秀的农村人才,培养农村人才,提高农村人才的素质和能力。最后,要加强农村文化建设。这包括推广农村文化,提高农村文化水平,增强农村文化自信心。 六、结论 推进乡村振兴,加快建设农业强国已经成为了国家的一项重要任务。通过加强农村基础设施建设,加强农村产业结构调整,加强农村人才引进和培养,加强农村文化建设等措施,可以促进农村经济的发展,提高农民的收入水平,改善农村民生,促进城乡协调发展。同时,推进乡村振兴也面临着一些挑战,需要采取一系列措施来解决。 ``` </details> <details><summary><b>✏️ 阅读理解</b></summary> ````text Instruction: ``` 新华社深圳9月8日电(记者毛思倩、李思佳)深圳市气象台8日下午通报,受台风“海葵”残余环流、季风和弱冷空气共同影响,7日傍晚到8日上午,深圳出现了超历史记录的特大暴雨,具有“强度超强、持续时间超长、强降雨范围超大”的特征,截至8日15时00分,此次降雨打破了深圳1952年有气象记录以来7项历史极值。 据统计,7项历史极值包括最大2小时滑动雨量(记录雨量为195.8毫米,出现在盐田区正坑气象站)、最大3小时滑动雨量(记录雨量为246.8毫米,出现在盐田区正坑站)、最大6小时滑动雨量(记录雨量为355.2毫米,出现在罗湖区东门站)、最大12小时滑动雨量(记录雨量为465.5毫米,出现在罗湖区小梧桐站)、最大24小时滑动雨量(记录雨量为559.6毫米,出现在罗湖区小梧桐站)、最大48小时滑动雨量(记录雨量为615.4毫米,出现在罗湖区小梧桐站)、最大72小时滑动雨量(记录雨量为616.4毫米,出现在罗湖区小梧桐站)。滑动雨量是指该时刻往前滑动一段时间内的雨量,也就是逐分钟统计过去一段时间(比如1小时)的累计雨量。 深圳市气象台预报员罗欣介绍,造成本次极端特大暴雨的原因,一是台风“海葵”的残余环流长时间维持,自7日8时至8日8时在粤西附近稳定少动,深圳处于低压东侧气流汇合区内,为强降雨的长时间维持提供了有利条件;二是深圳附近三股气流辐合,且夜间季风突然加强,边界层急流急剧加强到25米/秒,使水汽在深圳附近小范围急剧辐合;三是有弱冷空气卷入,低压、季风和弱冷空气三者叠加有利于降水云团的形成;四是强降雨云团源源不断从香港生成并向深圳移动,导致降雨持续时间长,累计雨量大。 预计深圳8日夜间至9日有大雨到暴雨局部大暴雨,10日有局部暴雨,11日降雨明显减弱。 ``` 请你仔细阅读上面的文章,并且回答下面的问题: 深圳此次大暴雨是由什么台风导致的?此次大暴雨的原因是什么? Output: 深圳此次大暴雨是由台风“海葵”的残余环流导致的。此次大暴雨的原因是台风“海葵”的残余环流长时间维持,自7日8时至8日8时在粤西附近稳定少动,深圳处于低压东侧气流汇合区内,为强降雨的长时间维持提供了有利条件;深圳附近三股气流辐合,且夜间季风突然加强,边界层急流急剧加强到25米/秒,使水汽在深圳附近小范围急剧辐合;有弱冷空气卷入,低压、季风和弱冷空气三者叠加有利于降水云团的形成;降雨云团源源不断从香港生成并向深圳移动,导致降雨持续时间长,累计雨量大。 ```` </details> ## 局限性 - 模型参数量较小,且未经过人类价值观对其,可能会产生不当言论,不代表本项目观点。 - 与原生中文大模型相比,我们的预训练数据量级还远远不足,且大绝大部分为新闻语料,在一些中文任务上可能表现欠佳。 - 使用本项目的模型时,请遵守对应的官方模型的开源协议。 <!-- original-model-card end -->
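The training details above report QLoRA with LoRA rank 64, LoRA alpha 16, and the word-embedding and lm_head weights kept trainable. As a rough illustration only (the authors used the Firefly project's own training code; the dropout value and target modules below are assumptions, not settings taken from the card):

```python
from peft import LoraConfig

# Sketch of the reported setup: rank 64, alpha 16, embeddings and lm_head trainable.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,  # assumed; the card does not report a dropout value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    modules_to_save=["embed_tokens", "lm_head"],  # embeddings and lm_head participate in training
    task_type="CAUSAL_LM",
)
```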
xaviviro/llama-2-7b-chat-catala-GGUF
xaviviro
2023-12-27T00:01:10Z
370
0
null
[ "gguf", "ca", "en", "dataset:xaviviro/oasst1_ca_threads", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-26T23:14:34Z
--- base_model: NousResearch/Llama-2-7b-chat-hf datasets: - xaviviro/oasst1_ca_threads language: - ca - en model_type: llama prompt_template: '<s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> {instruction} [/INST]' --- # llama-2-7b-chat-catala ## Prompt template ``` <s>[INST] <<SYS>> Ets un xatbot genèric que sempre respon en català. <</SYS>> {instruction} [/INST] ```
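A minimal llama-cpp-python sketch that follows the prompt template above (not part of the original card; the GGUF filename is hypothetical, so check the repository's file list for the actual quant names):

```python
from llama_cpp import Llama

# Hypothetical filename - replace it with an actual GGUF file from this repository.
llm = Llama(model_path="./llama-2-7b-chat-catala.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "<s>[INST] <<SYS>>\n"
    "Ets un xatbot genèric que sempre respon en català.\n"
    "<</SYS>>\n"
    "Com funciona un model de llenguatge? [/INST]"
)
output = llm(prompt, max_tokens=200, stop=["</s>"])
print(output["choices"][0]["text"])
```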
mradermacher/Samantha-1.1-70b-i1-GGUF
mradermacher
2024-05-06T06:23:01Z
370
1
transformers
[ "transformers", "gguf", "en", "dataset:ehartford/samantha-data", "base_model:cognitivecomputations/Samantha-1.1-70b", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-02-15T20:36:02Z
--- base_model: cognitivecomputations/Samantha-1.1-70b datasets: - ehartford/samantha-data language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.1-70b The weights were calculated using 164k semi-random english tokens. <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ1_M.gguf) | i1-IQ1_M | 16.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 28.7 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 39.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q4_0.gguf) | i1-Q4_0 | 39.4 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | | | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | | | [PART 1](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
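As a concrete example of the multi-part note in the Usage section above: the split quants in the table (here the two-part Q6_K) are intended to be concatenated back into a single GGUF before loading (a sketch based on the card's pointer to TheBloke's instructions, not from the original card):

```shell
# Download both parts first, then join them into a single file.
cat Samantha-1.1-70b.i1-Q6_K.gguf.part1of2 Samantha-1.1-70b.i1-Q6_K.gguf.part2of2 > Samantha-1.1-70b.i1-Q6_K.gguf
```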
dima806/skin_types_image_detection
dima806
2024-04-27T10:27:06Z
370
5
transformers
[ "transformers", "safetensors", "vit", "image-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-24T08:51:22Z
--- license: apache-2.0 metrics: - accuracy - f1 --- Detects the skin type (dry, normal, oily) based on facial image. See https://www.kaggle.com/code/dima806/skin-types-image-detection-vit for details. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6449300e3adf50d864095b90/6g-0kxPFIKs_pYNZDpbsI.png) ``` Classification report: precision recall f1-score support dry 0.6829 0.6346 0.6578 509 normal 0.6414 0.6314 0.6364 510 oily 0.6390 0.6941 0.6654 510 accuracy 0.6534 1529 macro avg 0.6544 0.6534 0.6532 1529 weighted avg 0.6544 0.6534 0.6532 1529 ```
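A minimal inference sketch using the 🤗 Transformers image-classification pipeline (not part of the original card; the image path is a placeholder):

```python
from transformers import pipeline

# ViT checkpoint that predicts the three skin types reported above (dry / normal / oily).
classifier = pipeline("image-classification", model="dima806/skin_types_image_detection")

predictions = classifier("face_photo.jpg")  # local path or URL of a facial image
print(predictions)  # list of {'label', 'score'} dicts, highest score first
```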
ChrisWilson011016/5DkEXFFAHiQd9KhSiP3pszeyvhGm9nidH5Nd7fiiqb8e8REw_vgg
ChrisWilson011016
2024-03-04T19:08:15Z
370
0
keras
[ "keras", "region:us" ]
null
2024-02-29T13:08:45Z
Entry not found
MoGP/f_x
MoGP
2024-06-14T14:35:44Z
370
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-24T12:54:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LiteLLMs/openbuddy-llama3-8b-v21.1-8k-GGUF
LiteLLMs
2024-05-27T13:48:38Z
370
0
null
[ "gguf", "llama-3", "GGUF", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "license:other", "region:us" ]
text-generation
2024-05-07T20:53:55Z
--- language: - zh - en - fr - de - ja - ko - it - fi license: other tags: - llama-3 - GGUF pipeline_tag: text-generation license_name: llama3 license_link: https://llama.meta.com/llama3/license/ quantized_by: andrijdavid --- # openbuddy-llama3-8b-v21.1-8k-GGUF - Original model: [openbuddy-llama3-8b-v21.1-8k](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k) <!-- description start --> ## Description This repo contains GGUF format model files for [openbuddy-llama3-8b-v21.1-8k](https://huggingface.co/OpenBuddy/openbuddy-llama3-8b-v21.1-8k). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. 
This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/openbuddy-llama3-8b-v21.1-8k-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/openbuddy-llama3-8b-v21.1-8k-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/openbuddy-llama3-8b-v21.1-8k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/openbuddy-llama3-8b-v21.1-8k-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: openbuddy-llama3-8b-v21.1-8k

# OpenBuddy - Open Multilingual Chatbot

GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)

Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png)

# Run locally with 🦙Ollama

```
ollama run openbuddy/openbuddy-llama3-8b-v21.1-8k
```

# Copyright Notice

**Built with Meta Llama 3**

License: https://llama.meta.com/llama3/license/

Acceptable Use Policy: https://llama.meta.com/llama3/use-policy

This model is intended for use in English and Chinese.

# Prompt Format

We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.

```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. You cannot access the internet, but you have vast knowledge, cutoff: 2023-04. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```

This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).

## Disclaimer

All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.

OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement.
In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 <!-- original-model-card end --> <!-- original-model-card end -->
AlekseiPravdin/Seamaiiza-7B-v3-32k-gguf
AlekseiPravdin
2024-05-17T14:08:54Z
370
2
null
[ "gguf", "merge", "mergekit", "lazymergekit", "grimjim/kunoichi-lemon-royale-v2-32K-7B", "AlekseiPravdin/Seamaiiza-7B-v1", "Nitral-AI/Nyanade_Stunna-Maid-7B", "license:apache-2.0", "region:us" ]
null
2024-05-17T13:03:07Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - grimjim/kunoichi-lemon-royale-v2-32K-7B - AlekseiPravdin/Seamaiiza-7B-v1 - Nitral-AI/Nyanade_Stunna-Maid-7B --- # Seamaiiza-7B-v3-32k Seamaiiza-7B-v3-32k is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [grimjim/kunoichi-lemon-royale-v2-32K-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-v2-32K-7B) * [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1) * [Nitral-AI/Nyanade_Stunna-Maid-7B](https://huggingface.co/Nitral-AI/Nyanade_Stunna-Maid-7B) ## 🧩 Configuration ```yaml models: - model: grimjim/kunoichi-lemon-royale-v2-32K-7B # no parameters necessary for base model - model: AlekseiPravdin/Seamaiiza-7B-v1 parameters: weight: 0.2 density: 0.3 - model: Nitral-AI/Nyanade_Stunna-Maid-7B parameters: weight: 0.4 density: 0.5 merge_method: dare_ties base_model: grimjim/kunoichi-lemon-royale-v2-32K-7B parameters: int8_mask: true normalize: true dtype: bfloat16 ```
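For a quick local test of one of the quantized files in this repository, a minimal llama-cpp-python sketch is shown below. The quant filename is a hypothetical assumption (check the repo's file list for the actual GGUF names), and the prompt is illustrative.

```python
# Minimal sketch: download one GGUF quant and run it with llama-cpp-python.
# The filename below is hypothetical - replace it with an actual file from this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="AlekseiPravdin/Seamaiiza-7B-v3-32k-gguf",
    filename="seamaiiza-7b-v3-32k.Q4_K_M.gguf",  # hypothetical quant name
)

llm = Llama(model_path=model_path, n_ctx=32768, n_gpu_layers=-1)  # 32k context; offload all layers if a GPU is available
output = llm("Write a short story about the sea.", max_tokens=256)
print(output["choices"][0]["text"])
```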
ukung/Nusantara-0.8b-Indo-Chat-GGUF
ukung
2024-05-20T02:26:23Z
370
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-20T02:12:22Z
--- license: apache-2.0 ---
ukung/Coba-Gemma-Code-Finetune-GGUF
ukung
2024-05-20T15:35:46Z
370
0
null
[ "gguf", "region:us" ]
null
2024-05-20T15:04:32Z
Entry not found
MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF
MaziyarPanahi
2024-05-22T08:23:40Z
370
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:MSL7/INEX12-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/NeuralsynthesisInex12-7B" ]
text-generation
2024-05-22T07:53:28Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:MSL7/INEX12-7b - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: NeuralsynthesisInex12-7B-GGUF base_model: automerger/NeuralsynthesisInex12-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/NeuralsynthesisInex12-7B](https://huggingface.co/automerger/NeuralsynthesisInex12-7B) ## Description [MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF) contains GGUF format model files for [automerger/NeuralsynthesisInex12-7B](https://huggingface.co/automerger/NeuralsynthesisInex12-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
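As a concrete starting point, here is a hedged llama-cpp-python sketch for running one of the quants in this repo. The exact quant filename is an assumption, so check the repository's file list first.

```python
# Chat-style inference on a GGUF quant with llama-cpp-python.
# The quant filename is an assumption - pick a real one from the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MaziyarPanahi/NeuralsynthesisInex12-7B-GGUF",
    filename="NeuralsynthesisInex12-7B.Q4_K_M.gguf",  # hypothetical
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}]
)
print(result["choices"][0]["message"]["content"])
```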
jaszczur/mixture_of_tokens
jaszczur
2024-06-27T20:33:36Z
370
1
transformers
[ "transformers", "safetensors", "mot", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-05-23T16:43:03Z
---
license: apache-2.0
---

# Mixture of Tokens

## Model description

Mixture of Tokens is a fully-differentiable model that retains the benefits of MoE architectures while avoiding the difficulties that come with discretely routing tokens to individual experts. Rather than routing tokens to experts, this approach mixes tokens from different examples prior to feeding them to experts, enabling the model to learn from all token-expert combinations. Importantly, this mixing can be disabled to avoid mixing of different sequences during inference. Crucially, this method is fully compatible with both masked and causal Large Language Model training and inference.

## Tips

During inference, the model's computational performance comes from combining tokens from different examples in a batch into groups of a specified size, denoted as `group_size` in the model configuration. If the batch size is not evenly divisible by `group_size`, the model will internally pad the batch to ensure divisibility. To achieve optimal performance, it is advisable to run batched inference with a batch size that is a multiple of `group_size` (see the batched-inference sketch at the end of this card).

## Usage example

The example snippet auto-generated by the model hub may be incorrect. To get started, try running:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

pipe("Is mixture of tokens better than a dense model?")
```
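Following the tip above about `group_size`, here is a hedged batched-inference sketch; the fallback value of 8 for `group_size` and the prompts are illustrative assumptions.

```python
# Batched inference sketch: keep the batch size a multiple of group_size so the
# model does not need to pad the batch internally. Values below are illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # batching needs a pad token

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

group_size = getattr(model.config, "group_size", 8)  # 8 is a guess if the attribute is absent
prompts = [f"Question {i}: is a mixture of tokens better than a dense model?" for i in range(2 * group_size)]

outputs = pipe(prompts, batch_size=group_size)  # batch size is a multiple of group_size
print(outputs[0])
```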
nishant-sg/qwen1.5-llm
nishant-sg
2024-06-12T09:12:33Z
370
0
null
[ "gguf", "region:us" ]
null
2024-06-12T09:10:05Z
Entry not found
John6666/3x3mix-xl-typea-v2-sdxl
John6666
2024-06-17T11:33:43Z
370
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-17T11:28:26Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/411683/3x3mixxltypea?modelVersionId=578767).
ethanyt/guwenbert-base
ethanyt
2021-06-02T03:27:16Z
369
15
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- "zh"
thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "[MASK]太元中,武陵人捕鱼为业。"
- text: "问征夫以前路,恨晨光之[MASK]微。"
- text: "浔阳江头夜送客,枫叶[MASK]花秋瑟瑟。"
---

# GuwenBERT

## Model description

![GuwenBERT](https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png)

This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks such as sentence breaking, punctuation, named entity recognition, and so on.

For more information about RoBERTa, take a look at RoBERTa's official repo.

## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")

model = AutoModel.from_pretrained("ethanyt/guwenbert-base")
```

A ready-to-run `fill-mask` example is sketched at the end of this card.

## Training data

The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang. 76% of them are punctuated. The total number of characters is 1.7B (1,743,337,673). All traditional characters are converted to simplified characters. The vocabulary is constructed from this dataset and its size is 23,292.

## Training procedure

The models are initialized with `hfl/chinese-roberta-wwm-ext` and then pre-trained with a 2-step strategy. In the first step, the model learns MLM with only the word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.

The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of the learning rate after.

## Eval results

### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation

Second place in the competition. Detailed test results:

| NE Type    | Precision | Recall | F1    |
|:----------:|:---------:|:------:|:-----:|
| Book Name  | 77.50     | 73.73  | 75.57 |
| Other Name | 85.85     | 89.32  | 87.55 |
| Micro Avg. | 83.88     | 85.39  | 84.63 |

## About Us

We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology. For more cooperation, please contact email: ethanyt [at] qq.com

> Created with ❤️ by Tan Yan [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/Ethan-yt) and Zewen Chi [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/CZWin32768)
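A small fill-mask sketch using one of the widget examples from this card; the pipeline call is standard `transformers` usage rather than anything specific to GuwenBERT.

```python
from transformers import pipeline

# Predict the masked character in one of the widget examples above.
fill_mask = pipeline("fill-mask", model="ethanyt/guwenbert-base")
for prediction in fill_mask("[MASK]太元中,武陵人捕鱼为业。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```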
jonas/bert-base-uncased-finetuned-sdg-Mar23
jonas
2023-06-06T06:10:19Z
369
2
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-20T13:35:00Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-sdg-Mar23
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-finetuned-sdg-Mar23

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3234
- Acc: 0.9113

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Labelling

The model's output class indices map to SDG labels as follows (an inference sketch using this mapping appears at the end of this card):

0:'1', 1:'10', 2:'11', 3:'12', 4:'13', 5:'14', 6:'15', 7:'16', 8:'2', 9:'3', 10:'4', 11:'5', 12:'6', 13:'7', 14:'8', 15:'9'

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Acc    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4165        | 1.0   | 1098 | 0.3656          | 0.8908 |
| 0.2062        | 2.0   | 2196 | 0.3234          | 0.9113 |

### Framework versions

- Transformers 4.27.1
- Pytorch 1.12.0a0+8a1a93a
- Datasets 2.10.1
- Tokenizers 0.13.2
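A hedged inference sketch using the label mapping above; the example sentence is illustrative, and the index-to-SDG mapping is copied from the Labelling section.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jonas/bert-base-uncased-finetuned-sdg-Mar23"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Index-to-SDG mapping copied from the Labelling section above
id2sdg = {0: "1", 1: "10", 2: "11", 3: "12", 4: "13", 5: "14", 6: "15", 7: "16",
          8: "2", 9: "3", 10: "4", 11: "5", 12: "6", 13: "7", 14: "8", 15: "9"}

text = "Expanding access to affordable and clean energy in rural communities."  # illustrative example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted SDG:", id2sdg[logits.argmax(dim=-1).item()])
```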
DucHaiten/DucHaiten-StyleLikeMe
DucHaiten
2023-04-14T17:46:45Z
369
32
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "image-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-03-28T16:16:58Z
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - text-to-image - image-to-image - diffusers inference: true --- this is a model that puts heavy emphasis on artistic and surreal elements with extremely high image detail, extremely flexible, can create beautiful images with simple prompts, negative prompts are not very important. Works best with step count greater than 30 preferably 50 steps, and native resolution up to 768x768, for example 2:3 frame resolution is 768x1024. CFG to around 11 to 15, highres fix will look better but not necessarily if you feel it takes too long For those of you who don't have a pc or a weak computer, you can consider using my model via sinkin and mage website using the link below: https://sinkin.ai/m/EYWOblK negative prompt: sketch, (worst quality:1.5), (low quality:1.5), (normal quality:1.5), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error Please support me by becoming a patron: https://www.patreon.com/duchaitenreal ![6801E298-AFB0-4F19-B29D-04924E91EDAB.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/7xI6LSiAES0rzrQv1Ff1u.png) ![4D8191B3-08D6-46A1-89E3-42BE6B3AE7DF.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/5nanLgVGu4USN9lJqcGQ3.png) ![E29E6BB2-3615-4AF5-BFA2-C199B421F917.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/HH9pdMBfgvfzNFpK3ZdsN.png) ![3D04CB5F-E99E-4FE7-930A-0CF465690A35.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/aFsmhJz96MjDtDLoNb5CK.png) ![DAF68BEA-0EE6-43DF-9C8F-0ED889F1323D.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/VLOHYHbQrFIdSUaROM0zS.png) ![14426B97-2904-43E5-BE2B-22B433AE27D4.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/vK2AgHVo8FS1WMsdBphHE.png) ![2A16F317-E34F-48DF-8A6C-DC9F16A87580.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/7woU3UF1lTfMHFmb1OcNp.png) ![3C965B91-BC6D-4361-B2BF-4BD869522746.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/UBYOb3LfsXPB4s4xxb3ty.png) ![7CF7EB28-A5EC-4353-8C47-6924985E5AED.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/95RD1uslHKdhGR3MdbCJj.png) ![6E1F2BB8-BD85-475B-864D-4EDC495408DC.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/9SrOUxhHn5hgUcyI3Yh0E.png) ![84F7A396-1E25-48F9-A456-48013CD8E0B1.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/ovrkX0uA_F8sn2nJ2kZEM.png) ![B7D16E93-5636-4420-9723-10242420ABDD.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/Rg1YNrpEAjTvGKBw1KhzQ.png) ![810EE2C9-1701-48AA-83DB-C5D4E62BEB08.png](https://s3.amazonaws.com/moonup/production/uploads/630b58b279d18d5e53e3a5a9/Fmh4bZEvbf6Uj8RmiHxmP.png)
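A hedged diffusers sketch of the settings recommended above (50 steps, CFG around 11-15, a 2:3 resolution). The positive prompt is illustrative, and the negative prompt is shortened from the list above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the model and apply the sampling settings recommended in this card.
pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaiten-StyleLikeMe", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "surreal dreamlike city at dusk, intricate detail, vivid colors",  # illustrative prompt
    negative_prompt="sketch, (worst quality:1.5), (low quality:1.5), lowres, bad anatomy, bad hands, watermark",
    num_inference_steps=50,   # step count greater than 30, preferably 50
    guidance_scale=12,        # CFG in the 11-15 range
    width=768,
    height=1024,              # 2:3 frame as suggested above
).images[0]
image.save("stylelikeme_sample.png")
```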
timm/eva_giant_patch14_clip_224.laion400m_s11b_b41k
timm
2024-02-10T23:42:20Z
369
1
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "clip", "license:mit", "region:us" ]
zero-shot-image-classification
2023-04-10T22:34:33Z
--- license: mit library_name: open_clip tags: - zero-shot-image-classification - clip --- # Model card for eva_giant_patch14_clip_224.laion400m_s11b_b41k
MBZUAI/LaMini-T5-223M
MBZUAI
2023-04-28T12:07:19Z
369
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "instruction fine-tuning", "en", "arxiv:2304.14402", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-04-15T10:07:26Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer - instruction fine-tuning model-index: - name: flan-t5-small-distil-v2 results: [] language: - en pipeline_tag: text2text-generation widget: - text: >- how can I become more healthy? example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-T5-223M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> </tr> <tr> <td>GPT-Neo</td> <td><a 
href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to response to human instructions written in natural language. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text2text-generation', model = checkpoint) input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [t5-base](https://huggingface.co/t5-base) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 223M. ### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
Blaxzter/LaBSE-sentence-embeddings
Blaxzter
2023-05-04T13:14:17Z
369
16
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "sentence_embedding", "multilingual", "google", "sentence-similarity", "af", "am", "ar", "as", "az", "be", "bg", "bn", "bo", "bs", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "he", "hi", "hmn", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "or", "pa", "pl", "pt", "ro", "ru", "rw", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "dataset:CommonCrawl", "dataset:Wikipedia", "arxiv:2007.01852", "license:apache-2.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2023-04-30T12:58:54Z
--- language: - af - am - ar - as - az - be - bg - bn - bo - bs - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - haw - he - hi - hmn - hr - ht - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - or - pa - pl - pt - ro - ru - rw - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tk - tl - tr - tt - ug - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu tags: - bert - sentence_embedding - multilingual - google - sentence-similarity license: apache-2.0 datasets: - CommonCrawl - Wikipedia --- Copy of setu4993/LaBSE that returns the sentence embeddings (pooler_output) and implements caching Original Model Card: # LaBSE ## Model description Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained for sentence embedding for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval. - Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LaBSE). - Paper: [arXiv](https://arxiv.org/abs/2007.01852). - Original model: [TensorFlow Hub](https://tfhub.dev/google/LaBSE/2). - Blog post: [Google AI Blog](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html). - Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt). This is migrated from the v2 model on the TF Hub, which uses dict-based input. The embeddings produced by both the versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/ec3a019159a54ed6493181a64486c2808c01f216/tests/test_conversion.py#L31). 
## Usage Using the model: ```python import torch from transformers import BertModel, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE") model = BertModel.from_pretrained("setu4993/LaBSE") model = model.eval() english_sentences = [ "dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog.", ] english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True) with torch.no_grad(): english_outputs = model(**english_inputs) ``` To get the sentence embeddings, use the pooler output: ```python english_embeddings = english_outputs.pooler_output ``` Output for other languages: ```python italian_sentences = [ "cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.", ] japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"] italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True) japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True) with torch.no_grad(): italian_outputs = model(**italian_inputs) japanese_outputs = model(**japanese_inputs) italian_embeddings = italian_outputs.pooler_output japanese_embeddings = japanese_outputs.pooler_output ``` For similarity between sentences, an L2-norm is recommended before calculating the similarity: ```python import torch.nn.functional as F def similarity(embeddings_1, embeddings_2): normalized_embeddings_1 = F.normalize(embeddings_1, p=2) normalized_embeddings_2 = F.normalize(embeddings_2, p=2) return torch.matmul( normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1) ) print(similarity(english_embeddings, italian_embeddings)) print(similarity(english_embeddings, japanese_embeddings)) print(similarity(italian_embeddings, japanese_embeddings)) ``` ## Details Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852). ### BibTeX entry and citation info ```bibtex @misc{feng2020languageagnostic, title={Language-agnostic BERT Sentence Embedding}, author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang}, year={2020}, eprint={2007.01852}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
SG161222/Paragon_V1.0
SG161222
2023-06-03T06:19:16Z
369
52
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-03T07:17:32Z
--- license: creativeml-openrail-m --- <b>Please read this!</b><br> This model is in the testing phase. The necessary VAE is already baked into the model.<br><hr> <b>The recommended negative prompt:</b><br><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, <a href="https://civitai.com/models/7808/easynegative">easynegative</a>, <a href="https://huggingface.co/zwv9/idk-who-is-this-model-belong-to/blob/main/bad-hands-5.pt">bad-hands-5</a><br><br> <b>Recommended parameters for generation:</b><br><br> <b>Sampling method:</b> Euler A<br> <b>CFG Scale:</b> 5-12<br> <b>Clip Skip:</b> 2<br><br> <b>Hires.Fix Parameters:</b><br><br> <b>Upscaler:</b> Latent or other<br> <b>Hires Steps:</b> 0 or other<br> <b>Denoising Strength:</b> 0.35 - 0.7<br> <b>Upscaled by:</b> 1.1 - 2.0<br><hr> <b>Examples:</b><br><br> <a href='https://postimg.cc/3kxXkXSJ' target='_blank'><img src='https://i.postimg.cc/0ypcHZ7m/Pic1.png' border='0' alt='Pic1'/></a> <a href='https://postimg.cc/2qmVqr8d' target='_blank'><img src='https://i.postimg.cc/q76n5vPF/Pic2.png' border='0' alt='Pic2'/></a> <a href='https://postimg.cc/k6GM84rS' target='_blank'><img src='https://i.postimg.cc/sX9MkQwT/Pic3.png' border='0' alt='Pic3'/></a> <a href='https://postimg.cc/gX7zKWdT' target='_blank'><img src='https://i.postimg.cc/j2xDtx1t/Pic4.png' border='0' alt='Pic4'/></a> <a href='https://postimg.cc/Js81xKVM' target='_blank'><img src='https://i.postimg.cc/mgztb6mz/Pic5.png' border='0' alt='Pic5'/></a> <a href='https://postimg.cc/Pp0HwQQG' target='_blank'><img src='https://i.postimg.cc/Zn55Xf9q/Pic6.png' border='0' alt='Pic6'/></a>
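A hedged diffusers sketch of the recommended settings above (Euler A sampler, CFG Scale in the 5-12 range, Clip Skip 2). The prompt is illustrative, the negative prompt is abbreviated, and the `clip_skip` argument requires a reasonably recent diffusers release.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Paragon_V1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler A"

image = pipe(
    "portrait photo of a woman on a rainy city street, detailed skin, soft light",  # illustrative prompt
    negative_prompt="(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), bad anatomy, watermark",
    num_inference_steps=30,
    guidance_scale=8,   # CFG Scale 5-12
    clip_skip=2,        # Clip Skip 2 (supported by recent diffusers versions)
).images[0]
image.save("paragon_sample.png")
```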
KaiNylund/t5-60M-aic-2006-2008
KaiNylund
2023-07-08T01:51:09Z
369
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-07-08T01:50:45Z
Entry not found
lorahub/flan_t5_large-super_glue_wsc.fixed
lorahub
2023-07-24T09:52:25Z
369
0
peft
[ "peft", "region:us" ]
null
2023-07-24T09:52:15Z
--- library_name: peft ---
Yntec/vividicAnime
Yntec
2023-09-04T11:39:59Z
369
2
diffusers
[ "diffusers", "safetensors", "Anime", "Photorealistic", "Sexy", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "kazzear", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-04T10:52:39Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Photorealistic
- Sexy
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- kazzear
---

# Vividic Anime

This is the Vividic Anime model with the MoistMixV2 VAE baked in.

Sample and prompt:

![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/zq3PgPYBgjP6jU1iJddc6.png)

A very beautiful anime tennis girl, short wavy black hair, detailed chibi eyes, ( ( ( full round face ) ) ), short smile, short skirt, fashion CUTE and SHOES, BEAUTIFUL DETAILED LEGS, highly detailed, interior view ROSSDRAWS and KlaysMoji and Dave Rapoza and artgerm and leyendecker and Clay Mann

Original page: https://civitai.com/models/15360?modelVersionId=28003
TheBloke/Inkbot-13B-4k-GGUF
TheBloke
2023-09-27T12:53:50Z
369
5
transformers
[ "transformers", "gguf", "llama", "base_model:Tostino/Inkbot-13b-4k", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-21T08:02:28Z
--- license: llama2 model_name: Inkbot 13B 4K base_model: Tostino/Inkbot-13b-4k inference: false model_creator: Tostino model_type: llama prompt_template: '<#meta#> - Date: [DATE] - Task: [TASK TYPE] <#system#> {system_message} <#chat#> <#user#> {prompt} <#bot#> ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Inkbot 13B 4K - GGUF - Model creator: [Tostino](https://huggingface.co/Tostino) - Original model: [Inkbot 13B 4K](https://huggingface.co/Tostino/Inkbot-13b-4k) <!-- description start --> ## Description This repo contains GGUF format model files for [Tostino's Inkbot 13B 4K](https://huggingface.co/Tostino/Inkbot-13b-4k). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Inkbot-13B-4k-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Inkbot-13B-4k-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF) * [Tostino's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Tostino/Inkbot-13b-4k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Inkbot ``` <#meta#> - Date: [DATE] - Task: [TASK TYPE] <#system#> {system_message} <#chat#> <#user#> {prompt} <#bot#> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [inkbot-13b-4k.Q2_K.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [inkbot-13b-4k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [inkbot-13b-4k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [inkbot-13b-4k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [inkbot-13b-4k.Q4_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [inkbot-13b-4k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [inkbot-13b-4k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [inkbot-13b-4k.Q5_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [inkbot-13b-4k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [inkbot-13b-4k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [inkbot-13b-4k.Q6_K.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [inkbot-13b-4k.Q8_0.gguf](https://huggingface.co/TheBloke/Inkbot-13B-4k-GGUF/blob/main/inkbot-13b-4k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Inkbot-13B-4k-GGUF and below it, a specific filename to download, such as: inkbot-13b-4k.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Inkbot-13B-4k-GGUF inkbot-13b-4k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Inkbot-13B-4k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Inkbot-13B-4k-GGUF inkbot-13b-4k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m inkbot-13b-4k.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<#meta#>\n- Date: [DATE]\n- Task: [TASK TYPE]\n<#system#>\n{system_message}\n<#chat#>\n<#user#>\n{prompt}\n<#bot#>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Inkbot-13B-4k-GGUF", model_file="inkbot-13b-4k.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Tostino's Inkbot 13B 4K # Model Card for Inkbot ## Model Details Inkbot is a conversational AI model designed to interpret and respond to structured prompts with or without contextual information. Built on the latest advancements in natural language processing (NLP) and understanding (NLU), Inkbot provides users with accurate and meaningful interactions, addressing a wide range of queries and topics. Its unique feature lies in the structured prompt system, allowing users to engage in dynamic dialogues that can evolve based on context, metadata, and user input. ## How to Use Inkbot uses a structured prompt template system. Depending on the information you have or want to give, you can use templates 'with context' or 'without context'. ### Prompt Template Structure #### With Context: ``` <#meta#> - Date: [DATE] - Task: [TASK TYPE] <#system#> [SYSTEM PROMPT/DESCRIPTION OF THE INTERACTION] <#chat#> <#user#> [USER QUESTION] <#bot#> [BOT RESPONSE] <#user#> [USER FOLLOW-UP QUESTION] <#user_context#> [ADDITIONAL CONTEXT FOR THE BOT] <#bot#> [BOT RESPONSE BASED ON CONTEXT] ``` #### Without Context: ``` <#meta#> - Date: [DATE] - Task: [TASK TYPE] <#system#> [SYSTEM PROMPT/DESCRIPTION OF THE INTERACTION] <#chat#> <#user#> [USER QUESTION] <#bot#> [BOT RESPONSE] <#user#> [USER FOLLOW-UP QUESTION] <#bot#> [BOT RESPONSE] ``` ### Examples #### With Context: ``` <#meta#> - Date: 2023-07-20 - Task: general <#system#> A dialogue focused on determining dates based on context clues. <#chat#> <#user#> What is today's date? <#bot#> The date is: 2023-07-20 <#user#> When did the event in the provided context take place? <#user_context#> It's the day after new years eve, and I'm starting 2018 with a headache. <#bot#> The event in the context took place on: 2018-01-01 ``` #### Without Context: ``` <#meta#> - Date: 2023-07-20 - Task: general <#system#> A simple dialogue about daily queries. <#chat#> <#user#> Will it rain today? <#bot#> I am not sure, without knowing your location I am unable to determine the weather. <#user#> Can you compose a haiku for me? <#bot#> Whispers of the breeze, Nature's melodies unfold, Dawn dispels the night. ``` ## Task Options Inkbot has been trained for a variety of tasks. Below are some of the key task options you can utilize: 1. **general**: This is the default task and is designed for a broad range of general questions and interactions. - Usage: Suitable for most day-to-day interactions and queries. 2. **knowledge_graph**: This task involves extracting, understanding, and representing information in a structured way. - Usage: When you want to extract relationships between entities or desire structured representations of data. 3. 
**question_answer**: Explicitly trained for answering questions in a straightforward manner. - Usage: Best used when you have direct questions and expect concise answers. 4. **reasoning**: Allows Inkbot to showcase its logical and deductive reasoning capabilities. - Usage: Ideal for puzzles, riddles, or scenarios where logical analysis is required. 5. **translation**: Use this for language translation tasks. - Usage: Provide a sentence or paragraph in one language, and specify the desired target language for translation. 6. **summarization**: Trained for condensing large texts into shorter, coherent summaries. - Usage: When you have a lengthy text or article that you want to be summarized to its key points. 7. **creative_writing**: Engage Inkbot in composing stories, poetry, and other creative content. - Usage: For tasks that require imaginative and original content generation. ### How to Use Task Options In the prompt template structure, the `Task` metadata field is where you specify the task option. Here's an example of how to structure a prompt using the `reasoning` task: ## Limitations - Ensure you adhere to the prompt structure for best results. - When providing contextual details, clarity is essential for Inkbot to derive accurate and meaningful responses. ## Additional Notes - The 'date', 'task', and 'system' are crucial metadata components that need to be provided outside the core dialogue. - Use the 'user_context' key when you want to offer supplementary context that guides Inkbot's response. <!-- original-model-card end -->
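The "How to Use Task Options" section above introduces an example for the `reasoning` task but stops before showing it. Below is a hedged sketch that follows the prompt template defined earlier in this card; the date, system line, and dialogue content are illustrative.

```
<#meta#>
- Date: 2023-07-20
- Task: reasoning
<#system#>
A dialogue where the assistant works through a logic puzzle step by step.
<#chat#>
<#user#>
If all bloops are razzies and all razzies are lazzies, are all bloops definitely lazzies?
<#bot#>
Yes. Every bloop is a razzy and every razzy is a lazzy, so every bloop must also be a lazzy.
```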
TheBloke/Thespis-13B-v0.3-GGUF
TheBloke
2023-10-17T17:18:30Z
369
5
transformers
[ "transformers", "gguf", "llama", "base_model:cgato/Thespis-13b-v0.3", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-10-17T17:10:32Z
--- base_model: cgato/Thespis-13b-v0.3 inference: false license: llama2 model_creator: c.gato model_name: Thespis 13B v0.3 model_type: llama prompt_template: "{system_message}\n\nUsername: {prompt}\nBotName: \n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Thespis 13B v0.3 - GGUF - Model creator: [c.gato](https://huggingface.co/cgato) - Original model: [Thespis 13B v0.3](https://huggingface.co/cgato/Thespis-13b-v0.3) <!-- description start --> ## Description This repo contains GGUF format model files for [c.gato's Thespis 13B v0.3](https://huggingface.co/cgato/Thespis-13b-v0.3). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Thespis-13B-v0.3-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF) * [c.gato's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cgato/Thespis-13b-v0.3) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Thespis ``` {system_message} Username: {prompt} BotName: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [thespis-13b-v0.3.Q2_K.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [thespis-13b-v0.3.Q3_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [thespis-13b-v0.3.Q3_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [thespis-13b-v0.3.Q3_K_L.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [thespis-13b-v0.3.Q4_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [thespis-13b-v0.3.Q4_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [thespis-13b-v0.3.Q4_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [thespis-13b-v0.3.Q5_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [thespis-13b-v0.3.Q5_K_S.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [thespis-13b-v0.3.Q5_K_M.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [thespis-13b-v0.3.Q6_K.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [thespis-13b-v0.3.Q8_0.gguf](https://huggingface.co/TheBloke/Thespis-13B-v0.3-GGUF/blob/main/thespis-13b-v0.3.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Thespis-13B-v0.3-GGUF and below it, a specific filename to download, such as: thespis-13b-v0.3.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Thespis-13B-v0.3-GGUF thespis-13b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Thespis-13B-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Thespis-13B-v0.3-GGUF thespis-13b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m thespis-13b-v0.3.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{system_message}\n\nUsername: {prompt}\nBotName:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. 
Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Thespis-13B-v0.3-GGUF", model_file="thespis-13b-v0.3.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: c.gato's Thespis 13B v0.3 This model is a bit of a personal project. It uses a vanilla chat template and is focused on providing multiturn sfw and nsfw RP experience. It uses the following data: * 3000 samples from Claude Multiround Chat 30k dataset * 6000 samples from Pippa Dataset * 3000 samples from Puffin Dataset * 3800 samples of hand curated RP conversation with various characters. Works with standard chat format for Ooba or SillyTavern. Prompt Format: Chat ``` {System Prompt} Username: {Input} BotName: {Response} Username: {Input} BotName: {Response} ``` Turn Template (for Ooba): You can either bake usernames into the prompt directly for ease of use or programatically add them if running through the API to use as a chatbot. ``` <|user|>{Username}: <|user-message|>\n<|bot|>{BotName}: <|bot-message|>\n ``` <!-- original-model-card end -->
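If you are calling the model programmatically (for example through the llama.cpp server or the text-generation-webui API) rather than baking usernames into a front-end, a small helper like the sketch below can assemble the documented chat format for you. This is an illustrative example, not code from the model creator; the function and variable names are invented here.

```python
def build_thespis_prompt(system_prompt, turns, username="Username", botname="BotName"):
    """Assemble a prompt in the documented Thespis chat format.

    `turns` is a list of (user_message, bot_reply) pairs; pass None as the final
    bot_reply to leave the prompt open for the model to continue.
    """
    lines = [system_prompt, ""]
    for user_message, bot_reply in turns:
        lines.append(f"{username}: {user_message}")
        lines.append(f"{botname}: {bot_reply}" if bot_reply is not None else f"{botname}:")
    return "\n".join(lines)

prompt = build_thespis_prompt(
    "You are a creative roleplaying partner.",
    [("Hello! Who are you?", None)],
)
print(prompt)
```

The resulting string matches the `{system_message}\n\nUsername: {prompt}\nBotName:` template shown above and can be sent to any of the GGUF runtimes listed earlier.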
pbelcak/UltraFastBERT-1x11-long
pbelcak
2023-11-22T02:21:22Z
369
72
transformers
[ "transformers", "safetensors", "crammedBERT", "en", "dataset:EleutherAI/pile", "arxiv:2311.10770", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-11-21T07:00:55Z
--- license: mit datasets: - EleutherAI/pile language: - en metrics: - glue --- # UltraFastBERT-1x11-long This is the final model described in "Exponentially Faster Language Modelling". The model has been pretrained just like crammedBERT but with fast feedforward networks (FFF) in place of the traditional feedforward layers. To use this model, you need the code from the repo at https://github.com/pbelcak/UltraFastBERT. You can find the paper here: https://arxiv.org/abs/2311.10770, and the abstract below: > Language models only really need to use an exponential fraction of their neurons for individual inferences. > As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). > While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights. ## Intended uses & limitations This is the raw pretraining checkpoint. You can use this to fine-tune on a downstream task like GLUE as discussed in the paper. This model is provided only as sanity check for research purposes, it is untested and unfit for deployment. ### How to get started 1. Create a new Python/conda environment, or simply use one that does not have any previous version of the original `cramming` project installed. If, by accident, you use the original cramming repository code instead of the one provided in the `/training` folder of this project, you will be warned by `transformers` that there are some extra weights (FFF weight) and that some weights are missing (the FF weights expected by the original `crammedBERT`). 2. `cd ./training` 3. `pip install .` 4. Create `minimal_example.py` 5. Paste the code below ```python import cramming from transformers import AutoModelForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("pbelcak/UltraFastBERT-1x11-long") model = AutoModelForMaskedLM.from_pretrained("pbelcak/UltraFastBERT-1x11-long") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` 6. Run `python minimal_example.py`. ### Limitations and bias The training data used for this model was further filtered and sorted beyond the normal Pile. These modifications were not tested for unintended consequences. ## Training data, Training procedure, Preprocessing, Pretraining These are discussed in the paper. You can find the final configurations for each in this repository. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m-mm) | QQP | QNLI | SST-2 | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:-----:|:----:|:----:|:-------:| | Score| 81.3 | 87.6 | 89.7 | 89.9 | 86.4 | 87.5 | 60.7 | 83.0 | These numbers are the median over 5 trials on "GLUE-sane" using the GLUE-dev set. With this variant of GLUE, finetuning cannot be longer than 5 epochs on each task, and hyperparameters have to be chosen equal for all tasks. 
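To make the "12 out of 4095 neurons" figure concrete: a full binary tree of depth 11 has 2^12 - 1 = 4095 nodes, and a single root-to-leaf traversal touches only 12 of them (roughly 0.3%). The toy sketch below illustrates that arithmetic and the conditional-traversal idea behind fast feedforward layers; it is an illustration only, not the implementation from the UltraFastBERT repository, and all names are invented for the example.

```python
import numpy as np

depth = 11                             # "1x11": a single FFF tree of depth 11
total_neurons = 2 ** (depth + 1) - 1   # full binary tree: 4095 nodes
used_per_token = depth + 1             # one node per level on the chosen path: 12
print(total_neurons, used_per_token, f"{used_per_token / total_neurons:.2%}")  # 4095 12 0.29%

def toy_fff_forward(x, node_w, leaf_w):
    """Walk the tree: at each node a learned projection picks a child,
    so only ~log2(width) neurons are evaluated per input."""
    idx = 0
    for _ in range(depth):
        go_right = float(x @ node_w[idx]) > 0.0
        idx = 2 * idx + (2 if go_right else 1)   # heap-style child indexing
    return x @ leaf_w[idx - (2 ** depth - 1)]    # use the selected leaf's weights

rng = np.random.default_rng(0)
d = 16
x = rng.standard_normal(d)
node_w = rng.standard_normal((2 ** depth - 1, d))   # internal nodes
leaf_w = rng.standard_normal((2 ** depth, d, d))    # leaves
print(toy_fff_forward(x, node_w, leaf_w).shape)     # (16,)
```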
### BibTeX entry and citation info ```bibtex @article{belcak2023exponential, title = {Exponentially {{Faster}} {{Language}} {{Modelling}}}, author = {Belcak, Peter and Wattenhofer, Roger}, year = {2023}, month = nov, eprint = {2311.10770}, eprinttype = {arxiv}, primaryclass = {cs}, publisher = {{arXiv}}, url = {https://arxiv.org/pdf/2311.10770}, urldate = {2023-11-21}, archiveprefix = {arXiv}, keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning}, journal = {arxiv:2311.10770[cs]} } ```
mlx-community/whisper-tiny
mlx-community
2024-01-05T23:45:54Z
369
0
transformers
[ "transformers", "whisper", "endpoints_compatible", "region:us" ]
null
2024-01-05T23:39:01Z
Converted using https://github.com/ml-explore/mlx-examples/tree/main/whisper with the command:

```
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
```
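Once converted (or when pointing directly at this repository), the weights can be used for transcription from Python. The sketch below assumes the `mlx_whisper` package that ships with mlx-examples is installed and that `audio.mp3` is a placeholder path; the exact module name and entry point may differ between mlx-examples versions, so treat this as an illustration rather than an exact recipe.

```python
import mlx_whisper

# Transcribe an audio file using the converted tiny model from this repo.
result = mlx_whisper.transcribe("audio.mp3", path_or_hf_repo="mlx-community/whisper-tiny")
print(result["text"])
```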
asedmammad/Contextual_KTO_Mistral_PairRM-GGUF
asedmammad
2024-03-11T01:54:18Z
369
2
null
[ "gguf", "kto", "dpo", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "rl", "rlaif", "en", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "arxiv:2402.01306", "base_model:ContextualAI/Contextual_KTO_Mistral_PairRM", "license:apache-2.0", "region:us" ]
null
2024-03-10T22:07:16Z
--- base_model: ContextualAI/Contextual_KTO_Mistral_PairRM inference: false language: - en license: apache-2.0 tags: - kto - dpo - human feedback - rlhf - preferences - alignment - HALO - halos - rl - rlaif datasets: - snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset metrics: - accuracy model_creator: ContextualAI model_name: Contextual KTO Mistral PairRM model_type: mistral prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: Ased Mammad --- # Contextual_KTO_Mistral_PairRM - GGUF - Model creator: [ContextualAI](https://huggingface.co/ContextualAI) - Original model: [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM) <!-- description start --> ## Description This repo contains GGUF format model files for [Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. 
## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Contextual_KTO_Mistral_PairRM.Q2_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes | | [Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [Contextual_KTO_Mistral_PairRM.Q4_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [Contextual_KTO_Mistral_PairRM.Q5_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | 
[Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [Contextual_KTO_Mistral_PairRM.Q6_K.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [Contextual_KTO_Mistral_PairRM.Q8_0.gguf](https://huggingface.co/AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF/blob/main/Contextual_KTO_Mistral_PairRM.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF and below it, a specific filename to download, such as: Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download AsedMammad/Contextual_KTO_Mistral_PairRM-GGUF Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 35 -m Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|user|>\n{prompt}<|assistant|>\n", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. 
echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Contextual_KTO_Mistral_PairRM.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- original-model-card start --> This repo contains the model and tokenizer checkpoints for: - model family [<b>mistralai/Mistral-7B-Instruct-v0.2</b>](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) - optimized with the loss [<b>KTO</b>](https://twitter.com/winniethexu/status/1732839295365554643) - aligned using the [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset) - via 3 iterations of KTO on one epoch of each training partition, each previous iteration's model serving as the reference for the subsequent. **[03/06/2024]**: We are #2 on the (verified) [Alpaca Eval 2.0 Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) scoring **33.23**! To prompt this model, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added at tokenization time and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. You may also use our tokenizer's `apply_chat_template` if doing inference with `chatml` set or evaluating generations through non-local clients. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for more details on the methodology. If you found this work useful, feel free to cite [our work](https://arxiv.org/abs/2402.01306): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie, and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ``` <!-- original-model-card end -->
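Because this format differs from the chat templates that ship with some runtimes, it can be simplest to build the prompt string by hand when scripting. The helper below is an illustrative sketch (names invented here, not code from the model authors) that follows the documented layout:

```python
def build_tulu_prompt(turns):
    """Build a TuluV2-style prompt from a list of {"role", "content"} dicts.

    The human speaks first and the string ends with the assistant tag, so the
    model replies next. No BOS/EOS is added here - the tokenizer adds BOS itself.
    """
    parts = []
    for turn in turns:
        tag = "<|user|>" if turn["role"] == "user" else "<|assistant|>"
        parts.append(f"{tag}\n{turn['content']}\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_tulu_prompt([
    {"role": "user", "content": "Hi! I'm looking for a cake recipe."},
    {"role": "assistant", "content": "What kind of cake?"},
    {"role": "user", "content": "Chocolate cake."},
])
print(prompt)
```

The printed string reproduces the example conversation shown above and can be passed to llama.cpp, llama-cpp-python, or any other GGUF runtime listed earlier.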
SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF
SnakyMcSnekFace
2024-06-09T15:59:05Z
369
4
null
[ "gguf", "storywriting", "finetuned", "not-for-all-audiences", "text-generation", "en", "base_model:SnakyMcSnekFace/Psyfighter2-13B-vore", "license:llama2", "region:us" ]
text-generation
2024-04-13T13:55:46Z
--- license: llama2 language: - en pipeline_tag: text-generation inference: false tags: - storywriting - finetuned - not-for-all-audiences - gguf base_model: SnakyMcSnekFace/Psyfighter2-13B-vore model_type: llama prompt_template: > ### Instruction: Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Input: {prompt} ### Response: --- # Model Card for Psyfighter2-13B-vore-GGUF This is a quantized version of [SnakyMcSnekFace/Psyfighter2-13B-vore](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore) model. You can find the `F16` precision model weights and details of how the model was trained [in that repository](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore). This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) finetuned to better understand vore context. The primary purpose of this model is to be a storywriting assistant, a conversational model in a chat, and an interactive choose-your-own-adventure text game. The model has preliminary support for Adventure Mode. It is still work in progress, and the model will be further trained to be better at storytelling and to not require complicated configuration. **(See instructions below for configuring Adventure Mode for optimal performance.)** ## Model Details The model behaves similarly to `KoboldAI/LLaMA2-13B-Psyfighter2`, which it was derived from. Please [see the README.md here](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2/blob/main/README.md) to learn more. This model was fine-tuned on ~55 MiB of free-form text, containing stories focused around the vore theme. As a result, it has a strong vorny bias. ## How to Get Started with the Model The model can be used with any AI chatbots and front-ends designed to work with `.gguf` models. The model fits fully into 8GB VRAM, but can also run with degraded performance on smaller graphics cards. Similarly to the base model, the less prompt the model receives, the more creative is the output. For example, the writing assistant will generate an entire story when prompted with only 2-3 words. In the chat mode, if the conversation is not going where you would like it to go, edit the model's output and let it continue generation. The model will also match the style of the conversation. There are two versions of the model: `Q4_K_M` (smaller and faster) and `Q8_0` (slower, but better prose quality). ### Koboldcpp Colab Notebook The easiest way to try out the model is [Koboldcpp Colab Notebook](https://colab.research.google.com/github/lostruins/koboldcpp/blob/concedo/colab.ipynb). This method doesn't require you to have a powerful graphics card. - Open the notebook - Paste the model URL into the `Model` field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf` - Start the notebook, wait for the URL to CloudFlare tunnel to appear at the bottom and click it - Select "Settings" and configure them as follows: - In "Basic" tab: - Temperature = 0.8 - Amount to Gen. 
= 512 - Top p Sampling = 0.9 - Repetition Pentalty = 1.1 - In "Advanced" tab: - Min-P = 0.1 - EOS Token Ban = Unban - Placeholder Tags = Checked - Select "Scenarios" -> "New Story" to use the model as a writing assistant #### To run `Q8_0` model in Colab notebook - Paste the model URL into the `Model` field: `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q8_0.gguf` - Set `Layers` field to `30` (reduce this number if the model fails to start) #### Adventure Mode The model has **preliminary** support for the adventure mode. Select "Scenarios" -> "New Adventure" to start an adventure. **IMPORTANT - The context must be configured exactly as described below or the model will not perform well.** - Select "Context" - Paste this text into "Memory" verbatim: ``` ### Instruction: Text transcript of a never-ending adventure story, written by the AI assistant. AI assistant uses vivid and evocative language to create a well-written novel. Characters are proactive and take initiative. Think about what goals the characters of the story have and write what they do to achieve those goals. ### Input: ``` - Ensure that "Newline After Memory" is checked - Paste this text into "Author's note" verbatim **(IMPORTANT: the Author's Note must end with two empty lines)**: ``` Write a few paragraphs that advance the plot of the story. ### Response: ``` - Set "A/N Strength" to "Immediate" - Set "Author's Note Template" to `<|>` #### Adventure - creating premise and taking turns Select "Story" in the bottom left corner to generate premise of the story, and "Action" to take actions with your character. In the adventure mode, the model expects all player actions to be written in second person. For example: ``` As you venture deeper into the damp cave, you come across a lone goblin. The vile creature mumbles something to itself as it stares at the glowing text on a cave wall. It doesn't notice your approach. > You sneak behind the goblin and hit it with the sword. ``` Check "Allow Editing" to make edits to the story to overwrite and re-generate parts of the model's response. This is useful if the model makes a mistake or the story doesn't go in the direction that you like. ### Backyard AI Another convenient way to use the model is [Backyard AI](https://backyard.ai/) application, which allows you to run the model locally on your computer. You'll need a graphics card with at least 8GB VRAM to use the model comfortably. If you don't have a powerful GPU, Backyard AI provides an option of running the model on their servers, but it costs money. #### Download directly from HuggingFace (beta) In the left panel, click `Manage Models`, then select `Hugging face models`. Paste `https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF` into the text field and press `Fetch Models`. Click `Download` button to the next to the model format. Once the model is downloaded, you can select it in your character card or set it as a default model. #### Download manually Download [Psyfighter2-13B-vore.Q4_K_M.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q4_K_M.gguf) or [Psyfighter2-13B-vore.Q8_0.gguf](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF/resolve/main/Psyfighter2-13B-vore.Q8_0.gguf) file into `%appdata%\faraday\models` folder on your computer. The model should appear in `Manage Models` menu under `Downloaded Models`. You can then select it in your character card or set it as a default model. 
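If you would rather script against the GGUF file directly than use one of the UIs above, the sketch below shows a minimal llama-cpp-python setup. It reuses the prompt template from this card's header and the sampler values recommended in the Koboldcpp settings; treat it as an illustration under those assumptions, not an officially supported configuration.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Psyfighter2-13B-vore.Q4_K_M.gguf",  # file downloaded from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as fit; lower this on smaller GPUs
)

prompt = (
    "### Instruction:\n"
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Input:\n"
    "Write the opening scene of a story about a lone traveler entering a forest.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=512,
    temperature=0.8,   # values taken from the recommended settings above
    top_p=0.9,
    min_p=0.1,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```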
## Model updates

- 09/02/2024 - fine-tuned the model to follow Kobold AI Adventure Mode format
- 06/02/2024 - fixed errors in training and merging, significantly improving the overall prose quality
- 05/25/2024 - updated training process, making the model more coherent and improving the writing quality
- 04/13/2024 - uploaded the first version of the model
Bahasalab/Bahasa-4b-chat
Bahasalab
2024-05-07T07:20:55Z
369
3
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "id", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T11:21:53Z
---
language:
- id
license: other
license_name: tongyi-qianwen
---

# Bahasa-4b Model Report

## Model Name
**Bahasa-4b**

## Model Detail
Bahasa-4b is a continued pretraining of Qwen-4b on 10 billion tokens of high-quality Indonesian text. The model outperforms some 4b, and even 7b, models on Indonesian tasks.

## Model Developers
Bahasa AI

## Intended Use
This model is intended for various NLP tasks that require understanding and generating Indonesian. It is suitable for applications such as question answering, sentiment analysis, document summarization, and more.

## Training Data
Bahasa-4b was trained on a 10-billion-token Indonesian subset drawn from a collected pool of 100 billion tokens.

## Benchmarks
The following table shows the performance of Bahasa-4b compared to Sailor_4b and Mistral-7B-v0.1 across several benchmarks:

| Dataset | Version | Metric | Mode | Sailor_4b | Bahasa-4b-hf | Mistral-7B-v0.1 |
|----------------|---------|--------|------|-----------|--------------|-----------------|
| tydiqa-id | 0e9309 | EM | gen | 53.98 | 55.04 | 63.54 |
| tydiqa-id | 0e9309 | F1 | gen | 73.48 | 75.39 | 78.73 |
| xcopa-id | 36c11c | EM | ppl | 69.2 | 73.2 | 62.40 |
| xcopa-id | 36c11c | F1 | ppl | 69.2 | 73.2 | - |
| m3exam-id-ppl | ede415 | EM | ppl | 31.27 | 44.47 | 26.68 |
| belebele-id-ppl| 7fe030 | EM | ppl | 41.33 | 42.33 | 41.33 |

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Bahasalab/Bahasa-4b-chat-v2",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Bahasalab/Bahasa-4b-chat")

messages = [
    {"role": "system", "content": "Kamu adalah asisten yang membantu"},
    {"role": "user", "content": "kamu siapa"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    input_ids=model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

This data demonstrates that Bahasa-4b consistently outperforms the Sailor_4b model on various Indonesian language tasks, showing improvements in both EM (Exact Match) and F1 scores across different datasets, and is competitive with the Mistral-7B-v0.1 model.
mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF
mradermacher
2024-05-05T14:51:44Z
369
1
transformers
[ "transformers", "gguf", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "en", "base_model:nvidia/Llama3-ChatQA-1.5-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-03T00:18:08Z
--- base_model: nvidia/Llama3-ChatQA-1.5-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - nvidia - chatqa-1.5 - chatqa - llama-3 - pytorch --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q4_K_S.gguf) | 
i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF/resolve/main/Llama3-ChatQA-1.5-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
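As a concrete starting point, the snippet below downloads one of the quants listed above with `huggingface_hub` and loads it with llama-cpp-python. It is an illustrative sketch added here for convenience, not part of the original card; any GGUF-capable runtime from the linked READMEs works just as well.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Llama3-ChatQA-1.5-8B-i1-GGUF",
    filename="Llama3-ChatQA-1.5-8B.i1-Q4_K_M.gguf",  # "fast, recommended" in the table above
)
llm = Llama(model_path=path, n_ctx=8192, n_gpu_layers=-1)

# Consult the base model card for the exact ChatQA prompt format before serious use.
print(llm("User: What is retrieval-augmented generation?\n\nAssistant:", max_tokens=128)["choices"][0]["text"])
```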
bartowski/Llama-3-8B-Instruct-262k-GGUF
bartowski
2024-05-04T16:36:43Z
369
1
null
[ "gguf", "meta", "llama-3", "text-generation", "en", "license:llama3", "region:us" ]
text-generation
2024-05-04T16:18:24Z
--- language: - en pipeline_tag: text-generation tags: - meta - llama-3 license: llama3 quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Llama-3-8B-Instruct-262k Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-8B-Instruct-262k-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-8B-Instruct-262k-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-8B-Instruct-262k-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-8B-Instruct-262k-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-8B-Instruct-262k-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-8B-Instruct-262k-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-8B-Instruct-262k-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-8B-Instruct-262k-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-8B-Instruct-262k-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Llama-3-8B-Instruct-262k-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-8B-Instruct-262k-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Llama-3-8B-Instruct-262k-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Llama-3-8B-Instruct-262k-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-8B-Instruct-262k-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-8B-Instruct-262k-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-8B-Instruct-262k-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-8B-Instruct-262k-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-8B-Instruct-262k-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-Instruct-262k-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-Instruct-262k-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-8B-Instruct-262k-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-8B-Instruct-262k-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Instruct-262k-GGUF/blob/main/Llama-3-8B-Instruct-262k-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Llama-3-8B-Instruct-262k-GGUF --include "Llama-3-8B-Instruct-262k-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Llama-3-8B-Instruct-262k-GGUF --include "Llama-3-8B-Instruct-262k-Q8_0.gguf/*" --local-dir Llama-3-8B-Instruct-262k-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (Llama-3-8B-Instruct-262k-Q8_0) or download them all in place (./) ## Which file should I choose? 
A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also targets AMD GPUs, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
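Putting the sizing advice above into code: the following is a small illustrative Python sketch (not part of the original card) that picks the largest quant leaving roughly 1.5GB of VRAM headroom. The file sizes are a subset copied from the table above; adjust them for other models.

```python
# Illustrative sketch only: choose the largest quant that fits with the suggested headroom.
# Sizes (GB) are taken from the Llama-3-8B-Instruct-262k table above.
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file size fits in (vram_gb - headroom_gb)."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= vram_gb - headroom_gb}
    if not fitting:
        raise ValueError("No quant fits entirely in VRAM; consider partial CPU offload.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))   # 8GB card  -> 'Q5_K_M'
print(pick_quant(12.0))  # 12GB card -> 'Q8_0'
```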
mmnga/DataPilot-ArrowPro-7B-RobinHood-gguf
mmnga
2024-05-11T13:43:09Z
369
2
null
[ "gguf", "mistral", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "region:us" ]
null
2024-05-11T07:22:37Z
--- license: apache-2.0 language: - en - ja datasets: - TFMC/imatrix-dataset-for-japanese-llm tags: - mistral --- # DataPilot-ArrowPro-7B-RobinHood-gguf This is a GGUF-format conversion of [ArrowPro-7B-RobinHood, published by DataPilot](https://huggingface.co/DataPilot/ArrowPro-7B-RobinHood). The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). ## Other models [mmnga/DataPilot-ArrowPro-7B-RobinHood-gguf](https://huggingface.co/mmnga/DataPilot-ArrowPro-7B-RobinHood-gguf) [mmnga/DataPilot-ArrowPro-7B-KUJIRA-gguf](https://huggingface.co/mmnga/DataPilot-ArrowPro-7B-KUJIRA-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'DataPilot-ArrowPro-7B-RobinHood-Q4_0.gguf' -n 128 -p 'こんにちわ' ```
MaziyarPanahi/NeuralsynthesisCalmexperiment-7B-GGUF
MaziyarPanahi
2024-05-21T17:13:09Z
369
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:allknowingroger/CalmExperiment-7B-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/NeuralsynthesisCalmexperiment-7B" ]
text-generation
2024-05-21T16:41:34Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - base_model:allknowingroger/CalmExperiment-7B-slerp - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: NeuralsynthesisCalmexperiment-7B-GGUF base_model: automerger/NeuralsynthesisCalmexperiment-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/NeuralsynthesisCalmexperiment-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisCalmexperiment-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/NeuralsynthesisCalmexperiment-7B](https://huggingface.co/automerger/NeuralsynthesisCalmexperiment-7B) ## Description [MaziyarPanahi/NeuralsynthesisCalmexperiment-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisCalmexperiment-7B-GGUF) contains GGUF format model files for [automerger/NeuralsynthesisCalmexperiment-7B](https://huggingface.co/automerger/NeuralsynthesisCalmexperiment-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
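As a minimal sketch (not from the original card) of using one of these GGUF files with the llama-cpp-python client mentioned above — the quant filename below is an assumption, so check the repository's file list for the exact name:

```python
# Minimal, hedged sketch: load a locally downloaded GGUF quant of this repo with
# llama-cpp-python (pip install llama-cpp-python). The model_path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="NeuralsynthesisCalmexperiment-7B.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if the wheel was built with GPU support
)

out = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```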
state-spaces/mamba2-780m
state-spaces
2024-05-28T06:26:12Z
369
0
transformers
[ "transformers", "pytorch", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T06:19:43Z
--- license: apache-2.0 ---
zeroblu3/NeuralPoppy-EVO-L3-8B
zeroblu3
2024-06-03T21:46:03Z
369
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-02T12:43:07Z
--- license: llama3 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6331279e2f289a0e1b1f63be/KU_nXJ0GMbc_MfQbB0g7K.png) A megamerge of 10 selected models, combined with the "Model Stock" method using Poppy Porpoise 0.72 as the base. It performs well across tasks, with a strong focus on RP and storytelling; it is uncensored (it will do pretty much anything you ask) and has a large knowledge base. ST presets are in the repo. Imatrix quants are available here: https://huggingface.co/zeroblu3/NeuralPoppy-EVO-L3-imat.GGUF
KimByeongSu/gpt-neo-1.3B-lama
KimByeongSu
2024-06-03T19:34:50Z
369
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-1.3B", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-03T06:17:36Z
--- license: mit tags: - generated_from_trainer base_model: EleutherAI/gpt-neo-1.3B model-index: - name: gpt-neo-1.3B-lama results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-1.3B-lama This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7594 | 1.0 | 840 | 2.6645 | | 2.2676 | 2.0 | 1680 | 2.6694 | | 1.9628 | 3.0 | 2520 | 2.7338 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.1 - Datasets 2.12.0 - Tokenizers 0.15.1
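As a rough illustration (an assumption, not the author's actual training script), the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows; dataset loading and the `Trainer` call are omitted:

```python
# Sketch only: mirrors the hyperparameters listed in the card above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt-neo-1.3B-lama",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```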
Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF
Funny-Meow
2024-06-03T08:11:02Z
369
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
text-generation
2024-06-03T08:10:49Z
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 - llama-cpp - gguf-my-repo base_model: meta-llama/Llama-2-7b-chat-hf extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\ \ the terms and conditions for use, reproduction, distribution and modification\ \ of the Llama Materials set forth herein. \n\"Documentation\" means the specifications,\ \ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\ \ \n\"Licensee\" or \"you\" means you, or your employer or any other person or\ \ entity (if you are entering into this Agreement on such person or entity's behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf. \n\"Llama 2\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama\ \ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\ \nBy clicking \"I Accept\" below or by using or distributing any portion or element\ \ of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights\ \ and Redistribution. \na. Grant of Rights. You are granted a non-exclusive, worldwide,\ \ non- transferable and royalty-free limited license under Meta's intellectual property\ \ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\ \ distribute, copy, create derivative works of, and make modifications to the Llama\ \ Materials. \nb. Redistribution and Use.\ni. If you distribute or make the Llama\ \ Materials, or any derivative works thereof, available to a third party, you shall\ \ provide a copy of this Agreement to such third party. \nii. If you receive Llama\ \ Materials, or any derivative works thereof, from a Licensee as part of an integrated\ \ end user product, then Section 2 of this Agreement will not apply to you. \n\ iii. You must retain in all copies of the Llama Materials that you distribute the\ \ following attribution notice within a \"Notice\" text file distributed as a part\ \ of such copies: \"Llama 2 is licensed under the LLAMA 2 Community License, Copyright\ \ (c) Meta Platforms, Inc. All Rights Reserved.\"\niv. Your use of the Llama Materials\ \ must comply with applicable laws and regulations (including trade compliance\ \ laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\ \ (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated\ \ by reference into this Agreement.\nv. 
You will not use the Llama Materials or\ \ any output or results of the Llama Materials to improve any other large language\ \ model (excluding Llama 2 or derivative works thereof). \n\n2. Additional Commercial\ \ Terms. If, on the Llama 2 version release date, the monthly active users of the\ \ products or services made available by or for Licensee, or Licensee's affiliates,\ \ is greater than 700 million monthly active users in the preceding calendar month,\ \ you must request a license from Meta, which Meta may grant to you in its sole\ \ discretion, and you are not authorized to exercise any of the rights under this\ \ Agreement unless or until Meta otherwise expressly grants you such rights.\n\ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\ \ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \"AS IS\" BASIS, WITHOUT\ \ WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A\ \ PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS\ \ OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED\ \ WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation\ \ of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY\ \ OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE,\ \ ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,\ \ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS\ \ AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives\ \ made by or for Meta, with respect to any derivative works and modifications of\ \ the Llama Materials that are made by you, as between you and Meta, you are and\ \ will be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement\ \ of intellectual property or other rights owned or licensable by you, then any\ \ licenses granted to you under this Agreement shall terminate as of the date such\ \ litigation or claim is filed or instituted. You will indemnify and hold harmless\ \ Meta from and against any claim by any third party arising out of or related \ \ to your use or distribution of the Llama Materials.\n6. Term and Termination.\ \ The term of this Agreement will commence upon your acceptance of this Agreement\ \ or access to the Llama Materials and will continue in full force and effect until\ \ terminated in accordance with the terms and conditions herein. Meta may terminate\ \ this Agreement if you are in breach of any term or condition of this Agreement.\ \ Upon termination of this Agreement, you shall delete and cease use of the Llama\ \ Materials. 
Sections 3, 4 and 7 shall survive the termination of this Agreement.\ \ \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed\ \ under the laws of the State of California without regard to choice of law principles,\ \ and the UN Convention on Contracts for the International Sale of Goods does not\ \ apply to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You\ \ agree you will not use, or allow others to use, Llama 2 to:\n1. Violate the law\ \ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm\ \ to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4.\ \ The illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6.\ \ Any other criminal activity\n 2. Engage in, promote, incite, or facilitate\ \ the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n \ \ 4. Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any\ \ action or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system \n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 2 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. 
Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 2 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 2 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement \n 4. Fail to appropriately disclose\ \ to end users any known dangers of your AI system \nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means: \n * Reporting issues with\ \ the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n\ \ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ \ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\ \ \n * Reporting violations of the Acceptable Use Policy or unlicensed uses of\ \ Llama: [[email protected]](mailto:[email protected])" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit pipeline_tag: text-generation --- # Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-2-7b-chat-hf`](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo Funny-Meow/Llama-2-7b-chat-hf-Q4_K_M-GGUF --hf-file llama-2-7b-chat-hf-q4_k_m.gguf -c 2048 ```
Felladrin/gguf-Aira-2-124M-DPO
Felladrin
2024-06-07T12:26:10Z
369
0
null
[ "gguf", "base_model:nicholasKluge/Aira-2-124M-DPO", "license:apache-2.0", "region:us" ]
null
2024-06-07T11:51:38Z
--- license: apache-2.0 base_model: nicholasKluge/Aira-2-124M-DPO --- GGUF version of [nicholasKluge/Aira-2-124M-DPO](https://huggingface.co/nicholasKluge/Aira-2-124M-DPO).
mradermacher/Dendrite-8x7Bv1-GGUF
mradermacher
2024-06-14T09:51:40Z
369
0
transformers
[ "transformers", "gguf", "en", "base_model:Envoid/Dendrite-8x7Bv1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-13T07:49:12Z
--- base_model: Envoid/Dendrite-8x7Bv1 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Envoid/Dendrite-8x7Bv1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dendrite-8x7Bv1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dendrite-8x7Bv1-GGUF/resolve/main/Dendrite-8x7Bv1.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
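A minimal sketch (not part of the original card) for fetching a single quant from this repo with `huggingface_hub`; the filename is one of the entries in the table above:

```python
# Sketch: download one GGUF quant from this repository (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Dendrite-8x7Bv1-GGUF",
    filename="Dendrite-8x7Bv1.Q4_K_S.gguf",  # one of the "fast, recommended" quants listed above
)
print(path)  # local path of the cached GGUF file
```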
RichardErkhov/KnutJaegersberg_-_Walter-Llama-1B-gguf
RichardErkhov
2024-06-25T22:18:04Z
369
0
null
[ "gguf", "region:us" ]
null
2024-06-25T22:12:09Z
Entry not found
erst/xlm-roberta-base-finetuned-nace
erst
2023-08-11T06:51:23Z
368
3
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Classifying Text into NACE Codes This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify descriptions of activities into [NACE Rev. 2](https://ec.europa.eu/eurostat/web/nace-rev2) codes. ## Data The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages: - English - German - Spanish - French - Finnish - Polish ## Quick Start ```python from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-nace") model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-nace") pl = pipeline( "sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=False, ) pl("The purpose of our company is to build houses") ``` ## License This model is released under the MIT License
hajime9652/xlnet-japanese
hajime9652
2023-01-05T04:28:36Z
368
2
transformers
[ "transformers", "pytorch", "xlnet", "text-generation", "lm-head", "causal-lm", "ja", "dataset:Japanese_Business_News", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - ja thumbnail: tags: - xlnet - lm-head - causal-lm license: - apache-2.0 datasets: - Japanese_Business_News metrics: --- # XLNet-japanese ## Model description This model requires MeCab and SentencePiece with XLNetTokenizer. See details at https://qiita.com/mkt3/items/4d0ae36f3f212aee8002 This model uses NFKD as the normalization method for character encoding. Japanese dakuten (voiced sound marks) and handakuten (semi-voiced sound marks) will be lost. *This model does not preserve Japanese dakuten and handakuten.* #### How to use ```python from fugashi import Tagger from transformers import ( pipeline, XLNetLMHeadModel, XLNetTokenizer ) class XLNet(): def __init__(self): self.m = Tagger('-Owakati') self.gen_model = XLNetLMHeadModel.from_pretrained("hajime9652/xlnet-japanese") self.gen_tokenizer = XLNetTokenizer.from_pretrained("hajime9652/xlnet-japanese") def generate(self, prompt="福岡のご飯は美味しい。コンパクトで暮らしやすい街。"): prompt = self.m.parse(prompt) inputs = self.gen_tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") prompt_length = len(self.gen_tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) outputs = self.gen_model.generate(inputs, max_length=200, do_sample=True, top_p=0.95, top_k=60) generated = prompt + self.gen_tokenizer.decode(outputs[0])[prompt_length:] return generated ``` #### Limitations and bias This model was trained on Japanese business news. # Important matter The company that created and published this model is called Stockmark. This repository exists for convenient use with HuggingFace and is not intended as infringement. See the documentation at https://qiita.com/mkt3/items/4d0ae36f3f212aee8002 published by https://github.com/mkt3
nvidia/tts_en_fastpitch
nvidia
2022-09-23T04:28:43Z
368
35
nemo
[ "nemo", "text-to-speech", "speech", "audio", "Transformer", "pytorch", "NeMo", "Riva", "en", "dataset:ljspeech", "arxiv:2006.06873", "arxiv:2108.10447", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-06-28T17:55:51Z
--- language: - en library_name: nemo datasets: - ljspeech thumbnail: null tags: - text-to-speech - speech - audio - Transformer - pytorch - NeMo - Riva license: cc-by-4.0 --- # NVIDIA FastPitch (en-US) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-FastPitch--Transformer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-45M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets) | [![Riva Compatible](https://img.shields.io/badge/NVIDIA%20Riva-compatible-brightgreen#model-badge)](#deployment-with-nvidia-riva) | FastPitch [1] is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration. Additionally, it uses an unsupervised speech-text aligner [2]. See the [model architecture](#model-architecture) section for complete architecture details. It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva). ## Usage The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` ### Automatically instantiate the model Note: This model generates only spectrograms and a vocoder is needed to convert the spectrograms to waveforms. In this example HiFiGAN is used. ```python # Load FastPitch from nemo.collections.tts.models import FastPitchModel spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch") # Load vocoder from nemo.collections.tts.models import HifiGanModel model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan") ``` ### Generate audio ```python import soundfile as sf parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.") spectrogram = spec_generator.generate_spectrogram(tokens=parsed) audio = model.convert_spectrogram_to_audio(spec=spectrogram) ``` ### Save the generated audio file ```python # Save the audio to disk in a file called speech.wav sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050) ``` ### Input This model accepts batches of text. ### Output This model generates mel spectrograms. ## Model Architecture FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantic of the utterance, and in the end more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for the mel-spectrogram synthesis of a typical utterance. It uses an unsupervised speech-text aligner. ## Training The NeMo toolkit [3] was used for training the models for 1000 epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/fastpitch.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/fastpitch_align_v1.05.yaml). 
### Datasets This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent. ## Performance No performance information is available at this time. ## Limitations This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding. ## Deployment with NVIDIA Riva For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support Check out [Riva live demo](https://developer.nvidia.com/riva#demos). ## References - [1] [FastPitch: Parallel Text-to-speech with Pitch Prediction](https://arxiv.org/abs/2006.06873) - [2] [One TTS Alignment To Rule Them All](https://arxiv.org/abs/2108.10447) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
timm/maxxvit_rmlp_nano_rw_256.sw_in1k
timm
2023-05-11T00:45:50Z
368
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2204.01697", "arxiv:2201.03545", "arxiv:2111.09883", "license:apache-2.0", "region:us" ]
image-classification
2023-01-20T21:36:34Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for maxxvit_rmlp_nano_rw_256.sw_in1k A timm-specific MaxxViT (w/ an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2)) image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure, including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model with the string `rw` in its name is a `timm`-specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models with the string `tf` are models exactly matching TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
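To see which of the variant families described above are registered in `timm`, a short illustrative snippet (not part of the original card):

```python
# List MaxxViT-family architectures available in timm that ship pretrained weights.
import timm

for pattern in ("coatnet*", "coatnext*", "maxvit*", "maxxvit*"):
    print(pattern, "->", timm.list_models(pattern, pretrained=True))
```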
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 16.8 - GMACs: 4.4 - Activations (M): 26.0 - Image size: 256 x 256 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxxvit_rmlp_nano_rw_256.sw_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxxvit_rmlp_nano_rw_256.sw_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 128, 128]) # torch.Size([1, 64, 64, 64]) # torch.Size([1, 128, 32, 32]) # torch.Size([1, 256, 16, 16]) # torch.Size([1, 512, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxxvit_rmlp_nano_rw_256.sw_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| 
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| 
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| 
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| 
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
SpringAI/AiAnGenV1
SpringAI
2023-01-27T13:06:55Z
368
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-24T12:35:59Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # AIAnGenV1
timm/seresnext101d_32x8d.ah_in1k
timm
2024-02-10T23:41:56Z
368
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1611.05431", "arxiv:1512.03385", "arxiv:1709.01507", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
2023-04-05T19:37:49Z
--- license: apache-2.0 library_name: timm tags: - image-classification - timm --- # Model card for seresnext101d_32x8d.ah_in1k A SE-ResNeXt-D image classification model with Squeeze-and-Excitation channel attention. This model features: * ReLU activations * 3-layer stem of 3x3 convolutions with pooling * 2x2 average pool + 1x1 convolution shortcut downsample * grouped 3x3 bottleneck convolutions * Squeeze-and-Excitation channel attention Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: * Based on [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) `A1` recipe * LAMB optimizer * No CutMix. Stronger dropout, stochastic depth, and RandAugment than paper `A1` recipe * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 93.6 - GMACs: 16.7 - Activations (M): 32.0 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - Squeeze-and-Excitation Networks: https://arxiv.org/abs/1709.01507 - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187 - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('seresnext101d_32x8d.ah_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnext101d_32x8d.ah_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnext101d_32x8d.ah_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = 
timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 
|67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | 
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | 
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | 
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | 
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 
|79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | 
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | 
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{Xie2016, title={Aggregated Residual 
Transformations for Deep Neural Networks}, author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He}, journal={arXiv preprint arXiv:1611.05431}, year={2016} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @inproceedings{hu2018senet, title={Squeeze-and-Excitation Networks}, author={Jie Hu and Li Shen and Gang Sun}, journal={IEEE Conference on Computer Vision and Pattern Recognition}, year={2018} } ``` ```bibtex @article{He2018BagOT, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018}, pages={558-567} } ```
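One extra usage note for the model above: the classification snippet earlier relies on `torch.topk`, so `torch` needs to be imported alongside `timm`, and the resolved data config typically reflects the 224 x 224 training resolution rather than the 288 x 288 test resolution listed in the model stats. A minimal sketch combining both points; the explicit `input_size` override is an assumption about how you want to evaluate, not part of the original recipe:

```python
import torch
import timm
from PIL import Image
from urllib.request import urlopen

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('seresnext101d_32x8d.ah_in1k', pretrained=True).eval()

# resolve the default (train-time) config, then bump to the 288 x 288 test resolution
data_config = timm.data.resolve_model_data_config(model)
data_config['input_size'] = (3, 288, 288)
transforms = timm.data.create_transform(**data_config, is_training=False)

with torch.no_grad():
    output = model(transforms(img).unsqueeze(0))  # batch of 1

top5_prob, top5_idx = torch.topk(output.softmax(dim=1) * 100, k=5)
print(top5_idx, top5_prob)
```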
digiplay/CoffeeMix_v1
digiplay
2023-07-06T03:55:09Z
368
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-06T02:17:13Z
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info:
https://civitai.com/models/40630?modelVersionId=45847

Sample image I made:

![0235d726-e2c8-4923-bf03-c543f2ac4a60.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/C2Bd8j0hjY-9ml-Q1Od2y.jpeg)

Original Author's DEMO images:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7e65781b-309a-4686-2b94-a73eae211600/00144-1649392094.jpeg)
![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/708f60ef-9802-4543-cfa2-d3dd29722100/00164-3364070768.jpeg)
![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/729df7f3-ae0c-4ca1-b6e4-59faf294a100/00140-3641118898.jpeg)
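Since the card itself is only links and sample renders, here is a minimal text-to-image sketch using the `StableDiffusionPipeline` class listed in this repo's tags; the prompt, dtype, and device choices are illustrative assumptions, not settings published by the model author:

```python
import torch
from diffusers import StableDiffusionPipeline

# load the checkpoint straight from the Hub repo above
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/CoffeeMix_v1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# the prompt here is just an illustrative placeholder
image = pipe("1girl, coffee shop, warm lighting, anime style").images[0]
image.save("coffeemix_sample.png")
```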
TheBloke/DaringMaid-13B-GGUF
TheBloke
2023-12-24T18:59:51Z
368
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "base_model:Kooten/DaringMaid-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-12-24T18:52:52Z
--- base_model: Kooten/DaringMaid-13B inference: false language: - en license: cc-by-nc-4.0 model_creator: Kooten model_name: DaringMaid 13B model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # DaringMaid 13B - GGUF - Model creator: [Kooten](https://huggingface.co/Kooten) - Original model: [DaringMaid 13B](https://huggingface.co/Kooten/DaringMaid-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Kooten's DaringMaid 13B](https://huggingface.co/Kooten/DaringMaid-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DaringMaid-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DaringMaid-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF) * [Kooten's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Kooten/DaringMaid-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Kooten's DaringMaid 13B](https://huggingface.co/Kooten/DaringMaid-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [daringmaid-13b.Q2_K.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [daringmaid-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [daringmaid-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [daringmaid-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [daringmaid-13b.Q4_0.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [daringmaid-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss | | [daringmaid-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [daringmaid-13b.Q5_0.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [daringmaid-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [daringmaid-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [daringmaid-13b.Q6_K.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [daringmaid-13b.Q8_0.gguf](https://huggingface.co/TheBloke/DaringMaid-13B-GGUF/blob/main/daringmaid-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/DaringMaid-13B-GGUF and below it, a specific filename to download, such as: daringmaid-13b.Q4_K_M.gguf. 
Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/DaringMaid-13B-GGUF daringmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/DaringMaid-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DaringMaid-13B-GGUF daringmaid-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m daringmaid-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./daringmaid-13b.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./daringmaid-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Kooten's DaringMaid 13B # DaringMaid-13B My goal was to make a Noromaid that's smarter and better at following instructions. After trying a bunch of different recipes I think this one turned out pretty good - I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base to as its supposed "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base llama2. - [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) of course. - [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently and its still really good so in the mixer it goes - I liked [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B) so i threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2) - There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b), i made a merge with that model before it was revealed to be MythoMax and it turned out pretty good so i used it. 
The .yml config files for mergekit with the exact merges can be found in the ["Recipe"](https://huggingface.co/Kooten/DaringMaid-13B/tree/main/Recipe) folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid-13B) # Quants EXL2: [8bpw](https://huggingface.co/Kooten/DaringMaid-13B-8bpw-exl2), [5bpw](https://huggingface.co/Kooten/DaringMaid-13B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/DaringMaid-13B-4bpw-exl2) [GGUF](https://huggingface.co/Kooten/DaringMaid-13B-GGUF): [Q3_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-13B-GGUF/blob/main/DaringMaid-13B-Q5_K_M.gguf) ## Prompt template: I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. Do not include descriptions of non-visual qualities such as personality, movements, scents, mental traits, or anything which could not be seen in a still photograph. Do not write in full sentences. Prefix your description with the phrase 'full body portrait,' ### Instruction: {prompt} ### Response: ``` ### Contact Kooten on discord. <!-- original-model-card end -->
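As a small addition, a sketch of driving one of the GGUF quants above with llama-cpp-python using the Alpaca wrapper from this card (only the base Alpaca template is reproduced here); the local filename, context size, and sampling values are assumptions for illustration, not recommendations from the model creator:

```python
from llama_cpp import Llama

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

# assumed local path to a downloaded quant; point this at whichever file you fetched
llm = Llama(model_path="./DaringMaid-13B-Q4_K_M.gguf", n_ctx=4096)

out = llm(
    ALPACA_TEMPLATE.format(prompt="Write a short scene set in a seaside tavern."),
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```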
TheBloke/toxicqa-Llama2-7B-GGUF
TheBloke
2023-12-31T22:47:25Z
368
13
peft
[ "peft", "gguf", "llama", "generated_from_trainer", "base_model:Undi95/toxicqa-Llama2-7B", "license:other", "region:us" ]
null
2023-12-31T19:55:38Z
--- base_model: Undi95/toxicqa-Llama2-7B inference: false library_name: peft license: other model-index: - name: NobodyExistsOnTheInternet/toxicqa results: [] model_creator: Undi model_name: ToxicQA Llama2 7B model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke tags: - generated_from_trainer --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ToxicQA Llama2 7B - GGUF - Model creator: [Undi](https://huggingface.co/Undi95) - Original model: [ToxicQA Llama2 7B](https://huggingface.co/Undi95/toxicqa-Llama2-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [Undi's ToxicQA Llama2 7B](https://huggingface.co/Undi95/toxicqa-Llama2-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF)
* [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/toxicqa-Llama2-7B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Unknown

```
{prompt}
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
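As a rough illustration of where a figure like the 4.5 bpw for GGML_TYPE_Q4_K comes from, here is a back-of-the-envelope sketch based on the block sizes quoted above; the two fp16 super-block constants are an assumption about the layout, not something stated in this list:

```python
# Rough per-weight cost of one GGML_TYPE_Q4_K super-block (8 blocks x 32 weights):
weight_bits = 8 * 32 * 4        # 4-bit quantized weights -> 1024 bits
scale_min_bits = 8 * (6 + 6)    # 6-bit scale + 6-bit min per block -> 96 bits
super_bits = 2 * 16             # assumed two fp16 super-block constants -> 32 bits
print((weight_bits + scale_min_bits + super_bits) / (8 * 32))  # -> 4.5 bpw
```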
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [toxicqa-llama2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [toxicqa-llama2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [toxicqa-llama2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [toxicqa-llama2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [toxicqa-llama2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [toxicqa-llama2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [toxicqa-llama2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [toxicqa-llama2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [toxicqa-llama2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [toxicqa-llama2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [toxicqa-llama2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [toxicqa-llama2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/toxicqa-Llama2-7B-GGUF/blob/main/toxicqa-llama2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/toxicqa-Llama2-7B-GGUF and below it, a specific filename to download, such as: toxicqa-llama2-7b.Q4_K_M.gguf. Then click Download. 
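### In Python code

If you prefer to script the download instead, the `huggingface-hub` Python library (also used for the command-line method below) exposes an `hf_hub_download` helper. A minimal sketch, assuming `huggingface-hub` is installed and using the current directory as the target:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo into the current directory.
hf_hub_download(
    repo_id="TheBloke/toxicqa-Llama2-7B-GGUF",
    filename="toxicqa-llama2-7b.Q4_K_M.gguf",
    local_dir=".",
)
```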
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/toxicqa-Llama2-7B-GGUF toxicqa-llama2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/toxicqa-Llama2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/toxicqa-Llama2-7B-GGUF toxicqa-llama2-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m toxicqa-llama2-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./toxicqa-llama2-7b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,       # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",      # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./toxicqa-llama2-7b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Undi's ToxicQA Llama2 7B # Disclaimer: Toxic Content This model is based on a toxic dataset, and its responses may include content that is shocking or disturbing. It is essential to exercise caution and use the model moderately, considering that the generated content is algorithmically derived from the training data. This model is intended for uncensoring purposes only, and users assume responsibility for the interpretation and application of its outputs. I explicitly disclaim endorsement of any specific viewpoints represented in the training data. Additionally, it is crucial to note that the model should not be used for any illegal activities. Users are hereby informed that I am not responsible for any misuse or negative consequences arising from the model's use. Usage of this model implies agreement with these terms. <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.3.0` ```yaml base_model: NousResearch/Llama-2-7b-hf model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer is_llama_derived_model: true load_in_8bit: true load_in_4bit: false strict: false datasets: - path: dataset type: sharegpt dataset_prepared_path: val_set_size: 0.05 output_dir: ./lora-out sequence_len: 4096 sample_packing: true pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 128 lora_alpha: 64 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: wandb_project: toxicLlama-2-13B wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 1 micro_batch_size: 2 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.0002 eval_batch_size: 2 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: eval_table_max_new_tokens: 128 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" ``` </details><br> # NobodyExistsOnTheInternet/toxicqa This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the [NobodyExistsOnTheInternet/toxicqa](https://huggingface.co/datasets/NobodyExistsOnTheInternet/toxicqa) dataset. It achieves the following results on the evaluation set: - Loss: 0.8100 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0748 | 0.0 | 1 | 1.1154 | | 0.8635 | 0.25 | 176 | 0.8732 | | 0.8284 | 0.5 | 352 | 0.8463 | | 0.7928 | 0.75 | 528 | 0.8295 | | 0.8313 | 1.0 | 704 | 0.8155 | | 0.6694 | 1.23 | 880 | 0.8196 | | 0.636 | 1.48 | 1056 | 0.8144 | | 0.6842 | 1.73 | 1232 | 0.8105 | | 0.6277 | 1.98 | 1408 | 0.8100 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0 ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0 <!-- original-model-card end -->
OrionStarAI/Orion-14B-Chat-Int4
OrionStarAI
2024-03-26T10:04:46Z
368
28
transformers
[ "transformers", "safetensors", "orion", "text-generation", "code", "model", "llm", "custom_code", "en", "zh", "ja", "ko", "arxiv:2401.12246", "autotrain_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-01-18T09:54:06Z
--- language: - en - zh - ja - ko metrics: - accuracy pipeline_tag: text-generation tags: - code - model - llm --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" /> </div> <div align="center"> <h1> Orion-14B </h1> </div> <div align="center"> <div align="center"> <b>🌐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4/blob/main/README_zh.md" target="_blank">🇨🇳中文</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4/blob/main/README_ja.md" target="_blank">🇯🇵日本語</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4/blob/main/README_ko.md" target="_blank">🇰🇷한국어</a> </div> <h4 align="center"> <p> 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>😺 <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>📖 <a href="https://arxiv.org/pdf/2401.12246.pdf" target="_blank">Tech Report</a> <p> </h4> </div> # Table of Contents - [📖 Model Introduction](#model-introduction) - [🔗 Model Download](#model-download) - [🔖 Model Benchmark](#model-benchmark) - [📊 Model Inference](#model-inference) [<img src="./assets/imgs/vllm.png" alt="vllm" style="margin: 0;display: initial;" height="20" />](#vllm) [<img src="./assets/imgs/llama_cpp.png" alt="llamacpp" style="margin: 0;display: initial;" height="20" />](#llama-cpp) - [📜 Declarations & License](#declarations-license) - [🥇 Company Introduction](#company-introduction) <a name="model-introduction"></a><br> # 1. Model Introduction - Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc, and it exhibits superior performance in these languages. For details, please refer to [tech report](https://arxiv.org/pdf/2401.12246.pdf). - The Orion-14B series models exhibit the following features: - Among models with 20B-parameter scale level, Orion-14B-Base model shows outstanding performance in comprehensive evaluations. - Strong multilingual capabilities, significantly outperforming in Japanese and Korean testsets. - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests. - The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k. - The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%. 
<table style="border-collapse: collapse; width: 100%;"> <tr> <td style="border: none; padding: 10px; box-sizing: border-box;"> <img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;"> </td> <td style="border: none; padding: 10px; box-sizing: border-box;"> <img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;"> </td> </tr> </table> - Orion-14B series models including: - **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens. - **Orion-14B-Chat:** A chat-model fine-tuned on a high-quality corpus aims to provide an excellence interactive experience for users in the large model community. - **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k. - **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks. - **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system. - **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%. - **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights. <a name="model-download"></a><br> # 2. Model Download Model release and download links are provided in the table below: | Model Name | HuggingFace Download Links | ModelScope Download Links | |-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | ⚾Orion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) | | 😛Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) | | 📃Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) | | 🔎Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) | | 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) | | 💼Orion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) | | 📦Orion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) | <a name="model-benchmark"></a><br> # 3. Model Benchmarks ## 3.1. Base Model Orion-14B-Base Benchmarks ### 3.1.1. 
LLM evaluation results on examination and professional knowledge | Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 | | Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 | | Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 | | QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 | | InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 | | **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** | ### 3.1.2. LLM evaluation results on language understanding and common knowledge | Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 | | Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 | | Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 | | QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 | | InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 | | **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** | ### 3.1.3. LLM evaluation results of OpenCompass testsets | Model | Average | Examination | Language | Knowledge | Understanding | Reasoning | |------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 | | Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 | | Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 | | QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 | | InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 | |**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** | ### 3.1.4. Comparison of LLM performances on Japanese testsets | Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM | |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------| | PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 | | WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 | | ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 | | StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 | | LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 | | Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 | | QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 | | Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 | | **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** | ### 3.1.5. Comparison of LLM performances on Korean testsets. 
n = 0 and n = 5 stand for n-shot prompts used in the evaluation |Model | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br> n=0&nbsp;&nbsp;n=5 | BooIQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5| |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------| | KoGPT | 53.0 &nbsp;&nbsp; 70.1 | 55.9 &nbsp;&nbsp; 58.3 | 73.5 &nbsp;&nbsp; 72.9 | 45.1 &nbsp;&nbsp; 59.8 | 37.5 &nbsp;&nbsp; 89.4 | | Polyglot-ko-13B | 69.6 &nbsp;&nbsp; 73.7 |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4 | 91.2 &nbsp;&nbsp; 90.2 | | LLaMA 2-13B | 46.7 &nbsp;&nbsp; 63.7 | 41.3 &nbsp;&nbsp; 44.0 | 59.3 &nbsp;&nbsp; 63.8 | 34.9 &nbsp;&nbsp; 73.8 | 51.5 &nbsp;&nbsp; 73.4 | | Baichuan 2-13B | 52.1 &nbsp;&nbsp; 58.7 | 39.2 &nbsp;&nbsp; 39.6 | 60.6 &nbsp;&nbsp; 60.6 | 58.4 &nbsp;&nbsp; 61.5 | 50.3 &nbsp;&nbsp; 72.9 | | QWEN-14B | 53.8 &nbsp;&nbsp; 73.7 | 45.3 &nbsp;&nbsp; 46.8 | 64.9 &nbsp;&nbsp; 68.9 | 33.4 &nbsp;&nbsp; 83.5 | 71.5 &nbsp;&nbsp; 95.7 | | Yi-34B | 54.2 &nbsp;&nbsp; 72.1 | 44.6 &nbsp;&nbsp; 44.7 | 58.0 &nbsp;&nbsp; 60.6 | 65.9 &nbsp;&nbsp; 90.2 | 48.3 &nbsp;&nbsp; 92.9 | |**Orion-14B-Chat**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6 | 77.7 &nbsp;&nbsp; 79.4 |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**| ### 3.1.6. Multilingual evaluation | Model | Train Lang | Japanese | Korean | Chinese | English | |--------------------|------------|----------|----------|----------|----------| | PLaMo-13B | En,Jp | 52.3 | * | * | * | | Weblab-10B | En,Jp | 50.7 | * | * | * | | ELYZA-jp-7B | En,Jp | 48.8 | * | * | * | | StableLM-jp-7B | En,Jp | 51.1 | * | * | * | | KoGPT-6B | En,Ko | * | 70.1 | * | * | | Polyglot-ko-13B | En,Ko | * | 70.7 | * | * | | Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 | | Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 | | Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 | | Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** | | **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 | ## 3.2. Chat Model Orion-14B-Chat Benchmarks ### 3.2.1. Chat model subjective evaluation of MTBench | Model | First-Turn | Second-Turn | **Average** | |----------------------|----------|----------|----------| | Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 | | Qwen-14B-Chat | 7.30 | 6.62 | 6.96 | | Llama2-13B-Chat | 7.10 | 6.20 | 6.65 | | InternLM-20B-Chat | 7.03 | 5.93 | 6.48 | | **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** | \* use vllm for inference ### 3.2.2. Chat model subjective evaluation of AlignBench | Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**| |--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------| | Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 | | Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**| | Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 | | InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 | | **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 | \* use vllm for inference ## 3.3. LongChat Model Orion-14B-LongChat Benchmarks ### 3.3.1. 
LongChat evaluation of LongBench | Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P| |--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 | | LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 | | Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 | | Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** | | Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 | ## 3.4. Chat RAG Model Benchmarks ### 3.4.1. LLM evaluation results of self-built RAG testsets |Model|Effectiveness of Response(Keyword)|*Effectiveness of Response(subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction| |---------------------|------|------|------|------|------|------| | Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 | | Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 | | Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 | | GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 | | Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 | \* means manual assessment ## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks ### 3.5.1. LLM evaluation results of self-built plugin testsets |Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition | |-----------------------|--------|-----------|--------| | Baichuan2-13B-Chat | 25 | 0 | 0 | | Qwen-14B-Chat | 55 | 0 | 50 | | GPT-4 | **95** | 52.38 | 70 | | Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** | ## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks ### 3.6.1. Comparison of before and after quantization |Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag| |-------------------------|-------|-----|------|------|------|------|------| | OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 | | OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 | <a name="model-inference"></a><br> # 4. Model Inference Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link is available in the table at the beginning of this document. We demonstrate various inference methods here, and the program will automatically download the necessary resources from Hugging Face. ## 4.1. Python Code ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B") messages = [{"role": "user", "content": "Hello, what is your name? "}] response = model.chat(tokenizer, messages, streaming=False) print(response) ``` In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1). ## 4.2. 
Command Line Tool ```shell CUDA_VISIBLE_DEVICES=0 python cli_demo.py ``` This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model. ## 4.3. Direct Script Inference ```shell # base model CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello # chat model CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi ``` <a name="vllm"></a><br> ## 4.4. Inference by vllm - Project URL<br> https://github.com/vllm-project/vllm - Pull Request<br> https://github.com/vllm-project/vllm/pull/2539 <a name="llama-cpp"></a><br> ## 4.5. Inference by llama.cpp - Project URL<br> https://github.com/ggerganov/llama.cpp - Pull Request<br> https://github.com/ggerganov/llama.cpp/pull/5118 - How to convert to GGUF model ```shell python convert-hf-to-gguf.py path/to/Orion-14B-Chat --outfile chat.gguf ``` - How to run generation ```shell ./main --frequency-penalty 0.5 --frequency-penalty 0.5 --top-k 5 --top-p 0.9 -m chat.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e ``` ## 4.6 Example Output ### 4.6.1. Casual Chat ````` User: Hello Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask. User: What functions do you have? Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to: Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more. Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases. Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants. Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me. Translating languages: I can translate text from one language to another. Chatting: I can engage in conversation with users and respond to their queries in a conversational manner. Summarizing long texts: I can summarize longer texts like articles, books, and research papers. Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria. Playing games: I can play simple games like Hangman, Trivia, and more. Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences. These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it. User: Tell me a joke. Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything. ````` ### 4.6.2. Japanese & Korean Chat ````` User:自己を紹介してください Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。 User:자기소개를 해주세요. Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다. ````` <a name="declarations-license"></a><br> # 5. Declarations, License ## 5.1. Declarations We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law. 
Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing. We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment. We have done our best to ensure the compliance of the data used in the model training process. However, despite our significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any problems arise due to the use of the Orion-14B open-source model, including but not limited to data security issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility. ## 5.2. License Community use of the Orion-14B series models - For code, please comply with [Apache License Version 2.0](./LICENSE)<br> - For model, please comply with [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement) <a name="company-introduction"></a><br> # 6. Company Introduction OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology, OrionStar aims to make society and the world a better place. OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and visual navigation. It integrates product development capabilities and technological application capabilities. Based on the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini, Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for Truly Useful Robots", OrionStar empowers more people through AI technology. **The core strengths of OrionStar lies in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agent, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, it has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods. Companies with demands for deploying large-scale model applications are welcome to contact us.<br> **Enquiry Hotline: 400-898-7779**<br> **E-mail: [email protected]**<br> **Discord Link: https://discord.gg/zumjDWgdAs** <div align="center"> <img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" /> </div>
mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF
mradermacher
2024-05-10T16:46:55Z
368
0
transformers
[ "transformers", "gguf", "en", "base_model:Envoid/Mixtral-Instruct-ITR-8x7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-06T10:51:02Z
--- base_model: Envoid/Mixtral-Instruct-ITR-8x7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 10.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 27.0 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-Instruct-ITR-8x7B-i1-GGUF/resolve/main/Mixtral-Instruct-ITR-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.6 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
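As a concrete companion to the Usage note above, here is a minimal llama-cpp-python sketch for loading one of the files from the table. The i1-Q4_K_M quant is used as an example (its filename is taken from the links above); the context size and GPU offload values are placeholders to adjust for your hardware:

```python
from llama_cpp import Llama

# Load the i1-Q4_K_M quant listed in the table above (download it first).
llm = Llama(
    model_path="./Mixtral-Instruct-ITR-8x7B.i1-Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise or lower to fit your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers if built with GPU support, 0 for CPU only
)

out = llm("Q: What is a Mixture-of-Experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```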
kkatiz/thai-trocr-thaigov-v2
kkatiz
2024-03-15T16:08:13Z
368
4
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "th", "endpoints_compatible", "region:us" ]
image-to-text
2024-03-08T02:21:00Z
---
library_name: transformers
metrics:
- cer
widget:
- src: "https://i.ibb.co/QXZFSNx/test7.png"
  output:
    text: รมว.ธรรมนัส ลงพื้นที่
language:
- th
pipeline_tag: image-to-text
---

# thai_trocr_thaigov_v2

<!-- Provide a quick summary of what the model is/does. -->
A Vision Encoder-Decoder model for Thai OCR:
- Uses microsoft/trocr-base-handwritten as the encoder.
- Uses airesearch/wangchanberta-base-att-spm-uncased as the decoder.
- Fine-tuned on a dataset of 250k synthetic text images built from the [ThaiGov V2 Corpus](https://github.com/PyThaiNLP/thaigov-v2-corpus).
- The synthetic text images were generated with [SynthTIGER](https://github.com/clovaai/synthtiger).
- It is a useful starting point for fine-tuning on any Thai OCR task.

# Usage

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("kkatiz/thai-trocr-thaigov-v2")
model = VisionEncoderDecoderModel.from_pretrained("kkatiz/thai-trocr-thaigov-v2")

image = Image.open("... your image path").convert("RGB")  # path to your text-line image
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
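Since the card lists CER (character error rate) as its metric, here is a minimal sketch of how generated text could be scored against ground-truth transcriptions, assuming the `evaluate` and `jiwer` packages are installed; the strings below simply reuse the widget example above and are only illustrative:

```python
import evaluate

cer = evaluate.load("cer")  # character error rate, the metric listed in this card

predictions = ["รมว.ธรรมนัส ลงพื้นที่"]  # model outputs (e.g. generated_text from the snippet above)
references = ["รมว.ธรรมนัส ลงพื้นที่"]   # ground-truth transcriptions
print(cer.compute(predictions=predictions, references=references))  # 0.0 for an exact match
```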
mradermacher/Covasna-0.1-GGUF
mradermacher
2024-05-06T06:15:37Z
368
1
transformers
[ "transformers", "gguf", "en", "base_model:Mihaiii/Covasna-0.1", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-12T03:18:13Z
--- base_model: Mihaiii/Covasna-0.1 language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About static quants of https://huggingface.co/Mihaiii/Covasna-0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q2_K.gguf) | Q2_K | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.IQ3_XS.gguf) | IQ3_XS | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q3_K_S.gguf) | Q3_K_S | 18.4 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.IQ3_S.gguf) | IQ3_S | 18.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.IQ3_M.gguf) | IQ3_M | 19.1 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q3_K_M.gguf) | Q3_K_M | 20.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q3_K_L.gguf) | Q3_K_L | 22.2 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.IQ4_XS.gguf) | IQ4_XS | 22.9 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q4_K_S.gguf) | Q4_K_S | 24.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q4_K_M.gguf) | Q4_K_M | 25.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q5_K_S.gguf) | Q5_K_S | 29.0 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q5_K_M.gguf) | Q5_K_M | 29.8 | | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q6_K.gguf) | Q6_K | 34.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF/resolve/main/Covasna-0.1.Q8_0.gguf) | Q8_0 | 44.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/LewdGem-40B-i1-GGUF
mradermacher
2024-05-07T16:24:43Z
368
2
transformers
[ "transformers", "gguf", "not-for-all-audiences", "nsfw", "merge", "en", "base_model:Undi95/LewdGem-40B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-15T21:40:04Z
--- base_model: Undi95/LewdGem-40B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - not-for-all-audiences - nsfw - merge --- ## About weighted/imatrix quants of https://huggingface.co/Undi95/LewdGem-40B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/LewdGem-40B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ1_M.gguf) | i1-IQ1_M | 9.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.9 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ2_M.gguf) | i1-IQ2_M | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q2_K.gguf) | i1-Q2_K | 14.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 15.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ3_S.gguf) | i1-IQ3_S | 16.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 16.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ3_M.gguf) | i1-IQ3_M | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 18.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 20.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 20.8 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q4_0.gguf) | i1-Q4_0 | 21.9 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 22.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 23.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 26.8 | | | [GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 27.6 | | | 
[GGUF](https://huggingface.co/mradermacher/LewdGem-40B-i1-GGUF/resolve/main/LewdGem-40B.i1-Q6_K.gguf) | i1-Q6_K | 31.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->